id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2302.00170
|
Diffusion Models for High-Resolution Solar Forecasts
|
Forecasting future weather and climate is inherently difficult. Machine
learning offers new approaches to increase the accuracy and computational
efficiency of forecasts, but current methods are unable to accurately model
uncertainty in high-dimensional predictions. Score-based diffusion models offer
a new approach to modeling probability distributions over many dependent
variables, and in this work, we demonstrate how they provide probabilistic
forecasts of weather and climate variables at unprecedented resolution, speed,
and accuracy. We apply the technique to day-ahead solar irradiance forecasts by
generating many samples from a diffusion model trained to super-resolve
coarse-resolution numerical weather predictions to high-resolution weather
satellite observations.
|
Yusuke Hatanaka, Yannik Glaser, Geoff Galgon, Giuseppe Torri, Peter Sadowski
|
2023-02-01T01:32:25Z
|
http://arxiv.org/abs/2302.00170v1
|
# Diffusion Models for High-Resolution Solar Forecasts
###### Abstract
Forecasting future weather and climate is inherently difficult. Machine learning offers new approaches to increase the accuracy and computational efficiency of forecasts, but current methods are unable to accurately model uncertainty in high-dimensional predictions. Score-based diffusion models offer a new approach to modeling probability distributions over many dependent variables, and in this work, we demonstrate how they provide probabilistic forecasts of weather and climate variables at unprecedented resolution, speed, and accuracy. We apply the technique to day-ahead solar irradiance forecasts by generating many samples from a diffusion model trained to super-resolve coarse-resolution numerical weather predictions to high-resolution weather satellite observations.
## 1 Introduction
Current methods for forecasting weather and climate rely on numerical simulations of the Earth's atmosphere. These simulations characterize the atmosphere in terms of coarse three-dimensional (3D) grid cells and use physical models to describe the time evolution of atmospheric variables such as temperature, pressure, and water vapor. The only obvious way to improve these models is to perform the simulations at higher spatiotemporal resolution at an ever-increasing computational cost. Forecast uncertainties are estimated by running a simulation multiple times with perturbed inputs to generate an ensemble of potential outcomes. An ensemble typically consists of tens of samples -- enough to estimate the variance of a forecast, but not enough to estimate the risk of rare events. For example, the National Oceanic and Atmospheric Administration's Global Ensemble Forecast System (NOAA, 2022) comprises 21 samples.
Machine learning can improve these forecast models in two ways: (1) by improving the forecast models directly by correcting limitations of the physics-based models, and (2) via super-resolution of coarse-grained forecast model outputs, an approach known as _downscaling_ in the geophysical sciences. Large quantities of data exist for training these models, from both observations and simulations, with video-like spatiotemporal structures that make them amenable to modeling with deep convolutional neural networks. Recent examples from the literature include precipitation forecasting over local regions using a mix of radar and satellite data (Sonderby et al., 2020; Ravuri et al., 2021), and forecasting global atmospheric variables using reanalysis data (Pathak et al., 2022; Bi et al., 2022; Lam et al., 2022).
However, the high degree of aleatoric uncertainty, or inherent unpredictability, presents a challenge for machine learning models. Small differences in the initial atmospheric conditions can cause large differences in outcomes, so limited-precision computational models need a way of characterizing uncertainty over a high-dimensional joint distribution of dependent variables. The most common modeling approach is to assume the predicted variables are each conditionally independent given the input of initial conditions, but this approach can fail in modeling the risks of important weather events resulting in part from the variables' joint relationships. For example, a model might predict the rainfall at each individual location in a region reasonably well (i.e. the marginal distributions) but fail to quantify the risk of flooding, which requires modeling the _joint_ distribution of rainfall over a watershed integrated over time. Some work has attempted to model this uncertainty using variational autoencoders (VAEs) (Kingma and Welling, 2013) or generative adversarial networks (GANs) (Ravuri et al., 2021), but in this work, we make the case for using diffusion models to model uncertainty in weather forecasts.
Figure 1: Instantaneous cloud cover over the Hawaiian island of Oahu, sampled from a score-based diffusion model trained on satellite data with 0.5 km resolution. The high cloud density on the windward (east) side of the Koolau mountain range (center) is characteristic of mountainous tropical islands at these latitudes.
Score-based diffusion models have emerged as a remarkably effective approach to approximating distributions over natural images. Prominent examples include Dall-E 2 (Ramesh et al., 2021), GLIDE (Nichol et al., 2021), and Imagen (Saharia et al., 2022), which generate realistic images from text captions by combining large language models with diffusion models that sample (conditionally) from a high-dimensional image distribution. In this work, score-based diffusion models are trained on satellite images to perform super-resolution of numerical weather forecasts. A probabilistic forecast is generated by rapidly sampling from the conditional distribution defined by the diffusion model. Experimental results on day-ahead solar irradiance forecasting for the Hawaiian island of Oahu demonstrate increased accuracy along with quantified uncertainty. Our results suggest that this approach could be useful for a wide range of applications in the geophysical sciences, such as predicting precipitation and evapotranspiration or the effects of climate change on local weather patterns.
## 2 Related Work
### Score-Based Diffusion Models
Diffusion models are _generative_ in that they describe a probability distribution \(p(\mathbf{x})\), where \(\mathbf{x}\) can be a high-dimensional random variable, for example, an image or the state of a physical system. In general, learning a probability distribution over a high-dimensional space from data is extremely challenging because the number of required samples grows exponentially with dimensionality. However, learning is possible when the data lives on low-dimensional manifolds, such as natural images and other data with spatial and temporal structures.
The last decade has seen a number of innovative deep-learning approaches to address the problem of learning a generative model in high dimensions. These include autoregressive models (Uria et al., 2016), GANs, VAEs, and flow-based models (Rezende and Mohamed, 2015; Dinh et al., 2016), all of which have been used heavily in scientific applications. Score-based diffusion models are a special case of _reversible generative models_, which parameterize a one-to-one mapping between a known distribution and the data distribution. Previous reversible generative models such as NICE (Dinh et al., 2014), Real NVP (Dinh et al., 2016), and GLOW (Kingma and Dhariwal, 2018) can be trained by directly maximizing the data likelihood, but require significant computation for each parameter training update and scale poorly to large datasets. _Score-based_ diffusion models provide an efficient stochastic gradient learning algorithm by approximating the score function (the gradient of the log probability density \(\nabla_{\mathbf{x}}\log p(\mathbf{x})\)) rather than the probability density function \(p(\mathbf{x})\). The score function is easier to learn because local updates can be made during training without the need to ensure the probability density function integrates to one.
A score-based diffusion model consists of a neural network representing a time-dependent function with the same input and output dimensions, \(\mathbf{s}_{\theta}(\mathbf{x},t):(\mathbb{R}^{D},[0,T])\rightarrow\mathbb{R} ^{D}\), parameterized by \(\theta\). For image data, this is typically implemented as a U-Net to model the local structure in images. The data-generating process can be defined by sampling \(\mathbf{x}_{T}\) from a standard multivariate normal distribution and then solving the following ordinary differential equation in the reverse time direction to generate a sample at \(t=0\), \(\mathbf{x}_{0}\):
\[\frac{d\mathbf{x}}{dt}=-\mathbf{s}_{\theta}(\mathbf{x},t)a(t)+\mathbf{b}( \mathbf{x},t), \tag{1}\]
where \(a(t)\) and \(\mathbf{b}(\mathbf{x},t)\) are pre-specified _drift_ and _diffusion_ coefficients that determine the shape of the prior. Because the data-generating process is reversible, we can compute the likelihood \(p_{\theta}(\mathbf{x})\) of any data point \(\mathbf{x}\). But rather than maximizing the training data likelihood directly, it is more efficient to optimize a score-based objective such as the "denoising" objective suggested by Song et al. (2021):
\[\mathcal{J}=\mathbb{E}_{\mathbf{x}_{0}}\mathbb{E}_{t}\mathbb{E}_{\mathbf{x}_ {t}}\left[a(t)\|\mathbf{s}_{\theta}(\mathbf{x}_{t},t)-\nabla_{x_{t}}\log p( \mathbf{x}_{t}|\mathbf{x}_{0})\|_{2}^{2}\right] \tag{2}\]
where the outer expectation is over the training data set, the second expectation is over time \(t\sim\mathcal{U}(0,T)\), and the inner expectation is over a distribution of sample corruptions in which the training sample \(\mathbf{x}_{0}\) is propagated through a stochastic noising process for time \(t\) (in practice, this is just \(\mathbf{x}_{0}\) plus some Gaussian noise which increases with \(t\)). The objective is efficient to optimize using stochastic gradient descent. At each iteration, samples are drawn from each of the three distributions, and parameters are updated with gradient descent. The result is a simple, intuitive, training algorithm: Gaussian noise is added to data samples and the parameterized model is trained to denoise the samples. This objective has been shown to upper bound the negative log-likelihood of the training data (Song et al., 2021) and thus the model learns to approximate a high-dimensional joint distribution. In this work, we generate many samples from such a model to provide a probabilistic forecast.
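As a concrete illustration of this training loop, the sketch below implements one stochastic-gradient step of a denoising objective of the form of Eq. 2. It assumes a variance-exploding corruption \(\mathbf{x}_t=\mathbf{x}_0+\sigma(t)\boldsymbol{\epsilon}\) with \(\sigma(t)=\sigma_{max}^{t}\) and weighting \(a(t)=\sigma(t)^2\); the `score_net(x_t, t, cond)` interface, the noise schedule, and the conditioning vector `cond` are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def denoising_score_matching_step(score_net, x0, cond, sigma_max=50.0):
    """One stochastic-gradient step of a denoising objective (cf. Eq. 2)."""
    t = torch.rand(x0.shape[0], device=x0.device)       # t ~ U(0, 1)
    sigma = (sigma_max ** t).view(-1, 1, 1, 1)           # per-sample noise level
    eps = torch.randn_like(x0)
    x_t = x0 + sigma * eps                               # corrupted sample
    # Conditional score of the Gaussian corruption:
    # grad_x log p(x_t | x_0) = -(x_t - x_0) / sigma^2 = -eps / sigma
    target = -eps / sigma
    score = score_net(x_t, t, cond)
    # Weighting a(t) = sigma(t)^2 reduces the loss to a noise-prediction MSE
    loss = ((sigma * (score - target)) ** 2).mean()
    return loss
```

At evaluation time, samples are drawn by integrating Eq. 1 backwards from Gaussian noise with an off-the-shelf ODE solver.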
### Solar Irradiance Forecasting
Solar irradiance is a particularly interesting forecasting application, since better forecasting can reduce the risks to electricity grid stability that are associated with increasing grid penetration of non-dispatchable solar photovoltaics (PV). The last few years have seen a number of applications of computer vision techniques to solar irradiance forecasting, including optical flow models (Wood-Bradley et al., 2012) and, more recently, deep learning (Zhang et al., 2018; Sun et al., 2018; Hart et al., 2020; Berthomier et al., 2020; Kellerhals et al., 2022), including GANs (Nie et al., 2021). However, none of these approaches provide fully-probabilistic predictions able to estimate the risk of many PV sites being cloudy at once; this work is the first to propose a method for estimating this risk in a tractable way.
We evaluate our approach on the Hawaiian island of Oahu (population 1,000,000 and the site of ICML 2023), where the problems and opportunities of renewable energy are amplified. The isolation of each Hawaiian island's electrical grid and a dependence on imported petroleum result in high electricity prices, with Hawaii consumers paying $0.30/kWh in 2021, twice that of customers in California (geopolitical events in May 2022 caused this to increase 34% to $0.42/kWh (Bureau of Labor Statistics, U.S. Department of Labor, 2022)). On the other hand, Hawaii's low latitude makes solar an attractive alternative energy source, and Honolulu has more PV production per capita than any other city in the United States, with PV systems installed on nearly a third of single-family residential homes (Penn and Fremson, 2022; Hawaiian Electric, 2022). This results in large spikes in energy demand when clouds pass over solar generation sites and dense population areas. Predicting these events could enable grid operators to mitigate their impact through demand response strategies such as starting new generators, charging batteries, and turning off water heaters and HVAC systems.
## 3 Methods
### Model
In experiments we use two cascaded diffusion models (Ho et al., 2021), where the first model generates \(64\times 64\) pixel images, then a second model performs super-resolution to produce \(128\times 128\) pixel images (Fig. 3). Both diffusion models are conditioned on the same coarse-resolution atmospheric variables output by numerical simulations. At training time, the model is conditioned on atmospheric variables from ERA5 reanalysis data. The trained model is then evaluated for two different applications: (1) _historical_ cloud cover prediction by conditioning on held-out reanalysis data; (2) _future_ cloud cover prediction by conditioning on forecasts from GFS. GFS and ERA5 use the same underlying physics models, so the same atmospheric variables are available at approximately the same resolution.
A U-Net architecture (Ronneberger et al., 2015) is used for each diffusion model (Figure 4). It uses skip connections to preserve context at different spatial levels, while the network progressively encodes the image to capture information over a wider range. The specific U-Net configurations largely follow the details provided in Ho et al. (2021) for the 64\(\times\)64\(\rightarrow\)128\(\times\)128 ImageNet model. Briefly, each downsampling and upsampling block adjusts the spatial dimensionality by a factor of 2 while the channel count is modified by a multiplier \(M_{n}\). The atmospheric conditioning information is injected as a flat vector into each block in the network. Our full cascaded diffusion model implementation is based on the publicly available code by Wang (2022).
### Data
Three types of data are used for experiments. The model is trained to predict cloud cover images, derived from GOES satellite data, from reanalysis data. This trained model is then used to super-resolve GFS forecasts. Model training was performed on data from January 2019 - June 2021, and all model evaluations were performed on data from July 2021 - June 2022.
Figure 2: A score-based diffusion model samples from a high-dimensional data distribution by first sampling from a reference distribution, then solving the reverse-time ODE defined by Eq. 1 to obtain a sample from the learned distribution. In this example, a \(128\times 128\) pixel “noise” image is sampled from a multivariate Gaussian (Iteration 0) and iteratively refined by a neural network to obtain a sample of a realistic-looking weather satellite radiance image of Oahu (Iteration 800).
#### 3.2.1 GOES satellite data
The National Oceanic and Atmospheric Administration's GOES-17 (GOES-West) satellite provides high-resolution atmospheric measurements over the Pacific Ocean in near real-time with the explicit goal of improving weather forecasting capabilities. Its Advanced Baseline Imager conducts full-disk observations once every 10 minutes, measuring 16 spectral bands from visible to long-wave infrared. We use Band 2 (\(0.60-0.68\) \(\mu\)m), which has 0.5 km spatial resolution and matches the peak absorption range of crystalline silicon photovoltaic (PV) cells. Data from 2019-2022 was downloaded from the Google Cloud Platform.
From the GOES-17 measurements of solar radiance, we estimated total cloud cover for each pixel using a flat-fielding algorithm. Working with this representation of the data has three advantages. First, total cloud cover does not drastically change throughout the day, as is the case with solar radiance or ground irradiance. Second, total cloud cover can be combined with data-driven clear-sky radiation models such as Perez et al. (1990) to predict solar irradiance on the ground. This approach is more accurate than the non-data-driven estimates of global horizontal solar irradiance provided by numerical weather prediction. Third, total cloud cover is a variable provided by both GFS and ERA5, making it easy to compare our predictions with theirs.
For each timestamp, the solar zenith angle was calculated relative to the center of the island of Oahu, and timestamps with a zenith angle greater than 80 degrees were removed in order to avoid noisy data points. This resulted in a dataset of 51,830 training samples and 22,807 test samples. See Fig. 7 for random examples from the training dataset. Both the training and the test set include hours between 6 am and 6 pm, although the range varies depending on the season.
#### 3.2.2 ERA5 reanalysis data
Historical reanalysis data were taken from the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis v5 (ERA5) (Hersbach et al., 2020). Reanalysis data are produced by assimilating observational data into short-range weather forecasts, and the result is incorporated into one regularly spaced grid. While most reanalysis products are available at spatial and temporal resolutions much coarser than typical short-range weather forecasts, they can nevertheless provide an overall understanding of past weather conditions, and they can be used to initialize numerical simulations. ERA5 is available at a grid resolution of 31 km and a temporal resolution of 1 hour, covering the period from 1950 to the present.
The diffusion model is conditioned on five atmospheric variables over a \(3\times 3\) grid covering the island of Oahu, for a total of 45 values. The five atmospheric variables describe the cloud structure and moisture flux: (1-3) cloud cover at different levels of the atmosphere (low, medium, and high cloud cover), (4) the total cloud cover (which is simply the sum of low, medium, and high), and (5) the vertically-integrated eastward water vapor flux. For each of the variables, we extract the \(3\times 3\) region that covers the island of Oahu.
Figure 3: Coarse resolution output from a numerical weather prediction model is converted to a high resolution cloud cover image using two diffusion models. The first model takes the atmospheric variables and generates a \(64\times 64\) pixel cloud cover image. The second model is conditioned on the same atmospheric variables, and increases the resolution to \(128\times 128\) pixels.
Figure 4: Both diffusion models use a U-Net architecture. Vertical lines are downsampling/upsampling steps, and dotted horizontal lines are concatenations. \(R_{n}\) and \(M_{n}\) denote the number of residual blocks and the channel-dimension multiplier, respectively. \(N\) is the dimension of the input image, i.e., \(N=64\) for the first diffusion model, and \(N=128\) for the super-resolution diffusion model.
#### 3.2.3 GFS historical forecasts
Historical GFS forecasts were downloaded from the NCAR Research Data Archive. We used the same-day forecast issued at 12:00 UTC (2:00 am Hawaii Standard Time) for the instantaneous total cloud cover at 21:00 UTC (11:00 am HST). This is a scenario highly relevant to decisions on whether to discharge energy from batteries during the morning energy demand peak depending on subsequent expected PV generation at midday. These forecasts have a coarse resolution of \(0.25^{\circ}\) (\(\sim\)27 km in Hawaii), and we use the same 5 atmospheric features over the same \(3\times 3\) grid covering Oahu for input to the diffusion model in the forecasting experiments.
## 4 Results
### ERA5 super-resolution
On the held out test set, we evaluated predictions from the diffusion model using the total cloud cover derived from satellite data as ground-truth. Diffusion model predictions were produced by sampling many cloud cover images for a given timestep (N=45), conditioning on the atmospheric variables provided by ERA5, and taking the mean image. The root mean squared error (RMSE) was used to measure prediction error.
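A minimal sketch of this evaluation, assuming the diffusion samples for one timestep and the satellite-derived ground truth are available as NumPy arrays (this is not the authors' code):

```python
import numpy as np

def ensemble_mean_rmse(samples, truth):
    """samples: (N, H, W) cloud-cover images sampled for one timestep;
    truth: (H, W) satellite-derived total cloud cover."""
    mean_pred = samples.mean(axis=0)                 # ensemble-mean prediction
    return float(np.sqrt(np.mean((mean_pred - truth) ** 2)))
```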
The diffusion model prediction error was compared to two baseline prediction methods: (1) the coarse resolution total cloud cover from ERA5, and (2) a high resolution mean over all satellite-derived cloud cover images in the training set from the same time of day; we refer to this as the _persistence_ model. The diffusion model takes the former as an input, but can improve upon this prediction by incorporating learned patterns that account for local topography. This can be clearly seen in Figure 6, where the samples tend to have clouds accumulated at the mountainous regions of the island. The persistence model also includes these features, but is unable to account for any variations in weather. The diffusion model accounts for both of these factors and has lower RMSE (Table 1). A paired t-test between the RMSEs of the diffusion model and those of the persistence model was conducted because the mean RMSE values were close; the differences were statistically significant (\(p<.01\)) in both experiments, since the test dataset is a large sample.
### GFS forecasts
The diffusion model was then conditioned on GFS data to generate high-resolution forecasts for 11 am local time each day. In this experiment, we used N=90 samples from the diffusion model, as additional samples continued to increase prediction accuracy (Figure 5), and samples are computationally inexpensive, taking only seconds to generate. Once again the mean of the diffusion samples had lower RMSE than both the coarse resolution forecast it was conditioned on (GFS), and the time-conditioned persistence model.
## 5 Conclusion
Machine learning methods developed for super-resolution or video-prediction are well-suited for problems in weather and climate forecasting. We have proposed a method for using score-based diffusion models to super-resolve the output of numerical weather models and provide fully-probabilistic predictions of both historical and future weather patterns. In experiments, we evaluated this approach for day-ahead solar forecasts on the island of Oahu, and showed that the learned diffusion model exhibits three desirable qualities: (1) each sample is a realistic-looking image; (2) the samples are diverse; and (3) the mean of the samples is a good point estimate, with lower RMSE than both coarse resolution numerical weather prediction alone and a high-resolution persistence baseline.
In follow-up work, we are evaluating this approach for estimating risks that would otherwise require running a large ensemble of numerical simulations. Sampling from a diffusion model takes only seconds, and the model can learn patterns in data that are inaccessible to coarse-grained simulations, so if the model is accurate enough it could provide a powerful tool for estimating the risk of rare events. Such a tool could be useful at multiple scales in weather and climate modeling.
## 6 Acknowledgements
Support for this work comes from NSF #OIA-2149133. Technical support and advanced computing resources from University of Hawaii Information Technology Services - Cyberinfrastructure, funded in part by the National Science Foundation CC* awards #2201428 and #2232862, are gratefully acknowledged.
Figure 5: Prediction error in RMSE vs. the number of samples (N) from the diffusion model. The prediction error is an average over the held-out test set.
|
2304.04098
|
Overview of processing techniques for surface electromyography signals
|
Surface electromyography (sEMG) is a technology to assess muscle activation,
which is an important component in applications related to diagnosis,
treatment, progression assessment, and rehabilitation of specific individuals'
conditions. Recently, sEMG potential has been shown, since it can be used in a
non-invasive manner; nevertheless, it requires careful signal analysis to
support health professionals reliably. This paper briefly described the basic
concepts involved in the sEMG, such as the physiology of the muscles, the data
acquisition, the signal processing techniques, and classification methods that
may be used to identify disorders or signs of abnormalities according to
muscular patterns. Specifically, classification methods encompass digital
signal processing techniques and machine learning with high potential in the
field. We hope that this work serves as an introduction to researchers
interested in this field.
|
Alejandra Manjarres-Triana, Juan Acevedo-Serna, Andrés A. Ramírez-Duque, Mario F. Jiménez, Edith Pulido-Herrera, John J. Villarejo Mayor
|
2023-04-08T20:24:27Z
|
http://arxiv.org/abs/2304.04098v1
|
# Overview of processing techniques for surface electromyography signals
###### Abstract
Surface electromyography (sEMG) is a technology to assess muscle activation, which is an important component in applications related to diagnosis, treatment, progression assessment, and rehabilitation of specific individuals' conditions. Recently, sEMG potential has been shown, since it can be used in a non-invasive manner; nevertheless, it requires careful signal analysis to support health professionals reliably. This paper briefly described the basic concepts involved in the sEMG, such as the physiology of the muscles, the data acquisition, the signal processing techniques, and classification methods that may be used to identify disorders or signs of abnormalities according to muscular patterns. Specifically, classification methods encompass digital signal processing techniques and machine learning with high potential in the field. We hope that this work serves as an introduction to researchers interested in this field.
Surface EMG Electromyography Signal processing Motor units Machine learning Tutorial
## 1 Introduction
Electromyography allows for assessing neuromuscular activity and muscle activation, making it appealing for applications in sports, rehabilitation, or biofeedback, among others [19]. Surface electromyography (sEMG) is a non-invasive technique to measure muscle activation; however, given its complexity, it has hardly been utilized in clinical practice and rehabilitation [28]. On the other hand, studies have shown the potential of sEMG to identify musculoskeletal disorders through different sets of muscles by generating discriminant electrical patterns [24]. In order to leverage sEMG and overcome these barriers, clinicians and researchers require training and knowledge of good practices to use this technology [28, 21]. In addition, an accurate diagnosis requires a good characterization of the muscle patterns according to the patient's disease [42]. In this sense, sEMG allows conducting a quantitative evaluation of muscle patterns, helping to discriminate intrinsic relationships in the recorded information that reflect the patient's muscle status [7].
In this work, we provide an overview of the basic topics associated with sEMG data processing to conduct quantitative analysis. At first, we describe concepts of the neuromuscular system underlying sEMG (see section 1.1), and the basic aspects of EMG acquisition are described in section 1.2. Preprocessing techniques for EMG signals, such as filters, epochs, and power spectral density, among others, are described in section 2. Next, relevant features are presented (see section 3) to be used in classification techniques including statistical methods and machine learning, and lastly, some examples are presented. By covering all these topics, we aim to provide a general overview to researchers interested in the topic.
### Origin of electromyographic signals
In the neuromuscular system, the central nervous system (CNS) controls muscle contractions through nerve signals sent from the brain via the spinal cord. Final motor commands are related to the motor unit (MU), the smallest functional unit of the nervous system. A MU is composed of the muscle fibers innervated by a single motor neuron (see Fig. 1); therefore, when it is recruited, the result is a muscular contraction.
Producing precise or gross motion patterns depends on whether a few or a large number of muscle fibers are innervated. The number of MUs per muscle in humans can range from around \(100\) for a hand muscle to \(1000\) or more for large limb muscles. MUs have also been shown to vary greatly in force-generating ability, with a one hundred times difference or more in force of contraction. Three types of motor units are defined based on physiological properties such as conduction speed and muscle fatigue: (1) fast-twitch, fatigable (FF or type IIb); (2) fast-twitch, fatigue-resistant (FR or type IIa); and (3) slow-twitch (S or type I), which is more resistant to fatigue [12].
The functional unit of skeletal muscle contraction is the sarcomere, which is located between two Z disks (see Fig. 2). Skeletal muscle is made up of bundles, fibers, and myofibrils; myofibrils contain two types of filaments: actin (thin filaments) and myosin (thick filaments) [12].
During muscle contraction, the myosin head must be activated before the step-by-step mechanism of contraction begins. This occurs when adenosine triphosphate (ATP) binds to the myosin head and undergoes hydrolysis, leaving adenosine diphosphate (ADP) and inorganic phosphate. The energy released from ATP hydrolysis activates the myosin head. The activated myosin head binds to actin and inorganic phosphate is released (the binding becomes stronger), then ADP is liberated and the myosin head is displaced, causing the actin filament to move toward the midline. Another ATP comes and binds to the myosin head (the binding is weakened) and separates from actin, then, the myosin head is activated again (see Fig. 3). The mechanism ends when calcium is pumped into the sarcoplasmic reticulum and the tropomyosin returns to its original place, so the myosin head no longer has a place to join [12].
### Acquisition of electromyographic signals
The EMG signal is a representation of the electrical activity of the active muscle fibers during a contraction. The tissues act as spatial low-pass filters in the distribution of potential. Its detection can be intramuscular or superficial. The surface EMG (sEMG) technique is a non-invasive way to capture myoelectric activity, since it uses electrodes placed on the surface of the skin, in contrast with intramuscular EMG (iEMG), which uses invasive needles. For sEMG, the tissue acts as a conductive volume between the muscles and the electrodes. Therefore, tissue properties influence signal characteristics in terms of frequency content and the distance beyond which the signal cannot be detected [18].
Figure 1: Neuromuscular system, the longitudinal section between the single axon terminal and the muscle fiber membrane [12].
Regarding electrical properties, sEMG signals have an amplitude of \(0.05-10\) mV and a frequency range of \(2-500\) Hz [11]. sEMG signals are random, non-stationary, and nonlinear (there is no linear relationship between muscle activity and the sEMG signal pattern) and are not generated by periodic phenomena. However, they are amenable to analysis with linear techniques over small time windows (\(<500\) ms), within which they can be considered stationary [18, 22].
#### 1.2.1 Factors influencing the EMG signal
The detection of EMG signals can be affected by different factors that alter their shape and characteristics, which may affect signal processing and analysis.
As the surface electrodes capture the electric field generated by the depolarization of the muscle fiber, electrical conduction is affected by the tissue characteristics, since a great amount of adipose tissue influences the signal's amplitude [14].
Moreover, nearby muscles influence the EMG signal from a target muscle, a phenomenon known as crosstalk; tight arrangements within muscle groups therefore need special care. Also, the signal produced by the depolarization of the muscle fibers of the heart interferes with the recording of the EMG signal, mainly when analyzing muscles of the upper part of the trunk [14].
Changes in geometry between the muscle and the electrodes are also a factor to be taken into account. In dynamic muscle evaluations, elongation of the skin is generally present, which may increase the distance between the source of the signal and the detection site [14].
Environmental artifacts also contaminate EMG signals; interference from the electrical line at \(50\) or \(60\) Hz is the most recurrent, often due to incorrect grounding of other external devices [14].
## 2 Preprocessing
Preprocessing techniques are useful tools to compress and highlight relevant information from biological signals and to increase their correlation. This section covers filters, epochs, power spectral density, and normalization by the maximum voluntary contraction of each subject [17].
Figure 2: Skeletal muscle parts, the muscles that connect to your bones and allow you to perform a wide range of movements and functions [39]
### Filters and offset removal
Digital filters are usually applied to remove artifacts. The Butterworth filter has been widely used because it keeps the frequency response as flat as possible in the passband and maintains its shape at higher orders [3]. In electromyography, it is recommended to use a 4th-order band-pass Butterworth filter between \(15-400\) Hz, which transmits this range of frequencies and rejects the two bands outside it [31]. The noise frequency of the electrical network can be attenuated by implementing an adaptive filter at \(60\) Hz and its harmonics, since variations in its components have less impact on the response of the filter. It is also important to eliminate the offset of each signal. This is done by subtracting its respective mean: if the signal is centered at the origin, the offset level is said to be \(0\); if it is displaced upwards, the offset is positive, while if it is displaced downwards, it is negative [36].
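A minimal sketch of these preprocessing steps with SciPy is shown below; the sampling rate and the notch quality factor are assumed values, and a fixed notch filter stands in for the adaptive power-line filter mentioned above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def preprocess_emg(emg, fs=1000.0):
    """Offset removal, 15-400 Hz 4th-order band-pass, and 60 Hz notch."""
    emg = emg - np.mean(emg)                           # remove the DC offset
    b, a = butter(4, [15, 400], btype="bandpass", fs=fs)
    emg = filtfilt(b, a, emg)                          # zero-phase band-pass
    b_n, a_n = iirnotch(60.0, Q=30.0, fs=fs)           # attenuate line noise
    return filtfilt(b_n, a_n, emg)
```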
### Epochs and overlap
It is important to analyze the electromyography signal only while the exercise is being performed; therefore the signal must be segmented so that only the portion recorded during the test is taken into account. The electromyography signal is not stationary, therefore there are biases related to the spectral and temporal estimation.
Figure 4: Bipolar EMG measurements, two adjacent electrodes are applied between innervation zone and tendon [20].
Figure 3: Muscle contraction, at the molecular level, muscle contraction is defined by myosin molecules pulling actin filaments [39].
Therefore, it is necessary to use epochs (windows) to reduce the bias. With epochs of \(500\) ms the signal can be considered as stationary [18]. To reduce information loss, it is necessary to use windows with \(50\,\%\) overlap (see Figure 5).
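A sketch of this segmentation, assuming a sampling rate of 1 kHz (any value can be substituted):

```python
import numpy as np

def make_epochs(signal, fs=1000.0, win_s=0.5, overlap=0.5):
    """Split a signal into 500 ms epochs with 50 % overlap."""
    win = int(win_s * fs)
    step = int(win * (1 - overlap))
    starts = range(0, len(signal) - win + 1, step)
    return np.array([signal[i:i + win] for i in starts])
```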
### Power spectral density
During myoelectric fatigue, there is a reduction in the conduction velocity of the muscle fibers, which results in compression and a shift toward low frequencies. For this reason, it is more convenient to characterize fatigue using the different frequency parameters of the sEMG signal through the power spectral density [4].
The Welch periodogram method is an approach to determine the power spectral density of a signal at different frequencies, which uses periodogram spectrum estimations. These are obtained from signal conversion from the time domain to frequency domain [41]. The periodogram or sample spectrum is calculated directly from N samples of the electromyographic signal segment [2].
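For instance, SciPy's implementation of Welch's method can be applied to each epoch; the segment length used here is an assumed value, not one prescribed by the text.

```python
from scipy.signal import welch

def power_spectral_density(epoch, fs=1000.0):
    """Welch estimate of the power spectral density of one epoch."""
    f, pxx = welch(epoch, fs=fs, nperseg=256, noverlap=128)
    return f, pxx
```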
### Normalization by Maximum Voluntary Contraction (MVC)
Normalization methods remove the influence of the recording condition by rescaling the data according to the maximum voluntary contraction. These methods also allow carrying out a quantitative comparison of EMG signals between subjects [14].
### Maximum voluntary contraction
To determine the maximum voluntary contraction (MVC), static resistance exercises are performed with positions that allow maximum contraction. These tests must be carried out separately for each muscle investigated; if necessary, the exercises are repeated three times for five seconds each. Subsequently, the average of the MVC trials is calculated, which represents the maximum amplitude that the muscle can reach during a contraction [14]; importantly, this signal must be rectified and smoothed.
### Signal smoothing
Signal smoothing allows evaluating only the signal that comes from muscle contraction, without considering contamination by mechanical movements or electromagnetic signals. The signal is first rectified, transforming negative values into positive ones. Then, the rectified signal is smoothed, which creates a linear envelope of the signal. Different smoothing methods exist, such as the moving average or the \(RMS\) envelope [1].
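The rectification, smoothing, and MVC normalization steps described above can be sketched as follows; the 50 ms moving-average window is an assumed value.

```python
import numpy as np

def linear_envelope(emg, fs=1000.0, win_s=0.05):
    """Full-wave rectification followed by a moving-average envelope."""
    rectified = np.abs(emg)
    win = int(win_s * fs)
    return np.convolve(rectified, np.ones(win) / win, mode="same")

def normalize_to_mvc(envelope, mvc_amplitude):
    """Express the envelope as a percentage of the (rectified, smoothed) MVC."""
    return 100.0 * envelope / mvc_amplitude
```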
## 3 Feature Extraction
In order to characterize muscular patterns associated with specific neuromuscular or musculoskeletal disorders, feature extraction is essential for a good classification process based on EMG signals [27]. In this section, we present feature extraction associated with the fatigue and coactivation indexes.
Figure 5: \(500\) ms windows with \(50\,\%\) overlap.
### Fatigue index
Myoelectric fatigue is the inability to continue generating a given level of force or exercise intensity [10]; it is associated with decreased sensitivity to, and release of, calcium ions [15]. Myoelectric fatigue is detected from different indices in both the time and frequency domains, for example, motor action potential conduction velocity, root mean square, zero crossings, and the mean and median of the frequency spectrum. Figure 6 shows an example of myoelectric fatigue detected from different indices in both the time and frequency domains (MNF, ARV and CV), observed during an isometric contraction of the anterior scalene muscle at \(2\,\%\) of maximal voluntary contraction [20].
EMG signals from moving muscles are expressed as time-bound waves. These waves can be used to determine time domain variables, such as the mean rectified value and zero crossings, and frequency domain variables, such as the mean and median [13].
#### 3.1.1 Root Mean Square (RMS)
The amplitude of the EMG signal is stochastic (random), with an approximately Gaussian distribution under constant force [30], and varies from \(0\) to \(10\) mV (peak to peak). Different parameters are commonly used to measure the amplitude and summarize the energy of the analyzed signal over time, such as the root mean square (\(RMS\)), which represents the square root of the average power of the EMG signal during a given period of time (see equation 1).
\[RMS=\sqrt{\frac{\sum_{n=1}^{N}EMG_{n}^{2}}{N}} \tag{1}\]
Where \(RMS\) is the root mean square, \(EMG_{n}\) is the \(n\)-th sample of the rectified channel signal, and \(N\) is the number of samples.
#### 3.1.2 Average Rectified Value (ARV)
ARV is used to estimate the variation in amplitude of the EMG signal; its coefficient of variation is lower than that of the \(RMS\):
\[ARV=\frac{\sum_{i=1}^{N}|x_{i}|}{N} \tag{2}\]
Where \(ARV\) is the average rectified value, \(x_{i}\) is the \(i\)-th sample of the rectified channel signal, and \(N\) is the number of samples [23].
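Both amplitude features reduce to one-line computations per epoch (a sketch of Eqs. 1-2):

```python
import numpy as np

def rms(epoch):
    return np.sqrt(np.mean(epoch ** 2))      # Eq. 1

def arv(epoch):
    return np.mean(np.abs(epoch))            # Eq. 2
```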
#### 3.1.3 Zero Crossings
Zero crossing measures the signal frequency by determining the number of times the signal crosses zero. A threshold is required to reduce the number of noise-induced zero crossings (see equation 3) and is selected according to the signal voltage [34].
\[ZC=\sum_{n=1}^{N-1}\mathbf{1}\left[\left(EMG(n)>0\ \wedge\ EMG(n+1)<0\right)\ \vee\ \left(EMG(n)<0\ \wedge\ EMG(n+1)>0\right)\right] \tag{3}\]
Figure 6: Parameters for muscle fatigue detection, where \(ARV\) is the average rectified value, \(CV\) the conduction velocity and \(MNF\) the mean frequency, [20].
Where \(ZC\) is the number of zero crossings, \(EMG\) is the electromyographic signal, and \(n=1,\dots,N-1\) indexes the samples.
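A possible implementation of the thresholded zero-crossing count (the threshold value is an assumption to be tuned to the noise floor):

```python
import numpy as np

def zero_crossings(epoch, threshold=0.01):
    """Count sign changes whose step exceeds a noise threshold (cf. Eq. 3)."""
    x, y = epoch[:-1], epoch[1:]
    sign_change = ((x > 0) & (y < 0)) | ((x < 0) & (y > 0))
    big_enough = np.abs(x - y) >= threshold
    return int(np.sum(sign_change & big_enough))
```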
#### 3.1.4 Mean and median frequency
The mean and median frequencies, MNF and MDF respectively, provide basic information about the signal spectrum and its changes as a function of time. They coincide when the signal spectrum is symmetric about its central frequency, while their difference reflects spectral skew; a tail in the high-frequency region implies a higher MNF than MDF [18]. Therefore, either reference spectral frequency can be used as an estimator of spectral compression. These parameters are necessary to determine quantitative indices of the muscle activation pattern of the recorded signals. Namely, MNF represents the spectral average (see equation 4), and MDF the frequency that divides the spectrum into two halves of equal power (see equation 5) [18].
\[f_{mean}=\frac{\int_{0}^{f_{m}/2}f\,P_{EMG}(f)\,df}{\int_{0}^{f_{m}/2}P_{EMG}(f)\,df} \tag{4}\]
\[\int_{0}^{f_{median}}P_{EMG}(f)\,df=\int_{f_{median}}^{f_{m}/2}P_{EMG}(f)\,df=\frac{1}{2}\int_{0}^{f_{m}/2}P_{EMG}(f)\,df \tag{5}\]
Where \(f_{mean}\) and \(f_{median}\) represent the MNF and MDF, respectively, \(P_{EMG}(f)\) is the power spectral density of a specific electromyography channel, and \(f_{m}\) is the sampling frequency.
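Given a power spectral density estimate (for example from the Welch sketch above), MNF and MDF can be computed as follows:

```python
import numpy as np

def mean_median_frequency(f, pxx):
    """MNF and MDF of a power spectral density (cf. Eqs. 4-5)."""
    mnf = np.sum(f * pxx) / np.sum(pxx)
    cumulative = np.cumsum(pxx)
    mdf = f[np.searchsorted(cumulative, cumulative[-1] / 2.0)]
    return mnf, mdf
```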
### Coactivation index
The coactivation index is a measure of the simultaneous activation of the agonist muscles, which act in the direction of the movement, and the antagonist muscles, which act in the opposite direction. To calculate muscle coactivation it is necessary to know the muscle activity during contraction [6]. The percentage activation of each muscle during isometric contraction is calculated from the coactivation index, as follows:
\[CI=\frac{\int_{0}^{100}nEMG_{x}\,dt}{iEMG_{M1}+iEMG_{M2}+\dots+iEMG_{Mn}} \tag{6}\]
Where \(CI\) is the coactivation index, \(nEMG_{x}\) is the rectified linear envelope of the EMG signal of the muscle to be compared, and \(iEMG_{Mn}\) is the integrated EMG of each muscle.
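A sketch of Eq. 6 using simple rectangle-rule integration of pre-computed linear envelopes (the sampling step of the normalized contraction cycle is an assumption):

```python
import numpy as np

def coactivation_index(n_emg_x, envelopes_all, dt=1.0):
    """Integral of the target muscle's normalized envelope divided by the
    sum of the integrated envelopes of all muscles (cf. Eq. 6)."""
    numerator = np.sum(n_emg_x) * dt
    denominator = sum(np.sum(env) * dt for env in envelopes_all)
    return numerator / denominator
```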
#### 3.2.1 Envelope with moving average
The moving average is an arithmetic measure used to analyze a set of \(N\) data points in discrete time. It produces a series of averages, which can be simple or weighted. In the frequency domain, the moving average has the response of a low-pass filter [32]. The envelope with the moving average is calculated as follows [29]:
\[MA(n)=\frac{EMG(n)+EMG(n-1)+\dots+EMG(n-(N-1))}{N} \tag{7}\]
Where \(MA\) is the moving average, \(EMG\) is the electromyography signal of a channel, and \(N\) is the number of samples in the window.
## 4 Statistical methods
The classification stage uses artificial intelligence methods for data analysis, pattern classification, data mining, and medical informatics; different information visualization methods can be used to inspect the data and the classification results [38].
### Information visualization methods
The visualization of the information allows understanding the behavior of the recorded data.
#### 4.1.1 Box plot
The box plot is a type of graph that allows visualizing the location, spread, and asymmetry of a data set through its quartiles (\(Q\)). This diagram (see Figure 7) is composed of a maximum (\(Q4\)) and a minimum (\(Q0\)), which are the upper and lower exclusion limits for outliers, respectively, the median (\(Q2\)), and the upper (\(Q3\)) and lower (\(Q1\)) quartiles, which represent the medians of the upper and lower halves [8].
#### 4.1.2 Histogram
The histogram is a method of data reduction that separates the information into bars, where each bar represents the number of observations falling within a range of values. This diagram allows visualizing the distribution of the population [35].
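Both plots are readily produced with Matplotlib; the coactivation-index values below are placeholders.

```python
import numpy as np
import matplotlib.pyplot as plt

ci_values = np.random.rand(40)                   # placeholder data
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.boxplot(ci_values)                           # quartiles, as in Fig. 7
ax1.set_ylabel("Coactivation index")
ax2.hist(ci_values, bins=10)                     # distribution of the values
ax2.set_xlabel("Coactivation index")
plt.show()
```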
### Statistical tests
Statistical tests are used to evaluate a hypothesis using sample probabilities. They comprise descriptive and inferential analysis. Inferential analysis aims to describe the population taking into account the data obtained from a sample, whereas descriptive analysis explains the trends of the sample. Three aspects must be taken into account when choosing a statistical test: the population, the sample size, and the measurement scale. Statistical tests are divided into two sets: parametric and non-parametric [9].
#### 4.2.1 Parametric
Parametric tests can only be used if the data shows a normal distribution [9].
#### Shapiro-Wilk test
The Shapiro-Wilk test is a parametric test based on a probability plot, in which the regression of the observations on the expected values of the hypothesized distribution is considered. The \(W\) statistic represents the ratio of two estimates of the variance of a normal distribution (see equation 8). This test has generally shown adequate results compared to the classic tests, especially when working with short-tailed distributions and with sample sizes of less than \(30\), since it shows high variability when both the symmetry and the sample size of the distribution are modified, especially between \(20\) and \(50\) participants. Its statistic is defined as [26]:
\[W=\frac{(\sum_{i=1}^{n}a_{i}x_{(i)})^{2}}{\sum_{i=1}^{n}(x_{i}-\overline{x})^{ 2}} \tag{8}\]
where \(x_{(i)}\) is the value occupying the \(i\)-th position in the ordered sample. The coefficients \(a_{i}\) are calculated as follows:
\[(a_{1},...,a_{n})=\frac{m^{T}V^{-1}}{(m^{T}V^{-1}V^{-1}m)^{1/2}} \tag{9}\]
where
\[m=(m_{1},...,m_{n})^{T} \tag{10}\]
Figure 7: Box plot of the longissimus muscle coactivation index (IC Long). Where Q1 represents \(25\,\%\) of the data, Q2 \(50\,\%\) and Q3 \(75\,\%\).
where \(m_{n}\) are the expected values of the order statistics of independent and identically distributed random variables sampled from the standard normal distribution, and \(V\) is the covariance matrix of those order statistics. The null hypothesis is rejected if \(W\) is too small. The value of \(W\) can range from \(0\) to \(1\).
_Interpretation_: being the null hypothesis that the population is normally distributed, if the \(p\)-value is less than \(\alpha\) (significance level) then the null hypothesis is rejected, i.e., data do not have a normal distribution. If the \(p\)-value is greater than \(\alpha\), it is concluded that this hypothesis cannot be rejected [37].
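In practice the test is available off the shelf; a sketch with SciPy and placeholder data:

```python
import numpy as np
from scipy import stats

sample = np.random.normal(size=25)        # placeholder feature values
w_stat, p_value = stats.shapiro(sample)
alpha = 0.05
is_normal = p_value > alpha               # cannot reject H0 -> treat as normal
```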
#### D'Agostino test
The objective of the D'Agostino test is to establish whether or not the given sample comes from a normally distributed population (see equation 11). The test is based on transformations of the kurtosis and the skewness of the sample.
\[DA=\frac{\sum_{i=1}^{n}(i-(\frac{n+1}{2}))X_{i}}{n^{2}\sigma_{n}} \tag{11}\]
where \(X_{i}\) indicates the datum occupying position \(i\) in the ordered sample, \(n\) indicates the amount of data in the sample, and \(\sigma_{n}\) is calculated using equation 12:
\[\sigma_{n}=\sqrt{\frac{\sum_{i=1}^{n}(X_{i}-\overline{X}_{n})^{2}}{n}} \tag{12}\]
In the table of the distribution of the D'Agostino statistic, the calculated \(D\) is compared with the pair of critical values for each level of significance; if it is smaller than the first value of the pair or greater than the second, the null hypothesis of population normality is rejected [5].
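A readily available alternative based on the same idea (skewness and kurtosis) is SciPy's D'Agostino-Pearson \(K^{2}\) test; the sketch below uses it as a stand-in for the tabulated \(D\) statistic, with placeholder data.

```python
import numpy as np
from scipy import stats

sample = np.random.normal(size=30)            # placeholder feature values
k2_stat, p_value = stats.normaltest(sample)   # D'Agostino-Pearson K^2 test
reject_normality = p_value < 0.05
```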
#### 4.2.2 Nonparametric
Non-parametric tests are based on random data, which seeks to identify behavior through different tests [9].
#### Kolmogorov Smirnov test
The Kolmogorov-Smirnov test is a goodness-of-fit test, that is, it measures the degree to which the observed distribution differs from another distribution (see equation 13). It is used when the number of data points is small [2].
\[KS=\max_{x}|F_{1}(x)-F_{2}(x)| \tag{13}\]
\[\text{if}\quad KS>VC_{\alpha}\quad H_{0}\ \text{is rejected}\]
\[\text{if}\quad KS<VC_{\alpha}\quad H_{0}\ \text{is not rejected}\]
Where \(KS\) is the test statistic, \(F_{1}\) and \(F_{2}\) are the two cumulative distributions being compared, \(VC_{\alpha}\) is the critical value at significance level \(\alpha\), and \(\max_{x}\) denotes the maximum difference between the two distributions; the absolute value is used so that the order of the distributions does not alter the result.
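For two empirical distributions (for example, a feature computed for patients and for controls), the test is a one-liner in SciPy; the samples below are placeholders.

```python
import numpy as np
from scipy import stats

sample_a = np.random.normal(size=20)      # e.g., patients' feature values
sample_b = np.random.normal(size=20)      # e.g., controls' feature values
ks_stat, p_value = stats.ks_2samp(sample_a, sample_b)
reject_h0 = p_value < 0.05                # distributions differ significantly
```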
## 5 Summary
sEMG has great potential to support clinicians with objective and quantifiable information about the health conditions of individuals affected by muscle disorders. For instance, lateral epicondylitis (LE), or tennis elbow, has been investigated by evaluating the coactivation of the forearm extensor muscles, given that pain at the insertion of the forearm muscles at the epicondyle has been suggested to affect daily life activities (gripping, shaking hands, or lifting a cup) [33]. Other examples include measuring the resistance of the trunk muscles to evaluate low back pain [2, 40]. Given the complexity of sEMG processing, this paper summarized relevant concepts that beginners in the field can use as a starting point in their studies. Specifically, the reader can get important insights into how to address a statistical analysis of sEMG signals in any application. At first, basic concepts of muscle anatomy were introduced, followed by signal processing concepts, such as signal preprocessing, features unique to EMG, and statistical concepts that allow analyzing EMG data to identify muscular patterns of diseases.
## 6 Acknowledgments
This work was supported by the Ministry of Science, Technology and Innovation (Minciencias) under Grant Cto. 489-2021 and the Universidad El Bosque under Grant PCI2019-10784. The authors thank Socrates Becerra for helping in the edition tasks.
|
2307.14903
|
Lipid bilayer fluidity and degree of order regulates small EVs
adsorption on model cell membrane
|
Small extracellular vesicles (sEVs) are known to play an important role in
the communication between distant cells and to deliver biological information
throughout the body. To date, many studies have focused on the role of sEVs
characteristics such as cell origin, surface composition, and molecular cargo
on the resulting uptake by the recipient cell. Yet, a full understanding of the
sEV fusion process with recipient cells and in particular the role of cell
membrane physical properties on the uptake are still lacking. Here we explore
this problem using sEVs from a cellular model of triple-negative breast cancer
fusing to a range of synthetic planar lipid bilayers both with and without
cholesterol, and designed to mimic the formation of raft-like nanodomains in
cell membranes. Using time-resolved Atomic Force Microscopy we were able to
track the sEVs interaction with the different model membranes, showing the
process to be strongly dependent on the local membrane fluidity. The strongest
interaction and fusion is observed over the less fluid regions, with sEVs even
able to disrupt ordered domains at sufficiently high cholesterol concentration.
Our findings suggest the biophysical characteristics of recipient cell
membranes to be crucial for sEVs uptake regulation.
|
Carolina Paba, Virginia Dorigo, Beatrice Senigagliesi, Nicolò Tormena, Pietro Parisse, Kislon Voitchovsky, Loredana Casalis
|
2023-07-27T14:47:06Z
|
http://arxiv.org/abs/2307.14903v1
|
# Lipid bilayer fluidity and degree of order regulates small EVs adsorption on model cell membrane
###### Abstract
Small extracellular vesicles (sEVs) are known to play an important role in the communication between distant cells and to deliver biological information throughout the body. To date, many studies have focused on the role of sEVs characteristics such as cell origin, surface composition, and molecular cargo on the resulting uptake by the recipient cell. Yet, a full understanding of the sEV fusion process with recipient cells and in particular the role of cell membrane physical properties on the uptake are still lacking. Here we explore this problem using sEVs from a cellular model of triple-negative breast cancer fusing to a range of synthetic planar lipid bilayers both with and without cholesterol, and designed to mimic the formation of 'raft'-like nanodomains in cell membranes. Using time-resolved Atomic Force Microscopy we were
able to track the sEVs interaction with the different model membranes, showing the process to be strongly dependent on the local membrane fluidity. The strongest interaction and fusion is observed over the less fluid regions, with sEVs even able to disrupt ordered domains at sufficiently high cholesterol concentration. Our findings suggest the biophysical characteristics of recipient cell membranes to be crucial for sEVs uptake regulation.
Extracellular vesicles, model membrane, uptake, fluidity, Atomic Force Microscopy.
## 1 Introduction
The plasma membrane has an essential role in maintaining cell homeostasis [1], and can actively regulate its molecular composition and shape in response to external stimuli [2, 3]. In particular, it is known that some dynamic membrane microdomains such as lipid rafts and caveolae contribute in regulating many cellular functions including cell proliferation, survival, and intracellular signaling through the constant local redistribution of membrane lipids [4, 5, 6]. This has major implications for protein sorting and molecular trafficking across the membrane [6]. Among the different molecular species involved, cholesterol is known to play a fundamental role for the correct functioning of membrane domains, regulating membrane fluidity and the structural integrity of lipid rafts [7, 8]. Cholesterol depletion can lead to an increased permeability to external pathogens, signaling molecules release, and in the worst case, lipid rafts disruption with alteration of membrane thickness. This, in turn, affects the signaling pathways and can even induce programmed cell death. Cholesterol accumulation in lipid rafts is equally problematic, creating a higher sensitivity to apoptosis [9]. Finally, cholesterol modulates both the cell membrane local composition and its lateral molecular organisation, fluidity, and hence intercellular communication processes including endocytic pathways [10]. One important aspect of lipid rafts - and indirectly cholesterol - is their involvement in the
release and uptake of a particular class of cell-derived vesicles called extracellular vesicles (EVs). EVs are now widely accepted as nanocarriers involved in cell-cell communication [11, 12, 13] and have been shown to take part in many pathophysiological processes such as cancer progression and metastasis formation, cell proliferation, and stimulation of the adaptive and innate immune system [14, 15]. EVs are typically divided into two sub-classes, microvesicles (MVs) and exosomes, which differ in their biogenesis pathway: budding of the membrane for MVs and the endocytic pathway for exosomes [16]. Their respective size ranges are slightly different, with MVs spanning \(100-1000\ nm\) and exosomes \(30-150\ nm\) [17, 18]. Since the different vesicle isolation methods rely on size-based separation, vesicles ranging from \(30\) to \(200\ nm\) (enriched in exosomes but also including small MVs) are often referred to as small Extracellular Vesicles (sEVs). sEVs have emerged as potential cancer biomarkers as their molecular composition reflects that of the originating cells; they are also considered optimal delivery nanocarriers as they mediate the communication between tumor and tumor-associated cells, escaping the immune response [19]. This landscape is further complicated by the presence of different pathways for sEVs to deliver their molecular cargo. One of these pathways is lipid raft mediated endocytosis, where the rafts are continuously assembling/disassembling to maintain cell homeostasis and regulate vesicle trafficking [20]. Delivery of the cargo can also occur through a mechanism initiated by some degree of fusion with the target membrane, a pathway mostly adopted by viruses. This last pathway induces a level of mixing of the sEV and target membranes, which become contiguous, something recently observed with umbilical cord mesenchymal stem cell (UC-MSC) sEVs from GMP production and a model membrane containing lipid rafts [21]. However, details on the dynamics of the interaction pathways of sEVs with target cells and on the specific role of each molecular player are still scarce and highly debated in the literature [12, 22, 23]. This is related to the small size and the high heterogeneity of sEVs [24], as well as to the high spatial and temporal resolution required for the detection of lipid raft dynamics. Several recent studies have investigated the role of the biophysical properties of the cell membrane on vesicle fusion and agglomeration rate,
showing that mechanical properties such as membrane curvature, fluidity and rigidity, all highly regulated by the cholesterol content, can affect vesicle fusion kinetics [21, 22, 25, 26]. In this study, we follow up on the raft-based pathway for regulating vesicle uptake and investigate the interaction of single sEVs isolated from a triple-negative breast cancer (TNBC) cell line with model supported lipid bilayers (SLBs) of different fluidity, ordered nanodomains, and cholesterol concentration. Using Atomic Force Microscopy (AFM) in solution, we aim to explore sEV fusion with membranes exhibiting quasi-physiological cholesterol concentrations and compositions reflecting the raft structures of in vivo systems. In particular, we focus on lipid membranes with two coexisting lipid phases: a tightly packed and ordered state called the liquid-ordered (\(L_{o}\)) phase, made of sphingolipid and cholesterol molecules, and a more fluid and disordered state called the liquid-disordered (\(L_{d}\)) phase, enriched in unsaturated phospholipids. The use of AFM enables us to track single EVs interacting and fusing with the membrane in situ and with nanoscale precision, and the subsequent evolution of the target membrane.
## 2 Results and discussion
### AFM of the model membrane
Supported lipid bilayers offer a powerful model membrane platform for studying membrane-vesicles interactions. They allow for the simultaneous analysis of the SLB morphology changes with quantitative information about the surface area of the different lipid phases, height modification, and real-time observation of the vesicle fusion process with the lipidic system. In order to gain microscopic insights into the sEV fusion process, we first performed a careful topographic analysis of the multicomponent-SLBs by means of AFM. To mimic the typical organisation of cell membranes in 'lipid raft' microdomains, the composition adopted comprises 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) which is a neutral and monounsaturated phospholipid (\(18:1\)), sphingomyelin (SM) that represents one of the most
abundant sphingolipids in the plasma membrane, and is characterized by long saturated fatty acyl chains [27], and cholesterol (Chol) that sterically interacts with the acyl chains of other lipids and preferentially with saturated phospholipids such as SM [28]. This mixture is representative of the outer membrane leaflet as it contains phosphatidylcholine and sphingomyelin as its main building blocks. To monitor the lipid phase behavior and the dependence of the SLB morphology on cholesterol, three cholesterol concentrations have been tested: \(5\)\(mol\%\), \(10\)\(mol\%\) and \(17\)\(mol\%\), with DOPC and SM kept at a fixed \(2:1\) (\(m/m\)) ratio. In the following sections the sample composition with \(17\)\(mol\%\) will be described in more depth and compared with a bilayer that has no sterol content. The \(17\)\(mol\%\) Chol falls in the typical biological range of \(15-50\)\(\%\) for the sterol component, and offers good reproducibility and stability when performing AFM imaging in liquid conditions [29, 30].
Typical examples of lipid phase separation as observed in AFM topographs are presented in Figure 1.
The formation of the liquid-ordered domains is visible for all three tested conditions, although they differ in height, area, and number. The taller \(L_{o}\) bilayer domains exhibit a
Figure 1: AFM topographic images of DOPC/SM \(2:1\) (m/m) SLB with (a) \(5\)\(mol\%\), (b) \(10\)\(mol\%\) and (c) \(17\)\(mol\%\) Chol. In each case a profile is shown to highlight the \(L_{d}\) (lower) and \(L_{o}\) (higher) domains, here acquired at room temperature in Tris \(10\)\(mM\).
maximum diameter of around \(0.5\;\mu m\) with \(5\;mol\%\) Chol, a value that increases with Chol and reaches around \(1.5\;\mu m\) at \(17\;mol\%\). The apparent number of domains changes very little with increasing cholesterol percentage, varying from an average value of \(113\pm 7.07\) (\(5\;mol\%\)) to \(128\pm 5.65\) (\(10\;mol\%\)) and \(135\pm 14.36\) (\(17\;mol\%\)). The increase of the area occupied by \(L_{o}\) domains is accompanied by a decrease of their relative height with respect to the surrounding DOPC. The total area occupied by \(L_{o}\) domains is directly related to the Chol content, in agreement with the theory of the preferential mixing of cholesterol with saturated lipids such as SM. This results in an increase of the area per lipid [31, 32] and in a 'cholesterol-condensing effect' on phospholipids, which thickens the \(L_{d}\) phase and reduces its height difference with the \(L_{o}\) domains [31, 33]. Lastly, it has been demonstrated that the transition temperature of a phospholipid system decreases with increasing cholesterol concentration [30]. This explains the lower number of \(L_{o}\) domains per scanned area observed for the \(5\;mol\%\) sample, where the transition starts at \(30^{\circ}C\), when compared with the \(10\;mol\%\) and \(17\;mol\%\) samples, where lower thermal fluctuations are required to promote the \(L_{o}\) phase nucleation. While helpful to explain the AFM observations, a full description of the systems should also take into account the impact that a rigid substrate, which tends to stabilize the lipids and promote order (lower \(T_{c}\)) [34], has on the bilayer, and the kinetics of the temperature control during sample cooling, which in turn influences the \(L_{o}\) nucleation process and growth [35].
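The domain number and area quantification above can be reproduced from the AFM topographs by simple image segmentation. The following Python sketch is an illustrative example of such an analysis, not the pipeline actually used in this work: it assumes a levelled height map stored as a 2D array, an illustrative pixel size, and a crude threshold placed between the \(L_{d}\) and \(L_{o}\) height levels.

```python
import numpy as np
from scipy import ndimage

def quantify_lo_domains(height_nm, pixel_nm, threshold_nm=None):
    """Count L_o domains and measure their area in a levelled AFM height map.

    height_nm    : 2D numpy array of heights (nm)
    pixel_nm     : lateral pixel size (nm) -- illustrative value below
    threshold_nm : height separating L_d from L_o; if None, use a crude
                   heuristic halfway between the median and the maximum
    """
    if threshold_nm is None:
        threshold_nm = 0.5 * (np.median(height_nm) + height_nm.max())

    mask = height_nm > threshold_nm                 # pixels assigned to the L_o phase
    labels, n_domains = ndimage.label(mask)         # connected-component labelling
    area_fraction = mask.mean()                     # fraction of the scanned area
    domain_areas = ndimage.sum(mask, labels, range(1, n_domains + 1)) * pixel_nm**2
    return n_domains, area_fraction, domain_areas   # areas in nm^2

# Demo on synthetic data standing in for a 5 x 5 um, 512 x 512 pixel scan
rng = np.random.default_rng(0)
demo = rng.normal(0.0, 0.05, (512, 512))            # flat L_d background + noise
demo[100:180, 100:180] += 0.9                        # one fake ~1 nm tall L_o patch
n, frac, areas = quantify_lo_domains(demo, pixel_nm=5000 / 512)
print(n, frac, areas.max())
```

Applying the same segmentation frame by frame is also a convenient way to follow the time evolution of the \(L_{o}\) area discussed in the next section.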
### Adsorption of the sEVs and local biophysical changes
With the model membrane system characterized, we then explore the interaction of sEVs isolated from the TNBC MDA-MB-231 cell line. The isolation protocol is reported in the Experimental Section, together with all the details of our model membrane systems. TNBC represents one of the most aggressive breast cancer subtypes, with a poor prognosis due to the absence of targetable receptors, high propensity for metastatic progression, and lack of effective chemotherapy treatments [36]. TNBC-derived sEVs have been thoroughly characterized in previous works from our group [37]. In particular, it was observed that TNBC-derived sEVs induce morphological as well as biomechanical phenotype changes in non-metastatic cancer cells toward higher aggressiveness. Here, a fixed concentration of sEVs is introduced into the aqueous solution
Figure 2: Comparative analysis of the area and number variations of \(L_{o}\) domains at increasing cholesterol percentage.
and the system evolution is followed in real time over a few minutes. A representative AFM image of sEV adsorption on the \(17\;mol\%\) Chol membrane is shown in Figure 3a.
The adsorption of sEVs induces small protrusions \(\sim 1\;nm\) above the \(L_{o}\) domains, accompanied by a local destabilisation of the \(L_{o}\) region at the edges of the interaction site. This destabilisation appears as fluid-like regions surrounding the protrusions (blue arrows in Figure 3) and as the formation of pores confined to the outer leaflet of the supported lipid bilayer. Given the typical \(5-6\;nm\) thickness of the membrane as measured from the SLB defects [21, 38], and the average \(15.43\pm 5.77\;nm\) height of sEVs when directly adsorbed on the mica substrate (see Supplementary Information), we interpret the localized protrusions as sEV clusters whose adsorption involves full mixing with the SLB and, possibly, molecular cargo release through pore formation. No morphological changes were observed at the DOPC (\(L_{d}\)) level; only a local interaction with the \(L_{o}\) domains was detectable. To understand whether the protrusions are EV-related components or the result of the \(L_{o}\) degradation process, a time-resolved analysis was performed to track the process evolution (Figure 3a-c). A drastic rearrangement of the \(L_{o}\) domains is
Figure 3: Time-resolved AFM topographic images of EVs (MDA-MB-231 cell line) interacting with DOPC/SM \(2:1\) (m/m) SLB with \(17\;mol\%\) Chol with corresponding height profiles, acquired at \(27\;^{\circ}C\) in Tris buffer \(10\;mM\), with a time-lapse of \(10\) minutes.
visible, with a progressive melting into the surrounding SLB in favor of a positive growth of both the area occupied by the lipid-vesicle protrusions and that of the SLB invaginations. After an initial step of lateral lipid redistribution with no significant morphological variations, the area occupied by \(L_{o}\) domains progressively decreases, starting from the small, highly curved defects of the \(L_{o}\) phase and evolving laterally until the melting, with the accompanying \(L_{d}\) phase expansion, is completed. Simultaneously, a slight increase in the area occupied by pores within the \(L_{o}\) phase takes place. The 'melting' effect of sEVs on a planar lipid bilayer has previously been observed by our group [21], where sEVs from the UC-MSC cell line were tested in their interaction with an SLB enriched with 5 \(mol\%\) cholesterol. Here, sEVs lead to a dramatic fluidification of the \(L_{o}\) phase, in contrast to the previous study, where a mixing between the sEVs and the \(L_{o}\) phase was observed instead, with the formation of high-granularity patches protruding 4 \(nm\) above the SLB. These apparent differences in the docking process and in the resulting impact on the SLB suggest possible intrinsic differences in the EV adsorption process based on the sEV origin and on the cholesterol content of the target membrane. To further investigate the impact of the sEV origin, we tested the behaviour of sEVs isolated from the UC-MSC cell line with the same target membrane containing 17 \(mol\%\) Chol, obtaining results qualitatively similar to what was previously reported [21] (Figure 3S, Supplementary Information). Given the relevance of lipid raft integrity in regulating cell proliferation, adhesion, and invasion [39], these results further strengthen the idea that EVs can potently alter membrane properties. They also underline the need for screening approaches that consider, besides the molecular cargo and surface properties of the EVs, the molecular composition of the cell membrane, so that the ability of sEVs to alter the membrane properties of recipient cells, and thus both sides of the interaction process, can be investigated.
### sEV interaction is regulated by lipid mobility
The previous results highlight the importance of ordered nano-domains for the adsorption and fusion of sEVs. The well-established importance of cholesterol in modulating the emergence, stability and fate of these nano-domains makes it an obvious agent for indirectly modulating sEV uptake in recipient cells. It is however not clear at this stage to what extent the effect is physical, in terms of membrane biomechanics and fluidity, or chemical, through specific interactions between cholesterol and adsorbing sEVs. To further study the impact of membrane fluidity on modulating sEV adsorption, two control compositions with 0 % Chol content were also analysed at \(27^{\circ}C\), containing either DOPC and SM \(2:1\) or DOPC and DPPC \(2:1\). In these conditions, SM domains are expected to form an ordered phase, also called solid-ordered (\(S_{o}\)), characterized by a higher degree of order and less fluidity, surrounded by fluid DOPC. Similarly, DPPC domains should form highly ordered gel-phase domains within the DOPC. For both membranes, AFM imaging confirms the expectations (Figure 4a,b), with SM forming smaller domains covering an average percentage area of 1.17 % and protruding 1.75 \(nm\) over the DOPC layer, compared to bigger DPPC domains, occupying an average 2.8 % of the membrane and with a relative height of \(2\)\(nm\) over the surrounding DOPC. The SM \(S_{o}\) domains are also more irregular in height than DPPC, showing two different levels at \(0.75\)\(nm\) and \(1.75\)\(nm\) above the DOPC layer, suggesting that the phase transition of the SM during the cooling is not uniform. Indeed, the two levels can be explained by a leaflet-by-leaflet phase transition where the SM molecules in contact with the substrate solidify first [40, 41]. This is also consistent with the fact that DPPC displays a highly cooperative phase transition, characterized by a sharp peak at the main \(T_{m}\) in differential scanning calorimetry, whereas SM shows a single endothermic peak with a wide transition range, related to the heterogeneity of the fatty acids of the lipid [42, 43].
Figure 4: AFM topographic images of DOPC/SM and DOPC/DPPC \(2:1\) (m/m) SLB before (a,b) and after (c,d) EVs (MDA-MB-231 cell line) interaction with corresponding height profiles, acquired at \(27^{\circ}C\) in Tris buffer \(10\;mM\).
The interaction of MDA-MB-231 sEVs with the SM \(S_{o}\) phase is illustrated in Figure 4c, showing the formation of protrusions 6 \(nm\) above the lipid domains. Interestingly, the sEV clusters that co-localise with the SM domains are characterized by the largest height. Moreover, the number of interaction sites per scanned area is higher compared to the membrane with 17 \(mol\%\) Chol, indicating an enhanced EV interaction with the planar lipid bilayer. However, no local morphological variations can be observed over time, suggesting that the sEVs are no longer able to mix their lipidic component with that of the SLB. A comparative experiment was conducted on the DOPC/DPPC membrane, which displays a similar degree of order and level of saturation to the model system with SM, to rule out a specific chemical affinity of the sEVs for SM. Also in this case (Figure 4d), a specific interaction of MDA-MB-231 sEVs with the ordered domains is observed. However, contrary to the SM domains, a mixing with the vesicles is visible, inducing an increase of the relative height of the \(S_{o}\) domains (profile in Figure 4d). This is confirmed by AFM revealing the overlapping of multiple layers and the presence of a 'vesicle-like' morphology over the DPPC domains. To fully confirm the hypothesis of preferential sEV mixing with highly ordered domains, two control experiments were performed using single-component SLBs made of either pure DOPC or pure DPPC. The results, reported in Figure 2S of the Supplementary Information, confirm that sEVs do not interact with the disordered DOPC SLB, while a maximal interaction can be observed for the DPPC SLB, resulting in a reshaping of the SLB morphology over a larger time scale compared to the system enriched with cholesterol. These results highlight that a reduced fluidity of the 'lipid raft' domains is needed for a fast EV adsorption process and cargo release onto the SLB. Moreover, the structural SLB modification leading to 'lipid raft' fluidification further stresses the importance of the molecular orientation and packing of the recipient membrane lipids in controlling the interaction and uptake of sEVs over time. These results lay the basis for further investigating the physicochemical mechanisms of the cell membrane, and in particular of lipid rafts, as a preferential route of interaction with sEVs.
## 3 Conclusions
The development of a multi-component SLB mimicking the 'lipid-raft' structure of cell model membranes allowed us to study the driving forces regulating the uptake of sEVs isolated from breast cancer cell lines. Our findings, based on fast AFM topographic imaging, indicate a preferential sEV affinity for the ordered lipid raft-like domains. However, the adsorption process follows different pathways depending on lipid bilayer composition and fluidity. Working at the submicrometric level and performing a time-resolved analysis, it was possible to identify two interaction pathways. For a fluid SLB enriched with cholesterol, the adsorption process is characterized by the formation of sEV clusters protruding over the outer layer of the model system. In the same time frame, pores open close to the interaction site, followed by a fluidification step that leads to the loss of lipid raft integrity. For a rigid system without cholesterol, instead, the adsorption pathway follows the budding-fission mechanism [44], with maximal affinity for the solid-ordered domains. This alternative mechanism is described by the fusion of the vesicles with the outer layer of the model membrane and the formation of an intermediate regular lipid phase due to full lipid mixing with the vesicles. In such a rigid system, the interaction is characterized by the formation of a stable state not prone to fracture, which leads to a large-scale shape modification over time. Our study provides evidence that the degree of sEV mixing with lipids is highly regulated by the vesicle origin but also by the fluidity of the SLB. Although the lipid composition is limited to a restricted choice of lipids and cholesterol range, we believe that our results provide a strong message regarding the chemical and physical forces regulating vesicle uptake, underlining that both cell membrane composition and lateral organization must be taken into consideration to rationalize sEV interaction and cargo release in the recipient cell. Moreover, the side effects on lipid raft integrity are not negligible either, as it has been demonstrated that membrane domain disruption is fundamental for the regulation of molecular trafficking across the membrane and for cell survival [39]. Furthermore, it is interesting to note that this versatile platform can be applied to study the impact of surface functionalization strategies (e.g. fusogenic proteins) on the vesicle uptake pathways [45], but it can also be easily integrated, besides cholesterol molecules, with other lipids and proteins. In particular, the reconstitution of transmembrane proteins in the proposed model would be an innovative approach for studying transmembrane protein localization and activity, when the planar lipid bilayer is fabricated over a pore-spanning membrane [46, 47]. We foresee that, with some further development of the model, we can build a versatile and broadly accessible platform for the investigation of sEV uptake pathways.
## 4 Experimental Section
### sEV isolation and characterization
For sEV isolation, MDA-MB-231 cells (\(2\cdot 10^{6}\)) were grown in a 175 cm\({}^{2}\) flask in DMEM (Sigma-Aldrich) with 20 % FBS (EuroClone) for 3 days. The cells were then washed two times with PBS and three times with DMEM without serum. The cells were further incubated at 37\({}^{\circ}\)C. After 24 h the medium was collected and centrifuged at 300 \(g\) and 4\({}^{\circ}\)C (Allegra X-22R, Beckman Coulter) for 10 min. With a 0.22 \(\mu\)m filter, the supernatant was filtered, poured into Amicon Filter Units (Ultracel-PLPLHK, 100 kDa cutoff, Merck Millipore, UFC9100) and centrifuged at 3900\(g\)/4\({}^{\circ}\)C for 20 min (Allegra X-22R, Beckman Coulter). The samples collected were then transferred into the polypropylene (PP) ultracentrifuge tubes (Beckman Coulter, 361623), filled with PBS and centrifuged at 120000 \(g\)/4\({}^{\circ}\)C for 2 h in the ultracentrifuge (70.1 Ti rotor, k-factor 36, Beckman Coulter, Brea, CA, USA). After removing the supernatant, the pellets were resuspended in 200 \(\mu\)L of PBS, aliquoted, and conserved at \(-20\)\({}^{\circ}\)C until usage.
### Small unilamellar vesicles preparation
The lipids, 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC, \(18:1\) (\(\Delta 9-Cis\)) PC), 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC, 16:0), sphingomyelin (brain, porcine, SM), and cholesterol (ovine wool, \(>98\%\)), were purchased from Avanti Polar Lipids. The single lipids, suspended in chloroform, were mixed at the desired concentration and placed under vacuum overnight. The dry film was then hydrated with TRIS buffer (10 mM, \(pH=7.4\)) to obtain a final concentration of 1 mg/mL. The lipidic mixture was sonicated for 40 min at 45\({}^{\circ}\)C and vortexed. Lastly, the resulting solution was extruded 51 times at 40\({}^{\circ}\)C through a membrane with 100 nm pores (PC Membranes 0.1 \(\mu\)m, Avanti Polar Lipids).
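Since DOPC and SM are dosed by mass (\(2:1\) m/m) while cholesterol is specified in mol%, preparing the mixtures requires a small mass-to-mole conversion. The Python sketch below illustrates this bookkeeping only: the average molecular weights are approximate, lot-dependent values assumed here, not taken from the original protocol.

```python
# Approximate average molecular weights in g/mol (assumed, lot-dependent)
MW = {"DOPC": 786.1, "SM": 731.1, "CHOL": 386.7}

def lipid_masses(chol_mol_percent, total_mg=1.0, dopc_sm_mass_ratio=2.0):
    """Masses (mg) of DOPC, SM and cholesterol for a DOPC/SM 2:1 (m/m) mixture
    with the requested cholesterol mol%, normalised to total_mg of lipid."""
    x = chol_mol_percent / 100.0
    # start from 1 mg of phospholipid split 2:1 by mass, then add cholesterol
    m_dopc = dopc_sm_mass_ratio / (dopc_sm_mass_ratio + 1.0)
    m_sm = 1.0 / (dopc_sm_mass_ratio + 1.0)
    n_pl = m_dopc / MW["DOPC"] + m_sm / MW["SM"]     # mmol of phospholipid
    n_chol = n_pl * x / (1.0 - x)                    # mol% counted over all lipids
    m_chol = n_chol * MW["CHOL"]
    scale = total_mg / (m_dopc + m_sm + m_chol)
    return {"DOPC": m_dopc * scale, "SM": m_sm * scale, "CHOL": m_chol * scale}

for pct in (5, 10, 17):
    print(pct, lipid_masses(pct))    # per-mg breakdown for 5, 10 and 17 mol% Chol
```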
### Supported lipid bilayers preparation
Lipids were combined in three lipid mixtures: DOPC/SM (2:1 m/m) with Chol (5, 10, 17 mol%), DOPC/SM and DOPC/DPPC in a fixed molar ratio of 2:1, and lastly, DOPC and DPPC alone. The obtained extruded solution was diluted in TRIS/CaCl\({}_{2}\) buffer to a final concentration of 0.4 mg/mL with 2 mM CaCl\({}_{2}\). For all compositions, the vesicle fusion method was adopted as a standard procedure for planar lipid bilayer preparation. The sample was deposited on a freshly cleaved mica substrate (Nano-Tec V-1 grade, \(0.15-0.21\)\(mm\) thickness, 10 \(mm\) diameter), incubated at 50\({}^{\circ}\)C for 30 min, and slowly cooled to 27\({}^{\circ}\)C, then extensively washed with TRIS buffer 10 mM.
### Atomic Force Microscopy imaging
AFM was performed on a commercially available microscope (Cypher ES, Asylum Research), operating at 27\({}^{\circ}C\) in high-resolution AC mode. Sharp nitride levers (\(SNL-10\), A geometry, Bruker Corporation) were used for imaging in liquid conditions. Images were acquired as \(512\times 512\) pixel frames at 2.44 Hz.
## Author contributions
C. P., L. C., K. V. and P. P. conceived and planned the experiments. C. P. performed the atomic force microscopy experiments and analysed the data. V. D. contributed to atomic
force microscopy measurements. V.D. and B.S. contributed to EV isolation and molecular characterization. N.T. contributed to atomic force microscopy training. C. P. and L. C. took the lead in writing the manuscript. All authors provided critical feedback and helped shape the research, analysis and manuscript.
## Conflicts of interest
There are no conflicts to declare.
## Acknowledgments
The authors wish to thank M. Gimona from Paracelsus Medical University (Salzburg, Austria) for providing the EV-UC-MSC samples. We gratefully acknowledge the Structural Biology Laboratory at Elettra-Sincrotrone Trieste S.C.p.A. for the instrumentation and constant support during the cell culture experiments. We acknowledge the Soft and Bio NanoInterfaces Laboratory at Durham University for the precious and continuous support. The authors and in particular C. P. are very grateful to CERIC-ERIC for financial funding within the framework of the INTEGRA and INTEGRA's PhD project.
## References
* Tekpli et al. 2013 Tekpli, X.; Holme, J. A.; Sergent, O.; Lagadic-Gossmann, D. Role for membrane remodeling in cell death: implication for health and disease. _Toxicology_**2013**, _304_, 141-157.
* Sezgin et al. 2017 Sezgin, E.; Levental, I.; Mayor, S.; Eggeling, C. The mystery of membrane organization: composition, regulation and roles of lipid rafts. _Nature reviews Molecular cell biology_**2017**, _18_, 361-374.
* Simons and Sampaio 2011 Simons, K.; Sampaio, J. L. Membrane organization and lipid rafts. _Cold Spring Harbor perspectives in biology_**2011**, \(3\), a004697.
* Lingwood and Simons 2010 Lingwood, D.; Simons, K. Lipid rafts as a membrane-organizing principle. _science_**2010**, _327_, 46-50.
* Smart et al. 1999 Smart, E. J.; Graf, G. A.; McNiven, M. A.; Sessa, W. C.; Engelman, J. A.; Scherer, P. E.; Okamoto, T.; Lisanti, M. P. Caveolins, liquid-ordered domains, and signal transduction. _Molecular and cellular biology_**1999**, _19_, 7289-7304.
* Zajchowski and Robbins 2002 Zajchowski, L. D.; Robbins, S. M. Lipid rafts and little caves: compartmentalized signalling in membrane microdomains. _European Journal of Biochemistry_**2002**, _269_, 737-752.
* Crane and Tamm 2004 Crane, J. M.; Tamm, L. K. Role of cholesterol in the formation and nature of lipid rafts in planar and spherical model membranes. _Biophysical journal_**2004**, _86_, 2965-2979.
* Engberg et al. 2016 Engberg, O.; Hautala, V.; Yasuda, T.; Dehio, H.; Murata, M.; Slotte, J. P.; Nyholm, T. K. The affinity of cholesterol for different phospholipids affects lateral segregation in bilayers. _Biophysical journal_**2016**, _111_, 546-556.
* Li et al. 2006 Li, Y. C.; Park, M. J.; Ye, S.-K.; Kim, C.-W.; Kim, Y.-N. Elevated levels of cholesterol-rich lipid rafts in cancer cells are correlated with apoptosis sensitivity induced by cholesterol-depleting agents. _The American journal of pathology_**2006**, _168_, 1107-1118.
* Hanzal-Bayer and Hancock 2007 Hanzal-Bayer, M. F.; Hancock, J. F. Lipid rafts and membrane traffic. _FEBS letters_**2007**, _581_, 2098-2104.
* Huyan et al. 2020 Huyan, T.; Li, H.; Peng, H.; Chen, J.; Yang, R.; Zhang, W.; Li, Q. Extracellular vesicles-advanced nanocarriers in cancer therapy: progress and achievements. _International journal of nanomedicine_**2020**, 6485-6502.
* Herrmann et al. 2021 Herrmann, I. K.; Wood, M. J. A.; Fuhrmann, G. Extracellular vesicles as a next-generation drug delivery platform. _Nature nanotechnology_**2021**, _16_, 748-759.
* Araujo-Abad et al. 2022 Araujo-Abad, S.; Saceda, M.; de Juan Romero, C. Biomedical application of small extracellular vesicles in cancer treatment. _Advanced drug delivery reviews_**2022**, 114117.
* Becker et al. 2016 Becker, A.; Thakur, B. K.; Weiss, J. M.; Kim, H. S.; Peinado, H.; Lyden, D. Extracellular vesicles in cancer: cell-to-cell mediators of metastasis. _Cancer cell_**2016**, _30_, 836-848.
* Bebelman et al. 2018 Bebelman, M. P.; Smit, M. J.; Pegtel, D. M.; Baglio, S. R. Biogenesis and function of extracellular vesicles in cancer. _Pharmacology & therapeutics_**2018**, _188_, 1-11.
* Kalluri and LeBleu 2020 Kalluri, R.; LeBleu, V. S. The biology, function, and biomedical applications of exosomes. _Science_**2020**, _367_, eaau6977.
* Van der Pol et al. 2014 Van der Pol, E.; Coumans, F.; Grootemaat, A.; Gardiner, C.; Sargent, I. L.; Harrison, P.; Sturk, A.; Van Leeuwen, T.; Nieuwland, R. Particle size distribution of exosomes and microvesicles determined by transmission electron microscopy, flow cytometry, nanoparticle tracking analysis, and resistive pulse sensing. _Journal of Thrombosis and Haemostasis_**2014**, _12_, 1182-1192.
* Coumans et al. 2017 Coumans, F. A.; Brisson, A. R.; Buzas, E. I.; Dignat-George, F.; Drees, E. E.; El-Andaloussi, S.; Emanueli, C.; Gasecka, A.; Hendrix, A.; Hill, A. F., et al. Methodological guidelines to study extracellular vesicles. _Circulation research_**2017**, _120_, 1632-1648.
* Maia et al. 2018 Maia, J.; Caja, S.; Strano Moraes, M. C.; Couto, N.; Costa-Silva, B. Exosome-based cell-cell communication in the tumor microenvironment. _Frontiers in cell and developmental biology_**2018**, \(6\), 18.
* Mulcahy et al. 2014 Mulcahy, L. A.; Pink, R. C.; Carter, D. R. F. Routes and mechanisms of extracellular vesicle uptake. _Journal of extracellular vesicles_**2014**, \(3\), 24641.
* Perissinotto et al. 2021 Perissinotto, F.; Rondelli, V.; Senigagliesi, B.; Brocca, P.; Almasy, L.; Bottyan, L.; Merkel, D. G.; Amenitsch, H.; Sartori, B.; Pachler, K., et al. Structural insights into fusion mechanisms of small extracellular vesicles with model plasma membranes. _Nanoscale_**2021**, _13_, 5224-5233.
* Russell et al. 2019 Russell, A. E.; Sneider, A.; Witwer, K. W.; Bergese, P.; Bhattacharyya, S. N.; Cocks, A.; Cocucci, E.; Erdbrugger, U.; Falcon-Perez, J. M.; Freeman, D. W., et al. Biological membranes in EV biogenesis, stability, uptake, and cargo transfer: an ISEV position paper arising from the ISEV membranes and EVs workshop. _Journal of Extracellular Vesicles_**2019**, \(8\), 1684862.
* French et al. 2017 French, K. C.; Antonyak, M. A.; Cerione, R. A. Extracellular vesicle docking at the cellular port: Extracellular vesicle binding and uptake. Seminars in cell & developmental biology. 2017; pp 48-55.
* Thery et al. 2018 Thery, C.; Witwer, K. W.; Aikawa, E.; Alcaraz, M. J.; Anderson, J. D.; Andriantsitohaina, R.; Antoniou, A.; Arab, T.; Archer, F.; Atkin-Smith, G. K., et al. Minimal information for studies of extracellular vesicles 2018 (MISEV2018): a position statement of the International Society for Extracellular Vesicles and update of the MISEV2014 guidelines. _Journal of extracellular vesicles_**2018**, \(7\), 1535750.
* Grouleff et al. 2018 Grouleff, J.; Irudayam, S. J.; Skeby, K. K.; Schiott, B. The influence of cholesterol on membrane protein structure, function, and dynamics studied by molecular dynamics
simulations. _Biochimica et Biophysica Acta (BBA)-Biomembranes_**2015**, _1848_, 1783-1795.
* Caselli et al. 2021 Caselli, L.; Ridolfi, A.; Cardellini, J.; Sharpnack, L.; Paolini, L.; Bruccale, M.; Valle, F.; Montis, C.; Bergese, P.; Berti, D. A plasmon-based nanoruler to probe the mechanical properties of synthetic and biogenic nanosized lipid vesicles. _Nanoscale Horizons_**2021**, \(6\), 543-550.
* Niemela et al. 2006 Niemela, P. S.; Hyvonen, M. T.; Vattulainen, I. Influence of chain length and unsaturation on sphingomyelin bilayers. _Biophysical journal_**2006**, _90_, 851-863.
* Marquardt et al. 2016 Marquardt, D.; Kucerka, N.; Wassall, S. R.; Harroun, T. A.; Katsaras, J. Cholesterol's location in lipid bilayers. _Chemistry and Physics of Lipids_**2016**, _199_, 17-25.
* Sullan et al. 2010 Sullan, R. M. A.; Li, J. K.; Hao, C.; Walker, G. C.; Zou, S. Cholesterol-dependent nanomechanical stability of phase-segregated multicomponent lipid bilayers. _Biophysical journal_**2010**, _99_, 507-516.
* Redondo-Morata et al. 2012 Redondo-Morata, L.; Giannotti, M. I.; Sanz, F. Influence of cholesterol on the phase transition of lipid bilayers: a temperature-controlled force spectroscopy study. _Langmuir_**2012**, _28_, 12851-12860.
* Ma et al. 2016 Ma, Y.; Ghosh, S. K.; DiLena, D. A.; Bera, S.; Lurio, L. B.; Parikh, A. N.; Sinha, S. K. Cholesterol partition and condensing effect in phase-separated ternary mixture lipid multilayers. _Biophysical journal_**2016**, _110_, 1355-1366.
* McMullen et al. 2004 McMullen, T. P.; Lewis, R. N.; McElhaney, R. N. Cholesterol-phospholipid interactions, the liquid-ordered phase and lipid rafts in model and biological membranes. _Current opinion in colloid & interface science_**2004**, \(8\), 459-468.
* Hung et al. 2007 Hung, W.-C.; Lee, M.-T.; Chen, F.-Y.; Huang, H. W. The condensing effect of cholesterol in lipid bilayers. _Biophysical journal_**2007**, _92_, 3960-3967.
* Leidy et al. 2002 Leidy, C.; Kaasgaard, T.; Crowe, J. H.; Mouritsen, O. G.; Jorgensen, K. Ripples and the formation of anisotropic lipid domains: imaging two-component supported double bilayers by atomic force microscopy. _Biophysical journal_**2002**, _83_, 2625-2633.
* Blanchette et al. 2008 Blanchette, C. D.; Orme, C. A.; Ratto, T. V.; Longo, M. L. Quantifying growth of symmetric and asymmetric lipid bilayer domains. _Langmuir_**2008**, _24_, 1219-1224.
* Lee et al. 2019 Lee, K.-L.; Kuo, Y.-C.; Ho, Y.-S.; Huang, Y.-H. Triple-negative breast cancer: current understanding and future therapeutic breakthrough targeting cancer stemness. _Cancers_**2019**, _11_, 1334.
* Senigagliesi et al. 2022 Senigagliesi, B.; Samperi, G.; Cefarin, N.; Gneo, L.; Petrosino, S.; Apollonio, M.; Caponnetto, F.; Sgarra, R.; Collavin, L.; Cesselli, D., et al. Triple negative breast cancer-derived small extracellular vesicles as modulator of biomechanics in target cells. _Nanomedicine: Nanotechnology, Biology and Medicine_**2022**, _44_, 102582.
* Balgavy et al. 2001 Balgavy, P.; Dubnickova, M.; Kucerka, N.; Kiselev, M. A.; Yaradaikin, S. P.; Uhrkova, D. Bilayer thickness and lipid interface area in unilamellar extruded 1, 2-diacylglycerphosphatidylcholine liposomes: a small-angle neutron scattering study. _Biochimica et Biophysica Acta (BBA)-Biomembranes_**2001**, _1512_, 40-52.
* Badana et al. 2016 Badana, A.; Chintala, M.; Varikuti, G.; Pudi, N.; Kumari, S.; Kappala, V. R.; Malla, R. R. Lipid raft integrity is required for survival of triple negative breast cancer cells. _Journal of breast cancer_**2016**, _19_, 372-384.
* Rinia et al. 2001 Rinia, H. A.; Snel, M. M.; van der Eerden, J. P.; de Kruijff, B. Visualizing detergent resistant domains in model membranes with atomic force microscopy. _Febs Letters_**2001**, _501_, 92-96.
* Alessandrini and Facci 2014 Alessandrini, A.; Facci, P. Phase transitions in supported lipid bilayers studied by AFM. _Soft matter_**2014**, _10_, 7145-7164.
* Demetzos 2008 Demetzos, C. Differential scanning calorimetry (DSC): a tool to study the thermal behavior of lipid bilayers and liposomal stability. _Journal of liposome research_**2008**, _18_, 159-173.
* Nyholm et al. 2003 Nyholm, T. K.; Nylund, M.; Slotte, J. P. A calorimetric study of binary mixtures of dihydrosphingomyelin and sterols, sphingomyelin, or phosphatidylcholine. _Biophysical journal_**2003**, _84_, 3138-3146.
* Liu et al. 2023 Liu, L.; Duan, C.; Wang, R. Kinetic Pathway and Micromechanics of Vesicle Fusion/Fission. 2023.
* Verta et al. 2022 Verta, R.; Grange, C.; Skovronova, R.; Tanzi, A.; Peruzzi, L.; Deregibus, M. C.; Camussi, G.; Bussolati, B. Generation of Spike-Extracellular Vesicles (S-EVs) as a Tool to Mimic SARS-CoV-2 Interaction with Host Cells. _Cells_**2022**, _11_, 146.
* Teiwes et al. 2021 Teiwes, N. K.; Mey, I.; Baumann, P. C.; Strieker, L.; Unkelbach, U.; Steinem, C. Pore-Spanning Plasma Membranes Derived from Giant Plasma Membrane Vesicles. _ACS Applied Materials & Interfaces_**2021**, _13_, 25805-25812.
* Muhlenbrock et al. 2020 Muhlenbrock, P.; Herwig, K.; Vuong, L.; Mey, I.; Steinem, C. Fusion pore formation observed during SNARE-mediated vesicle fusion with pore-spanning membranes. _Biophysical Journal_**2020**, _119_, 151-161.
Supplementary Information: Lipid bilayer fluidity and degree of order regulates small EVs adsorption on model cell membrane
Carolina Paba
Department of Physics, University of Trieste, 34127 Trieste, Italy
Virginia Dorigo
Department of Physics, University of Durham, Durham DH1 3LE, United Kingdom
Beatrice Senigagliesi
Elettra Sincrotrone Trieste, 34149 Basovizza TS, Italy
Nicolo Tormena
Department of Physics, University of Trieste, 34127 Trieste, Italy
Pietro Parisse
Kislon Voitchovsky
Loredana Casalis
###### Abstract
For AFM analysis of individual sEVs isolated from the MDA-MB-231 cell line, a freshly cleaved mica disc of 10 \(mm\) diameter and \(0.15-0.21\)\(mm\) thickness (from Micro to Nano) was first incubated with 30 \(\mu L\) of Poly-L-Ornithine (from Sigma-Aldrich) for 20 \(min\); after extensive washing with Milli-Q \(H_{2}O\), 30 \(\mu L\) of sEVs were allowed to incubate for 20 \(min\) and then gently rinsed with 50 \(\mu L\) of 10 mM PBS before AFM imaging.
As reported in the scatter plot of Figure 1S, the typical sEV height was \(15.43\pm 5.77\ nm\), with a mean diameter equal to \(49.73\ \pm\ 18.24\ nm\), calculated over 112 vesicles. These values reside in the typical range that can be found in the literature [1, 2].
## 2 MDA-MB-231 sEVs affinity test with single component SLB
To confirm the hypothesis of the preferential mixing of sEVs (from the MDA-MB-231 cell line) with the \(S_{o}\) phase of the SLB, the vesicles were tested in their interaction with a single-component SLB composed of DOPC (Figure 2Sa) or DPPC (Figure 2Sb). In the first composition, as expected and reported in Figure 2Sc, sEVs colocalize with the DOPC defects, without interacting with the surrounding SLB. The opposite behavior is reported in Figure 2Sd for the DPPC SLB, where the sEVs cover the whole available area. As previously reported, sEV mixing with DPPC leads to the formation of an intermediate phase, regular in height, with a thickness comparable to the total thickness of the bilayer.
Figure 1S: AFM size characterisation of sEVs from MDA-MB-231 cell line, imaged in PBS 1X buffer over MICA substrate.
Figure 2S: AFM topographic images of DOPC and DPPC SLB before (a,b) and after (c,d) sEVs (MDA-MB-231 cell line) interaction with corresponding height profiles, acquired at \(27^{\circ}C\) in Tris buffer \(10\;mM\).
## 3 UC-MSC sEVs interaction with model membrane
To study the differences in the sEV interaction process as a function of the sEV origin, sEVs isolated from human umbilical cord-derived mesenchymal stromal cells (UC-MSCs) were added to the SLB composed of DOPC and SM with \(17\ mol\%\) of cholesterol. From the AFM image reported in Figure 3Sa, it can be observed that the sEV mixing is localised at the interface of the \(L_{o}/L_{d}\) domains, forming local invaginations with an average depth of \(0.5\ nm\) at the level of the \(L_{d}\) phase. This result is in agreement with the work proposed by our group [3] for the same sEV sample, where it was attributed to the formation of a mixed phase due to sEV fusion with the SLB. A time-resolved analysis was also performed to track the evolution of the sEV mixing with the SLB. Figure 3Sb reports the same area analysed \(40\ min\) after the sEV uptake. A positive growth of the area occupied by sEV patches can be noticed, accompanied by a dramatic decrease of the \(L_{o}\) domains. However, the process evolution differs from what can be observed for the same SLB composition interacting with the sEVs isolated from the MDA-MB-231 cell line, as reported in the main text. In the latter, the \(L_{o}\) melting was not accompanied by any significant increase over time of the area occupied by the sEV patches, while here this phenomenon is more pronounced. This can be attributed to intrinsic differences in sEV composition and in the mechanisms of interaction with the proposed model system.
|
2303.17725
|
Spectral equations and ground state for the modular $\sinh$-Gordon model
|
We study the modular pair of $TQ$ equations for the quantum $\sin$-Gordon
model in the framework of non-compact $\mathcal{U}_{q,q^*}(\widehat{sl}_2)$. We
assume some conjectures for the thermodynamic limit allowing one to obtain its
ground state.
|
Sergey Sergeev
|
2023-03-30T21:58:08Z
|
http://arxiv.org/abs/2303.17725v1
|
###### Abstract
We study the modular pair of \(TQ\) equations for the quantum sin-Gordon model in the framework of non-compact \({\cal U}_{q,q^{*}}(\widehat{sl}_{2})\). We assume some conjectures for the thermodynamic limit allowing one to obtain its ground state.
**Spectral equations and ground state for the modular \(\sinh\)-Gordon model.**
Sergey M. Sergeev.
_Department of Theoretical Physics, Research School of Physics and Engineering,_
_Australian National University, Canberra, ACT 0200, Australia_
_and_
_Faculty of Science and Technology,_
_University of Canberra, Bruce ACT 2617, Australia_
## 1 Introduction.
In this paper we discuss some analytical properties of Baxter's \(Q\)-operator for the quantum non-compact sinh-Gordon model considered in the framework of the modular double \({\cal U}_{q,\overline{q}}(\widehat{sl}_{2})\) [1, 2]. We do not discuss the construction of the model here. Instead, we start directly from its quantum curve and the pair of \(TQ\) equations. The quantisation condition [3, 4, 5, 6] is the absence of poles in the _eigenfunction_ \(Q(x)\) except for some "kinematic" poles coming explicitly from the kernel of the _operator_ \(Q(x)\) and described by the usual quantum dilogarithms. The corresponding spectral equations are given in Section 6, eq. (6.5). This paper can be seen as a continuation of [3]. Its main purpose is the study of the spectral equations in the thermodynamic limit. Under certain conjectures we obtain a functional relation for the density of properly defined Bethe-type variables for the ground state of the model. Our results are in accordance with [7, 8].
## 2 Basic notations.
Let the self-conjugated pair \(\mathbf{x},\mathbf{p}\) form the Heisenberg algebra,
\[[\mathbf{x},\mathbf{p}]\;=\;\frac{{\rm i}}{2\pi}\;. \tag{2.1}\]
Let also
\[{\sf b}\;=\;{\sf e}^{{\rm i}\theta}\;,\quad 0<\theta<\frac{\pi}{2}\;, \tag{2.2}\]
and
\[q\;=\;{\sf e}^{{\rm i}\pi{\sf b}^{2}}\;,\quad q^{*}\;=\;{\sf e}^{-{\rm i}\pi{ \sf b}^{-2}}\;. \tag{2.3}\]
**Convention 2.1**.: _In what follows, "star \(*\)" will stand for the involution_
\[u\ =\ e^{2\pi{\rm b}x}\ \to\ u^{*}\ =\ e^{2\pi{\rm b}^{-1}x}\ ;\quad q^{2}\ =\ e^{2\pi{\rm i}{\rm b}^{2}}\ \to\ q^{*2}\ =\ e^{-2\pi{\rm i}{\rm b}^{-2}}\;. \tag{2.4}\]
_Involutive antihomomorphism \(*\) is the complex/Hermitian conjugation in the regime of real \(x\) in (2.4). However, in general the \(*\) involution implies complex \(x\) understood as the analytical continuation._
Two conjugated Weyl pairs to be introduced are
\[{\boldsymbol{u}}\ =\ {\rm e}^{2\pi{\rm b}{\boldsymbol{x}}}\,\quad{\boldsymbol{v}} \ =\ {\rm e}^{2\pi{\rm b}{\boldsymbol{p}}}\;,\quad{\boldsymbol{u}}\,{ \boldsymbol{v}}\ =\ q^{2}\,{\boldsymbol{v}}\,{\boldsymbol{u}}\;, \tag{2.5}\]
and
\[{\boldsymbol{u}}^{*}\ =\ {\rm e}^{2\pi{\rm b}^{-1}{\boldsymbol{x}}}\;,\quad{ \boldsymbol{v}}^{*}\ =\ {\rm e}^{2\pi{\rm b}^{-1}{\boldsymbol{p}}}\;,\quad{ \boldsymbol{v}}^{*}\,{\boldsymbol{u}}^{*}\ =\ q^{*2}\,{\boldsymbol{u}}^{*}\,{ \boldsymbol{v}}^{*}\;, \tag{2.6}\]
so that
\[[{\boldsymbol{u}},{\boldsymbol{v}}^{*}]\ =\ [{\boldsymbol{v}},{\boldsymbol{u}}^{* }]\ =\ 0\;. \tag{2.7}\]
Dirac's bracket notations are defined by
\[\langle x|\,{\boldsymbol{u}}\ =\ u\,\langle x|\;,\quad\langle x|\,{\boldsymbol{u} }^{*}\ =\ u^{*}\,\langle x|\;,\quad\langle x|\,{\boldsymbol{v}}\ =\ \langle x-{\rm i}{\rm b}|\;,\quad\langle x|\,{ \boldsymbol{v}}^{*}\ =\ \langle x-{\rm i}{\rm b}^{-1}|\;. \tag{2.8}\]
In what follows, we will always use
\[u\ \stackrel{{ def}}{{=}}\ {\rm e}^{2\pi{\rm b}x}\;,\quad u^{*} \ \stackrel{{ def}}{{=}}\ {\rm e}^{2\pi{\rm b}^{-1}x}\;,\quad\eta\ =\ \frac{{\rm b}+{\rm b}^{-1}}{2}\;,\quad\sigma\ =\ \frac{{\rm b}-{\rm b}^{-1}}{2{\rm i}}\;. \tag{2.9}\]
## 3 Special Functions.
Now we will define a set of special functions.
The \(q\)-exponent is defined by
\[(u;q^{2})_{\infty}\ =\ \prod_{n=0}^{\infty}(1-uq^{2n})\;, \tag{3.1}\]
and shortened \(\vartheta\)-function - by
\[\vartheta_{1}(u)\ =\ (u;q^{2})_{\infty}(q^{2}u^{-1};q^{2})_{\infty}\;,\quad \vartheta_{1}(u)\ =\ -\,u\,\vartheta_{1}(q^{2}u)\;. \tag{3.2}\]
Jacobi identity reads
\[\frac{\vartheta_{1}(u)}{\vartheta_{1}(u)^{*}}\ =\ {\rm e}^{{\rm i}\pi(x+\sigma)^{2} +{\rm i}\pi c_{\rm b}}\;,\quad c_{\rm b}\ =\ \frac{1}{12}({\rm b}^{2}+{\rm b}^{-2})\;. \tag{3.3}\]
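For \(0<\theta<\pi/2\) one has \(|q|={\sf e}^{-\pi\sin 2\theta}<1\), so the products in (3.1) and (3.2) converge and can be truncated numerically. As an illustrative cross-check (the sample values of \(\theta\), \(u\) and the truncation depth are arbitrary choices), the following Python snippet verifies the quasi-periodicity \(\vartheta_{1}(u)=-u\,\vartheta_{1}(q^{2}u)\):

```python
import cmath

theta = 0.6                                  # any 0 < theta < pi/2
b = cmath.exp(1j * theta)
q = cmath.exp(1j * cmath.pi * b**2)          # |q| = exp(-pi*sin(2*theta)) < 1

def qpoch(z, qq, nterms=400):
    """Truncated q-exponent (z; qq^2)_infinity."""
    prod = 1.0 + 0j
    for n in range(nterms):
        prod *= 1.0 - z * qq**(2 * n)
    return prod

def theta1(u, qq, nterms=400):
    """theta_1(u) = (u; qq^2)_inf * (qq^2 / u; qq^2)_inf."""
    return qpoch(u, qq, nterms) * qpoch(qq**2 / u, qq, nterms)

u = 0.3 + 0.4j
print(abs(theta1(u, q) + u * theta1(q**2 * u, q)))   # ~1e-15: theta_1(u) = -u theta_1(q^2 u)
```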
Two basic \(q\)-dilogarithmic functions [9] are:
\[\log\varphi(x)\ =\ \int\limits_{{\mathbb{R}}+{\rm i}0}\frac{{\rm e}^{-2{\rm i} xy}}{4\sinh({\sf b}y)\sinh({\sf b}^{-1}y)}\,\frac{dy}{y}\ =\ \log\frac{(-qu;q^{2})_{\infty}}{(-qu;q^{2})_{\infty}^{*}}\;, \tag{3.4}\]
and [8, 10]
\[\log\varphi_{2}(x)\ =\ \int\limits_{{\mathbb{R}}+{\rm i}0}\frac{{\rm e}^{-2{ \rm i}xy}}{8\cosh(2\eta y)\sinh({\sf b}y)\sinh({\sf b}^{-1}y)}\,\frac{dy}{y}\;. \tag{3.5}\]
Most remarkable properties of \(\varphi(x)\) and \(\varphi_{2}(x)\) to be mentioned:
\[\begin{array}{l}\log\varphi(x-{\rm i}\eta)\,-\,\log\varphi(x+{\rm i}\eta)\;=\;\log(1-u)(1-u^{*})\;,\\ \log\varphi_{2}(x+{\rm i}\eta)\,+\,\log\varphi_{2}(x-{\rm i}\eta)\;=\;\log\varphi(x)\;,\\ \log\varphi(x)\,+\,\log\varphi(-x)\;=\;{\rm i}\pi x^{2}\,+\,{\rm i}\pi c_{\sf b}\;,\\ \log\varphi_{2}(x)\,+\,\log\varphi_{2}(-x)\;=\;\frac{{\rm i}\pi}{2}x^{2}\,+\,2\pi{\rm i}c_{\sf b}\,+\,\frac{{\rm i}\pi}{4}\;.\end{array} \tag{3.6}\]
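Both \(|q|\) and \(|q^{*}|\) equal \({\sf e}^{-\pi\sin 2\theta}<1\), so \(\varphi(x)\) can be evaluated from the product form on the right of (3.4); writing it as \(\varphi(x)=(-qu;q^{2})_{\infty}/(-q^{*}u^{*};q^{*2})_{\infty}\) (the standard Faddeev product, assumed here to be the intended meaning of the starred product in (3.4)) reproduces the first shift relation of (3.6), as the following illustrative Python check shows:

```python
import cmath

theta = 0.6
b = cmath.exp(1j * theta)
q  = cmath.exp( 1j * cmath.pi * b**2)
qs = cmath.exp(-1j * cmath.pi / b**2)        # q* of (2.3); |q*| < 1 as well
eta = (b + 1.0 / b) / 2.0                    # eta of (2.9), real for |b| = 1

def qpoch(z, qq, nterms=400):
    prod = 1.0 + 0j
    for n in range(nterms):
        prod *= 1.0 - z * qq**(2 * n)
    return prod

def phi(x):
    """phi(x) = (-q u; q^2)_inf / (-q* u*; q*^2)_inf, u = e^{2 pi b x}, u* = e^{2 pi x / b}."""
    u  = cmath.exp(2 * cmath.pi * b * x)
    us = cmath.exp(2 * cmath.pi * x / b)
    return qpoch(-q * u, q) / qpoch(-qs * us, qs)

x = 0.17
u, us = cmath.exp(2 * cmath.pi * b * x), cmath.exp(2 * cmath.pi * x / b)
lhs = phi(x - 1j * eta) / phi(x + 1j * eta)
print(abs(lhs - (1 - u) * (1 - us)))         # small: first line of (3.6)
```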
## 4 \(TQ\) - Equations.
In this section we will formulate the modular pair of Baxter's \(TQ\) equations. In this paper we will not give a detailed formulation of the \({\cal U}_{q,q^{*}}(\widehat{sl}_{2})\) sin-Gordon model in terms of the whole machinery of the Quantum Inverse Scattering Method. Instead, our starting point will be the corresponding Quantum Characteristic Polynomial.
We start with a few auxiliary notations. Let
\[A(u)\;=\;\prod_{\nu=1}^{N}\left(1\,+\,q^{-1}\frac{u}{a_{\nu}}\right)\;,\quad B (u)\;=\;\prod_{\nu=1}^{N}\left(1\,+\,q\frac{u}{b_{\nu}}\right)\;, \tag{4.1}\]
and
\[A^{\prime}(1/u)\;=\;\prod_{\nu=1}^{N}\left(1\,+\,q\frac{a_{\nu}}{u}\right)\;, \quad B^{\prime}(1/u)\;=\;\prod_{\nu=1}^{N}\left(1\,+\,q^{-1}\frac{b_{\nu}}{u }\right)\;. \tag{4.2}\]
Let also
\[\prod_{\nu=1}^{N}a_{\nu}\;=\;a^{N}\;,\quad\prod_{\nu=1}^{N}b_{\nu}\;=\;b^{N}\;, \tag{4.3}\]
so that
\[A(u)\;=\;\left(\frac{u}{qa}\right)^{N}A^{\prime}(1/u)\;,\quad B(u)\;=\;\left( \frac{qu}{b}\right)^{N}B^{\prime}(1/u)\;. \tag{4.4}\]
We will imply
\[a_{\nu}\;=\;{\rm e}^{2\pi{\mathfrak{b}}\alpha_{\nu}}\;,\quad a^{*}_{\nu}\;=\;{ \rm e}^{2\pi{\mathfrak{b}}^{-1}\alpha_{\nu}}\;,\quad b_{\nu}\;=\;{\rm e}^{2\pi{ \mathfrak{b}}\beta_{\nu}}\;,\quad b^{*}_{\nu}\;=\;{\rm e}^{2\pi{\mathfrak{b}}^{ -1}\beta_{\nu}}\;. \tag{4.5}\]
One more convenient notation:
\[A(u;q^{2})_{\infty}\;=\;\prod_{m=0}^{\infty}A(q^{2m}u)\;=\;\prod_{\nu=1}^{N}(-q ^{-1}\frac{u}{a_{\nu}};q^{2})_{\infty}\;,\quad\mbox{etc.} \tag{4.6}\]
Also we will assume for the geometric mean (4.3)
\[a\;=\;{\rm e}^{2\pi{\mathfrak{b}}\mu}\;,\quad b\;=\;{\rm e}^{-2\pi{\mathfrak{b }}\mu}\;,\quad ab\;=\;1\;. \tag{4.7}\]
The Characteristic Polynomial for the sin-Gordon model is
\[J({\boldsymbol{u}},{\boldsymbol{v}})\;=\;t^{-1}\,A({\boldsymbol{u}})\,{ \boldsymbol{v}}\;-\;T({\boldsymbol{u}})\;+\;t\,B({\boldsymbol{u}})\,{ \boldsymbol{v}}^{-1}\;, \tag{4.8}\]
where \(A,B\) are defined by (4.1), \(T({\boldsymbol{u}})\) is the transfer-matrix,
\[T({\boldsymbol{u}})\;=\;\sum_{n=0}^{N}T_{n}{\boldsymbol{u}}^{n}\;, \tag{4.9}\]
and \(N\) is the chain length. Note that here \({\boldsymbol{u}},{\boldsymbol{v}}\) are the spectral parameters, while
\[T_{1},\quad T_{2},\quad\ldots\quad T_{N-1} \tag{4.10}\]
is the commutative set of the integrals of motion. The two extreme coefficients of the transfer matrix are "external" parameters1,
Footnote 1: \(t\) is an analogue of \(q^{S_{z}}\)
\[T_{0}\;=\;t^{-1}+t\;,\quad T_{N}\;=\;(-)^{N}(t^{-1}+t)\;. \tag{4.11}\]
An extra condition to be imposed on the parameter \(t\) (for finite \(N\)) is
\[|t^{2}|\,<\,1\;. \tag{4.12}\]
The particular form of \(T_{0}\), \(T_{N}\), \(A(u)\) and \(B(u)\) follows from the explicit construction of the quantum curve via \(L\)-operators etc.
The characteristic polynomial \(J({\boldsymbol{u}},{\boldsymbol{v}})\) must be a normal operator. Baxter's \(TQ\) equations read2
Footnote 2: State \(|Q\rangle\) is sometimes called \(Q_{L}\).
\[J({\boldsymbol{u}},{\boldsymbol{v}})\,|Q\rangle\;=\;J({\boldsymbol{u}},{ \boldsymbol{v}})^{\dagger}|Q\rangle\;=\;0\;. \tag{4.13}\]
In the coordinate representation
\[Q(x)\;=\;\langle x|Q\rangle \tag{4.14}\]
the \(TQ\) equations (4.13) take the convenient form
\[\left\{\begin{array}{l}t^{-1}A(u)Q(x-{\sf i}{\sf b})\;+\;tB(u)Q(x+{\sf i}{\sf b}) \;=\;T(u)Q(x)\;,\\ \\ [t^{-1}A(q^{2}u)]^{*}Q(x-{\sf i}{\sf b}^{-1})\;+\;[tB({u\over q^{2}})]^{*}Q(x+{ \sf i}{\sf b}^{-1})\;=\;T(u)^{*}Q(x)\;.\end{array}\right. \tag{4.15}\]
Recall Convention 2.1: for real \(x,\alpha_{\nu},\beta_{\nu}\), the anti-homomorphism "\(*\)" coincides with the Hermitian conjugation "\(\dagger\)". However, one must consider the analytical continuation "\(*\)" rather than "\(\dagger\)" in the pair (4.15) for general complex \(x,\alpha_{\nu},\beta_{\nu}\).
## 5 Holomorphic and anti-holomorphic solutions.
In this section we introduce the main building blocks for the solution to (4.15).
Let holomorphic and anti-holomorphic functions3
Footnote 3: \(\chi_{\pm,n}\sim q^{n(n+1)}\).
\[\mathbf{\chi}_{+}(u)\;=\;1+\sum_{n=1}^{\infty}\chi_{+,n}u^{n}\quad \mbox{and}\quad\mathbf{\chi}_{-}(u)\;=\;1+\sum_{n=1}^{\infty}\chi_{-, n}u^{-n} \tag{5.1}\]
be the solutions of
\[\mathbf{\chi}_{+}({u\over q^{2}})\;+\;t^{2}A(q^{2}u)B(u)\mathbf{\chi}_{+}(q^{2}u)\;=\;tT(u)\,\mathbf{\chi}_{+}(u) \tag{5.2}\]
and
\[\mathbf{\chi}_{-}(q^{2}u)\;+\;t^{2}A^{\prime}({1\over u})B^{\prime}( {q^{2}\over u})\,\mathbf{\chi}_{-}({u\over q^{2}})\;=\;{tT(u)\over(- u)^{N}}\,\mathbf{\chi}_{-}(u)\;. \tag{5.3}\]
Their \(q\)-difference Wronskian
\[W(u)\;=\;\mathbf{\chi}_{+}({u\over q^{2}})\mathbf{\chi}_{-} (u)\;-\;t^{2}(-qa)^{N}A(u)B^{\prime}({q^{2}\over u})\mathbf{\chi}_{+} (u)\mathbf{\chi}_{-}({u\over q^{2}}) \tag{5.4}\]
satisfies due to (5.2,5.3)
\[W(u)\;=\;(-u)^{N}W(q^{2}u)\;. \tag{5.5}\]
Thus (see (3.2)), one can introduce the "Bethe Ansatz" variables \(x_{\nu}\) (equivalently \(u_{\nu}\)),
\[W(u)\;=\;\varrho\,\prod_{\nu=1}^{N}\,\vartheta_{1}\left({u\over u_{\nu}} \right)\;,\quad u_{\nu}\;=\;{\sf e}^{2\pi{\sf b}x_{\nu}}\;,\quad\sum_{\nu=1}^{ N}x_{\nu}\;=\;0\;. \tag{5.6}\]
Presumably, the set \(x_{\nu}\in{\mathbb{R}}\) forms a continuous distribution for the ground state of the transfer matrix \(T(u)\) in the thermodynamic limit \(N\to\infty\).
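For a given transfer matrix \(T(u)\), equation (5.2) determines the coefficients \(\chi_{+,n}\) recursively: matching the coefficients of \(u^{n}\) gives a linear relation whose diagonal factor \(q^{-2n}+t^{2}q^{2n}-tT_{0}=(1-q^{2n})(q^{-2n}-t^{2})\) (using \(tT_{0}=1+t^{2}\)) never vanishes for \(n\geq 1\) and \(|t^{2}|<1\); an analogous recursion in \(1/u\) reconstructs \(\chi_{-}\). The following Python sketch implements this for a short chain. The numerical values of the inhomogeneities, of \(t\) and of the trial coefficient \(T_{1}\) are illustrative placeholders only (the physical \(T_{n}\) are fixed by the quantisation condition discussed in the next section).

```python
import cmath
import numpy as np

theta = 0.6
b = cmath.exp(1j * theta)
q = cmath.exp(1j * cmath.pi * b**2)              # |q| < 1

# Illustrative chain data (N = 2): inhomogeneities a_nu, b_nu = 1/a_nu, twist t
N = 2
alphas = [0.1, -0.1]
a_nu = [cmath.exp(2 * cmath.pi * b * al) for al in alphas]
b_nu = [1 / a for a in a_nu]                     # so that the geometric means obey a b = 1
t = 0.3                                          # |t^2| < 1, cf. (4.12)

# Trial transfer matrix T(u) = sum_k T[k] u^k; T[1] is an illustrative placeholder
T = np.zeros(N + 1, dtype=complex)
T[0] = 1 / t + t
T[N] = (-1)**N * (1 / t + t)
T[1] = 2.0

# P(u) = A(q^2 u) B(u) = prod_nu (1 + q u / a_nu)(1 + q u / b_nu), ascending coefficients
Pc = np.array([1.0 + 0j])
for an, bn in zip(a_nu, b_nu):
    Pc = np.convolve(Pc, np.array([1.0, q / an]))
    Pc = np.convolve(Pc, np.array([1.0, q / bn]))

# Coefficient recursion from (5.2):
#   q^{-2n} c_n + t^2 sum_j Pc[j] q^{2(n-j)} c_{n-j} = t sum_k T[k] c_{n-k}
nmax = 40
c = np.zeros(nmax + 1, dtype=complex)
c[0] = 1.0
for n in range(1, nmax + 1):
    rhs = sum(t * T[k] * c[n - k] for k in range(1, min(n, N) + 1))
    rhs -= sum(t**2 * Pc[j] * q**(2 * (n - j)) * c[n - j]
               for j in range(1, min(n, 2 * N) + 1))
    c[n] = rhs / (q**(-2 * n) + t**2 * q**(2 * n) - t * T[0])

chi_plus = lambda u: sum(c[n] * u**n for n in range(nmax + 1))
A = lambda u: np.prod([1 + u / (q * an) for an in a_nu])
B = lambda u: np.prod([1 + q * u / bn for bn in b_nu])
Tu = lambda u: sum(T[k] * u**k for k in range(N + 1))

u0 = 0.05 + 0.02j                                # test point
residual = (chi_plus(u0 / q**2) + t**2 * A(q**2 * u0) * B(u0) * chi_plus(q**2 * u0)
            - t * Tu(u0) * chi_plus(u0))
print(abs(residual))                             # small residual: (5.2) is satisfied
```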
## 6 Eigenfunction \(Q(x)\) and spectral equations.
Solution \(Q(x)\) of (4.15) is given by
\[Q(x)\;=\;Q_{1}(x)\;-\;\xi\,Q_{2}(x)\;, \tag{6.1}\]
where
\[Q_{1}(x)\;=\;\exp\left(-\,\sum_{\nu=1}^{N}\log\varphi(x-\alpha_{\nu})\right)\, \frac{\mathbf{\chi}_{+}(u)\mathbf{\chi}_{-}(u)^{*}}{W(u)^{* }}\;, \tag{6.2}\]
and
\[Q_{2}(x)\;=\;\exp\left(2\pi{\rm i}\tau^{\prime}xN-\sum_{\nu=1}^{N}\log\varphi( \beta_{\nu}-x)\right)\,\frac{\mathbf{\chi}_{-}(u)\mathbf{\chi }_{+}(u)^{*}}{W(q^{2}u)^{*}}\;. \tag{6.3}\]
Here we assume
\[t\;=\;{\rm e}^{2\pi{\rm b}\tau}\;,\quad{\rm and}\quad\tau^{\prime}\;=\;\frac{ 2\tau}{N}+\mu+{\rm i}\eta\;. \tag{6.4}\]
**Statement 6.1**.: \(Q_{1,2}(x)\) _form a basis in the space of solutions of (4.15) with the proper asymptotics. Doubly-periodic coefficients are neglected as unwanted._
Now one can formulate the principle of quantisation of \(T(u)\). The wave-function \(Q(x)\) must have only "kinematic" poles coming from the dilogarithms \(1/\varphi(x-\alpha_{\nu})\) and \(1/\varphi(\beta_{\nu}-x)\), i.e. the zeros of \(W(u)^{*}\) must be canceled out:
\[\lim_{x\to x_{\nu}}\frac{Q_{1}(x)}{Q_{2}(x)}\;=\;\xi\;,\quad\forall\;\;\nu=1, \ldots,N\;. \tag{6.5}\]
The definition of the Wronskian (5.4) guarantees that equations (6.5) are then satisfied for all
\[x\;\to\;x_{\nu}+{\rm i}{\rm b}n-{\rm i}{\rm b}^{-1}m\;,\quad\forall\;\;n,m \in{\mathbb{Z}}\;. \tag{6.6}\]
## 7 Properties of \(\mathbf{\chi}_{\pm}\) for finite \(N\).
The principal system of spectral equations is the system (6.5). The initial variables are \(\{x_{\nu}\}\) (equivalently \(\{u_{\nu}\}\)). However, the system (6.5) involves the functions \(\mathbf{\chi}_{\pm}(u)\), understood as functions of the formal set \(\{u_{\nu}\}\):
\[\mathbf{\chi}_{\pm}(u)\;=\;\mathbf{\chi}_{\pm}(u;\{u_{\nu} \})\;. \tag{7.1}\]
The main technical problem now is to reconstruct the functions \(\mathbf{\chi}_{\pm}(u)\) via \(\{u_{\nu}\}\).
For finite \(N\) one can use the \(t^{2}\)-expansion approach. Namely, let
\[T_{0}(u)\;=\;\prod_{\nu=1}^{N}\left(1\,-\,\frac{u}{u_{\nu}}\right)\;,\quad T_ {0}(u;q^{2})_{\infty}\;=\;\prod_{\nu=1}^{N}\,\left(\frac{u}{u_{\nu}};q^{2} \right)_{\infty}\;, \tag{7.2}\]
Then there is the \(t^{2}\)-expansion:
\[\mathbf{\chi}_{+}(u)\;=\;\sum_{m=0}^{\infty}t^{2m}F_{m}(u)T_{0}(q^{2(m+1)} u;q^{2})_{\infty}\;, \tag{7.3}\]
and
\[tT(u)\;=\;T_{0}(u)\;+\;\sum_{m=1}^{\infty}t^{2m}T_{m}(u)\;, \tag{7.4}\]
where
\[F_{0}(u)\;=\;1\;,\quad F_{m}(u)\;=\;F_{m,1}u\;+\;\cdots\;+F_{m,mN}u^{mN}\;, \tag{7.5}\]
so that \(\mathbf{\chi}_{+}(0)\;=\;1\), and
\[T_{1}(u)\;=\;1+\cdots+(-u)^{N}\;,\quad T_{m}(u)\;=\;T_{m,1}u+\cdots+T_{m,N-1} u^{N-1}\;, \tag{7.6}\]
in accordance with (4.11). The \(t^{2}\)-expansion of equation (5.2) gives
\[\begin{array}{l} T_{0}(q^{2m}u)F_{m}(\frac{u}{q^{2}})\;+\;A(q^{2}u)B(u)F_{m- 1}(q^{2}u)\;=\\ \\ =\;\sum_{k=0}^{m}T_{k}(u)F_{m-k}(u)T_{0}(q^{2(m-k+1)}u)\cdots T_{0}(q^{2m}u)\;. \end{array} \tag{7.7}\]
**Statement 7.1**.: _The system (7.7) provides a linear bootstrap allowing one to construct uniquely step by step all \(F_{m}(u)\) and \(T_{m}(u)\)._
The bootstrap (7.7) involves only finite polynomials and therefore can be solved either straightforwardly or by a partial fraction decomposition. However, we failed to find any manageable expression for \(F_{m}(u)\) and \(T_{m}(u)\) except for the simplest case \(N=1\) (see below). Thus, Statement 7.1 should for now be regarded as a "theorem of existence".
Consider the zero decomposition of \(\mathbf{\chi}_{\pm}(u)\):
\[\mathbf{\chi}_{+}(u)\;=\;\prod_{m=1}^{\infty}\prod_{\nu=1}^{N}\left( 1-q^{2m}\frac{u}{u_{\nu,m}}\right)\;,\quad\mathbf{\chi}_{-}(u)\;=\; \prod_{m=1}^{\infty}\prod_{\nu=1}^{N}\left(1-q^{2m}\frac{u^{\prime}_{\nu,m}}{u }\right)\;. \tag{7.8}\]
It is easy to show that the coefficients \(u_{\nu,m}\) entering the zero decomposition (7.8) have the following asymptotics:
\[\frac{u_{\gamma}}{u_{\gamma,n}}\;=\;1\;+\;t^{2n}\Delta^{\prime}_{\gamma,n}\;+ \;{\cal O}(t^{2n+2})\;, \tag{7.9}\]
where
\[\Delta^{\prime}_{\gamma,n}\;=\;\lim_{u\to u_{\gamma}}\left(1-\frac{u_{\gamma} }{u}\right)\Delta^{\prime}(1/u;q^{2})_{n}\;,\quad\Delta^{\prime}(1/u;q^{2})_{n }\;=\;\frac{A^{\prime}(1/u;q^{2})_{n}B^{\prime}(q^{2}/u;q^{2})_{n}}{T^{\prime }_{0}(1/u;q^{2})_{n}T^{\prime}_{0}(q^{2}/u;q^{2})_{n}}\;. \tag{7.10}\]
Here in addition to (7.2) we use
\[T^{\prime}_{0}(1/u)\;=\;\frac{T_{0}(u)}{(-u)^{N}}\;=\;\prod_{\nu=1}^{N}\left(1- \frac{u_{\nu}}{u}\right)\;,\quad T^{\prime}_{0}(1/u;q^{2})_{n}\;=\;\prod_{\nu=1} ^{N}\left(\frac{u_{\nu}}{u};q^{2}\right)_{n}\;. \tag{7.11}\]
In a similar way
\[\frac{u^{\prime}_{\gamma,n}}{u_{\gamma}}\;=\;1\;+\;t^{2n}\Delta_{\gamma,n}\;+ \;{\cal O}(t^{2n+2})\;, \tag{7.12}\]
where
\[\Delta_{\gamma,n}\;=\;\lim_{u\to u_{\gamma}}\left(1-\frac{u}{u_{\gamma}} \right)\Delta(u;q^{2})_{n}\;,\quad\Delta(u;q^{2})_{n}\;=\;\frac{A(q^{2}u;q^{2} )_{n}B(u;q^{2})_{n}}{T_{0}(u;q^{2})_{n}T_{0}(q^{2}u;q^{2})_{n}}\;. \tag{7.13}\]
### Remarks
We would also like to point out a remarkable symmetric case: choosing
\[\beta_{\nu}\;=\;-\alpha_{\nu}\quad\mbox{and}\quad x_{\nu}\;=\;-x_{-\nu}\;, \tag{7.14}\]
one gets
\[T^{\prime}_{0}(1/u)\;=\;T_{0}(1/u)\;,\quad\frac{T(u)}{(-u)^{N}}\;=\;T(1/u)\;, \quad{\boldsymbol{\chi}}_{-}(u)\;=\;{\boldsymbol{\chi}}_{+}(1/u)\;. \tag{7.15}\]
In this case the parameter \(\xi\) in (6.5) becomes the parity of the state, \(\xi=\pm 1\):
\[{\rm e}^{-2\pi{\rm i}\tau^{\prime}x_{\gamma}N}\;\left(\prod_{\nu}\frac{\varphi (-\alpha_{\nu}-x_{\gamma})}{\varphi(x_{\gamma}-\alpha_{\nu})}\right)\;\frac{{ \boldsymbol{\chi}}_{+}(u_{\gamma}){\boldsymbol{\chi}}_{+}(1/u_{\gamma})^{*}}{ {\boldsymbol{\chi}}_{+}(1/u_{\gamma}){\boldsymbol{\chi}}_{+}(u_{\gamma})^{*} }\;=\;\pm 1\;,\quad\gamma=1,\cdots,\frac{N}{2}\;. \tag{7.16}\]
### The toy case \(N=1\)
When \(N=1\), the \(t^{2}\)-series expansion is manageable:
\[{\boldsymbol{\chi}}_{+}(u)\;=\;\sum_{m=0}^{\infty}q^{m(m+1)}\;\frac{(-t^{2}u )^{m}}{(q^{2};q^{2})_{m}}\;\frac{(-qa;q^{2})_{m}(-qb;q^{2})_{m}}{(q^{2}t^{2};q ^{2})_{m}}\;(q^{2(m+1)}u;q^{2})_{\infty}\;, \tag{7.17}\]
and
\[{\boldsymbol{\chi}}_{-}(u)\;=\;{\boldsymbol{\chi}}_{+}(1/u)\;. \tag{7.18}\]
The transfer matrices are
\[T_{0}(u)\;=\;T_{1}(u)\;=\;1-u\;,\quad\mbox{and}\;\;\;T_{m}(u)\;=\;0\quad\mbox {for}\;\;m\geq 2\;. \tag{7.19}\]
The \(q\)-Wronskian (5.4) is then
\[W(u)\;=\;\frac{(-qt^{2}a;q^{2})_{\infty}(-qt^{2}b;q^{2})_{\infty}}{(q^{2}t^{2} ;q^{2})_{\infty}^{2}}\;\vartheta_{1}(u)\;. \tag{7.20}\]
## 8 Thermodynamic limit.
So far we do not have substantial numerical results for finding \(u_{\gamma,m}\) via \(u_{\gamma}\); however, we have an irresistible temptation to make some traditional assumptions in the thermodynamic limit \(N\to\infty\).
First of all, for the limit \(N\to\infty\) define the "external" densities \(P_{A}\) and \(P_{B}\):
\[\frac{1}{N}\sum_{\nu}f(\alpha_{\nu})\;=\;\int dx_{0}P_{A}(x_{0})f(x_{0})\;, \quad\frac{1}{N}\sum_{\nu}f(\beta_{\nu})\;=\;\int dx_{0}P_{B}(x_{0})f(x_{0})\;. \tag{8.1}\]
Expectations for \(P_{A}\) and \(P_{B}\) are
\[\mu_{A}\;=\;\int dx_{0}P_{A}(x_{0})x_{0}\;=\;\mu\;,\quad\mu_{B}\;=\;\int dx_{0} P_{B}(x_{0})x_{0}\;=\;-\mu\;, \tag{8.2}\]
see (4.7). Also, let
\[S_{A}\;=\;\int dx_{0}P_{A}(x_{0})x_{0}^{2}\;,\quad S_{B}\;=\;\int dx_{0}P_{B}( x_{0})x_{0}^{2}\;. \tag{8.3}\]
The symmetric case implies \(P_{B}(x_{0})=P_{A}(-x_{0})\), \(S_{A}=S_{B}\). The homogeneous symmetric case corresponds to \(P_{A}(x_{0})\;=\;\delta(x_{0}-\mu)\), \(P_{B}(x_{0})\;=\;\delta(x_{0}+\mu)\).
In what follows, for brevity let
\[P_{AB}(x_{0})\;=\;P_{A}(x_{0})+P_{B}(x_{0})\;. \tag{8.4}\]
Next we assume a continuous distribution of the Bethe-type variables \(x_{\nu}\) on the real axis. Let their density be an analytic function \(P(x_{0})\), so that e.g.
\[\frac{1}{N}\log T_{0}(u)\;=\;\int_{\mathbb{R}\pm{\rm i}0}dx_{0}P(x_{0})\log(1- {\rm e}^{2\pi{\rm b}(x-x_{0})})\;,\quad x\in\mathbb{R}\;, \tag{8.5}\]
where the notation "\(\mathbb{R}\pm{\rm i}0\)" marks a proper choice of the branch of the logarithm.
Condition \(x\in\mathbb{R}\) is implied everywhere below.
The next assumption is that the roots \(x_{\gamma,m}\) are continuously distributed along some complex contours \({\cal C}_{m}\); however, these contours can be analytically straightened to the real axis with the same distribution density \(P(x_{0})\) as before. Later we will see that there is an even stronger behaviour:
\[x_{\gamma,m}\;\to\;x_{\gamma}\quad\forall\;\;m\quad\mbox{when}\;\;N\to\infty\;. \tag{8.6}\]
This is based on
\[\frac{1}{N}\log|\Delta_{\gamma,n}|\;<\;0\;, \tag{8.7}\]
where \(\Delta_{\gamma,n}\) enters the asymptotics of \(x_{\gamma,m}\), see (7.10,7.13).
With the assumptions made, the left-hand side of the Bethe Ansatz equations (6.5) becomes
\[\begin{array}{l}{\frac{1}{N}\log\frac{Q_{1}(x)}{Q_{2}(x)}\;= \;-2\pi{\rm i}(\mu+{\rm i}\eta)x+\int dx_{0}P(x_{0})\log\frac{\varphi(x-x_{0}+{ \rm i}\eta)}{\varphi(x_{0}-x+{\rm i}\eta)}}\\ {-\int dx_{0}P_{A}(x_{0})\log\varphi(x-x_{0})+\int dx_{0}P_{B}(x_{0}) \log\varphi(x_{0}-x)\;.}\end{array} \tag{8.8}\]
Elementary transformations provide
\[\begin{array}{l}{\frac{1}{N}\log\frac{Q_{1}(x)}{Q_{2}(x)}\;= \;{\rm i}\pi(\eta^{2}+S_{B}-S_{P})}\\ {+\int dx_{0}(P(x_{0}+{\rm i}\eta)+P(x_{0}-{\rm i}\eta)-P_{AB}(x_{0})) \log\varphi(x-x_{0})\;,}\end{array} \tag{8.9}\]
where the normalisation and expectations
\[\int dx_{0}P(x_{0})\;=\;1\;,\quad\int dx_{0}P(x_{0})x_{0}\;=\;0\;,\quad\int dx _{0}P(x_{0})x_{0}^{2}\;=\;S_{P}\;. \tag{8.10}\]
are taken into account. Thus, the Bethe Ansatz equations (6.5) in the integral form (8.9) give
\[P(x_{0}+{\rm i}\eta)+P(x_{0}-{\rm i}\eta)\;=\;P_{A}(x_{0})+P_{B}(x_{0}) \tag{8.11}\]
for the ground state density \(P(x_{0})\). On the solution of (8.11) one has \(S_{P}=\eta^{2}+\frac{1}{2}(S_{A}+S_{B})\), so that the Bethe Ansatz equations (6.5) become the identity
\[\frac{1}{N}\log\frac{Q_{1}(x)}{Q_{2}(x)}\;=\;\frac{{\rm i}\pi}{2}(S_{B}-S_{A}) \tag{8.12}\]
Solution to (8.11) gives the ground state density
\[P(x)\;=\;\int dx_{0}K(x-x_{0})P_{AB}(x_{0})\;, \tag{8.13}\]
where the kernel
\[K(x)\;=\;\int\frac{{\rm e}^{-2\pi{\rm i}xy}}{2\cosh(2\pi\eta y)}dy\;=\;\frac{ 1}{4\eta\cosh\left(\frac{\pi x}{2\eta}\right)}\;. \tag{8.14}\]
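As a quick consistency check (our own remark, not part of the original derivation), shifting \(x\to x\pm{\rm i}\eta\) under the integral in (8.14) gives
\[K(x+{\rm i}\eta)+K(x-{\rm i}\eta)\;=\;\int{\rm e}^{-2\pi{\rm i}xy}\,\frac{{\rm e}^{2\pi\eta y}+{\rm e}^{-2\pi\eta y}}{2\cosh(2\pi\eta y)}\,dy\;=\;\int{\rm e}^{-2\pi{\rm i}xy}\,dy\;=\;\delta(x)\;,\]
so the convolution (8.13) indeed satisfies (8.11).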
In particular, in the homogeneous symmetric case \(P_{AB}(x)=\delta(x-\mu)+\delta(x+\mu)\),
\[P(x)\;=\;\frac{1}{4\eta\cosh\left(\frac{\pi(x-\mu)}{2\eta}\right)}+\frac{1}{4 \eta\cosh\left(\frac{\pi(x+\mu)}{2\eta}\right)}\;. \tag{8.15}\]
Expressions for \(\mathbf{\chi}_{\pm}(u)\) then become
\[\frac{1}{N}\log\frac{\mathbf{\chi}_{+}(u)\mathbf{\chi}_{-}(u)^{ *}}{W(u)^{*}}\;=\;\Phi_{1}(x)\;,\quad\frac{1}{N}\log\frac{\mathbf{\chi$ }_{-}(u)\mbox{\boldmath$\chi}_{+}(u)^{*}}{W(q^{2}u)^{*}}\;=\;\Phi_{2}(x)\;, \tag{8.16}\]
where
\[\Phi_{1}(x)\;=\;\int dx_{0}P(x_{0})\log\varphi(x-x_{0}+{\rm i}\eta)\;=\;\int dx _{0}P_{AB}(x_{0})\log\varphi_{2}(x-x_{0}+{\rm i}\eta)\;, \tag{8.17}\]
\[\Phi_{2}(x)\;=\;\int dx_{0}P(x_{0})\log\varphi(x_{0}-x+{\rm i}\eta)\;=\;\int dx _{0}P_{AB}(x_{0})\log\varphi_{2}(x_{0}-x+{\rm i}\eta)\;. \tag{8.18}\]
Expressions (8.17) and (8.18) with the homogeneous density (8.15), analytically continued to the statistical-mechanical regime of imaginary \(x\) and \(\mu\), explicitly reproduce the free energy of the Faddeev-Volkov model [10]. Moreover, for imaginary \(\mu\) the distribution (8.15) becomes unimodal.
The transfer matrix is given by
\[\frac{1}{N}\log T_{0}(u)\;=\;\int_{\mathbb{R}\pm{\rm i}0}dx_{0}P(x_{0})\log(1 -{\rm e}^{2\pi{\rm b}(x-x_{0})})\;=\;\int dx_{0}P_{AB}(x_{0})I_{\pm}(x-x_{0})\;, \tag{8.19}\]
where
\[I_{+}(x)\;=\;\log\frac{(-qu;q^{4})_{\infty}}{(-q^{3}u;q^{4})_{\infty}}\frac{(- {\rm i}w;-{\rm i}p)_{\infty}}{(iw;-{\rm i}p)_{\infty}} \tag{8.20}\]
with
\[w\;=\;\exp\left(\frac{\pi x}{2\eta}\right)\;,\quad p\;=\;\exp\left(-\frac{\pi \sigma}{2\eta}\right)\;,\quad-{\rm i}p\;=\;\exp\left(-\frac{{\rm i}\pi{\rm b}^ {-1}}{2\eta}\right)\;, \tag{8.21}\]
and
\[I_{-}(x)\;=\;I_{+}(x)-2\pi{\rm i}y(x)\;,\quad y(x)\;=\;\int_{-\infty}^{x}dx_{ 0}K(x_{0})\;=\;\frac{1}{2\pi{\rm i}}\log\frac{1+{\rm i}w}{1-{\rm i}w}\;. \tag{8.22}\]
These computations allow one to verify condition (8.7):
\[\frac{1}{N}\log\left|\frac{A(q^{2}u)B(u)}{T_{0}(u)T_{0}(q^{2}u)}\right|\;=\; \int dx_{0}P_{AB}(x_{0})\log\tanh\left|\frac{\pi(x-x_{0}-\sigma)}{4\eta} \right|\;<\;0 \tag{8.23}\]
Note that due to the branch cuts of the logarithm in (8.19) the expressions for \(I_{\pm}(x)\) must be modified in each strip \(k\eta<{\rm Im}\;x<(k+1)\eta\), \(x\in\mathbb{C}\). In particular, \(\log T(u)\) has no singularities. Note also,
\[\frac{1}{N}\log\left|\frac{A(q^{2}u)B(u)}{T_{0}(u)T_{0}(u)_{x\to x+2{\rm i} \eta}}\right|\;=\;0\;, \tag{8.24}\]
where \(\log T_{0}(u)_{x\to x+2{\rm i}\eta}\) corresponds to the analytical continuation \(I_{\pm}(x)\to I_{\pm}(x+2{\rm i}\eta)\) in (8.19).
The results here correspond to the ground-state structure of the model. The next step is the study of the low-energy excitations.
Acknowledgements. I would like to thank R. Kashaev, V. Bazhanov and V. Mangazeev for valuable discussions. I also acknowledge the support of the Australian Research Council grant DP190103144.
# Computing All Restricted Skyline Probabilities for Uncertain Data
###### Abstract
Since data uncertainty is inherent in multi-criteria decision making, recent years have witnessed a dramatically increasing amount of attention devoted to conducting advanced analysis on uncertain data. In this paper, we revisit the restricted skyline query on uncertain datasets from both the complexity and the algorithmic perspective. Instead of conducting probabilistic restricted skyline analysis under threshold or top-\(k\) semantics, we focus on a more general problem that aims to compute the restricted skyline probability of all objects. We prove that the problem cannot be solved in truly subquadratic time unless the Orthogonal Vectors conjecture fails, and propose two algorithms, one with near-optimal time complexity and the other with better expected time complexity. We also propose an algorithm with sublinear query time and polynomial preprocessing time for the case where the preference region is described by \(d-1\) ratio bound constraints. Our thorough experiments over real and synthetic datasets demonstrate the effectiveness of the problem and the efficiency of the proposed algorithms.
Uncertain data, probabilistic restricted skyline
## I Introduction
The restricted skyline (rskyline) query is a recently proposed, powerful tool for multi-criteria decision making. It extends the skyline query by supporting personalized preferences. In particular, given a dataset and a set \(\mathcal{F}\) of scoring functions of interest, the rskyline query retrieves a set of tuples that are not \(\mathcal{F}\)-dominated by any other tuples. Here a tuple \(t\) is said to \(\mathcal{F}\)-dominate another tuple \(s\) if \(t\) scores better than \(s\) under all functions in \(\mathcal{F}\). It has been verified that the rskyline query is highly effective in restricting the set of tuples of interest.
In [1, 2], many efficient algorithms have been proposed to answer rskyline queries on traditional datasets where no uncertainty is involved. However, data uncertainty is inherent in multi-criteria decision making due to various causes such as equipment limitations, data randomness, and outdated sources [3], and it is typically modeled using a probability distribution [4, 5, 6]. In this paper, we focus on computing rskyline probabilities on datasets with discrete uncertainty, i.e., each object is represented by a set of tuples, also called instances, together with the probabilities of taking those particular instances. The _rskyline probability_ of an object is defined as the probability that it is not \(\mathcal{F}\)-dominated by any other object with respect to a set of scoring functions \(\mathcal{F}\). We first describe two application scenarios as follows.
_E-commerce Scenario:_ Probabilistic selling is a novel selling strategy in e-commerce [7]. For example, when booking hotels on _Houviere_, each product has a probability of yielding any one of a set of hotels with different locations and room numbers, and only when users pay for a product is the particular hotel property revealed to them. Consider the products provided by _Houviere_ as an uncertain dataset with two attributes #Room and Loc (e.g., distance to the beach). If a user prefers Loc to #Room, i.e., \(\mathcal{F}=\{\omega_{1}\text{\#Room}+\omega_{2}\text{Loc}\mid\omega_{1}\leq \omega_{2}\}\), computing rskyline probabilities of products with respect to \(\mathcal{F}\) mathematically quantifies the probability that the final hotel obtained by choosing a product is a rskyline tuple.
_Player Selection Scenario:_ The performance data of a player may vary from game to game due to many factors such as fluctuations in the player's condition, the locations of the games, and the support from the audience. Modeling players' game-by-game performance data as uncertain datasets for interesting knowledge and a comprehensive view has been widely investigated in [4, 6, 8]. Suppose the two criteria for evaluating a player are Point and Assist, and it is pointed out that Point is more important than Assist, but no more than twice as important, i.e., \(\mathcal{F}=\{\omega_{1}\text{Point}+\omega_{2}\text{Assist}\mid\omega_{2} \leq\omega_{1}\leq 2\omega_{2}\}\). Computing rskyline probabilities of players with respect to \(\mathcal{F}\) gives the probability that a player belongs to the rskyline players (e.g., most valuable players) of a game.
Instead of identifying objects with the top-\(k\) rskyline probabilities or objects whose rskyline probabilities are higher than a given threshold, we study the problem of computing rskyline probabilities of all objects from both the complexity and the algorithmic perspective. This overcomes the difficulty of selecting a suitable threshold and makes it convenient for users to retrieve results of different sizes. For problem complexity, we prove that no algorithm can compute rskyline probabilities of all objects in truly subquadratic time, unless the Orthogonal Vectors conjecture [9] fails.
Then, for efficient algorithms, we focus on a practically relevant case where \(\mathcal{F}\) consists of linear scoring functions whose weights are described by linear constraints. A major challenge to overcome is the irregularity of the \(\mathcal{F}\)-dominance region,
which contains all instances \(\mathcal{F}\)-dominated by a given instance. We address this by mapping objects into a higher-dimensional data space, which results in a near-optimal algorithm with time complexity \(O(n^{2-1/d^{\prime}})\), where \(d^{\prime}\) is the dimensionality of the mapped data space. Furthermore, by conducting the mapping on the fly and designing effective pruning strategies, we propose an algorithm with better expected time complexity based on the branch-and-bound paradigm.
When the linear constraints are composed of \(d-1\) ratio bound constraints, we propose a more efficient \(\mathcal{F}\)-dominance test method. Based on the test condition, we establish a Turing reduction from the problem of computing rskyline probabilities of all objects to the half-space range search problem [10], and introduce an \(O(2^{d}mn\log n)\)-time algorithm with polynomial-time preprocessing, where \(m\) and \(n\) are the numbers of objects and instances, respectively. Subsequently, we introduce a _multi-level_ structure and a _data-shifting_ strategy to further improve the time complexity to \(O(2^{d-1}\log n+n)\), where the additional linear time is required for reporting the final results. This algorithm matters in two respects. First, it proves that the _online rskyline probability query_ belongs to the complexity class \(\mathrm{PsL}\)[11], which can be further used to design efficient algorithms for other queries in \(\mathrm{PsL}\). Second, although this algorithm is somewhat theoretical in nature, experimental results show that its extension to this special rskyline query on certain datasets outperforms the state-of-the-art index-based method proposed in [2].
To the best of our knowledge, this paper is the first work conducting rskyline analysis on uncertain datasets. The main contributions of this paper are summarized as follows.
* We formalize the problem of computing rskyline probabilities of all objects and prove that no algorithm can solve this problem in \(O(n^{2-\delta})\) time for any \(\delta>0\), unless the Orthogonal Vectors conjecture fails.
* When \(\mathcal{F}\) consists of linear scoring functions whose weights are described by linear constraints, we propose a near-optimal algorithm with time complexity \(O(n^{2-1/d^{\prime}})\), where \(d^{\prime}\) is the number of vertices of the preference region, and an algorithm with expected time complexity \(O(mn\log n)\).
* When \(\mathcal{F}\) consists of linear scoring functions whose weights are described by \(d-1\) ratio bound constraints, we propose an algorithm with polynomial preprocessing time and \(O(2^{d-1}\log n+n)\) query time. For online rskyline probability query, we propose an algorithm with polynomial preprocessing time and \(O(2^{d-1}\log n)\) query time.
* We conduct extensive experiments over real and synthetic datasets to demonstrate the effectiveness of the problem studied in this paper and the efficiency and scalability of the proposed algorithms.
The remainder of this paper is organized as follows. We review the related work in Section II. We formally define the problem studied in this paper and study its conditional lower bound in Section III. Then, we propose two efficient algorithms for the ARSP problem in Section IV, and design an algorithm with sublinear query time and polynomial preprocessing time for ratio bound constraints in Section V. We report the experimental results in Section VI. Finally, we conclude the paper in Section VII.
## II Related Work
In this section, we elaborate on two pieces of previous work that are most related to ours.
**Queries on Uncertain Datasets.** Pei et al. [6] were the first to extend skyline computation on certain datasets to probabilistic skyline computation on uncertain datasets. They proposed two algorithms to identify those objects whose skyline probabilities are higher than a user-specified threshold \(p\). Considering the inherent limitations of threshold queries, Atallah and Qi [5] presented the first work that addresses the problem of computing exact skyline probabilities of all objects. They proposed an \(\tilde{O}(n^{2-1/(d+1)})\)-time algorithm by using two basic skyline probability computation methods, the weighted dominance counting method and the grid method, to deal with frequent and infrequent objects, respectively. With a more efficient sweeping method for infrequent objects, Atallah et al. [12] improved the time complexity to \(\tilde{O}(n^{2-1/d})\). However, the utilities of these two methods are limited to 2D datasets because of a hidden factor exponential in the dimensionality of the dataset caused by the high-dimensional weighted dominance counting algorithm. To get rid of this, Afshani et al. [13] calculated the skyline probabilities of all instances by performing a pre-order traversal of a modified KD-tree. With the well-known property of the KD-tree, it is easy to verify that the time complexity of their algorithm is \(O(n^{2-1/d})\). More practically, Kim et al. [4] introduced an in-memory Z-tree structure into the all-skyline-probabilities computation to reduce the number of dominance tests, which has been experimentally demonstrated to be efficient. However, it is non-trivial to revise algorithms for computing skyline probabilities of all objects to address the problem studied in this paper. This is because these algorithms rely on the fact that the dominance region of an instance is a hyper-rectangle, which no longer holds under \(\mathcal{F}\)-dominance.
Somewhat related to what we study in this paper are works on top-\(k\) queries on uncertain datasets [14, 15, 16, 17, 18]. Under the possible world model, top-\(k\) semantics are unclear, which gives rise to different definitions, e.g., computing the most likely top-\(k\) set, the tuple with high probability to rank \(i\)-th, the tuples having a probability greater than a specified threshold to be included in the top-\(k\), etc. Our work differs from theirs in that an exact input weight is required in these studies, whereas we focus on finding a set of non-\(\mathcal{F}\)-dominated tuples where \(\mathcal{F}\) is a set of scoring functions. In other words, our work can be regarded as extending theirs by relaxing the preference input into a region.
**Operators with Restricted Preference.** Given a set of monotone scoring functions \(\mathcal{F}\), Ciaccia and Martinenghi [1] defined that a tuple \(t\)\(\mathcal{F}\)-dominates another tuple \(s\) if \(t\) scores better than \(s\) for any \(f\in\mathcal{F}\). Based on \(\mathcal{F}\)-dominance, they introduced two restricted skyline operators, ND for retrieving the set of non-\(\mathcal{F}\)-dominated tuples and PO for finding the set of tuples that are optimal according to at least one function
in \(\mathcal{F}\), and they designed several linear-programming-based methods for these two queries. Mouratidis and Tang [19] extended PO under top-\(k\) semantics when \(\mathcal{F}\) is a convex preference polytope \(\Omega\), i.e., they studied the problem of identifying all tuples that appear in the top-\(k\) result for at least one \(\omega\in\Omega\). They first disqualified records \(\mathcal{F}\)-dominated by \(k\) or more others, and then determined the \(k\)-th ranked record in each partition of \(\Omega\) among the remaining candidates. Liu et al. [2] investigated a case of \(\mathcal{F}\)-dominance where \(\mathcal{F}\) consists of \(d-1\) constraints on the weight ratio of other dimensions to the user-specified reference dimension. They defined the _eclipse_ query as retrieving the set of all non-eclipse-dominated tuples and proposed a series of algorithms. These works only consider datasets without uncertainty, and we extend the above dominance-based operators to uncertain datasets. Their techniques cannot be extended to our problem since the introduction of uncertainty makes the problem more challenging: for each instance, we need to identify all instances that \(\mathcal{F}\)-dominate it.
## III Problem Definition
In this section, we first review the restricted skyline query, then formally define the all rskyline probabilities problem and investigate its conditional complexity. For reference, the major notations used in this paper are summarized in Table I.
### _Restricted Skyline_
Let \(D\) denote a \(d\)-dimensional dataset consisting of \(n\) tuples. Each tuple \(t\in D\) has \(d\) numeric attributes, denoted as \(t=(t[1],\cdots,t[d])\). Without loss of generality, we assume that the numeric domain of each attribute is normalized into the unit interval \([0,1]\) and that lower values are preferred to higher ones. Given a _scoring function_\(f:[0,1]^{d}\rightarrow\mathbb{R}^{+}\), the value \(f(t[1],\cdots,t[d])\) is called the _score_ of tuple \(t\) under \(f\), also written as \(f(t)\). Function \(f\) is called _monotone_ if for any tuples \(t\) and \(s\), it holds that \(f(t)\leq f(s)\) whenever \(t[i]\leq s[i]\) for all \(1\leq i\leq d\). Given a set of monotone scoring functions \(\mathcal{F}\), a tuple \(t\)\(\mathcal{F}\)-_dominates_ another tuple \(s\neq t\), denoted as \(t\prec_{\mathcal{F}}s\), if \(\forall f\in\mathcal{F}\), \(f(t)\leq f(s)\). The _restricted skyline_ (rskyline) of \(D\) with respect to \(\mathcal{F}\) consists of all non-\(\mathcal{F}\)-dominated tuples.
### _Restricted Skyline Probability_
Following the uncertain data model used in previous related work [5], a \(d\)-dimensional uncertain dataset \(\mathcal{D}\) consists of \(m\) objects \(\{T_{1},\cdots,T_{m}\}\). Each object \(T_{i}\in\mathcal{D}\) is modeled by a set of instances \(\{t_{i,1},\cdots,t_{i,n_{i}}\}\) along with the probabilities \(\{\Pr(t_{i,1}),\cdots,\Pr(t_{i,n_{i}})\}\) for each instance to occur. To cope with large-scale datasets, we assume \(\mathcal{D}\) is organized by a spatial R-tree index. For simplicity of notation, we also use \(T_{i}\) to denote the set of its instances \(\{t_{i,1},\cdots,t_{i,n_{i}}\}\) and write \(t\in T_{i}\) to mean that \(t\) is an instance of \(T_{i}\). We assume that the sum of probabilities over all instances of an object may add up to less than 1 and that the existence probability may vary from one instance to another. Moreover, each object can only take one instance at a time and objects are independent of each other. Let \(I=\bigcup_{i=1}^{m}T_{i}\) denote the set of all instances and \(n=|I|=\sum_{i=1}^{m}n_{i}\) denote the number of instances.
Given an uncertain dataset \(\mathcal{D}\) and a set of monotone scoring functions \(\mathcal{F}\), an instance \(t\in T_{i}\) belongs to the rskyline of \(\mathcal{D}\) if and only if \(T_{i}\) occurs as \(t\) and none of the other objects appears as an instance that \(\mathcal{F}\)-dominates \(t\). We refer to this probability as the _rskyline probability_ of \(t\), denoted by \(\Pr_{\mathrm{rsky}}(t)\). With the assumption that each object can only take one instance at a time and objects are independent of each other, \(\Pr_{\mathrm{rsky}}(t)\) can be computed as follows,
\[\Pr_{\mathrm{rsky}}(t)=\Pr(t)\cdot\prod_{j=1,j\neq i}^{m}(1-\sum_{s\in T_{j},s\prec_{\mathcal{F}}t}\Pr(s)). \tag{1}\]
The rskyline probability of an object \(T_{i}\) is defined as the sum of rskyline probabilities of all its instances, i.e.,
\[\Pr_{\mathrm{rsky}}(T_{i})=\sum_{t\in T_{i}}\Pr_{\mathrm{rsky}}(t). \tag{2}\]
**Example 1**.: _Consider the uncertain dataset shown in Fig. 1. There are 4 objects in this uncertain dataset and their instances and existence probabilities are shown in the table. Given a set of scoring functions \(\mathcal{F}=\{\omega[1]t[1]+\omega[2]t[2]\mid\omega[1]\geq\omega[2]\}\), regions containing all instances \(\mathcal{F}\)-dominating \(b_{3}\) and being \(\mathcal{F}\)-dominated by \(b_{3}\) are shaded in gray and green, respectively. Thus, the rskyline probability of \(b_{3}\) can be computed as \(\Pr_{\mathrm{rsky}}(b_{3})=\Pr(b_{3})\times(1-\Pr(c_{1}))=0.18\). Similarly, we can derive that \(\Pr_{\mathrm{rsky}}(b_{1})=0.3\) and \(\Pr_{\mathrm{rsky}}(b_{2})=0.018\). The rskyline probability of object \(B\) is \(\Pr_{\mathrm{rsky}}(B)=\Pr_{\mathrm{rsky}}(b_{1})+\Pr_{\mathrm{rsky}}(b_{2})+ \Pr_{\mathrm{rsky}}(b_{3})=0.498\)._
Fig. 1: An uncertain dataset and rskyline probabilities of instances.
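For concreteness, below is a minimal Python sketch (our own illustration, not the implementation evaluated later) of Equations (1) and (2): the rskyline probability of an instance multiplies its own probability by, for every other object, the probability that the object does not occur as an instance \(\mathcal{F}\)-dominating it. The helper `f_dominates` stands for any \(\mathcal{F}\)-dominance test and is assumed to be supplied elsewhere.

```python
def rskyline_prob_instance(dataset, i, t, p_t, f_dominates):
    """Equation (1): Pr_rsky(t) for instance t of object i with probability p_t.

    `dataset` maps an object id to a list of (instance, probability) pairs;
    `f_dominates(s, t)` returns True iff s F-dominates t (any valid test).
    """
    prob = p_t
    for j, instances in dataset.items():
        if j == i:
            continue
        blocked = sum(p_s for s, p_s in instances if f_dominates(s, t))
        prob *= (1.0 - blocked)
    return prob


def rskyline_prob_object(dataset, i, f_dominates):
    """Equation (2): sum of rskyline probabilities over all instances of object i."""
    return sum(rskyline_prob_instance(dataset, i, t, p_t, f_dominates)
               for t, p_t in dataset[i])
```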
In this paper, we study the problem of computing rskyline probabilities of all instances, from which the rskyline probabilities of all objects can also be computed.
**Problem: All RSkyline Probabilities (ARSP) Problem**
**Input:** an uncertain dataset \(\mathcal{D}\) and a set of monotone scoring functions \(\mathcal{F}\).
**Output:** rskyline probability for all instances in \(I\), i.e.,
\[\operatorname{ARSP}=\{(t,\Pr_{\operatorname{rsky}}(t))\mid t\in I\}.\]
### _Conditional Lower Bound_
In what follows, we prove that no algorithm can compute rskyline probabilities of all instances in truly subquadratic time without preprocessing unless the Orthogonal Vectors conjecture fails.
\(\blacktriangleright\)**Conjecture (OVC) [9].** Given two sets \(A,B\), each of \(n\) vectors in \(\{0,1\}^{d}\), for every \(\delta>0\), there is a \(c\geq 1\) such that no \(O(n^{2-\delta})\)-time algorithm can determine if there is a pair \((a,b)\in A\times B\) such that \(a\times b=0\) with \(d=c\log n\).
**Theorem 1**.: _Given an uncertain dataset \(\mathcal{D}\) and a set of monotone scoring functions \(\mathcal{F}\), no algorithm can compute restricted skyline probabilities for all instances within \(O(n^{2-\delta})\) time for any \(\delta>0\), unless the Orthogonal Vectors conjecture fails._
Proof.: We establish a fine-grained reduction from the orthogonal vectors problem to the all rskyline probabilities problem. Given two sets \(A,B\), each of \(n\) vectors in \(\{0,1\}^{d}\), we construct an uncertain dataset \(\mathcal{D}\) and a set \(\mathcal{F}\) of monotone scoring functions as follows. First, for each vector \(b\in B\), we construct an uncertain tuple \(T_{b}\) with a single instance \(b\) and \(\Pr(b)=1\). Then, we construct an uncertain tuple \(T_{A}\) with \(n\) instances \(\xi(a)\) and \(\Pr(\xi(a))=\frac{1}{n}\) for all vectors \(a\in A\), where \(\xi(a)[i]=\frac{3}{2}\) if \(a[i]=0\) and \(\xi(a)[i]=\frac{1}{2}\) if \(a[i]=1\) for \(1\leq i\leq d\). Finally, let \(\mathcal{F}\) consist of \(d\) linear scoring functions \(f_{i}(t)=t[i]\) for \(1\leq i\leq d\), which means an instance \(t\) \(\mathcal{F}\)-dominates another instance \(s\) if and only if \(t[i]\leq s[i]\) for \(1\leq i\leq d\). We claim that for each instance \(\xi(a)\in T_{A}\), there exists an instance \(b\) from another uncertain tuple \(T_{b}\) \(\mathcal{F}\)-dominating \(\xi(a)\) if and only if \(a\) is orthogonal to \(b\).
Suppose there is a pair \((a,b)\in A\times B\) such that \(a\times b=0\); then \(a[i]=0\) or \(b[i]=0\) for \(1\leq i\leq d\). If \(a[i]=0\), then \(b[i]\) can be either 0 or 1 and \(\xi(a)[i]=\frac{3}{2}>b[i]\). Or if \(b[i]=0\), then \(a[i]\) can be either 0 or 1 and \(\xi(a)[i]\geq\frac{1}{2}>b[i]\). That is, \(b\prec_{\mathcal{F}}\xi(a)\). On the other side, suppose there is a pair of instances \(b\) and \(\xi(a)\) such that \(b\prec_{\mathcal{F}}\xi(a)\). For each \(1\leq i\leq d\), \(b[i]\) is either 0 or 1 and \(\xi(a)[i]\) is either \(\frac{3}{2}\) or \(\frac{1}{2}\). If \(b[i]=0\), then \(b[i]\cdot a[i]=0\). Or if \(b[i]=1\), then \(\xi(a)[i]=\frac{3}{2}\) since \(b[i]\leq\xi(a)[i]\). So \(a[i]=0\) according to the mapping \(\xi(\cdot)\). Hence \(a[i]\cdot b[i]=0\). Thus we conclude that there is a pair \((a,b)\in A\times B\) such that \(a\times b=0\) if and only if there exists an instance \(\xi(a)\in T_{A}\) with \(\Pr_{\operatorname{rsky}}(\xi(a))=0\). Since \(\mathcal{D}\) can be constructed in \(O(nd)\) time and whether such an instance exists can be determined in \(O(n)\) time, any \(O(n^{2-\delta})\)-time algorithm for all rskyline probabilities computation for some \(\delta>0\) would yield an algorithm for Orthogonal Vectors in \(O(nd+n^{2-\delta}+n)=O(n^{2-\delta^{\prime}})\) time for some \(\delta^{\prime}>0\) when \(d=\Theta(\log n)\), which contradicts the OVC.
## IV Algorithms for ARSP Problem
The linear scoring function is one of the most commonly used scoring functions [20]. Given a weight (preference) \(\omega\), the _score_ of tuple \(t\) is defined as \(S_{\omega}(t)=\sum_{i=1}^{d}\omega[i]t[i]\). Since the ordering of two tuples by score is independent of the magnitude of \(\omega\), we assume that \(\omega\) belongs to the unit \((d-1)\)-simplex \(\mathbb{S}^{d-1}\), called the _preference domain_, i.e., \(\sum_{i=1}^{d}\omega[i]=1\). To serve the specific preferences of an individual user, a notable approach is to add some constraints on the preference domain. Notationally, let the matrix inequality \(A\times\omega\leq b\) be a set of linear constraints on \(\mathbb{S}^{d-1}\) and \(c\) be the number of rows of \(A\). In this section, we propose two algorithms for the ARSP problem in the case where \(\mathcal{F}\) is a set of linear scoring functions whose weights are described by a set of linear constraints.
### _Baseline Algorithm_
Given an uncertain dataset \(\mathcal{D}\) and a set of linear scoring functions \(\mathcal{F}=\{S_{\omega}(\cdot)\mid\omega\in\mathbb{S}^{d-1}\wedge A\times \omega\leq b\}\), a straightforward method to calculate the rskyline probability of each instance \(t\) is to compute, by performing \(\mathcal{F}\)-dominance tests against all other instances, the product over the other objects of the probabilities that no instance \(\mathcal{F}\)-dominating \(t\) occurs. With the fact that the _preference region_\(\Omega=\{\omega\in\mathbb{S}^{d-1}\mid A\times\omega\leq b\}\) is a _closed convex polytope_, the \(\mathcal{F}\)-dominance relation between two instances can be determined by comparing their scores under the set of vertices \(V\) of \(\Omega\), where a weight \(\omega\) is called a vertex of \(\Omega\) if and only if it is the unique solution to a \(d\)-subset of the inequalities \(A\times\omega\leq b\).
**Theorem 2** (\(\mathcal{F}\)-dominance test [1]).: _Given a set of linear scoring functions \(\mathcal{F}=\{S_{\omega}(\cdot)\mid\omega\in\mathbb{S}^{d-1}\wedge A\times \omega\leq b\}\), let \(V\) be the set of vertices of the preference region \(\Omega=\{\omega\in\mathbb{S}^{d-1}\mid A\times\omega\leq b\}\), an instance \(t\)\(\mathcal{F}\)-dominates another instance \(s\) if and only if \(S_{\omega}(t)\leq S_{\omega}(s)\) holds for all weights \(\omega\in V\)._
With the above theorem, we construct a baseline algorithm as follows. Since the preference region \(\Omega\) is guaranteed to be closed, the set of linear constraints can be transformed into a set of points using the _polar duality_[21] such that the intersection of the linear constraints is the dual of the convex hull of the points. After the transformation, the baseline invokes the quickhull algorithm proposed in [22] to compute the set of vertices \(V\) of \(\Omega\). Then it sorts the set of instances using a scoring function \(S_{\omega}\) for some \(\omega\in V\). This guarantees that if an instance \(t\) precedes another instance \(s\) in the sorted set, then \(s\not\prec_{\mathcal{F}}t\). After that, for each instance \(t\), the baseline tests \(t\) against every instance of other objects preceding \(t\) to compute \(\Pr_{\operatorname{rsky}}(t)\) according to Equation 1. Since \(V\) can be computed in \(O(c^{2})\) time [23], where \(c\) is the number of linear constraints, and after that each \(\mathcal{F}\)-dominance test can be accomplished in \(O(dd^{\prime})\) time, where \(d^{\prime}=|V|\), the time complexity of the baseline algorithm is \(O(c^{2}+dd^{\prime}n^{2})\). Although the theoretical upper bound of \(d^{\prime}\) is \(\Theta(c^{\lfloor d/2\rfloor})\)[24], the actual size of \(V\) is experimentally observed to be small. Hence we conclude that the time complexity of the baseline algorithm is \(O(n^{2})\).
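The following Python sketch (ours, under the assumption that the vertex set \(V\) has already been computed, e.g., by quickhull) shows the two ingredients of the baseline: the vertex-based \(\mathcal{F}\)-dominance test of Theorem 2 and the initial sort under a single vertex weight.

```python
import numpy as np

def f_dominates(t, s, V):
    """Theorem 2: t F-dominates s iff S_w(t) <= S_w(s) for every vertex w in V."""
    return all(float(np.dot(w, t)) <= float(np.dot(w, s)) for w in V)

def sort_by_vertex_score(instances, w0):
    """Sorting by the score under any single vertex w0 guarantees that an
    instance is never F-dominated by an instance placed after it."""
    return sorted(instances, key=lambda t: float(np.dot(w0, t)))
```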
### _Tree-Traversal Algorithm_
A major challenge for the ARSP problem is the irregularity of the \(\mathcal{F}\)-dominance region. In this subsection, we overcome this challenge by reducing the ARSP problem to an ASP problem [13] in a higher dimensional data space. Afterwards, calling the state-of-the-art method [13] for the ASP problem yields an algorithm with near-optimal time complexity.
**Problem: All Skyline Probabilities (ASP) Problem**[13]
**Input:** an uncertain dataset \(\mathcal{D}\).
**Output:** the skyline probability \(\Pr_{\mathrm{sky}}(t)\) for each instance \(t\in I\), where, supposing \(t\) belongs to \(T_{i}\),
\[\Pr_{\mathrm{sky}}(t)=\Pr(t)\cdot\prod_{j=1,j\neq i}^{m}(1-\sum_{s\in T_{j},s \prec t}\Pr(s)).\]
Given a set of linear scoring functions \(\mathcal{F}=\{S_{\omega}(\cdot)\mid\omega\in\mathbb{S}^{d-1}\wedge A\times \omega\leq b\}\), let \(V=\{\omega_{1},\cdots,\omega_{d^{\prime}}\}\) denote the set of vertices of the preference region \(\Omega=\{\omega\in\mathbb{S}^{d-1}\mid A\times\omega\leq b\}\) and \(d^{\prime}=|V|\), and let \(S_{{}_{V}}(t)=(S_{{}_{\omega_{1}}}(t),\cdots,S_{{}_{\omega_{d^{\prime}}}}(t))\) denote the score vector of tuple \(t\) under all vertices in \(V\). The reduction maps each instance \(t\in I\) into an instance \(S_{{}_{V}}(t)\) with the same existence probability, and groups them according to the objects they belong to. In other words, the reduction constructs an uncertain dataset \(S_{{}_{V}}(I)\) in the \(d^{\prime}\)-dimensional _score space_, where \(S_{{}_{V}}(I)=\{S_{{}_{V}}(T_{i})=\{S_{{}_{V}}(t)\mid t\in T_{i}\}\mid T_{i} \in\mathcal{D}\}\) and for each instance \(S_{{}_{V}}(t)\in S_{{}_{V}}(I)\), \(\Pr(S_{{}_{V}}(t))=\Pr(t)\). According to Theorem 2, an instance \(t\) \(\mathcal{F}\)-dominates another instance \(s\neq t\) if and only if \(S_{{}_{V}}(t)\) dominates \(S_{{}_{V}}(s)\), where a tuple \(t\) dominates another tuple \(s\neq t\), denoted as \(t\prec s\), if \(\forall i\in[1,d],t[i]\leq s[i]\). This means the \(\mathcal{F}\)-dominance relation between any two instances \(t\) and \(s\) in the original space is equivalent to the dominance relationship between \(S_{{}_{V}}(t)\) and \(S_{{}_{V}}(s)\) in the mapped _score space_. Hence the rskyline probability \(\Pr_{\mathrm{rsky}}(t)\) of each instance \(t\in I\) in the original space equals the skyline probability \(\Pr_{\mathrm{sky}}(S_{{}_{V}}(t))\) of the corresponding instance \(S_{{}_{V}}(t)\) in the score space.
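A small sketch (our own notation) of the reduction itself: each instance is replaced by its score vector under the vertices of the preference region, after which \(\mathcal{F}\)-dominance reduces to plain coordinate-wise dominance.

```python
import numpy as np

def to_score_space(objects, V):
    """Map every instance t to S_V(t) = (S_{w_1}(t), ..., S_{w_{d'}}(t)),
    keeping its existence probability; `objects` is a list of lists of
    (instance, probability) pairs and V is the list of vertex weights."""
    W = np.asarray(V)                      # d' x d matrix of vertex weights
    return [[(W @ np.asarray(t), p) for t, p in Ti] for Ti in objects]

def dominates(u, v):
    """Coordinate-wise dominance in the score space (u != v assumed)."""
    return bool(np.all(u <= v))
```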
Thus, after the reduction, we call a procedure \(kd\mbox{-}\textsc{ASP}^{*}\) to compute the skyline probability for all instances in \(S_{{}_{V}}(I)\). This procedure is implemented based on the state-of-the-art \(kd\)-tree traversal algorithm for the ASP problem proposed in [13]. We introduce some implementation optimizations which do not improve the time complexity but do enhance its experimental performance. The original algorithm first constructs a \(kd\)-tree \(T\) on \(S_{{}_{V}}(I)\), and then progressively computes skyline probabilities of all instances by performing a preorder traversal of \(T\). In our implementation, we integrate the preorder traversal into the construction of \(T\) and also prune the construction of a subtree if all instances included in the subtree have zero rskyline probability.
Concretely, \(kd\mbox{-}\textsc{ASP}^{*}\) always keeps a path from the root of \(T\) to the currently reached node in main memory. For each node \(N\) in the path, letting \(P\) be the set of instances contained in \(N\) and \(P_{\min}\) (\(P_{\max}\)) denote the minimum (maximum) corner of the minimum bounding rectangle of \(P\), \(kd\mbox{-}\textsc{ASP}^{*}\) maintains the following information: 1) a set \(C\) including the instances that dominate \(P_{\max}\), 2) an array \(\sigma=\langle\sigma[1],\cdots,\sigma[m]\rangle\), where \(\sigma[i]=\sum_{t\in T_{i},S_{{}_{V}}(t)\prec P_{\min}}\Pr(t)\) is the sum of probabilities over all instances of \(S_{{}_{V}}(T_{i})\) that dominate \(P_{\min}\), 3) a value \(\beta=\prod_{1\leq i\leq m,\sigma[i]\neq 1}(1-\sigma[i])\), and 4) a counter \(\chi=|\{i\mid\sigma[i]=1\}|\). It is easy to see that for the root node of \(T\), \(C=S_{{}_{V}}(I)\), \(\sigma[i]=0\) for \(1\leq i\leq m\), \(\beta=1\), and \(\chi=0\).
Now, assuming the information of all nodes in the maintained path is available, \(kd\mbox{-}\textsc{ASP}^{*}\) constructs the next arriving node \(N\) as follows. Again, let \(P\) denote the set of instances in \(N\). For each point \(S_{{}_{V}}(t)\in C_{par}\), where \(C_{par}\) is the set \(C\) of the parent node of \(N\), it tests \(S_{{}_{V}}(t)\) against \(P_{\min}\). If \(S_{{}_{V}}(t)\prec P_{\min}\), say \(t\in T_{i}\), it updates \(\sigma[i]\), \(\beta\), and \(\chi\) accordingly (lines 13-16 in Algorithm 1). Otherwise, it further tests \(S_{{}_{V}}(t)\) against \(P_{\max}\) and inserts \(S_{{}_{V}}(t)\) into the set \(C\) of \(N\) if \(S_{{}_{V}}(t)\prec P_{\max}\). When \(\chi\) becomes one, it is known that \(\Pr_{\mathrm{sky}}(P_{\min})=0\), and the same holds for all instances in \(N\) due to the transitivity of the dominance relation. Therefore, \(kd\mbox{-}\textsc{ASP}^{*}\)
prunes the construction of the subtree rooted at \(N\) and returns to its parent node. Otherwise, \(kd\mbox{-}\textsc{ASP}^{*}\) keeps growing the path (partitioning the set \(P\) like a \(kd\)-tree) until it reaches a node including only one instance \(S_{{}_{V}}(t)\), and then computes \(\Pr_{\mathrm{rsky}}(t)=\Pr_{\mathrm{sky}}(S_{{}_{V}}(t))\) based on \(\beta\) and \(\sigma[i]\).
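Below is a compact Python sketch (our own simplification of the bookkeeping described above; all helper names are ours) of how \(\sigma\), \(\beta\), and \(\chi\) are updated when an instance is found to dominate the current node's minimum corner, and how a leaf probability is read off from them.

```python
def absorb_dominating_instance(i, p, sigma, beta, chi, eps=1e-12):
    """Add probability p of an instance of object i that dominates P_min."""
    if sigma[i] != 1.0:
        beta /= (1.0 - sigma[i])       # remove the stale factor of object i
    sigma[i] += p
    if abs(sigma[i] - 1.0) < eps:      # object i now dominates P_min with certainty
        sigma[i] = 1.0
        chi += 1
    else:
        beta *= (1.0 - sigma[i])       # reinstall the updated factor
    return beta, chi


def leaf_probability(p_t, i, sigma, beta, chi):
    """Skyline probability of the single instance t (of object i) at a leaf,
    assuming sigma[i] < 1, i.e., object i cannot fully dominate its own instance."""
    if chi > 0:                        # some other object dominates with probability 1
        return 0.0
    return p_t * beta / (1.0 - sigma[i])   # divide out object i's own factor
```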
**Example 2**.: _As shown in Fig. 2, suppose all instances of an object occur with the same probability. The original algorithm keeps a whole \(kd\)-tree in the main memory but \(kd\)-ASP\({}^{*}\) only maintains a path from the root node, e.g., \(R_{1}\to R_{2}\to R_{5}\). Moreover, when \(kd\)-ASP\({}^{*}\) traverses from \(R_{1}\) to \(R_{3}\), it updates \(\sigma[2]\) to \(1\) and \(\chi\) to \(1\) since \(t_{2,1}\prec R_{3}\). This indicates that the skyline probability of all instances in the subtree rooted at \(R_{3}\) is zero, thus \(kd\)-ASP\({}^{*}\) prunes the construction of the subtree rooted at \(R_{3}\) as shown in Fig. 2(b)._
The pseudocode of the derived algorithm is presented in Algorithm 1. As stated previously, the computation of \(V\) takes \(O(c^{2})\) time, where \(c\) is the number of linear constraints. The score vector of an instance can be derived in \(O(dd^{\prime})\) time, where \(d^{\prime}=|V|\). And given a set of \(n\) instances in \(d^{\prime}\)-dimensional data space, the time complexity of \(kd\)-ASP\({}^{*}\) is \(O(n^{2-1/d^{\prime}})\)[13]. Therefore, the overall time complexity of Algorithm 1 is \(O(c^{2}+d^{\prime}dn+n^{2-1/d^{\prime}})=O(n^{2-1/d^{\prime}})\).
Next, we claim that Theorem 1 still holds even if we restrict \(\mathcal{F}\) to linear scoring functions whose weights are described by a set of linear constraints, which proves that Algorithm 1 achieves a near-optimal time complexity. Let \(\mathcal{F}\) be the set of all linear scoring functions. Given two instances \(t\) and \(s\), if \(t\prec_{\mathcal{F}}s\), then \(t[i]\leq s[i]\) for \(1\leq i\leq d\), since \(\omega_{i}\in\Omega\) where \(\omega_{i}[i]=1\) and \(\omega_{i}[j]=0\) for all \(1\leq j\neq i\leq d\). Conversely, if \(t[i]\leq s[i]\) for \(1\leq i\leq d\), then \(t\prec_{\mathcal{F}}s\) since all linear scoring functions are monotone. Hence, we can conclude that \(t\prec_{\mathcal{F}}s\) if and only if \(t[i]\leq s[i]\) for \(1\leq i\leq d\). Thus, with the same reduction established in the proof of Theorem 1, it is known that there is no subquadratic-time algorithm for the ARSP problem even if \(\mathcal{F}\) is limited to linear scoring functions whose weights are described by a set of linear constraints.
**Remark.** Algorithm 1 is also correct if \(kd\)-ASP\({}^{*}\) adopts any other space-partitioning tree. The only detail that needs to be modified is the method used to partition the data space (lines 19-21 in Algorithm 1). In our experimental study, we implement a variant of Algorithm 1 based on the quadtree, which partitions the data space in all dimensions each time. It is observed that choosing an appropriate space-partitioning tree can improve the performance of Algorithm 1, e.g., the quadtree-based implementation works well in low-dimensional data spaces, while the \(kd\)-tree-based implementation has better scalability with the data dimensionality.
### _Branch-and-Bound Algorithm_
A drawback of Algorithm 1 is that it needs to map all instances into the score space in advance each time. In this subsection, we show how to conduct the mapping on the fly so that unnecessary computations can be avoided.
Recall that if instances in \(I\) are sorted in ascending order according to their scores under a scoring function \(f\in\mathcal{F}\), then an instance \(t\) will not be \(\mathcal{F}\)-dominated by any instance \(s\) after \(t\). Supposing instances are processed in the sorted order, the score vector \(S_{{}_{V}}(t)\) is unnecessary until \(t\) is to be processed. With this observation, we design efficient pruning strategies to tell whether an instance or a set of instances can be safely ignored during the computation, and if so, their mappings can be avoided. Unlike conducting probabilistic rskyline analysis under top-\(k\) or threshold semantics, it is easy to see that maintaining upper and lower bounds on each instance's rskyline probability as pruning criteria is of no help since our goal is to compute exact rskyline probabilities of all instances. Thus, the only pruning strategy that can be utilized is that if an instance \(t\) is \(\mathcal{F}\)-dominated by another instance \(s\) and \(\Pr_{\mathrm{rsky}}(s)\) is zero, then \(\Pr_{\mathrm{rsky}}(t)\) is also zero due to the transitivity of \(\mathcal{F}\)-dominance. A straightforward method for efficiently performing this pruning strategy is to keep a rskyline of all instances processed so far whose rskyline probability is zero and compare the next instance to be processed against all instances in the rskyline beforehand. However, the maintained rskyline may become very large on anti-correlated datasets. In the following theorems, we prove that a set \(P\) of size at most \(m\) is sufficient for the pruning tests and that all instances with zero rskyline probability can be safely ignored without affecting subsequent rskyline probability computations.
**Theorem 3**.: _All instances with zero rskyline probability can be safely discarded._
Proof.: Let \(t\in T_{i}\) be an instance with \(\Pr_{\mathrm{rsky}}(t)=0\). Recalling the formulation of the rskyline probability in Equation 1, all other instances of \(T_{i}\) are not affected by \(t\). This also holds for instances of other objects \(T_{j}\) that are not \(\mathcal{F}\)-dominated by \(t\). Now, suppose \(s\) is an instance of \(T_{j\neq i}\) and \(s\) is \(\mathcal{F}\)-dominated by \(t\). Since \(t\prec_{\mathcal{F}}s\) and \(\Pr_{\mathrm{rsky}}(t)=0\), it is easy to see that there exists a set of objects \(\mathcal{T}=\{T_{k}\mid k\neq j\wedge k\neq i\}\) such that all instances of each object \(T_{k}\in\mathcal{T}\) \(\mathcal{F}\)-dominate \(t\). Moreover, because \(\mathcal{F}\)-dominance is asymmetric, it is known that there exists at least one object \(T_{k}\in\mathcal{T}\), all instances of which have non-zero rskyline probability. Therefore, according to the transitivity of \(\mathcal{F}\)-dominance, \(s\) is also \(\mathcal{F}\)-dominated by all instances of \(T_{k}\) and thus \(\Pr_{\mathrm{rsky}}(s)=0\).
Fig. 2: Running example for Algorithm 1 and implementation optimizations.

**Theorem 4**.: _Let \(V=\{\omega_{1},\cdots,\omega_{d^{\prime}}\}\) be the set of vertices of the preference region \(\Omega=\{\omega\in\mathbb{S}^{d-1}\mid A\times\omega\leq b\}\). There is a set \(P\) such that for any instance \(t\), \(\Pr_{\mathrm{rsky}}(t)=0\) if and only if \(S_{{}_{V}}(t)\) is dominated by some instance \(p\in P\), and \(|P|\leq m\)._
Proof.: We start with the construction of the pruning set \(P\). For each object \(T_{i}\) with \(\sum_{t\in T_{i}}\Pr(t)=1\), we insert an instance \(p_{i}=(\max_{t\in T_{i}}S_{\omega_{1}}(t),\cdots,\max_{t\in T_{i}}S_{\omega_{d^{\prime}}}(t))\) into \(P\). Note that the above construction requires mapping all instances into the score space in advance only to facilitate the understanding of the proof; in the proposed algorithm, we construct \(P\) incrementally during the computation. It is straightforward to verify that \(|P|\leq m\) from the construction of \(P\). Then, letting \(t\) denote an instance of object \(T_{i}\), we prove that \(\Pr_{\mathrm{rsky}}(t)=0\) if and only if \(S_{{}_{V}}(t)\) is dominated by some \(p_{j\neq i}\in P\). From Equation 1, it is easy to see that \(\Pr_{\mathrm{rsky}}(t)=0\) if and only if there exists an object \(T_{j\neq i}\) such that every instance \(s\in T_{j}\) \(\mathcal{F}\)-dominates \(t\) and \(\sum_{s\in T_{j}}\Pr(s)=1\). That is, \(S_{{}_{V}}(s)\prec S_{{}_{V}}(t)\) holds for all instances \(s\in T_{j}\) according to Theorem 2. Moreover, since a set of instances dominates another instance if and only if the maximum corner of their minimum bounding rectangle dominates that instance, it is derived that \(\Pr_{\mathrm{rsky}}(t)=0\) if and only if \(p_{j}=(\max_{s\in T_{j}}S_{\omega_{1}}(s),\cdots,\max_{s\in T_{j}}S_{\omega_{d^{\prime}}}(s))\prec S_{{}_{V}}(t)\). Based on the construction of \(P\), it is known that all such \(p_{j}\) are included in \(P\), thus completing the proof.
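A short sketch (under the assumptions of Theorem 4; all names are ours) of the pruning set \(P\) and the resulting zero-probability test in the score space:

```python
import numpy as np

def build_pruning_set(objects, V):
    """One max corner per object whose instance probabilities sum to 1."""
    P = {}
    for j, Tj in enumerate(objects):
        if abs(sum(p for _, p in Tj) - 1.0) > 1e-12:
            continue                    # object may be absent: it cannot force zeros
        scores = np.array([[np.dot(w, t) for w in V] for t, _ in Tj])
        P[j] = scores.max(axis=0)       # maximum corner of the object's score vectors
    return P

def has_zero_rskyline_prob(t, i, V, P):
    """True iff S_V(t) is dominated by the max corner of some other object in P."""
    s_t = np.array([np.dot(w, t) for w in V])
    return any(np.all(corner <= s_t) for j, corner in P.items() if j != i)
```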
Now, we integrate the above strategies into the proposed algorithm and the pseudocode is shown in Algorithm 2. The algorithm first computes the set of vertices \(V\) of the preference region \(\Omega\) and initializes \(m\) aggregated R-trees \(R_{1},\cdots,R_{m}\), where \(R_{i}\) is used to incrementally index \(S_{{}_{V}}(t)\) for all instances \(t\in T_{i}\) with \(\Pr_{rs}(t)>0\) that have been processed by the algorithm. After that, the algorithm traverses the index \(R\) in a _best-first_ manner. Specifically, it first inserts the root of R-tree on \(\mathcal{D}\) into a _minimum heap_\(H\) sorted according to its score under some \(S_{\omega\in V}(\cdot)\), where the score of a node \(N\) is defined as \(S_{\omega}(N_{\min})\). Then, at each time, it handles the top node \(N\) popped from \(H\). If \(S_{{}_{V}}(N_{\min})\) is dominated by some point in \(P\), then the algorithm ignores all instances in \(N\) since their rskyline probabilities are zero due to the transitivity of \(\mathcal{F}\)-dominance. Otherwise, if \(N\) is a leaf node, say \(t\in T_{i}\) is contained in \(N\), the algorithm computes \(S_{{}_{V}}(t)\) and issues the window query with the origin and \(S_{{}_{V}}(t)\) on each aggregated R-tree \(R_{j\neq i}\) to compute \(\sigma[j]=\sum_{s\in T_{j},s\prec r}\Pr(s)\) and inserts \(S_{{}_{V}}(t)\) into \(R_{i}\). Then it updates \(p_{i}\), which records the maximum corner of the minimum bounding rectangle of \(S_{{}_{V}}(t)\) for all instances \(t\in T_{i}\) with \(\Pr_{\mathrm{rsky}}(t)>0\) that have been processed so far, and inserts \(p_{i}\) into \(P\) if all instances in \(T_{i}\) have non-zero rskyline probability. Or if \(N\) is an internal node, it inserts all non-pruned child nodes of \(N\) into \(H\) for further computation.
```
Input: an uncertain dataset \(\mathcal{D}\), a set of linear scoring functions
       \(\mathcal{F}=\{S_{\omega}(\cdot)\mid\omega\in\mathbb{S}^{d-1}\wedge A\times\omega\leq b\}\)
Output: the rskyline probabilities of all instances

Compute the vertices \(V\) of \(\Omega=\{\omega\in\mathbb{S}^{d-1}\mid A\times\omega\leq b\}\);
Initialize a min-heap \(H\) with respect to \(S_{\omega}(\cdot)\) and \(m\) \(d^{\prime}\)-dimensional aggregated R-trees \(R_{1},\cdots,R_{m}\);
\(P\leftarrow\emptyset\); \(\mathrm{ARSP}\leftarrow\emptyset\);
Insert the root of the R-tree on \(\mathcal{D}\) into \(H\);
while \(H\) is not empty do
    Pop the top node \(N\) from \(H\);
    if \(N\) is not pruned by \(P\) then
        if \(N=\{t\}\) is a leaf node (say \(t\in T_{i}\)) then
            \(S_{{}_{V}}(t)\leftarrow\) compute \(t\)'s score vector under \(V\);
            \(\Pr_{\mathrm{rsky}}(t)\leftarrow\Pr(t)\);
            foreach aggregated R-tree \(R_{j\neq i}\) do
                \(\sigma[j]\leftarrow\) perform a window query with the origin and \(S_{{}_{V}}(t)\) on \(R_{j}\);
                \(\Pr_{\mathrm{rsky}}(t)\leftarrow\Pr_{\mathrm{rsky}}(t)\times(1-\sigma[j])\);
            Insert \(S_{{}_{V}}(t)\) into \(R_{i}\);
            Insert \((t,\Pr_{\mathrm{rsky}}(t))\) into \(\mathrm{ARSP}\);
            \(cnt[i]\leftarrow cnt[i]+1\);
            foreach \(j\leftarrow 1\) to \(|V|\) do
                \(p_{i}[j]\leftarrow\max(p_{i}[j],S_{{}_{V}}(t)[j])\);
            if \(cnt[i]=|T_{i}|\) then
                Insert \(p_{i}\) into \(P\);
        else
            foreach child node \(N^{\prime}\) of \(N\) do
                if \(N^{\prime}\) is not pruned by \(P\) then
                    Insert \(N^{\prime}\) into \(H\);
return \(\mathrm{ARSP}\);
```
**Algorithm 2** Branch-and-Bound Algorithm
With the fact that Algorithm 2 only visits the nodes which contain instances \(t\) with \(\Pr_{\mathrm{rsky}}(t)>0\) and never accesses the same node twice, it is easy to prove that the number of nodes accessed by Algorithm 2 is optimal for deriving the final result of the ARSP problem. Since \(m-1\) orthogonal range queries are performed on aggregated R-trees for each instance in \(I\), the expected time complexity of Algorithm 2 is \(O(nm\log n)\).
## V Sublinear-time Algorithm
In this section, we focus on a special class of linear constraints which consists of \(d-1\) ratio bound constraints of the form \(l_{i}\leq\omega[i]/\omega[d]\leq h_{i}\) for \(1\leq i<d\). For brevity of notation, we use \(R=\prod_{i=1}^{d-1}[l_{i},h_{i}]\) to denote the set of ratio bound constraints in what follows. In [2], Liu et al. have investigated this special case of \(\mathcal{F}\)-dominance on certain datasets, named _eclipse-dominance_, and defined the _eclipse_ query as retrieving the set of all non-_eclipse-dominated_ tuples. We refer the readers to their paper for the wide applications of this query. Although we focus on uncertain datasets in this section, our methods can also be used to design improved algorithms for eclipse query processing.
### _Reduction to Range Searching Problem_
Given a set of ratio bound constraints \(R=\prod_{i=1}^{d-1}[l_{i},h_{i}]\), the \(\mathcal{F}\)-dominance test condition stated in Theorem 2 can be
equivalently represented as determining whether the following linear programming problem has a non-negative optimal value,
\[\text{minimize}\quad S_{\omega}(s)-S_{\omega}(t)=\sum_{i=1}^{d}(s[i]- t[i])\times\omega[i]\] \[\text{subject to}\quad l_{i}\leq\omega[i]/\omega[d]\leq h_{i}\qquad i \in\{1,\cdots,d-1\} \tag{3}\] \[\sum_{i=1}^{d}\omega[i]=1\]
The crucial observation is that the sign of the minimum value of the above linear programming problem can be determined more efficiently. Specifically, since \(\omega[d]>0\), transforming the objective function \(\sum_{i=1}^{d}(s[i]-t[i])\times\omega[i]\) into \(\sum_{i=1}^{d-1}(s[i]-t[i])\times\omega[i]/\omega[d]+(s[d]-t[d])\) does not affect the sign of the minimum value. After that, each coordinate of the new unknowns \(r[i]=\omega[i]/\omega[d]\) for \(1\leq i<d\) can be chosen independently in the corresponding interval \([l_{i},h_{i}]\). Thus, the minimum value of the new objective function can be obtained directly in \(O(d)\) time, and so can the sign of the original minimum.
**Theorem 5** (Efficient \(\mathcal{F}\)-dominance test).: _Let \(\mathcal{F}\) be a set of linear scoring functions whose weights are described by ratio bound constraints \(R=\prod_{i=1}^{d-1}[l_{i},h_{i}]\), then \(t\prec_{\mathcal{F}}s\) if and only if \(\sum_{i=1}^{d-1}[\mathbf{1}(s[i]>t[i])\times l_{i}+(1-\mathbf{1}(s[i]>t[i])) \times h_{i}](s[i]-t[i])+(s[d]-t[d])\geq 0\), where \(\mathbf{1}(\cdot)\) is the indicator function._
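For illustration, the test of Theorem 5 takes only a few lines of code. The following is a minimal C++ sketch (the `Point` type and the function name are illustrative and not part of our implementation): it evaluates the minimized objective in \(O(d)\) time and checks its sign, choosing \(l_{i}\) when \(s[i]>t[i]\) and \(h_{i}\) otherwise.

```
#include <cstddef>
#include <vector>

using Point = std::vector<double>;

// Sketch of the O(d) F-dominance test of Theorem 5 under ratio bound
// constraints R = prod_{i=1}^{d-1} [l[i], h[i]] (0-indexed arrays here).
// Returns true if t F-dominates s.
bool fDominates(const Point& t, const Point& s,
                const std::vector<double>& l, const std::vector<double>& h) {
    const std::size_t d = t.size();
    double value = s[d - 1] - t[d - 1];               // the (s[d] - t[d]) term
    for (std::size_t i = 0; i + 1 < d; ++i) {
        const double diff = s[i] - t[i];
        // l[i] when s[i] > t[i], h[i] otherwise, as in the indicator of Theorem 5.
        value += (diff > 0 ? l[i] : h[i]) * diff;
    }
    return value >= 0;                                // non-negative minimum value
}
```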
Now, consider the set \(I\) of all instances as a set of points in the data space \([0,1]^{d}\), with the \(i\)-th attribute as the coordinate in the \(i\)-th dimension. For a point \(t\in I\), partition the data space into \(2^{d-1}\) regions using the \(d-1\) hyperplanes \(x[i]=t[i]\) for \(1\leq i<d\). Each resulting region can be identified by a \((d-1)\)-bit code such that the \(i\)-th bit is _zero_ if the \(i\)-th coordinates of points in this region are less than \(t[i]\), and _one_ otherwise. We refer to the region whose identifier is \(k\) in decimal as region \(k\); e.g., region 0 contains all points whose first \(d-1\) coordinates are less than those of \(t\), and region \(2^{d-1}-1\) contains all points whose first \(d-1\) coordinates are greater than those of \(t\). Theorem 5 further indicates that, given a set of ratio bound constraints \(R=\prod_{i=1}^{d-1}[l_{i},h_{i}]\), for \(0\leq k<2^{d-1}\), all points in region \(k\) that \(\mathcal{F}\)-dominate \(t\) lie in the following closed half-space,
\[x[d]\leq\sum_{i=1}^{d-1}[(1-|k|_{2}[i])\times l_{i}+|k|_{2}[i]\times h_{i}](t [i]-x[i])+t[d], \tag{4}\]
where \(|k|_{2}[i]\) denotes the \(i\)-th bit of the binary representation of \(k\).
**Example 3**.: _See Fig. 3(a) for an illustration. For point \(t_{2,3}\), region 0 contains the set of points \(\{t\in I\mid t[1]\leq 9\}\). Suppose that the ratio bound constraint is \(R=[0.5,2]\); then the closed half-space containing all points in region 0 that \(\mathcal{F}\)-dominate \(t_{2,3}\) is \(t[2]\leq-0.5t[1]+16.5\). Since \(t_{3,1}\) lies in that half-space, we conclude that \(t_{3,1}\prec_{\mathcal{F}}t_{2,3}\)._
The above procedure reduces the problem of finding all instances that \(\mathcal{F}\)-dominate \(t\) to a series of \(2^{d-1}\) half-space range searching problems [10]. Formally, the half-space range searching problem asks to preprocess a set of points in \(\mathbb{R}^{d}\) into a data structure such that all points lying below or on a query hyperplane can be reported quickly. This problem can be efficiently solved using the well-known _point-hyperplane duality_ [25]. To be specific, the duality transform maps a point \(p=(p[1],\cdots,p[d])\in\mathbb{R}^{d}\) into the hyperplane \(p^{*}:x[d]=p[1]x[1]+\cdots+p[d-1]x[d-1]-p[d]\), and a hyperplane \(h:x[d]=\alpha[1]x[1]+\cdots+\alpha[d-1]x[d-1]-\alpha[d]\) into the point \(h^{*}=(\alpha[1],\cdots,\alpha[d])\). It is known that if \(p\) lies above (resp., below, on) \(h\), then \(h^{*}\) lies above (resp., below, on) \(p^{*}\). The dual version of the half-space searching problem then becomes: given a set of \(n\) hyperplanes in \(\mathbb{R}^{d}\) and a query point \(q\), report all hyperplanes lying above or through \(q\). Let \(H\) be a set of \(n\) hyperplanes in \(\mathbb{R}^{d}\); the _arrangement_ of \(H\), denoted by \(\mathcal{A}(H)\), is a subdivision of \(\mathbb{R}^{d}\) into _faces_ of dimension \(k\) for \(0\leq k\leq d\), where each face is a maximal connected region of \(\mathbb{R}^{d}\) whose points have the same relative position with respect to every hyperplane in \(H\). For a query point \(q\), let \(\lambda(q,H)\) denote the set of hyperplanes in \(H\) lying above or through \(q\). It is easy to verify that all points \(p\) lying on the same face \(f\) of \(\mathcal{A}(H)\) have the same \(\lambda(p,H)\), denoted by \(\lambda(f,H)\). Thus, with a precomputation of \(\lambda(f,H)\) for each face \(f\) of \(\mathcal{A}(H)\) and the following structure for point location in \(\mathcal{A}(H)\), \(\lambda(q,H)\) can be computed in logarithmic time.
**Theorem 6** (Structure for Point Location [26]).: _Given a set \(H\) of \(n\) hyperplanes in \(\mathbb{R}^{d}\) and a query point \(q\), there is a data structure of size \(O(n^{d+\varepsilon})\) which can be constructed in \(O(n^{d+\varepsilon})\) expected time for any \(\varepsilon>0\), so that the face of \(\mathcal{A}(H)\) containing \(q\) can be located in \(O(\log n)\) time._
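To make the reduction concrete, the following C++ sketch (illustrative names, 0-indexed arrays, and an assumed bit ordering of the region identifier) builds, for a processed instance \(t\) and a region \(k\), the bounding hyperplane of Equation 4 in the form \(x[d]=\alpha[1]x[1]+\cdots+\alpha[d-1]x[d-1]-\alpha[d]\) and returns its dual point, which serves as the point location query; for \(t_{2,3}=(9,12)\), region 0, and \(R=[0.5,2]\) it yields the dual point \((-0.5,-16.5)\) used in Example 4 below.

```
#include <cstddef>
#include <vector>

// Sketch: dual query point of the bounding hyperplane of Equation 4 for
// instance t and region k (bit i of k assumed to be (k >> i) & 1).
std::vector<double> dualQueryPoint(const std::vector<double>& t, unsigned k,
                                   const std::vector<double>& l,
                                   const std::vector<double>& h) {
    const std::size_t d = t.size();
    std::vector<double> alpha(d);
    double constant = t[d - 1];                        // the t[d] term of Eq. 4
    for (std::size_t i = 0; i + 1 < d; ++i) {
        const double c = ((k >> i) & 1u) ? h[i] : l[i];// coefficient from Eq. 4
        alpha[i] = -c;                                 // slope part of the hyperplane
        constant += c * t[i];
    }
    alpha[d - 1] = -constant;                          // alpha[d] = -(sum c_i t[i] + t[d])
    return alpha;                                      // e.g. (-0.5, -16.5) in Example 4
}
```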
Returning to the ARSP problem, we propose an improved algorithm based on the above reduction. In the preprocessing stage, for each point \(t\in I\), say \(t\in T_{i}\), with \(\Pr_{\mathrm{sky}}(t)>0\), the algorithm partitions all instances from other objects into \(2^{d-1}\) sets \(I_{t,k}=\{s\in I\setminus T_{i}\mid s\text{ in region }k\text{ partitioned by }t\}\) for \(0\leq k<2^{d-1}\). Then, it builds the structure stated in Theorem 6 for the set of dual hyperplanes \(I_{t,k}^{*}\) of each set \(I_{t,k}\), and computes an array \(\sigma_{f}=\langle\sigma_{f}[j]\mid 1\leq j\leq m\rangle\) of aggregated values for each face \(f\) of \(\mathcal{A}(I_{t,k}^{*})\), where \(\sigma_{f}[j]=\sum_{s^{*}\in\lambda(f,I_{t,k}^{*})\wedge s\in T_{j}}\Pr(s)\), i.e., the sum of probabilities over all instances of object \(T_{j}\) lying below or on the hyperplane \(p^{*}\), where point \(p\) lies in face \(f\).
In the query processing stage, given a set of ratio bound constraints \(R=\prod_{i=1}^{d-1}[l_{i},h_{i}]\), the algorithm processes each point \(t\) as follows. If no auxiliary structure was built for that point, it reports \(\Pr_{\mathrm{rsky}}(t)\) as zero. Otherwise, the algorithm first initializes \(\Pr_{\mathrm{rsky}}(t)=\Pr(t)\) and \(\sigma[i]=0\) for \(1\leq i\leq m\), where \(\sigma[i]\) records the sum of existence probabilities of instances from object \(T_{i}\) that have been found to \(\mathcal{F}\)-dominate \(t\) so far. Then, for \(0\leq k<2^{d-1}\), let \(h_{t,k}\) denote the bounding hyperplane of the half-space in region \(k\) defined in Equation 4; the algorithm performs the point location query \(h^{*}_{t,k}\) on the structure built for the set of hyperplanes \(I^{*}_{t,k}\), and updates \(\Pr_{\mathrm{rsky}}(t)\) according to Equation 1 based on the array \(\sigma_{f}\). Specifically, for \(1\leq j\leq m\), it updates \(\Pr_{\mathrm{rsky}}(t)\) to \(\Pr_{\mathrm{rsky}}(t)\times(1-\sigma[j]-\sigma_{f}[j])/(1-\sigma[j])\) and adds \(\sigma_{f}[j]\) to \(\sigma[j]\). After all queries, it is easy to verify that \(\Pr_{\mathrm{rsky}}(t)\) is the exact rskyline probability of \(t\). Since each point location query can be performed in \(O(\log n)\) time and the update of \(\Pr_{\mathrm{rsky}}(t)\) requires \(O(m)\) time for each \(\sigma_{f}\), the time complexity of the reduction-based algorithm is \(O\big{(}2^{d}mn\log n\big{)}\).

Fig. 3: Reduction to range searching problem and point-hyperplane duality.
**Example 4**.: _Continuing with point \(t_{2,3}\) in Fig. 3(a), the set \(I_{t_{2,3},0}\) includes points \(t_{1,1},t_{1,2},t_{3,1},t_{3,2},t_{4,1}\), and the set of dual hyperplanes is plotted in Fig. 3(b). By performing the point-location query \(q=(-0.5,-16.5)\), which is the dual point of \(t[2]=-0.5t[1]+16.5\), the face \(f\) containing \(q\) is returned. Since the aggregated value \(\sigma_{f}[3]=\Pr(t_{3,1})+\Pr(t_{3,2})\) was precomputed for \(f\) in the preprocessing stage, the algorithm updates \(\Pr_{\mathrm{rsky}}(t_{2,3})\) to \(\Pr_{\mathrm{rsky}}(t_{2,3})\times(1-\sigma[3]-\sigma_{f}[3])/(1-\sigma[3])\) and adds \(\sigma_{f}[3]\) to \(\sigma[3]\)._
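The per-region probability update performed in the query stage above is equally short; a minimal sketch (illustrative names) follows, in which `sigma[j]` accumulates the probability mass of object \(T_{j}\) found to \(\mathcal{F}\)-dominate \(t\) so far and `sigmaF` is the aggregated array stored at the located face.

```
#include <cstddef>
#include <vector>

// Sketch: fold the aggregated array sigmaF of the located face into the
// running estimate of Pr_rsky(t), as described in the query stage above.
void applyFace(double& prRsky, std::vector<double>& sigma,
               const std::vector<double>& sigmaF) {
    for (std::size_t j = 0; j < sigma.size(); ++j) {
        if (sigmaF[j] == 0.0) continue;                // nothing new from object T_j
        if (sigma[j] + sigmaF[j] >= 1.0) {             // t is F-dominated with probability 1
            prRsky = 0.0;
            sigma[j] = 1.0;
            continue;
        }
        prRsky *= (1.0 - sigma[j] - sigmaF[j]) / (1.0 - sigma[j]);
        sigma[j] += sigmaF[j];
    }
}
```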
### _Sublinear-time Algorithm_
To achieve better time complexity, two bottlenecks of the above algorithm must be addressed. 1) There are in total \(2^{d-1}\) arrays of aggregated values for each point, and it seems unrealistic to merge them efficiently according to Equation 1. 2) Points are processed sequentially by the algorithm; since all of them must be scanned at least once, the time complexity is \(\Omega(n)\). In what follows, we introduce two strategies to resolve these two inefficiencies.
**Multi-level Strategy.** The reason why we have to merge \(2^{d-1}\) arrays for each point \(t\) is that the half-spaces in the \(2^{d-1}\) regions containing points that \(\mathcal{F}\)-dominate \(t\) differ from each other. Thus, the above algorithm performs \(2^{d-1}\) point location queries, one for each bounding hyperplane, to retrieve \(\sigma_{f}\) in each region. We show how to resolve this issue with the help of the _multi-level strategy_ [27]. Since the number of point location queries performed for each point is always \(2^{d-1}\), we build a multi-level structure for the set of dual hyperplanes \(I^{*}\) to retrieve the final aggregated result, each level of which is used to find all points lying below or on a query hyperplane.
To be specific, a 1-level auxiliary structure is defined as a point location tree (see Theorem 6) built for \(I^{*}\), and an array \(\sigma_{f}=\langle\sigma_{f}[j]\mid 1\leq j\leq m\rangle\) of aggregated values is computed for each face \(f\) of \(\mathcal{A}(I^{*})\), where \(\sigma_{f}[j]=\sum_{s^{*}\in\lambda(f,I^{*})\wedge s\in T_{j}}\Pr(s)\), i.e., the sum of probabilities over all instances of object \(T_{j}\) lying below or on the hyperplane \(p^{*}\), where \(p\) lies in face \(f\). Moreover, a product \(\beta_{f}=\prod_{j=1,\sigma_{f}[j]\neq 1}^{m}(1-\sigma_{f}[j])\) and a count \(\chi_{f}=|\{j\mid\sigma_{f}[j]=1\}|\) are also recorded for each face \(f\in\mathcal{A}(I^{*})\). Then, a \(k\)-level structure is recursively defined as a 1-level structure built for \(I^{*}\) that is additionally equipped with a \((k-1)\)-level structure for \(\lambda(f,I^{*})\) for each face \(f\) of \(\mathcal{A}(I^{*})\).
After constructing the multi-level structure, the algorithm processes an input set of ratio bound constraints \(R=\prod_{i=1}^{d-1}[l_{i},h_{i}]\) as follows. For each point \(t\), it performs \(2^{d-1}\) point location queries on the multi-level structure, i.e., the dual point of the bounding hyperplane of the \(i\)-th half-space for the \(i\)-th region is queried on the \(i\)-level structure. Let \(f\) be the face returned by the final point location query. According to values recorded for face \(f\), the rskyline probability of \(t\) can be calculated as
\[\Pr_{\mathrm{rsky}}(t)=\begin{cases}\frac{\beta_{f}\cdot\Pr(t)}{1-\sigma_{f}[i]}& \text{if $\chi_{f}=0$},\\ \beta_{f}\times\Pr(t)&\text{else if $\chi_{f}=1\wedge\sigma_{f}[i]=1$},\\ 0&\text{otherwise}.\end{cases}\]
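For concreteness, the case analysis above can be assembled in a few lines; the following C++ sketch (illustrative names) computes \(\Pr_{\mathrm{rsky}}(t)\) from the values \(\sigma_{f}\), \(\beta_{f}\), and \(\chi_{f}\) recorded at the face \(f\) returned by the final point location query, where \(i\) is the index of \(t\)'s object.

```
#include <cstddef>
#include <vector>

// Sketch: combine the values recorded at the located face into Pr_rsky(t).
struct FaceRecord {
    std::vector<double> sigma;   // sigma_f[j] for each object T_j
    double beta;                 // product of (1 - sigma_f[j]) over j with sigma_f[j] != 1
    int chi;                     // number of objects j with sigma_f[j] == 1
};

double rskylineProbability(const FaceRecord& f, double prT, std::size_t i) {
    if (f.chi == 0) return f.beta * prT / (1.0 - f.sigma[i]);
    if (f.chi == 1 && f.sigma[i] == 1.0) return f.beta * prT;
    return 0.0;
}
```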
It is easy to verify that the time consumed by computing the rskyline probability for a point \(t\) is \(O(2^{d-1}\log n)\). Therefore, the total time complexity of the multi-level-structure-based algorithm for computing all rskyline probabilities is \(O(2^{d-1}n\log n)\). This also leads to an algorithm with logarithmic query time and polynomial preprocessing time for the rskyline probability query. The formal definition of the rskyline probability query is given as follows.
**Problem:** RSkyline Probability Query (RSPQ)

**Input:** an uncertain dataset \(\mathcal{D}\), a set of query ratio bound constraints \(R\), and a query instance \(q\).

**Output:** for the query instance \(q\), the probability that \(q\) is not \(\mathcal{F}\)-dominated by any instance in \(I\), i.e.,

\[\Pr_{\mathrm{rsky}}(q)=\prod_{i=1}^{m}\Big{(}1-\sum_{t\in T_{i},\,t\prec_{\mathcal{F}}q}\Pr(t)\Big{)}.\]
**Theorem 7**.: RSPQ _belongs to the complexity class \(\mathrm{PSL}\)[11]._
**Shift Strategy.** The major obstacle for the second bottleneck is that the set of \(2^{d-1}\) dual queries differs for each instance \(t\in I\). To be specific, according to Equation 4, the \(k\)-th dual query \(h^{*}_{t,k}\) in region \(k\) for a point \(t\) is the \(k\)-th vertex of \(R\) appended by the score of \(t\) under that vertex. It is perhaps surprising that the \(k\)-th dual queries in region \(k\) for any two points \(t\) and \(s\) differ only in the last dimension. Therefore, if all instances had the same score under every vertex of \(R\), the point location queries could be unified for all of them. Specifically, for each point \(t\in I\), say \(t\in T_{i}\), with \(\Pr_{\mathrm{rsky}}(t)>0\), we create a shifted dataset with \(t\) as the origin, i.e., \(I_{t}=\{s-t\mid s\in I\setminus T_{i}\}\). Then we merge all sets \(I_{t}\) into a key-value pair set \(\mathcal{I}=\{(s,\langle t\mid s\in I_{t}\rangle)\mid s\in\bigcup_{t\in I}I_{t}\}\), where \(s\) is a new point resulting from shifting the dataset with respect to some point \(t\), and duplicates of \(s\) are eliminated by recording the array \(\langle t\mid s\in I_{t}\rangle\) of its multiple origins. Finally, we build the above multi-level structure for the set of dual hyperplanes \(\mathcal{I}^{*}=\{s^{*}\mid(s,-)\in\mathcal{I}\}\), except that the aggregated array of a 1-level structure is redefined
as \(\Pr_{f}=\langle\Pr_{f}[t]\mid t\in I\rangle\) for each face \(f\) of \(\mathcal{A}(\mathcal{I}^{*})\), where \(\Pr_{f}[t]=\prod_{j=1,j\neq i}^{m}(1-\sum_{s^{*}\in\lambda(f,\mathcal{I}^{*})\wedge s+t\in T_{j}}\Pr(s))\).
After that, the algorithm processes an input set of ratio bound constraints \(R=\prod_{i=1}^{d-1}[l_{i},h_{i}]\) as follows. It generates \(2^{d-1}\) queries by appending a zero to each vertex of \(R\), and then performs these point location queries on the auxiliary structure. Let \(f\) denote the face returned by the final query; the rskyline probability of an instance \(t\) is computed as \(\Pr(t)\times\Pr_{f}[t]\) if \(\Pr_{f}[t]\) is recorded in \(\Pr_{f}\), and 0 otherwise. It is easy to verify that the point location queries can be executed in \(O(2^{d-1}\log n)\) time and all rskyline probabilities can then be reported in an additional \(O(n)\) time.
## VI Experiments
In this section, we report the experimental study of the algorithms proposed for the ARSP problem.
### _Experimental Setting_
**Datasets and Constraints.** We use both a real dataset and synthetic datasets for the experiments. The real dataset, NBA, is widely used in related work [28, 6, 4]. Specifically, NBA contains 28,475 technical statistics of 3707 players with 5 professional metrics: points, assists, rebounds, steals, and blocks, extracted from [https://www.nba.com/stats/](https://www.nba.com/stats/). We consider each player as an uncertain object and his season records as instances, all with the same existence probability. Following previous related work [28, 6, 4], we generate synthetic datasets as follows. We first generate centers of objects in the data space \([0,1]^{d}\) according to independent distributions (IND) or anti-correlated distributions (ANTI) using the standard data generation tool [29]. Then, for each center, we construct a hyper-rectangle whose edge length follows a normal distribution \(N(l/2,l/8)\). After that, we generate instances of the object uniformly within the hyper-rectangle and assign all instances the same existence probability. The number of instances follows a uniform distribution over the interval \([1,cnt]\). Finally, we remove one instance from \(\phi\times m\) objects so that the total probability of each of them is strictly less than one. The expected number of instances in the dataset is \((\frac{cnt}{2}-\phi)\cdot m\).
We consider two types of constraints in our experiments. The first are _weak rankings_ [1], one of the most common types of constraints on weights. For any number \(c\) of constraints, the input set of constraints is \(\{\omega[i]\geq\omega[i+1]\mid i\in\{1,\cdots,c\}\}\). The second are _interactive constraints_, which are generated in an interactive manner, as sketched below. Specifically, we first choose an inner point in \(\mathbb{S}^{d-1}\) as an estimate of one's preference. Then, at each step, we generate a pair of tuples \((t,s)\) uniformly in \([0,1]^{d}\) and choose the side of the hyperplane \(\sum_{i=1}^{d}(t[i]-s[i])\times\omega[i]=0\) that contains the inner point as an input constraint. It is easy to see that the major difference between these two types of constraints is the number of vertices of the preference region: the first preference region always has \(d\) vertices, while the number of vertices of the second generally increases as \(c\) increases.
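As a concrete illustration of the second type, the following C++ sketch (parameter names and the fixed seed are illustrative choices) generates \(c\) interactive constraints from an assumed inner preference point \(\omega_{0}\); each constraint keeps the side of the hyperplane \(\sum_{i=1}^{d}(t[i]-s[i])\times\omega[i]=0\) that contains \(\omega_{0}\).

```
#include <cstddef>
#include <random>
#include <vector>

// Sketch: each constraint is stored as its coefficient vector (t - s) together
// with the sign of the half-space that contains the inner point w0.
struct Constraint {
    std::vector<double> coef;   // coef[i] = t[i] - s[i]
    bool geq;                   // true: sum coef[i]*w[i] >= 0, false: <= 0
};

std::vector<Constraint> interactiveConstraints(const std::vector<double>& w0,
                                               int c, unsigned seed = 42) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    const std::size_t d = w0.size();
    std::vector<Constraint> out;
    for (int k = 0; k < c; ++k) {
        std::vector<double> coef(d);
        double side = 0.0;
        for (std::size_t i = 0; i < d; ++i) {
            coef[i] = unif(gen) - unif(gen);          // t[i] - s[i], both uniform in [0,1]
            side += coef[i] * w0[i];
        }
        out.push_back({coef, side >= 0.0});           // keep the side containing w0
    }
    return out;
}
```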
Table II lists all parameters for synthetic datasets and constraints with their tested and default values (in bold). To eliminate the bias of the generated datasets and constraints, we repeat all experiments 10 times for each parameter configuration and report the average as final results.
**Algorithms.** We implement the following algorithms in C++; the source code can be accessed at [30]. All the algorithms are compiled by GNU G++ 7.5.0 with -O2 optimization, and all experiments are conducted on a machine with a 3.5-GHz Intel(R) Core(TM) i9-10920X CPU, 256GB main memory, and 1TB hard disk running CentOS 7.
* BSL: the baseline algorithm in Section IV-A.
* KDTT: the kdtree-traversal algorithm in Section IV-B.
* KDTT*: the kdtree-traversal algorithm incorporating pre-order traversal into tree construction in Section IV-B.
* QDTT*: the quadtree-traversal algorithm incorporating pre-order traversal into tree construction in Section IV-B.
* B&B: the branch-and-bound algorithm in Section IV-C.
* DUAL (-M/S): the dual-based algorithm in Section V, where -M is for multi-level strategy, -S is for shift strategy.
### _Effectiveness of the ARSP._
To verify the effectiveness of the ARSP, we compute the rskyline probabilities of all players on the real NBA dataset, and report the top-10 players in the rskyline probability ranking along with their rskyline probabilities in Table III. For comparison, we also conduct the traditional rskyline analysis: we calculate the average statistics for each player and retrieve the rskyline on this averaged dataset, which is called the aggregated rskyline for short hereafter. All players in the aggregated rskyline are marked with a "*" sign in Table III.
All players returned by the aggregated rskyline or with high rskyline probability have good scoring ability according to their statistics, but there are still some differences between the two results. It is observed that players in the aggregated rskyline may have very different rskyline probabilities, while players not in the aggregated rskyline may have a high rskyline probability. The reason is that a player's season performance may have a high variance, which cannot be reflected by the average statistics. For example, compared with Michael Jordan, Russell Westbrook has more season statistics with rskyline probability less than 0.001; and the average of David Robinson's statistics is relatively low but their variance is high, which makes him not belong to the aggregated rskyline yet still have a high rskyline probability.
In addition, the rskyline probability determines an order among players in the aggregated rskyline, which is not represented in the original result. This expresses the difference between two players that are not comparable under the set of user-specified scoring functions; e.g., although both belong to the aggregated rskyline, Michael Jordan can be considered better than LeBron James since the former is more likely to appear in the rskyline of a match. Moreover, users can efficiently perform top-\(k\) queries or threshold queries on the result of the ARSP problem to retrieve a set of specified size, while the size of the aggregated rskyline is uncontrollable. From the above observations, we conclude that the ARSP provides a more comprehensive view of uncertain datasets than the aggregated rskyline.
### _Experimental Results under Linear Constraints._
In what follows, we study the efficiency and scalability of the proposed algorithms.
Fig. 4(a), 5(a) and Fig. 4(c), 5(c) show the effect of the object cardinality \(m\) and the instance count \(cnt\) on the running time of all algorithms, respectively. According to the generation procedure of the synthetic datasets, the number of instances \(n\) increases as \(m\) and \(cnt\) increase; thus, the running time increases for all algorithms. All proposed algorithms outperform the baseline by around an order of magnitude since BSL performs a large number of \(\mathcal{F}\)-dominance tests and does not involve effective pruning. Generally, B&B runs fastest due to the incremental mapping and the pruning strategy, but the gap narrows as \(m\) increases since the more objects there are, the more aggregated R-trees are queried for each instance. In addition, B&B is more sensitive to the data distribution than the other methods because the data distribution directly affects the effectiveness of its pruning strategy. Although it uses similar strategies, \(\text{QDTT}^{*}\) performs better than \(\text{KDTT}^{*}\) because splitting all dimensions at internal nodes makes subtrees whose points all have zero skyline probability pruned as early as possible. The results also demonstrate that our optimization techniques significantly improve the experimental performance of KDTT under all settings, and the relative performance of the algorithms remains basically unchanged with respect to \(cnt\).
Having established that BSL is inefficient for the ARSP problem, we henceforth exclude it from the following experiments. We also omit the curves of KDTT since it is always outperformed by KDTT\({}^{*}\). Fig. 4(b) and 5(b) show the running time of the algorithms on datasets with varying dimensionality \(d\). The running time of all algorithms increases with \(d\) because the cost of the \(\mathcal{F}\)-dominance test increases. \(\text{QDTT}^{*}\) and \(\text{KDTT}^{*}\) are more efficient than B&B on low-dimensional datasets, but scale poorly with respect to \(d\). This is because, as \(d\) increases, the roots of subtrees pruned during the preorder traversal get closer to the leaf nodes in \(\text{KDTT}^{*}\) and \(\text{QDTT}^{*}\). Moreover, the exponential growth in the number of child nodes of \(\text{QDTT}^{*}\) also causes its inefficiency on high-dimensional datasets.
For B&B, its performance mainly depends on the pruning ability of the incrementally constructed score vectors in \(P\) and the cost of querying the aggregated R-trees. These are affected by the region length \(l\) and the percentage \(\phi\) of objects with total probability less than one, as shown in Fig. 4(d), Fig. 4(e), Fig. 5(d), and Fig. 5(e). All algorithms follow similar trends, but B&B is more sensitive to these two parameters. This is because the larger the region length of an object \(T\), the longer B&B takes to query the aggregated R-tree for \(T\) and the fewer instances are pruned by the score vector constructed from \(T\). Meanwhile, the more objects with total probability less than one, the fewer score vectors are inserted into the pruning set \(P\).
Then, we evaluate the effect of the number and type of constraints. Fig. 6 plots the running time of the algorithms with a varying number of linear constraints. As \(c\) grows, the preference region becomes smaller, which improves the pruning ability of each instance but also makes instances in the score space more compact. Therefore, the running time of each algorithm first increases and then decreases with \(c\). Note that the inconsistent behavior of B&B on IND and ANTI arises because its pruning strategy is less effective on anti-correlated datasets. We also study the performance of the proposed algorithms with _interactive constraints_, where the number of vertices of the preference region increases along with \(c\). As shown in Fig. 7, the algorithms show trends similar to those for weak rankings, except in Fig. 7(c). The results indicate that the performance of all algorithms improves as \(c\) grows except for \(\text{QDTT}^{*}\). This is because the preference region gets smaller as \(c\) increases, which makes more instances pruned in B&B and more subtrees pruned in \(\text{KDTT}^{*}\). However, the curse of dimensionality suffered by \(\text{QDTT}^{*}\) (see curve \(|V|\) in Fig. 7(c)) overshadows the performance improvement gained from the narrowing of the preference region. This also accounts for the failure of \(\text{QDTT}^{*}\) when \(d>5\) in Fig. 7(b).
### _Experimental Results under Ratio Bound Constraints._
Since the structure stated in Theorem 6 is somewhat inherently theoretical, especially in high dimensions, we introduce a specialized version of \(\text{DUAL-MS}\) for \(d=2\) to avoid it. Recall that for each instance \(t\), we reduce the computation of \(\Pr_{\mathrm{rsky}}(t)\) to two half-space searching problems, as illustrated in Fig. 3(a). These two queries can be reinterpreted as a single contiguous range query when \(d=2\). As shown in Fig. 8(a), when processing \(t_{2,3}\), we can treat \(t_{2,3}\) as the origin, take the ray \(y=t_{2,3}[2],x\geq t_{2,3}[1]\) as a base, and represent each instance by an angle, e.g., \(\theta=\pi+\arctan\frac{12-5}{9-6}\); then the two query lines \(t[2]\leq-0.5t[1]+16.5\) and \(t[2]\leq-2t[1]+30\) can be mapped into a range query \([\pi-\arctan\frac{1}{2},2\pi-\arctan 2]\) with respect to the angle. With this transformation, we can use a simple binary search tree to organize the instances instead of the point location tree; a sketch of the angle computation is given below. We give an implementation of this specialized DUAL-MS and evaluate its performance on the NBA dataset. For reference, we also attach a simple preprocessing strategy to KDTT\({}^{*}\), which removes all instances with zero skyline probability from \(I\). Fig. 8(b) shows the running time of these two algorithms. Although the efficiency is improved, the huge preprocessing time and memory consumption prevent the application of DUAL-MS to large datasets.
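For reference, the angle transformation used by this specialized DUAL-MS can be sketched as follows (the conventions, i.e., measuring the angle counter-clockwise from the ray \(\{y=t[2],x\geq t[1]\}\) and mapping it into \([0,2\pi)\), as well as the names, are illustrative assumptions); with ratio bounds \(R=[0.5,2]\), the query range then corresponds to the interval \([\pi-\arctan\frac{1}{2},2\pi-\arctan 2]\) from the text.

```
#include <cmath>

// Sketch: polar angle of instance s around the processed instance t,
// measured counter-clockwise from the ray {y = t[1], x >= t[0]} (0-indexed),
// mapped into [0, 2*pi). Instances are then organized in a binary search tree
// keyed by this angle, and the two half-space queries become one range query.
double angleAround(const double t[2], const double s[2]) {
    const double PI = 3.14159265358979323846;
    double theta = std::atan2(s[1] - t[1], s[0] - t[0]);  // in (-pi, pi]
    if (theta < 0.0) theta += 2.0 * PI;                   // shift to [0, 2*pi)
    return theta;
}
```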
The above drawbacks of DUAL-MS are alleviated for the eclipse query because the eclipse result is a subset of the skyline \(S\), which has a logarithmic size in expectation, and the multi-level strategy is no longer needed: for each tuple \(t\in S\), \(t\) belongs to the eclipse of \(D\) if and only if all point location queries on \(S^{*}_{t,k}\) (\(0\leq k<2^{d-1}\)) return empty results. Thus, we extend the dual-based algorithm DUAL-S to the eclipse query, in which we use a \(kd\)-tree to index the dataset resulting from applying the shift strategy. For comparison, we also implement the state-of-the-art index-based method QUAD [2] for the eclipse query in C++.
We evaluate their efficiency and scalability with respect to the data cardinality \(n\), data dimensionality \(d\), and ratio range \(q\), where the default values are set to \(n=2^{14}\), \(d=3\), and \(q=[0.36,2.75]\). As shown in Fig. 9(a) and 9(b), the running time of both methods increases with both \(n\) and \(d\). Concretely, DUAL-S outperforms QUAD by at least an order of magnitude, and even more on high-dimensional datasets. The reason is that QUAD needs to iterate over the set of hyperplanes returned by the window query performed on its Intersection Index, and then reports all tuples with a zero Order Vector as the final result; this takes \(O(s^{2})\) time in the worst case, where \(s\) is the skyline size of the dataset. In DUAL-S, by contrast, we exclude a tuple from the result as soon as one range query returns a non-empty result, which takes only \(O(s)\) time in the worst case. Moreover, the hyperplane quadtree adopted in QUAD scales poorly with respect to \(d\) for two reasons. On the one hand, the tree index has a large fan-out since it splits all dimensions at each internal node. On the other hand, the number of intersection hyperplanes of a node decreases only slightly relative to its parent, especially on high-dimensional datasets, which results in an unacceptable tree height. Finally, as shown in Fig. 9(c), QUAD is more sensitive to the ratio range than DUAL-S because the number of hyperplanes returned by the window query essentially determines its running time.
Fig. 4: Running time on IND datasets.
Fig. 5: Running time on ANTI datasets.
Fig. 6: Effect of \(c\) on running time for linear constraints.
Fig. 7: Running time under interactive constraints (IND).
## VII Conclusions
In this paper, we study the problem of computing the rskyline probabilities of all instances from both complexity and algorithmic perspectives. By establishing a fine-grained reduction from the OVC, we prove that the problem cannot be solved in truly subquadratic time unless the OVC fails. On the algorithmic side, when \(\mathcal{F}\) is a set of linear scoring functions described by linear constraints, we propose two algorithms with near-optimal time complexity \(O(n^{2-1/d^{\prime}})\) and better expected time complexity \(O(mn\log n)\), respectively. For the special case where the linear constraints consist of \(d-1\) ratio bounds of the form \(\{l_{i}\leq\omega[i]/\omega[d]\leq h_{i}\mid 1\leq i<d\}\), we propose an algorithm with \(O(2^{d-1}\log n+n)\) query time and polynomial preprocessing time for the ARSP problem, and an algorithm with \(O(2^{d-1}\log n)\) query time and polynomial preprocessing time for the online rskyline probability query. Our thorough experiments on real and synthetic datasets demonstrate the effectiveness of the problem and the efficiency of the proposed algorithms. Moreover, the extension of the sublinear-time algorithm also outperforms the state-of-the-art index-based method for the corresponding query on certain datasets. There are two promising directions for future work. On the one hand, conducting rskyline analysis on datasets with continuous uncertainty remains open, since computing dominance probabilities then requires expensive integration. On the other hand, it is worthwhile to investigate concrete lower bounds for the ARSP problem in specific dimensions.
|
2302.00616
|
How many real zeros does a random Dirichlet series have?
|
Let $F(\sigma)=\sum_{n=1}^\infty \frac{X_n}{n^\sigma}$ be a random Dirichlet
series where $(X_n)_{n\in\mathbb{N}}$ are independent standard Gaussian random
variables. We compute in a quantitative form the expected number of zeros of
$F(\sigma)$ in the interval $[T,\infty)$, say $\mathbb{E} N(T,\infty)$, as
$T\to1/2^+$. We also estimate higher moments and with this we derive
exponential tails for the probability that the number of zeros in the interval
$[T,1]$, say $N(T,1)$, is large. We also consider almost sure lower and upper
bounds for $N(T,\infty)$. And finally, we also prove results for another class
of random Dirichlet series, e.g., when the summation is restricted to prime
numbers.
|
Marco Aymone, Susana Frómeta, Ricardo Misturini
|
2023-02-01T17:31:02Z
|
http://arxiv.org/abs/2302.00616v3
|
# How many real zeros does a random Dirichlet series have?
###### Abstract.
Let \(F(\sigma)=\sum_{n=1}^{\infty}\frac{X_{n}}{n^{\sigma}}\) be a random Dirichlet series where \((X_{n})_{n\in\mathbb{N}}\) are independent standard Gaussian random variables. We compute the expected number of zeros of \(F(\sigma)\) in the interval \([T,\infty)\), say \(\mathbb{E}N(T,\infty)\), and show that, as \(T\to 1/2^{+}\),
\[\mathbb{E}N(T,\infty)=\frac{1}{4\pi}\log\left(\frac{1}{T-1/2}\right)+c_{0}+O(T -1/2),\]
where \(c_{0}\) is a constant. We also show weaker estimates for another class of random Dirichlet series, _e.g._, when the summation is restricted to prime numbers.
## 1. Introduction.
Around 1938, in a series of papers [10, 11, 12, 13], Littlewood and Offord proved estimates for the average number of real roots of a random polynomial
\[p(z)=X_{0}+X_{1}z+...+X_{n}z^{n},\]
where \((X_{j})_{j=0}^{n}\) are random variables. In 1943, inspired by the first of these papers, Kac [9] presented a formula for the expected number of these real roots in the Gaussian case. From this formula he deduced that if \(n\) is the degree of the random polynomial, and if \((X_{j})_{j=0}^{n}\) are independent standard Gaussian variables, then
\[\mathbb{E}\text{ Number of real roots of }p(z)=\left(\frac{2}{\pi}+o(1)\right)\log n.\]
An analogous statement for random variables with other distributions is also true, but proving it turned out to be a great challenge in the last century, for instance when \((X_{j})_{j=0}^{n}\) are Rademacher random variables (see the 1956 paper [6] by Erdős and Offord for the Rademacher case, and the papers [7, 8] by Ibragimov and Maslova for other distributions).
In the past 50 years, this beautiful theory has evolved in depth and in many directions; we refer to the papers [4, 14] by Do-Nguyen-Vu and Nguyen-Vu for a short survey and the state of the art on this topic.
In Analytic Number Theory, the location of the zeros of certain analytic functions is of utmost importance. For instance, the zeros of the analytic continuation of the Riemann
zeta function
\[\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}},\ Re(s)>1,\]
have deep connections with the distribution of prime numbers.
The Riemann \(\zeta\) function is a particular case of a Dirichlet series, and here we are interested in the case where we replace the constant \(1\) by random variables, _i.e._,
\[F(\sigma)=\sum_{n=1}^{\infty}\frac{X_{n}}{n^{\sigma}},\]
where \((X_{n})_{n\in\mathds{N}}\) are i.i.d. Gaussian random variables with mean \(0\) and variance \(1\).
This random Dirichlet series is, almost surely, well defined for all \(s\) in the complex half plane \(Re(s)>1/2\), due to the Kolmogorov Two-Series Theorem and to classical results for general Dirichlet series. These series have been studied recently by the authors [2], where a Law of the Iterated Logarithm (LIL) describing the almost sure fluctuations of \(F(\sigma)\) as \(\sigma\to 1/2^{+}\) was proved (in the Rademacher case), and by Buraczewski et al. [3], who considered a more general class of such random series and proved an LIL and other convergence theorems.
A key difference between the zeros of this random Dirichlet series \(F(\sigma)\) and those of the Riemann zeta function is that \(\zeta(s)\) has no real zeros1 in the half plane \(Re(s)>0\), while in the random case there are, almost surely, infinitely many real zeros accumulating to the right of \(1/2\); see [1].
Footnote 1: Indeed \(\zeta(s)=\frac{1}{1-2^{1-s}}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n^{s}}\), and this alternating series is well defined for all \(Re(s)=\sigma>0\). The fact that \(\zeta\) has no real zeros there follows from the fact that the sequence \((1/n^{\sigma})_{n\in\mathds{N}}\) is decreasing and the series is alternating.
Throughout this paper, for \(1/2<T<U\), \(N(T,U)\) denotes the number of real zeros of \(F(\sigma)\) in the interval \([T,U]\), where \(U\) can be either a real number or \(\infty\). Since \(F(\sigma)\) is an analytic function, \(N(T,U)<\infty\) for all \(T>1/2\) and \(U<\infty\), almost surely.
As far as we are aware, little attention has been given to zeros of random Dirichlet series in the literature. A nice geometric point of view on the expected number of zeros of a general random series of functions was given by Edelman and Kostlan; see [5]. For the case of our random Dirichlet series, the following formula appears in [5]2:
Footnote 2: Indeed, we can deduce formula (1) from Theorem 3.1 of [5]. We highlight, however, that the formula in Subsection 3.2.5 of [5], which treats the random Dirichlet series case, uses a derivative notation that might lead to some confusion.
\[\mathbb{E}N(T,U)=\frac{1}{2\pi}\int_{T}^{U}\sqrt{\frac{d^{2}}{ds^{2}}\log \zeta(s)\bigg{|}_{s=2\sigma}}d\sigma. \tag{1}\]
The first aim of this paper is to make quantitative the formula above.
**Theorem 1.1**.: _There exist \(\delta>0\) and constants \((c_{n})_{n\geq 0}\) such that, for all \(T\in(1/2,1/2+\delta)\):_
\[\mathbb{E}N(T,\infty)=\frac{1}{4\pi}\log\left(\frac{1}{T-1/2}\right)+\sum_{n=0 }^{\infty}c_{n}(T-1/2)^{n}. \tag{2}\]
In particular, as \(T\to 1/2^{+}\),
\[\mathbb{E}N(T,\infty)=\frac{1}{4\pi}\log\left(\frac{1}{T-1/2}\right)+c_{0}+O( T-1/2).\]
It is interesting to observe that the lower order terms in eq. (2) have a power series representation. As we show, this is a consequence of the good analytic properties of the Riemann \(\zeta\) function around its simple pole together with a well known zero free region of this function.
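Although not needed for the proof, the leading term in Theorem 1.1 can be illustrated numerically by truncating \(F(\sigma)\), scanning a grid of \(\sigma\), and counting sign changes. The following C++ sketch does this; the truncation level, grid step, interval, number of trials, and the value of \(T\) are arbitrary illustrative choices, and both the truncation and the finite grid introduce non-negligible error for \(\sigma\) close to \(1/2\), so the comparison is only indicative (the two printed numbers also differ by the constant \(c_{0}\)).

```
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Rough Monte Carlo check (illustrative only): count sign changes of the
// truncated series sum_{n<=N} X_n n^{-sigma} on a grid over [T, 2] as a proxy
// for N(T, infinity), and compare with the leading term of Theorem 1.1.
int main() {
    const int N = 20000, trials = 20;
    const double T = 0.55, step = 0.005, PI = 3.14159265358979323846;
    std::mt19937 gen(1);
    std::normal_distribution<double> gauss(0.0, 1.0);

    double avgZeros = 0.0;
    for (int trial = 0; trial < trials; ++trial) {
        std::vector<double> x(N + 1);
        for (int n = 1; n <= N; ++n) x[n] = gauss(gen);
        int zeros = 0;
        double prev = 0.0;
        for (double sigma = T; sigma <= 2.0; sigma += step) {
            double f = 0.0;
            for (int n = 1; n <= N; ++n) f += x[n] * std::pow(n, -sigma);
            if (sigma > T && f * prev < 0.0) ++zeros;  // a sign change on the grid
            prev = f;
        }
        avgZeros += zeros;
    }
    avgZeros /= trials;
    std::printf("empirical average over [T,2]: %.3f, leading term: %.3f\n",
                avgZeros, std::log(1.0 / (T - 0.5)) / (4.0 * PI));
    return 0;
}
```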
### More general random Dirichlet series
We also compute the expected number of real zeros of random Dirichlet series of the form
\[F(\sigma):=\sum_{p}\frac{X_{p}}{p^{\sigma}},\]
where \(p\) runs, in increasing order, over a set of positive real numbers \(\mathcal{P}:=\{1\leq p_{1}<p_{2}<...\}\) with \(p_{n}\to\infty\), and the \(X_{p}\) are independent standard Gaussian random variables. We assume some regularity of the counting function \(\pi(x):=|\{p\leq x:p\in\mathcal{P}\}|\):
\[\pi(x)=(1+o(1))x(\log x)^{\alpha},\;x\to\infty, \tag{3}\]
where \(\alpha\) is a real number. As an example, the positive integers satisfy the quantitative statement above with \(\alpha=0\), and the prime numbers with \(\alpha=-1\), due to the Prime Number Theorem.
We denote by \(N_{\alpha}(T,U)\) the number of zeros in the interval \([T,U]\) of the random series \(F(\sigma)\) associated with \(\mathcal{P}\) satisfying (3). Regardless of the value of \(\alpha\), we have that \(F(s)\) converges for all \(Re(s)>1/2\) and diverges for all \(Re(s)<1/2\), almost surely.
By letting
\[\zeta_{\alpha}(s):=\sum_{p}\frac{1}{p^{s}},\]
we see from [5] that (1) generalizes to
\[\mathbb{E}N_{\alpha}(T,U)=\frac{1}{2\pi}\int_{T}^{U}\sqrt{\frac{d^{2}}{ds^{2} }\log\zeta_{\alpha}(s)\Big{|}_{s=2\sigma}}d\sigma. \tag{4}\]
It is important to observe that the assumption (3) is not enough to deduce good analytic properties of \(\zeta_{\alpha}(s)\) around its singularity at \(s=1\). Even so, a qualitative result, weaker in comparison with Theorem 1.1, can be obtained.
**Theorem 1.2**.: _As \(T\to 1/2^{+}\), we have that_
\[\mathbb{E}N_{\alpha}(T,\infty)=(1+o(1))\times\left\{\begin{array}{ll}&\frac{ \sqrt{1+\alpha}}{4\pi}\log\left(\frac{1}{T-1/2}\right),\text{ if }\alpha>-1,\\ &\frac{1}{2\pi}\sqrt{\log\left(\frac{1}{T-1/2}\right)},\text{ if }\alpha=-1,\\ &c,\text{ if }\alpha<-1,\end{array}\right.\]
_where \(c>0\) is a number that depends on the set \(\mathcal{P}\)._
## 2. Notation
We use the standard notation
1. \(f(x)\ll g(x)\) or equivalently \(f(x)=O(g(x))\);
2. \(f(x)=o(g(x))\).
Case (1) is used whenever there exists a constant \(C>0\) such that \(|f(x)|\leq C|g(x)|\) for all \(x\) in a set of numbers. When not specified, this set is a real interval \([L,\infty)\) for some \(L>0\), but there are also instances where this set accumulates to the right or to the left of a given real number, or at a complex number. Sometimes we also employ the notation \(\ll_{\epsilon}\) or \(O_{\epsilon}\) to indicate that the implied constant may depend on \(\epsilon\).

In case (2), we mean that \(\lim_{x}f(x)/g(x)=0\). When not specified, this limit is as \(x\to\infty\), but it can also be as \(x\) approaches a given complex number in a specific direction.
## 3. Proof of the main results
Proof of Theorem 1.1.: We begin by recalling some well known facts about the Riemann zeta function. Classically defined for \(Re(s)>1\) as
\[\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}},\]
the function \(\zeta\) admits an analytic continuation to the whole complex plane except at \(s=1\), where it has a simple pole with residue \(1\). Therefore, for all \(s\neq 1\), \(\zeta\) has the Laurent series representation:
\[\zeta(s)=\frac{1}{s-1}+\sum_{n=0}^{\infty}(-1)^{n}\frac{\gamma_{n}}{n!}(s-1)^{ n}, \tag{5}\]
where the \(\gamma_{n}\) are the Stieltjes constants. We note that \(\gamma_{0}=\gamma\), the Euler-Mascheroni constant.
Now we have that
\[\frac{d^{2}}{ds^{2}}\log\zeta(s)=\frac{d}{ds}\frac{\zeta^{\prime}(s)}{\zeta(s)}= \frac{\zeta^{\prime\prime}(s)}{\zeta(s)}-\frac{\zeta^{\prime 2}(s)}{\zeta^{2}(s)}.\]
The zero of \(\zeta(s)\) closest to \(s=1\) is at \(s=-2\). Therefore, \(\frac{1}{\zeta(2s)}\) is analytic in an open ball centered at \(s=1/2\) with radius \(1\). The same is true for \(\frac{1}{\zeta^{2}(2s)}\).
We recall that \(\frac{1}{\zeta(s)}\) has a simple zero at \(s=1\). Therefore \(\frac{\zeta^{\prime\prime}(2s)}{\zeta(2s)}\) is a meromorphic function with a pole of order \(2\) at \(s=1/2\), and \(\frac{\zeta^{\prime\prime}(2s)}{\zeta(2s)}=\frac{2+o(1)}{(2s-1)^{2}}\) as \(s\to 1/2\). Also, \(\frac{\zeta^{\prime 2}(2s)}{\zeta^{2}(2s)}\) is meromorphic with a pole of order \(2\) at \(s=1/2\), with \(\frac{\zeta^{\prime 2}(2s)}{\zeta^{2}(2s)}=\frac{1+o(1)}{(2s-1)^{2}}\) as \(s\to 1/2\).
Now, we have that
\[\frac{d^{2}}{ds^{2}}\log\zeta(2s)=\frac{1}{(2s-1)^{2}}\left(1+A(s)\right),\]
where \(A(s)\) is an analytic function in an open ball centered at \(s=1/2\) with radius \(1\). Moreover, \(A(s)=O(|s-1/2|)\), and hence there exists \(\delta>0\) such that \(|A(s)|\) does not exceed \(1/2\) for all \(s\) in an open ball \(B\) of center \(1/2\) and radius \(\delta\).
Thus, the function
\[\sqrt{1+A(s)}\]
is analytic in this open ball \(B\) and has power series representation
\[\sqrt{1+A(s)}=1+\sum_{n=1}^{\infty}b_{n}(2s-1)^{n}.\]
Therefore, since for real \(1/2<\sigma\leq 1/2+\delta/2\)
\[\sqrt{\frac{d^{2}}{ds^{2}}\log\zeta(s)\bigg{|}_{s=2\sigma}}=\frac{1}{2\sigma- 1}\sqrt{1+A(\sigma)},\]
and the power series converges absolutely, the integral of the sum is the sum of integrals:
\[\mathbb{E}N(T,1/2+\delta/2) =\frac{1}{2\pi}\int_{T}^{1/2+\delta/2}\left(\frac{1}{(2\sigma-1) }+\sum_{n=0}^{\infty}b_{n+1}(2\sigma-1)^{n}\right)d\sigma\] \[=\frac{1}{4\pi}\log\left(\frac{1}{T-1/2}\right)+c_{0}+\sum_{n=1} ^{\infty}c_{n}(T-1/2)^{n}.\]
Now we will show that \(\mathbb{E}N(1/2+\delta/2,\infty)\) is finite. Indeed, for \(Re(s)>1\)
\[\frac{\zeta^{\prime}(s)}{\zeta(s)}=-\sum_{n=2}^{\infty}\frac{\Lambda(n)}{n^{s}},\]
where \(\Lambda(n)\) is the classical von Mangoldt function3. Therefore
Footnote 3: The von Mangoldt function is defined as follows: If \(n\) is the power of a prime, say \(n=p^{m}\), then \(\Lambda(n)=\log p\). If \(n\) is not a prime power, then \(\Lambda(n)=0\)
\[\frac{d}{ds}\frac{\zeta^{\prime}(s)}{\zeta(s)}=\sum_{n=2}^{\infty}\frac{ \Lambda(n)\log n}{n^{s}}.\]
By the general theory of Dirichlet series, \(\frac{d}{ds}\frac{\zeta^{\prime}(s)}{\zeta(s)}\) is a continuous function in the real interval \([1+\delta,100]\), and hence \(\mathbb{E}N(1/2+\delta/2,100)\) is a real number. Let \(L>100\). Since \(\sqrt{a+b}\leq\sqrt{a}+\sqrt{b}\) for all \(a,b\geq 0\), and \(0\leq\Lambda(n)\leq\log n\), we have that
\[\int_{100}^{L}\sqrt{\sum_{n=2}^{\infty}\frac{\Lambda(n)\log n}{n^{2\sigma}}}\,d\sigma \leq\int_{100}^{L}\sum_{n=2}^{\infty}\frac{\log n}{n^{\sigma}}\,d\sigma \leq\sum_{n=2}^{\infty}\log n\int_{100}^{\infty}\exp(-\sigma\log n)\,d\sigma=\sum_{n=2}^{\infty}\frac{1}{n^{100}}<\infty,\]
where the interchange between the integration and summation is justified by the fact that the Dirichlet series converges absolutely for \(\sigma\) in the range \([100,L]\), for any large \(L>100\). Therefore, the limit
\[\lim_{L\to\infty}\mathbb{E}N(100,L)\]
exists and is a real number. This completes the proof.
Proof of Theorem 1.2.: Just as in Theorem 1.1, we have that
\[\mathbb{E}N_{\alpha}(T,U)=\frac{1}{2\pi}\int_{T}^{U}\sqrt{\frac{\zeta_{\alpha} ^{\prime\prime}(2\sigma)}{\zeta_{\alpha}(2\sigma)}-\left(\frac{\zeta_{\alpha} ^{\prime}(2\sigma)}{\zeta_{\alpha}(2\sigma)}\right)^{2}}d\sigma.\]
Thus, we need to estimate, as \(\sigma\to 1/2^{+}\), quantities of the form
\[\zeta_{\alpha}^{(\beta)}(2\sigma)=\sum_{p}\frac{(\log p)^{\beta}}{p^{2\sigma} }=\int_{2}^{\infty}\frac{(\log x)^{\beta}}{x^{2\sigma}}d\pi(x)+O(1),\]
where \(\beta=0,1,2\) and the last integral above is in the Riemann-Stieltjes sense. Integration by parts gives, as \(\sigma\to 1/2^{+}\):
\[\zeta_{\alpha}^{(\beta)}(2\sigma)=(2\sigma+o(1))\int_{2}^{\infty}\frac{\pi(x) (\log x)^{\beta}}{x^{2\sigma+1}}dx=(1+o(1))\int_{2}^{\infty}\frac{(\log x)^{ \alpha+\beta}}{x^{2\sigma}}dx.\]
**Lemma 3.1**.: _Let \(\gamma\) be a real number. Then, as \(\sigma\to 1/2^{+}\):_
\[J(\gamma,\sigma):=\int_{2}^{\infty}\frac{(\log x)^{\gamma}}{x^{2\sigma}}dx=(1+ o(1))\left\{\begin{array}{rl}&\frac{\Gamma(\gamma+1)}{(2\sigma-1)^{\gamma+1}}, \gamma>-1\\ &\log\left(\frac{1}{\sigma-1/2}\right),\gamma=-1\\ &c(\gamma),\gamma<-1,\end{array}\right.\]
_where \(\Gamma\) is the classical Euler's Gamma function, and \(c(\gamma)\) is a constant that depends on \(\gamma\)._
Proof of Lemma 3.1.: The case \(\gamma<-1\) is easy. Let then \(\gamma>-1\). Let \(u=(2\sigma-1)\log x\). Then
\[J(\gamma,\sigma) =\frac{1}{(2\sigma-1)^{1+\gamma}}\int_{(2\sigma-1)\log 2}^{\infty}u^ {(1+\gamma)-1}e^{-u}du\] \[=(1+o(1))\frac{\Gamma(\gamma+1)}{(2\sigma-1)^{1+\gamma}}.\]
In the case that \(\gamma=-1\),
\[J(-1,\sigma) =\int_{(2\sigma-1)\log 2}^{\infty}\frac{e^{-u}}{u}du\] \[=\int_{(2\sigma-1)\log 2}^{1/100}\frac{e^{-u}}{u}du+O(1)\] \[=\int_{(2\sigma-1)\log 2}^{1/100}\frac{1+O(u)}{u}du+O(1)\] \[=\log\left(\frac{1}{\sigma-1/2}\right)+O(1).\]
This proves the Lemma.
Now we continue with the proof of Theorem 1.2. We begin by estimating \(\mathbb{E}N_{\alpha}(T,1/2+\delta)\) for some small \(\delta\).
Case \(\alpha>-1\). In this case, by Lemma 3.1 we have that
\[\mathbb{E}N_{\alpha}(T,1/2+\delta)=\frac{1}{2\pi}\int_{T}^{1/2+ \delta} \left(\frac{(1+o(1))\Gamma(3+\alpha)(2\sigma-1)^{-(3+\alpha)}}{(1 +o(1))\Gamma(1+\alpha)(2\sigma-1)^{-(1+\alpha)}}\right.\] \[\left.-\left(\frac{(1+o(1))\Gamma(2+\alpha)(2\sigma-1)^{-(2+ \alpha)}}{(1+o(1))\Gamma(1+\alpha)(2\sigma-1)^{-(1+\alpha)}}\right)^{2} \right)^{1/2}d\sigma.\]
Due to the property that \(\Gamma(z+1)=z\Gamma(z)\), the last expression simplifies to
\[\mathbb{E}N_{\alpha}(T,1/2+\delta)= (1+o(1))\frac{\sqrt{1+\alpha}}{2\pi}\int_{T}^{1/2+\delta}\frac{1 }{2\sigma-1}d\sigma\] \[=(1+o(1))\frac{\sqrt{1+\alpha}}{4\pi}\log\left(\frac{1}{T-1/2} \right).\]
Case \(\alpha=-1\). In this case
\[\mathbb{E}N_{-1}(T,1/2+\delta)= \frac{1}{2\pi}\int_{T}^{1/2+\delta}\bigg{(}\frac{(1+o(1))\Gamma(2)( 2\sigma-1)^{-2}}{\log(1/(2\sigma-1))}\bigg{)}^{1/2}d\sigma\] \[=\frac{(1+o(1))}{4\pi}\int_{2T-1}^{2\delta}\frac{1}{x\sqrt{- \log x}}dx\] \[=\frac{(1+o(1))}{4\pi}\int_{-\log(2T-1)}^{-\log(2\delta)}-v^{-1/2}dv\] \[=\frac{(1+o(1))}{2\pi}\sqrt{\log\bigg{(}\frac{1}{T-1/2}\bigg{)}}.\]
Case \(\alpha<-1\). In this case we have that \(\zeta_{\alpha}(2\sigma)=c+o(1)\) as \(\sigma\to 1/2^{+}\), for some \(c>0\). The proof of this case follows the idea of the previous ones, in which the function \(J(\gamma,\sigma)\) of Lemma 3.1 is analyzed. For that, we divide the proof into the cases \(-2<\alpha<-1\), \(\alpha=-2\), \(-3<\alpha<-2\), \(\alpha=-3\), and \(\alpha<-3\). We present the details only for the case \(-2<\alpha<-1\); the other cases can be treated similarly.
Let then \(-2<\alpha<-1\). We have that
\[\mathbb{E}N_{\alpha}(T,1/2+\delta)=\frac{1}{2\pi}\int_{T}^{1/2+ \delta} \bigg{(}\frac{(1+o(1))\Gamma(3+\alpha)(2\sigma-1)^{-(3+\alpha)}}{c+o (1)}\] \[-\bigg{(}\frac{(1+o(1))\Gamma(2+\alpha)(2\sigma-1)^{-(2+\alpha)}} {c+o(1)}\bigg{)}^{2}\bigg{)}^{1/2}d\sigma.\]
The function inside the square-root above behaves, as \(\sigma\to 1/2^{+}\), as a constant times
\[(2\sigma-1)^{-\frac{3+\alpha}{2}}.\]
Apart from the fact that this function blows up as \(\sigma\to 1/2^{+}\), the exponent \((3+\alpha)/2\) lies in the interval \((1/2,1)\), and hence the whole function is integrable on the interval \((1/2,1]\).
Now we will show that, in any case, \(\mathbb{E}N_{\alpha}(1/2+\delta,\infty)\) is a real number. The function \(\zeta_{\alpha}(\sigma)\) converges absolutely for \(\sigma>1\), and hence is analytic there. Further, \(\zeta_{\alpha}(\sigma)\) is a series of positive numbers, and hence \(\zeta_{\alpha}(\sigma)\neq 0\) for all \(\sigma>1\). Hence \(\mathbb{E}N_{\alpha}(1/2+\delta,100)\) is the definite integral of a continuous function, a real number. Consider now
\[F(\sigma)=\sum_{n=1}^{\infty}\frac{X_{p_{n}}}{p_{n}^{\sigma}}=p_{1}^{-\sigma} \sum_{n=1}^{\infty}\frac{X_{p_{n}}}{(p_{n}/p_{1})^{\sigma}}:=p_{1}^{-\sigma}G (\sigma).\]
Thus, \(F(\sigma)\) share same zeros with \(G(\sigma)\). Now we can write
\[\zeta_{\mathcal{Q}}(\sigma):=\sum_{q\in\mathcal{Q}}\frac{1}{q^{\sigma}},\]
where \(\mathcal{Q}=\{q_{1}=1<q_{2}<q_{3}<...\}\) and \(q_{n}=p_{n}/p_{1}\), for all \(n\). Thus, \(\zeta_{\mathcal{Q}}(\sigma)>1\) for all \(\sigma>1\) and \(\lim_{\sigma\to\infty}\zeta_{\mathcal{Q}}(\sigma)=1\). Hence
\[\mathbb{E}N_{\alpha}(100,L) \ll\int_{100}^{L}\sqrt{\frac{\zeta_{\mathcal{Q}}^{\prime\prime}(2 \sigma)}{\zeta_{\mathcal{Q}}(2\sigma)}}d\sigma\] \[\ll\int_{100}^{L}\sqrt{\zeta_{\mathcal{Q}}^{\prime\prime}(2 \sigma)}d\sigma\] \[\ll\int_{100}^{L}\sum_{q>1}\frac{\log q}{q^{\sigma}}d\sigma\] \[\ll\sum_{q>1}\frac{1}{q^{100}}\] \[<\infty.\]
This shows that \(\mathbb{E}N_{\alpha}(100,\infty)\) is a real number, and this ends the proof.
## 4. Concluding remarks
It is interesting to observe from formula (4) that a constant \(\lambda>0\) in
\[\pi(x)=(\lambda+o(1))x(\log x)^{\alpha}\]
has no effect on the asymptotics of \(\mathbb{E}N_{\alpha}(T,U)\).
Another interesting remark comes from the fact that we could deal with slightly more general random Dirichlet series if we allow extra weights \(\{a_{p}\geq 0:p\in\mathcal{P}\}\):
\[F(\sigma)=\sum_{p\in\mathcal{P}}\frac{a_{p}X_{p}}{p^{\sigma}}.\]
All we need to do is to make regularity assumptions on the partial sums
\[\pi_{*}(x):=\sum_{p\leq x}a_{p}.\]
In this case, formula (4) remains valid if we replace \(\zeta_{\alpha}\) by
\[\zeta_{*}(s):=\sum_{p\in\mathcal{P}}\frac{a_{p}^{2}}{p^{s}}.\]
The results of Theorem 1.2 remain unchanged if we work with assumptions on \(\pi_{*}(x)\) instead of \(\pi(x)\), since all that matters is the behaviour of \(\zeta_{*}(s)\) around its singularity.
An interesting example comes when we consider \(\mathcal{P}=\mathbb{N}\) and \(a_{n}=\sqrt{\tau(n)}\), where \(\tau(n)\) is the number of positive divisors of \(n\). In this case, \(\zeta_{*}(s)=\zeta^{2}(s)\) and the expected number of zeros will be just \(\sqrt{2}\) times \(\mathbb{E}N(T,\infty)\) given by Theorem 1.1.
**Acknowledgements.** This project was funded by CNPq, grant Universal no. 403037/2021-2.
|
2303.17043
|
Federated Learning for Heterogeneous Bandits with Unobserved Contexts
|
We study the problem of federated stochastic multi-arm contextual bandits
with unknown contexts, in which M agents are faced with different bandits and
collaborate to learn. The communication model consists of a central server and
the agents share their estimates with the central server periodically to learn
to choose optimal actions in order to minimize the total regret. We assume that
the exact contexts are not observable and the agents observe only a
distribution of the contexts. Such a situation arises, for instance, when the
context itself is a noisy measurement or based on a prediction mechanism. Our
goal is to develop a distributed and federated algorithm that facilitates
collaborative learning among the agents to select a sequence of optimal actions
so as to maximize the cumulative reward. By performing a feature vector
transformation, we propose an elimination-based algorithm and prove the regret
bound for linearly parametrized reward functions. Finally, we validated the
performance of our algorithm and compared it with another baseline approach
using numerical simulations on synthetic data and on the real-world movielens
dataset.
|
Jiabin Lin, Shana Moothedath
|
2023-03-29T22:06:24Z
|
http://arxiv.org/abs/2303.17043v2
|
# Federated Stochastic Bandit Learning with Unobserved Context
###### Abstract
We study the problem of federated stochastic multi-arm contextual bandits with unknown contexts, in which \(M\) agents are faced with different bandits and collaborate to learn. The communication model consists of a central server, and the agents share their estimates with the central server periodically to learn to choose optimal actions so as to minimize the total regret. We assume that the exact contexts are not observable and the agents observe only a distribution over the contexts. Such a situation arises, for instance, when the context itself is a noisy measurement or based on a prediction mechanism. Our goal is to develop a distributed and federated algorithm that facilitates collaborative learning among the agents to select a sequence of optimal actions so as to maximize the cumulative reward. By performing a feature vector transformation, we propose an elimination-based algorithm and prove a regret bound for linearly parametrized reward functions. Finally, we validate the performance of our algorithm and compare it with a baseline approach using numerical simulations on synthetic data and on the real-world MovieLens dataset.
## I Introduction
Decision making under uncertainty is becoming more prevalent in our daily lives with increasing levels of autonomy, and it is crucial to find effective decision-making approaches that utilize information from the environment. Multi-arm bandits (MAB) are one such framework that models the repeated interaction of a system with its environment to learn optimal decisions under uncertainty using feedback from the environment [1]. MAB problems have been used in many applications, including autonomous connected vehicles and trajectory planning in robotics. In MAB problems, the goal of the learner is to maximize the reward: in each round, the learner interacts with the environment, chooses an action based on the current estimates, receives a reward for the chosen action, and updates the estimates based on this observation.
Over the last decade, multi-agent MAB problems have been widely studied [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. In multi-agent MAB, multiple agents/learners collaborate to learn and optimize their decisions based on their local knowledge of the environment. Recently, _federated_ learning has gained attention due to the increasing focus on security and privacy, and federated MABs have been studied in several papers [13, 14, 15, 16]. In federated learning, the agents do not share raw data with other agents or the server, but rather share only local estimates.
One of the key challenges in bandit problems is the exploration-exploitation tradeoff. To maximize the reward, it is ideal to arrive at an accurate estimate with little exploration; however, too little exploration usually results in poor decisions, so a suitable exploration-exploitation tradeoff must be identified in order to use as few pulls as possible. In real-world applications such as autonomous connected vehicles [17], the exact contexts are typically unknown due to faulty or compromised sensors. Under such situations, one needs to make decisions based on limited information.
Our goal in this paper is to present an algorithm for the distributed and federated contextual multi-armed bandit problem with \(M\) agents/learners when the exact context is unknown. In our setting, \(M\) agents work collaboratively and concurrently to learn optimal actions that maximize the total (cumulative) reward. For this, the agents communicate with a central server periodically and update their local models by utilizing the global information from the central server. Bandit learning with hidden contexts creates additional difficulties for the learner because the reward function must be estimated from noisy observations, since the exact contexts are unknown. To overcome these challenges, we make use of a feature mapping technique from [18], introduced for the single-agent bandit setting, to transform the problem. After this transformation, a new set of \(d\)-dimensional context feature vectors is obtained under which the reward is an unbiased observation for the chosen action. Motivated by the approach in [19], we propose a federated algorithm and prove a regret bound for linear reward functions. Our approach is closely related to [19]; the key difference is that we consider a setting where the contexts are unobservable, while in [19] the contexts are observed by the agents. Our regret bound is order-wise the same as the bound given in [19] for the context-known setting.
This paper makes the following contributions.
* We model a distributed, federated stochastic linear contextual bandit with \(M\) agents, where the agents face different bandits and the contexts are unknown.
* We present the Fed-PECD algorithm and prove a regret bound for distributed stochastic bandits with linearly parametrized reward functions when the exact context is hidden and only the context distribution is available.
* We validate the performance of our approach and compare the different settings via numerical simulations on synthetic data and on real-world MovieLens data.
The organization of the paper is as follows. In Section II we present the related work. In Section III we discuss the notation used in the paper, the problem formulation, and a motivating scenario. In Section IV we present our algorithm and its theoretical analysis. In Section V we present the numerical experiments, and in Section VI we give concluding remarks.
## II Related Work
Distributed bandits have been widely studied for different learning models and communication models. The stochastic multi-agent linear contextual bandit problem was studied in [2], and an Upper Confidence Bound (UCB)-based algorithm was proposed with guarantees. The communication model considered in [2] consists of a central server that can communicate with all the agents. In [2] the agents observe the contexts, while in this paper the contexts are unobservable. Later, a stochastic MAB problem was studied in [20] in which the agents can observe the reward of an arm before pulling it at each iteration; the goal is not only to choose the optimal arm but also to select the optimal order of observing the arms before pulling. References [3, 4] considered a distributed contextual MAB problem when the contexts are hidden; an elimination-based algorithm was proposed and regret and communication bounds were proved. We note that the learning models in [2, 3, 4] consider a setting in which all the agents face the same bandit; in this paper, on the other hand, the agents face different bandits, and thus this work models heterogeneous agents. In [21] a multi-agent linear bandit problem was considered with a time-invariant action set (hence not contextual), with a focus on feature selection when the features are unknown.
In a decentralized MAB problem the communication model is defined by a graph, and the agents are allowed to communicate only with their neighboring agents. The decentralized MAB problem has been studied in many papers, including [5, 6, 7, 8, 9, 10, 11, 12], and different communication models have been considered, such as each agent communicating with only two other agents [8], each agent communicating only once [7], and each agent communicating with every other agent. We note that the model considered in this paper is a centralized and federated communication model.
The decentralized MAB problem was studied in the federated setting in [22, 23] with a primary focus on differential privacy. In [13] a fully decentralized federated MAB problem was investigated where agents find the optimal arm with limited communication and can only share their rewards with neighbors. References [14, 15] considered a federated MAB problem where the local agents and the central server collaborate to maximize the cumulative reward based on the local and global rewards, with and without personalization, respectively. A federated residual learning algorithm was introduced in [16] for the federated bandit problem, where the agent chooses the optimal arm based on the global and local models under communication latency.
## III Notations and Problem Formulation
### _Notations_
The norm of a vector \(z\in\mathbb{R}^{d}\) with respect to a matrix \(V\in\mathbb{R}^{d\times d}\) is defined as \(\|z\|_{V}:=\sqrt{z^{\top}Vz}\), and \(|z|\) for a vector \(z\) denotes element-wise absolute values. Further, \({}^{\top}\) denotes matrix or vector transpose, \(A^{\dagger}\) denotes the pseudo-inverse of a matrix \(A\), and \(\langle\cdot,\cdot\rangle\) denotes the inner product. For an integer \(N\), we define \([N]:=\{1,2,\ldots,N\}\). We use \(\|\cdot\|\) to denote the \(\ell_{2}\) norm and the induced \(\ell_{2}\) norm.
### _Problem Formulation_
In this section, we first specify the standard linear contextual bandit problem. Let \(\mathcal{A}\) be the action set, \(\mathcal{C}\) be the context set, and let the environment be defined by a fixed and unknown reward function \(y:\mathcal{A}\times\mathcal{C}\rightarrow\mathbb{R}\). In the linear bandit setting, at any time \(t\in\mathbb{N}\), the agent observes a context \(c_{t}\in\mathcal{C}\) and has to choose an action \(a_{t}\in\mathcal{A}\). Each context-action pair \((a,c)\), \(a\in\mathcal{A}\) and \(c\in\mathcal{C}\), is associated with a feature vector \(\phi_{a,c}\in\mathbb{R}^{d}\), i.e., \(\phi_{a_{t},c_{t}}=\phi\left(a_{t},c_{t}\right)\). Upon selection of an action \(a_{t}\), the agent observes a reward \(y_{t}\in\mathbb{R}\),
\[y_{t}:=\langle\theta^{\star},\phi_{a_{t},c_{t}}\rangle+\eta_{t}, \tag{1}\]
where \(\theta^{\star}\in\mathbb{R}^{d}\) is the unknown reward parameter, \(\langle\theta^{\star},\phi_{a_{t},c_{t}}\rangle=:r\left(a_{t},c_{t}\right)\) is the expected reward for action \(a_{t}\) at time \(t\), i.e., \(r\left(a_{t},c_{t}\right)=\mathbb{E}[y_{t}]\), and \(\eta_{t}\) is \(\sigma\)-Gaussian additive noise. The goal is to choose optimal actions \(a_{t}^{\star}\) for all \(t\in[T]\) such that the cumulative reward, \(\sum_{t=1}^{T}y_{t}\), is maximized. This is equivalent to minimizing the cumulative (pseudo-)regret denoted as
\[\mathcal{R}_{T}=\sum_{t=1}^{T}\langle\theta^{\star},\phi_{a_{t}^{\star},c_{t}}\rangle-\sum_{t=1}^{T}\langle\theta^{\star},\phi_{a_{t},c_{t}}\rangle. \tag{2}\]
Here \(a_{t}^{\star}\) is the optimal/best action for context \(c_{t}\) and \(a_{t}\) is the action chosen by the agent for context \(c_{t}\).
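As a concrete illustration of the reward model in Eq. (1) and the pseudo-regret in Eq. (2), the following minimal Python sketch simulates a single agent against a linear bandit; the dimensions, the uniform feature vectors, the fixed context, and the uniformly random policy are illustrative assumptions and are not choices made in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T, sigma = 3, 10, 1000, 0.1        # illustrative sizes, not from the paper
theta_star = rng.normal(size=d)          # unknown reward parameter theta*
phi = rng.uniform(size=(K, d))           # feature vectors phi(a, c) for one fixed context

regret = 0.0
for t in range(T):
    a = rng.integers(K)                                  # placeholder policy: random arm
    y = phi[a] @ theta_star + sigma * rng.normal()       # observed reward, as in Eq. (1)
    regret += np.max(phi @ theta_star) - phi[a] @ theta_star   # pseudo-regret increment, Eq. (2)
print(f"cumulative pseudo-regret after {T} rounds: {regret:.1f}")
```

Replacing the random policy with any estimate-based rule changes only the line that selects \(a\); everything else in Eqs. (1)-(2) stays the same.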
In this paper, we consider a federated linear contextual bandit problem consisting of \(M\) agents with _heterogeneous data_. The agents' goal is to learn collaboratively and concurrently. Each agent \(i\in[M]\) is a user in the system whose user profile or feature vector is denoted by \(c_{i}\in\mathbb{R}^{d}\), and all agents have the same action set \(\mathcal{A}=[K]\). The goal of the agents is to learn to choose optimal actions for their respective feature vectors \(c_{i}\) based on their local observations. To achieve this, at every time step the agents choose actions from their action set and receive the reward \(y_{i,t}=r_{a_{i,t},i}+\eta_{i,t}\), where \(r_{a_{i,t},i}\) is the unknown expected reward for agent \(i\) choosing action \(a_{i,t}\) and \(\eta_{i,t}\) is random noise.
We consider a linear reward function \(r_{a,i}=\phi_{a,c_{i}}^{\top}\theta_{a}\), where \(\phi_{a,c_{i}}\) is the feature vector of agent \(i\) and action \(a\) after some feature mapping, and \(\theta_{a}\) is the unknown fixed parameter for action \(a\). Note that the rewards received by the agents for the same action are different, since the reward is a function of their feature vectors, thus capturing the heterogeneous nature of the agents. The reward parameter \(\theta_{a}\) is action dependent, but for each \(a\in[K]\) it is the same for all agents. Such a model hence captures the heterogeneity of the agents while still allowing a collaborative learning paradigm due to the shared parameters \(\{\theta_{a}\}_{a\in[K]}\). The linear structure of the reward function captures the intrinsic correlation between the rewards of different agents pulling the same arm with the same parameter \(\theta_{a}\).
Our communication model consists of a central server in the system which can communicate with each agent periodically with zero latency. Each agent shares its local estimate with the central server, the server then aggregates the estimates and broadcasts a global estimate to each agent. Each agent now updates their local model. Note that communication is
always a major bottleneck, and we need to carefully consider its usage to keep the communication cost as small as possible.
For our contextual bandit problem, there are two pieces of information that need to be kept private for each agent: the feature vector \(c_{i}\) of each agent \(i\) and the reward \(y_{i,t}\) of each agent \(i\) at time \(t\). Agents only share their estimates with the central server to obtain the global estimated model, without sharing raw data. In this way, the local data always remain private to each agent, and hence the communication model is federated. We make the following standard assumptions on the parameters.
**Assumption 1**.: _There exist constants \(s\geqslant 0\), \(0\leqslant\ell,L\leqslant 1\) such that \(\left\|\theta_{a}\right\|_{2}\leqslant s\), \(0\leqslant\ell\leqslant\left\|\phi_{a,c_{i}}\right\|_{2}\leqslant L\leqslant 1\), for all \(i\in[M]\) action \(a\in[K]\)._
**Assumption 2**.: _Each element \(\eta_{i,t}\) of the noise sequence \(\left\{\eta_{i,t}\right\}_{i=1,t=1}^{M,T}\) is an independently sampled 1-sub-Gaussian random variable with \(\mathbb{E}[\eta_{i,t}]=0\) and \(\mathbb{E}[e^{\lambda\eta_{i,t}}]\leqslant e^{\frac{\lambda^{2}}{2}}\) for any \(\lambda>0\)._
In this work, we consider a setting in which each agent corresponds to a user, i.e., a context. The feature vector \(c_{i}\) cannot be observed by agent \(i\); only the context distribution \(\mu_{i}\) is available. That is, the environment chooses the context distribution \(\mu_{i}\in\mathcal{P}\left(\mathcal{C}\right)\) for each agent \(i\), and the agent can only observe \(\mu_{i}\) but not \(c_{i}\). The agent then chooses an action and receives a reward. Our goal is to learn, for all \(i\in[M]\), an optimal mapping from context distribution to action, \(\mu_{i}\to a_{i}^{+}\), where \(a_{i}^{+}\) is the optimal action for user/agent \(i\), that maximizes the cumulative reward. In other words, we try to find the optimal estimated actions that minimize the cumulative regret
\[\mathcal{R}\left(T\right)=\sum_{i=1}^{M}\sum_{t=1}^{T}\langle\theta_{a_{i}^{+}},\phi_{a_{i}^{+},c_{i}}\rangle-\sum_{i=1}^{M}\sum_{t=1}^{T}\langle\theta_{a_{i,t}},\phi_{a_{i,t},c_{i}}\rangle, \tag{3}\]
where \(a_{i}^{+}\in[K]\) is the optimal action for agent \(i\).
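Since the agents observe only \(\mu_{i}\), the quantity that actually enters the algorithm is the transformed feature \(\psi_{a,\mu_{i}}:=\mathbb{E}_{c\sim\mu_{i}}[\phi_{a,c}]\) (see Line 2 of Algorithm IV.1). A minimal sketch of this transformation, using a Gaussian context distribution and an arbitrary feature map purely for illustration (neither is specified in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, n_samples = 3, 10, 10_000

def phi(a, c):
    """Illustrative feature map phi(a, c); the paper leaves it abstract."""
    return np.concatenate(([1.0], c))[:d] * (a + 1) / K

mu_mean, mu_cov = np.zeros(d), 0.1 * np.eye(d)     # agent i only knows this distribution
contexts = rng.multivariate_normal(mu_mean, mu_cov, size=n_samples)

# psi[a] approximates E_{c ~ mu_i}[phi(a, c)] by Monte Carlo averaging
psi = np.stack([np.mean([phi(a, c) for c in contexts], axis=0) for a in range(K)])
print(psi.shape)   # (K, d): one transformed feature vector per arm
```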
## IV The Proposed Algorithm and Guarantee
### _Algorithm: Federated Phased Elimination with Context Distribution (Fed-PECD)_
In our algorithm, there is an agent part and a central server part. The agents and the server work in phases, and the length of each phase is \(f^{p}+K\), where \(K\) is the number of arms and \(p\) is the index of the current phase. The agents communicate with the central server at the end of each phase and update their local models. We denote by \(\mathcal{A}_{i}^{p}\) the active arm set of agent \(i\) in phase \(p\) and by \(\mathcal{A}^{p}=\cup_{i=1}^{M}\mathcal{A}_{i}^{p}\) the active arm set over all agents in phase \(p\), where \(M\) is the total number of agents. Let \(\mathcal{R}_{a}^{p}=\{i:a\in\mathcal{A}_{i}^{p}\}\) be the set of agents with active arm \(a\) in phase \(p\), and let \(\mathcal{T}_{a,i}^{p}\) be the set of time steps at which agent \(i\) pulls action \(a\) in phase \(p\). We now elaborate on our algorithm.
In the initialization phase, the agents perform exploration by pulling the arms \(a\in[K]\) and receiving the rewards \(y_{a,i}\) for the chosen actions. Using \(y_{a,i}\), each agent first obtains a local estimate \(\hat{\theta}_{a,i}^{0}\) of the reward parameter, which it shares with the central server; the central server aggregates all the local estimates of the agents to obtain a global estimate \(\hat{\theta}_{a}\) and a matrix \(V_{a}\). Then the central server broadcasts the \(\left(\hat{\theta}_{a},V_{a}\right)\) pairs to each agent, and the agents, upon receiving this information, update their local models before the next phase. This completes the initialization phase, and the agents proceed to the first phase of the learning algorithm.
At each phase \(p\), after receiving the \(\left(\hat{\theta}_{a},V_{a}\right)\) pairs, each agent first computes the estimated reward \(\hat{r}_{a,i}^{p}\) and the confidence interval \(u_{a,i}^{p}\) for each action \(a\). Using the estimate pair \(\left(\hat{r}_{a,i}^{p},u_{a,i}^{p}\right)\), the agent finds the optimal action and updates its active action set as in Line 9 of Algorithm IV.1; in other words, the actions that result in a low reward are eliminated. The agents communicate the current active action set to the central server, which is then used to obtain \(\mathcal{A}^{p}\) and \(\mathcal{R}_{a}^{p}\). The central server also computes \(f_{a,i}^{p}\) and broadcasts it to each agent \(i\) by solving the multi-agent G-optimal design [1, Chapter 21] to find a distribution \(\pi_{a,i}\) over the actions \(a\) of each agent.
After receiving \(f_{a,i}^{p}\) from the central server, the agents choose actions from the active action set according to \(f_{a,i}^{p}\) and compute the average reward. Then each agent updates \(\hat{\theta}_{a,i}^{p}\) and shares it with the central server, which updates the global estimates with this new \(\hat{\theta}_{a,i}^{p}\). Note that the agents do not share \(\psi_{a,\mu_{i}}\) or the chosen actions with the central server; thus the local data is private, agents only share their estimates with the central server, and our algorithm is _federated_.
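Before presenting the agent-side pseudocode (Algorithm IV.1), the following sketch illustrates the server-side aggregation described above; the statistics follow the definitions \(V_{a}^{p}=(\sum_{j}f_{a,j}^{p-1}\psi_{a,\mu_{j}}\psi_{a,\mu_{j}}^{\top}/\|\psi_{a,\mu_{j}}\|^{2})^{\dagger}\) and \(\hat{\theta}_{a}^{p}=V_{a}^{p}\sum_{j}f_{a,j}^{p-1}\hat{\theta}_{a,j}^{p-1}\) used later in the analysis, while the function and variable names are our own.

```python
import numpy as np

def aggregate(local_thetas, psis, counts):
    """Server-side aggregation for one arm a.

    local_thetas: (M, d) local estimates theta_hat_{a,i} sent by the agents
    psis:         (M, d) transformed feature vectors psi_{a, mu_i}
    counts:       (M,)   number of pulls f_{a,i} of arm a in the last phase
    Returns the global estimate theta_hat_a and the matrix V_a to broadcast back.
    """
    d = psis.shape[1]
    G = np.zeros((d, d))
    for f, psi in zip(counts, psis):
        G += f * np.outer(psi, psi) / (psi @ psi)   # sum_j f_j psi_j psi_j^T / ||psi_j||^2
    V = np.linalg.pinv(G)                           # pseudo-inverse, as in the definition of V_a
    theta = V @ (counts[:, None] * local_thetas).sum(axis=0)  # V_a * sum_j f_j theta_hat_{a,j}
    return theta, V
```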
```
Input: \(T\), \(M\), \(K\), \(\alpha\), \(f^{p}\)
1: Nature chooses \(\mu_{i}\in\mathcal{P}\left(\mathcal{C}\right)\) and agent \(i\) observes \(\mu_{i}\)
2: Set \(\Psi_{i}=\{\psi_{a,\mu_{i}}:a\in\mathcal{A}\}\), where \(\psi_{a,\mu_{i}}:=\mathbb{E}_{c_{i}\sim\mu_{i}}[\phi_{a,c_{i}}]\)
3:Initialization: Pull each arm \(a\in[K]\) and receive reward \(y_{a,i}\); \(\hat{\theta}_{a,i}^{0}\leftarrow\frac{y_{a,i}\,\psi_{a,\mu_{i}}}{\|\psi_{a,\mu_{i}}\|^{2}}\); Send \(\left\{\hat{\theta}_{a,i}^{0}\right\}_{a\in[K]}\) to the server; \(\mathcal{A}_{i}^{0}\leftarrow[K]\); \(p\gets 1\)
4:while not reaching the time horizon \(T\)do
5: Receive \(\left\{(\hat{\theta}_{a}^{p},V_{a})\right\}_{a\in\mathcal{A}^{p-1}}\) from the server
6:for\(a\in\mathcal{A}_{i}^{p-1}\)do
7:\(\hat{r}_{a,i}^{p}\leftarrow\psi_{a,\mu_{i}}^{\top}\hat{\theta}_{a}^{p},\quad u_{a,i}^{p}\leftarrow\sqrt{10}\,\alpha\left\|\psi_{a,\mu_{i}}\right\|_{V_{a}^{p}}/\ell\)
8:endfor
9:\(\hat{a}_{i}^{p}\leftarrow\arg\max_{a\in\mathcal{A}_{i}^{p-1}}\hat{r}_{a,i}^{p},\quad\mathcal{A}_{i}^{p}\leftarrow\{a\in\mathcal{A}_{i}^{p-1}\mid\hat{r}_{a,i}^{p}+u_{a,i}^{p}\geqslant\hat{r}_{\hat{a}_{i}^{p},i}^{p}-u_{\hat{a}_{i}^{p},i}^{p}\}\)
10: Send \(\mathcal{A}_{i}^{p}\) to the central server
11: Receive \(f_{a,i}^{p}\) for all \(a\in\mathcal{A}_{i}^{p}\)
12:for\(a\in\mathcal{A}_{i}^{p}\)do
13: Pull arm \(a\) for \(f_{a,i}^{p}\) times and receive rewards \(\left\{y_{i,t}\right\}_{t\in\mathcal{T}_{a,i}^{p}}\)
14:endfor
15: Send \(\left\{\hat{\theta}_{a,i}^{p}\right\}_{a\in\mathcal{A}_{i}^{p}}\) to the server; pull \(\hat{a}_{i}^{p}\) until phase length equals \(f^{p}+K\)
16:\(p\gets p+1\)
17:endwhile
```
**Algorithm IV.1** Fed-PECD: agent \(i\)
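Lines 6-9 of Algorithm IV.1 translate into a few lines of Python. The sketch below mirrors the estimated reward \(\hat{r}_{a,i}^{p}=\psi_{a,\mu_{i}}^{\top}\hat{\theta}_{a}^{p}\) and the width \(u_{a,i}^{p}=\sqrt{10}\,\alpha\|\psi_{a,\mu_{i}}\|_{V_{a}^{p}}/\ell\); the data structures are our own illustrative choice, not part of the paper.

```python
import numpy as np

def eliminate(active, psi_i, thetas, Vs, alpha, ell):
    """One elimination step for agent i (Lines 6-9 of Algorithm IV.1).

    active: list of currently active arms A_i^{p-1}
    psi_i:  dict arm -> psi_{a, mu_i}
    thetas: dict arm -> global estimate theta_hat_a^p received from the server
    Vs:     dict arm -> matrix V_a^p received from the server
    """
    r_hat, width = {}, {}
    for a in active:
        psi = psi_i[a]
        r_hat[a] = psi @ thetas[a]                                          # estimated reward
        width[a] = np.sqrt(10) * alpha * np.sqrt(psi @ Vs[a] @ psi) / ell   # confidence width
    best = max(active, key=lambda a: r_hat[a])
    # keep every arm whose optimistic value is not below the pessimistic value of the best arm
    return [a for a in active if r_hat[a] + width[a] >= r_hat[best] - width[best]]
```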
We now present the regret and communication bounds for our algorithm.
**Theorem 1**.: _Consider time horizon \(T\) that consists of \(H\) phases with \(f^{p}=cn^{p}\), where \(c\) and \(n>1\) are fixed integers,
and \(n^{p}\) denotes the \(p\)th power of \(n\). Let_
\[\alpha=\min\left\{\sqrt{2\log\frac{2MKH}{\delta}},\ \sqrt{2\log\frac{KH}{\delta}+d\log\left(ke\right)}\right\}\]
_where \(k>1\) is a number satisfying \(kd\geqslant 2\log\left(KH/\delta\right)+d\log\left(ke\right)\). Then with probability (w.p.) at least \(1-\delta\) the cumulative regret of our algorithm scales in_
\[O\left(\frac{L}{\ell}\sqrt{dKMT\left(\log(K(\log T)/\delta)+\min\{d,\log M\} \right)}\right)\]
_and communication cost scales in \(O\left(Md^{2}K\log T\right)\)._
### _Theoretical Analysis of Algorithm_
To prove the regret and communication bounds for our algorithm, we use a similar approach as in [19]. We extend the approach in [19] to the case where the contexts are hidden and not observable. We show that even when the contexts are unobservable our algorithm achieves the same order-wise bounds as in [19], which was actually shown for the case where the contexts are observable. We first prove the preliminary lemmas and then prove the main result.
Proof of Theorem 1 relies on the lemmas we present below. We first define an event \(\mathcal{E}\left(\alpha\right)\) as \(\mathcal{E}\left(\alpha\right):=\{\exists p\in[H],i\in[M],a\in\mathcal{A}_{i}^{p-1}:\left|\hat{r}_{a,i}^{p}-r_{a,i}\right|\geqslant u_{a,i}^{p}=\alpha\sigma_{a,i}^{p}\}\), where \(\alpha=\min\left\{\sqrt{2\log\left(\frac{2MKH}{\delta}\right)},\sqrt{2\log\left(\frac{KH}{\delta}\right)+d\log\left(ke\right)}\right\}\) and \(\sigma_{a,i}^{p}:=\frac{\sqrt{10}}{\ell}\left\|\psi_{a,\mu_{i}}\right\|_{V_{a}^{p}}\). We refer to \(\mathcal{E}\left(\alpha\right)\) as a "bad" event and to \(\mathcal{E}^{c}\left(\alpha\right)\) as a "good" event. We define \(\mathcal{F}_{p}:=\left\{\hat{\theta}_{a,i}^{p}\right\}_{p\in[H],i\in[M],a\in\mathcal{A}_{i}^{p}}\), the information available at the end of phase \(p\).
**Lemma 2**.: _Let \(\zeta\in\mathbb{R}^{n}\) be a 1-sub-Gaussian random vector conditioned on \(\mathcal{F}_{p-1}\) and let \(A\in\mathbb{R}^{n\times n}\) be an \(\mathcal{F}_{p-1}\)-measurable matrix. Let \(\lambda>0\) be such that \(\left(I_{n}-2\lambda AA^{\top}\right)\succ 0\). Then, we have_
\[\mathbb{E}\left[e^{\lambda\|A\zeta\|^{2}}\mid\mathcal{F}_{p-1}\right]\leq\sqrt {\frac{1}{det\left(I_{n}-2\lambda AA^{\top}\right)}}.\]
Proof.: For proof of this lemma, see Lemma 4 in [19] (supplementary material).
Next, we show that \(\hat{r}_{a,i}^{p}-r_{a,i}\) is a conditionally sub-Gaussian random variable for every phase \(p\in[H]\), agent \(i\in[M]\), and arm \(a\in\mathcal{A}_{i}^{p-1}\).
**Lemma 3**.: _At phase \(p\in[H]\), for any agent \(i\in[M]\) and arm \(a\in\mathcal{A}_{i}^{p-1}\), \(\hat{r}_{a,i}^{p}-r_{a,i}\) is a conditionally sub-Gaussian random variable, i.e., \(\mathbb{E}\left[\exp\left(\lambda\left(\hat{r}_{a,i}^{p}-r_{a,i}\right)\right)\mid\mathcal{F}_{p-1}\right]\leqslant\exp\left(\frac{\lambda^{2}\left(\sigma_{a,i}^{p}\right)^{2}}{2}\right)\) for any \(\lambda\in\mathbb{R}\), where \(\sigma_{a,i}^{p}:=\frac{\sqrt{10}}{\ell}\left\|\psi_{a,\mu_{i}}\right\|_{V_{a}^{p}}\)._
Proof.: Let \(\xi_{a,i}^{p-1}\) be the sum of the independent sub-Gaussian noise terms incurred during the collaborative exploration step in phase \(p-1\), i.e., \(\xi_{a,i}^{p-1}:=\sum_{t\in\mathcal{T}_{a,i}^{p-1}}\eta_{i,t}\). Given \(f_{a,i}^{p-1}\), \(\xi_{a,i}^{p-1}\) is a conditionally \(\sqrt{f_{a,i}^{p-1}}\)-sub-Gaussian random variable. Recall the definition of the local estimators,
\[\hat{\theta}_{a,i}^{p-1}=\Big(\frac{1}{f_{a,i}^{p-1}}\sum_{t\in\mathcal{T}_{a,i}^{p-1}}y_{i,t}\Big)\frac{\psi_{a,\mu_{i}}}{\left\|\psi_{a,\mu_{i}}\right\|^{2}}=\Big(\phi_{a,c_{i}}^{\top}\theta_{a}+\frac{\xi_{a,i}^{p-1}}{f_{a,i}^{p-1}}\Big)\frac{\psi_{a,\mu_{i}}}{\left\|\psi_{a,\mu_{i}}\right\|^{2}}\] \[=\Big(\psi_{a,\mu_{i}}^{\top}\theta_{a}+\frac{\xi_{a,i}^{p-1}}{f_{a,i}^{p-1}}+\phi_{a,c_{i}}^{\top}\theta_{a}-\psi_{a,\mu_{i}}^{\top}\theta_{a}\Big)\frac{\psi_{a,\mu_{i}}}{\left\|\psi_{a,\mu_{i}}\right\|^{2}}\] \[=\frac{\psi_{a,\mu_{i}}\psi_{a,\mu_{i}}^{\top}}{\left\|\psi_{a,\mu_{i}}\right\|^{2}}\theta_{a}+\frac{\psi_{a,\mu_{i}}\xi_{a,i}^{p-1}}{f_{a,i}^{p-1}\left\|\psi_{a,\mu_{i}}\right\|^{2}}+\frac{\psi_{a,\mu_{i}}(\phi_{a,c_{i}}^{\top}-\psi_{a,\mu_{i}}^{\top})}{\left\|\psi_{a,\mu_{i}}\right\|^{2}}\theta_{a}. \tag{4}\]
We know \(\hat{r}_{a,i}^{p}=\psi_{a,\mu_{i}}^{\top}\hat{\theta}_{a}^{p}\) and \(V_{a}^{p}=(\sum_{j\in\mathcal{R}_{a}^{p-1}}f_{a,j}^{p-1}\frac{\psi_{a,\mu_{j}} \psi_{a,\mu_{j}}^{\top}}{\left\|\psi_{a,\mu_{j}}\right\|^{2}})^{\dagger}\). Thus \(\hat{r}_{a,i}^{p}-r_{a,i}=\psi_{a,\mu_{i}}^{\top}\hat{\theta}_{a}^{p}-\phi_{a,c_ {i}}^{\top}\theta_{a}\)
\[=\psi_{a,\mu_{i}}^{\top}V_{a}^{p}\left(\sum_{j\in\mathcal{R}_{a} ^{p-1}}f_{a,j}^{p-1}\hat{\theta}_{a,j}^{p-1})-\phi_{a,c_{i}}^{\top}\theta_{a}\] \[=\psi_{a,\mu_{i}}^{\top}V_{a}^{p}\left(\sum_{j\in\mathcal{R}_{a} ^{p-1}}f_{a,j}^{p-1}\left(\frac{\psi_{a,\mu_{j}}\psi_{a,\mu_{j}}^{\top}}{ \left\|\psi_{a,\mu_{j}}\right\|^{2}}\theta_{a}+\frac{\psi_{a,\mu_{j}}\xi_{a,j}^{p- 1}}{f_{a,j}^{p-1}\left\|\psi_{a,\mu_{j}}\right\|^{2}}\right.\right.\] \[+\left.\left.\frac{\psi_{a,\mu_{j}}(\phi_{a,c_{j}}^{\top}-\psi_{a, \mu_{j}}^{\top})}{\left\|\psi_{a,\mu_{j}}\right\|^{2}}\theta_{a}\right)- \phi_{a,c_{i}}^{\top}\theta_{a} \tag{5}\] \[\leqslant\psi_{a,\mu_{i}}^{\top}V_{a}^{p}(\sum_{j\in\mathcal{R}_{a} ^{p-1}}\frac{e_{a,j}}{\left\|\psi_{a,\mu_{j}}\right\|}(\xi_{a,j}^{p-1}+2f_{a,j }^{p-1}))+(\psi_{a,\mu_{i}}^{\top}-\phi_{a,c_{i}}^{\top})\theta_{a}\] (6) \[\leqslant\psi_{a,\mu_{i}}^{\top}V_{a}^{p}\left(\sum_{j\in \mathcal{R}_{a}^{p-1}}\frac{e_{a,j}}{\left\|\psi_{a,\mu_{j}}\right\|}(\xi_{a,j}^ {p-1}+2f_{a,j}^{p-1})\right)+2 \tag{7}\]
Eq. (5) follows from Eq. (4). Eq. (6) follows from \(\left(\phi_{a,c_{j}}^{\top}-\psi_{a,\mu_{j}}^{\top}\right)\theta_{a}\leqslant 2\), \(e_{a,j}=\frac{\psi_{a,\mu_{j}}}{\left\|\psi_{a,\mu_{j}}\right\|}\), and \(V_{a}^{p}=\left(\sum_{j\in R_{a}^{p-1}}f_{a,j}^{p-1}\frac{\psi_{a,\mu_{j}}\psi_{a,\mu_{j}}^{\top}}{\left\|\psi_{a,\mu_{j}}\right\|^{2}}\right)^{\dagger}\). Finally, Eq. (7) follows from \(\left(\psi_{a,\mu_{i}}^{\top}-\phi_{a,c_{i}}^{\top}\right)\theta_{a}\leqslant 2\).
Given \(f_{a,i}^{p-1},\xi_{a,i}^{p-1}\) are conditionally \(\sqrt{f_{a,i}^{p-1}}\)-sub-Gaussian random variables. Thus \(\left(\xi_{a,i}^{p-1}+2f_{a,j}^{p-1}\right)\) is a conditionally \(\sqrt{5f_{a,j}^{p-1}}\) sub-Gaussian random variable. Also Eq. (6) is a linear combination of \(\left(\xi_{a,i}^{p-1}+2f_{a,j}^{p-1}\right)\) and \(\left(\psi_{a,u_{i}}^{\top}-\phi_{a,c_{i}}^{\top}\right)\theta_{a}\). Thus, given \(\mathcal{F}_{p-1}\), \(\hat{r}_{a,i}^{p}-r_{a,i}\) is a conditionally sub-Gaussian random variable, whose parameter can be bounded as
\[\mathbb{E}_{c_{j}\sim\mu}[\hat{r}_{a,i}^{p}-r_{a,i}]^{2}\] \[\leqslant\mathbb{E}_{c_{j}\sim\mu}[\psi_{a,u_{j}}^{\top}V_{a}^{p} \left(\sum_{j\in R_{a}^{p-1}}\frac{e_{a,j}}{\left\|\psi_{a,u_{j}}\right\|} \left(\xi_{a,i}^{p-1}+2f_{a,j}^{p-1}\right)+\left(\psi_{a,u_{i}}^{\top}-\phi_{ a,c_{i}}^{\top}\right)\theta_{a}]^{2}\] \[\leqslant 2\sum_{j\in R_{a}^{p-1}}\left[\psi_{a,u_{j}}^{\top}V_{a}^{p }\left(\frac{e_{a,j}}{\left\|\psi_{a,u_{j}}\right\|}\left(\xi_{a,j}^{p-1}+2f_{a,j}^{p-1}\right)\right]^{2}\right. \tag{8}\] \[=2\sum_{j\in R_{a}^{p-1}}\left(\psi_{a,u_{j}}^{\top}V_{a}^{p}\left( \frac{e_{a,j}}{\left\|\psi_{a,u_{j}}\right\|}\right)^{2}\cdot 5f_{a,j}^{p-1}\right.\] \[=\sum_{j\in R_{a}^{p-1}}\psi_{a,u_{j}}^{\top}V_{a}^{p}f_{a,j}^{p- 1}e_{a,j}e_{a,j}^{\top}V_{a}^{p}\psi_{a,u_{j}}\frac{10}{\left\|\psi_{a,u_{j}} \right\|^{2}}\] \[\leqslant\frac{10}{\ell^{2}}\psi_{a,u_{j}}^{\top}V_{a}^{p}\psi_{a, u_{j}}=(\sigma_{a,i}^{p})^{2}. \tag{9}\]
Eq. (8) follows from \(\left(a+b\right)^{2}\leqslant 2a^{2}+2b^{2}\) and uses the fact that \(\psi_{a,\mu_{i}}=\mathbb{E}_{c_{j}\sim\mu_{i}}[\phi_{a,c_{j}}\mid\mathcal{F}_{p-1},\mu_{i},c_{i}]\). Eq. (9) follows from \(\left\|\psi_{a,\mu_{j}}\right\|\geqslant\ell\) and \(AA^{\dagger}A=A\), for all \(i\), \(a\).
Next, in Lemma 5 we show that the probability of the bad event is at most \(\delta\) for our algorithm. The proof of Lemma 5 uses the result below.
**Lemma 4**.: _From the definition of \(\sigma_{a,i}^{p}:=\frac{\sqrt{10}}{\ell}\left\|\psi_{a,\mu_{i}}\right\|_{V_{a}^{p}}\), where \(V_{a}^{p}\) is a symmetric matrix, for all \(a\in[K]\), \(i\in[M]\), and \(p\in[H]\), we have \(\frac{\ell\sigma_{a,i}^{p}}{\sqrt{10L}}\geqslant 1\)._
Proof.: We have \(\left\|\psi_{a,\mu_{i}}\right\|_{V_{a}^{p}}^{2}-\psi_{a,\mu_{i}}^{\top}\psi_{a,\mu_{i}}=\psi_{a,\mu_{i}}^{\top}\left(V_{a}^{p}-I\right)\psi_{a,\mu_{i}}\). Since \(V_{a}^{p}\) is a symmetric matrix, \(V_{a}^{p}-I\) is also a symmetric matrix. By using the eigendecomposition of the symmetric matrix \(V_{a}^{p}-I\), we know there exists an orthogonal matrix \(U\) such that \(\Lambda=\operatorname{diag}\left(\lambda_{1},\lambda_{2},\cdots,\lambda_{n}\right)=U\left(V_{a}^{p}-I\right)U^{\top}\), where \(\lambda_{i}>0\) is the \(i\)-th eigenvalue of \(V_{a}^{p}-I\). Therefore we get
\[\left\|\psi_{a,\mu_{i}}\right\|_{V_{a}^{p}}^{2}-\psi_{a,\mu_{i}}^{\top}\psi_{a,\mu_{i}}=\psi_{a,\mu_{i}}^{\top}U^{\top}\Lambda U\psi_{a,\mu_{i}}=\left\|\Lambda^{\frac{1}{2}}U\psi_{a,\mu_{i}}\right\|_{2}^{2}\geqslant 0\]
and
\[\frac{1}{L}\left(\left\|\psi_{a,\mu_{i}}\right\|_{V_{a}^{p}}^{2}-\psi_{a,\mu_{i}}^{\top}\psi_{a,\mu_{i}}\right)=\frac{\ell^{2}\left(\sigma_{a,i}^{p}\right)^{2}}{10L}-\frac{\psi_{a,\mu_{i}}^{\top}\psi_{a,\mu_{i}}}{L}\geqslant\frac{\ell^{2}\left(\sigma_{a,i}^{p}\right)^{2}}{10L}-1\geqslant 0.\]
Thus \(\frac{\ell\sigma_{a,i}^{p}}{\sqrt{10L}}\geqslant 1\), and this completes the proof.
**Lemma 5**.: _For the Fed-PECD algorithm, \(\mathbb{P}\left[\mathcal{E}\left(\alpha\right)\right]\leq\delta\)._
Proof.: Let \(\alpha=\min\left\{\alpha_{1},\alpha_{2}\right\}\), where
\[\alpha_{1}=\sqrt{2\log\left(2MKH/\delta\right)}\quad\text{and}\quad\alpha_{2}=\sqrt{2\log\left(KH/\delta\right)+d\log\left(ke\right)}.\]
From Theorem 1, we know \(k>1\) and \(k\geqslant\frac{\alpha_{2}^{2}}{d}\). This choice of \(k\) requires \(\alpha_{2}^{2}\geqslant d\). Based on the definition of \(\mathcal{E}\left(\alpha\right)\),
\[\mathcal{E}\left(\alpha\right)=\mathcal{E}\left(\min\left\{\alpha_{1},\alpha_{2 }\right\}\right)\supset\mathcal{E}\left(\max\left\{\alpha_{1},\alpha_{2}\right\} \right),\]
which implies that \(\mathbb{P}[\mathcal{E}\left(\alpha\right)]=\max\left(\mathbb{P}[\mathcal{E} \left(\alpha_{1}\right)],\mathbb{P}[\mathcal{E}\left(\alpha_{2}\right)]\right)\). Therefore, we need to prove that \(\mathbb{P}[\mathcal{E}\left(\alpha_{i}\right)]\leqslant\delta\), for all \(i\in\{1,2\}\). In the following, we bound \(\mathbb{P}[\mathcal{E}\left(\alpha_{1}\right)]\) and \(\mathbb{P}[\mathcal{E}\left(\alpha_{2}\right)]\) separately.
(i) Bound \(\mathbb{P}[\mathcal{E}\left(\alpha_{1}\right)]\). Based on Lemma 3 and Hoeffding's inequality, we have
\[\mathbb{P}[\,|\hat{r}_{a,i}^{p}-r_{a,i}|\geqslant\alpha_{1}\sigma_{a,i}^{p}\mid\mathcal{F}_{p}]\leqslant 2\exp\left(-\frac{\alpha_{1}^{2}\left(\sigma_{a,i}^{p}\right)^{2}}{2\left(\sigma_{a,i}^{p}\right)^{2}}\right)=2\exp\left(-\frac{\alpha_{1}^{2}}{2}\right)=\frac{\delta}{MKH}.\]
Then, by using the union bound, we get
\[\mathbb{P}[\mathcal{E}\left(\alpha_{1}\right)]=\mathbb{P}[\exists p\in[H],i\in[M],a\in\mathcal{A}_{i}^{p-1}:|\hat{r}_{a,i}^{p}-r_{a,i}|\geqslant\alpha_{1}\sigma_{a,i}^{p}]\leqslant MKH\cdot\frac{\delta}{MKH}=\delta.\]
(ii) Bound \(\mathbb{P}[\mathcal{E}\left(\alpha_{2}\right)]\). From Eq. (7) and Lemma 4 we have \(|\hat{r}_{a,i}^{p}-r_{a,i}|\leqslant\sigma_{a,i}^{p}\left(\sqrt{\Xi_{a,p}}+\frac{2\ell}{\sqrt{10L}}\right)\), where \(\Xi_{a,p}\) is defined
for a given phase \(p\) and arm \(a\). Note that
\[\Xi_{a,p}=\frac{1}{2}\sum_{i,j\in\mathcal{R}_{a}^{p-1}}\frac{\ell\left(\xi_{a,i}^{p-1}+2f_{a,i}^{p-1}\right)}{\sqrt{5}\left\|\psi_{a,\mu_{i}}\right\|}\,e_{a,i}^{\top}\Big(\sum_{k\in\mathcal{R}_{a}^{p-1}}f_{a,k}^{p-1}e_{a,k}e_{a,k}^{\top}\Big)^{\dagger}e_{a,j}\,\frac{\ell\left(\xi_{a,j}^{p-1}+2f_{a,j}^{p-1}\right)}{\sqrt{5}\left\|\psi_{a,\mu_{j}}\right\|}.\]
In the following proof, we use a matrix form for \(\Xi_{a,p}\). Note that \(f_{a,i}^{p-1}\) may equal 0 under the Block-Coordinate Ascent (BCA) algorithm [19] (Algorithm 3 in the supplementary material); when this occurs, agent \(i\) does not choose arm \(a\) during the collaborative exploration step in phase \(p\), even though \(a\) is in the active arm set, and in this case \(\xi_{a,i}^{p-1}=0\). Therefore, we define \(\zeta_{a,i}\) as follows.
\[\zeta_{a,i}=\left\{\begin{array}{ll}\dfrac{\ell\left(\xi_{a,i}^{p-1}+2f_{a,i}^{p-1}\right)}{\sqrt{5}\sqrt{f_{a,i}^{p-1}}\left\|\psi_{a,\mu_{i}}\right\|},&\text{if }f_{a,i}^{p-1}\neq 0,\\ 0,&\text{if }f_{a,i}^{p-1}=0.\end{array}\right.\]
Then, \(\left\{\zeta_{a,i}\right\}_{i\in\mathcal{R}_{a}^{p-1}}\) are conditionally independent 1-sub-Gaussian random variables. We define the vector \(\zeta:=\left(\zeta_{a,i}\right)_{i\in\mathcal{R}_{a}^{p-1}}\) and the matrix \(A:=\left(a_{i,j}\right)_{i,j\in\mathcal{R}_{a}^{p-1}}\), where
\[a_{i,j}=\sqrt{f_{a,i}^{p-1}}e_{a,i}^{\top}\left(\sum_{k\in\mathcal{R}_{a}^{p-1 }}f_{a,k}^{p-1}e_{a,k}e_{a,k}^{\top}\right)^{\dagger}e_{a,j}\sqrt{f_{a,j}^{p- 1}}.\]
Here \(A\) is a symmetric matrix and
\[\sum_{k\in\mathcal{R}_{a}^{p-1}}a_{i,k}a_{k,j}=\sum_{k\in\mathcal{R}_{a}^{p-1 }}\left(\sqrt{f_{a,i}^{p-1}}e_{a,i}^{\top}\left(\sum_{k\in\mathcal{R}_{a}^{p- 1}}f_{a,k}^{p-1}e_{a,k}e_{a,k}^{\top}\right)^{\dagger}\right.\]
\[\left.\cdot e_{a,k}\sqrt{f_{a,k}^{p-1}}\sqrt{f_{a,k}^{p-1}}e_{a,k}^{\top} \left(\sum_{k\in\mathcal{R}_{a}^{p-1}}f_{a,k}^{p-1}e_{a,k}e_{a,k}^{\top}\right) ^{\dagger}e_{a,j}\sqrt{f_{a,j}^{p-1}}\right)\]
\[=\sqrt{f_{a,i}^{p-1}}e_{a,i}^{\top}\left(\sum_{k\in\mathcal{R}_{a}^{p-1}}f_{a,k}^{p-1}e_{a,k}e_{a,k}^{\top}\right)^{\dagger}\cdot\left(\sum_{k\in\mathcal{ R}_{a}^{p-1}}f_{a,k}^{p-1}e_{a,k}e_{a,k}^{\top}\right)\]
\[\cdot\left(\sum_{k\in\mathcal{R}_{a}^{p-1}}f_{a,k}^{p-1}e_{a,k}e_{a,k}^{\top} \right)^{\dagger}\cdot e_{a,j}\sqrt{f_{a,j}^{p-1}}\]
\[=\sqrt{f_{a,i}^{p-1}}e_{a,i}^{\top}V_{a}^{p}e_{a,j}\sqrt{f_{a,j}^{p-1}}=a_{i, j}.\]
Therefore, we know \(A^{2}=A\), which implies that all the eigenvalues of \(A\) are either 1 or 0. Also, we can get
\[\operatorname{trace}\left(A\right)=\sum_{i\in\mathcal{R}_{a}^{p-1}}a_{i,i}=\sum_{i\in\mathcal{R}_{a}^{p-1}}\sqrt{f_{a,i}^{p-1}}e_{a,i}^{\top}\Big(\sum_{k\in\mathcal{R}_{a}^{p-1}}f_{a,k}^{p-1}e_{a,k}e_{a,k}^{\top}\Big)^{\dagger}e_{a,i}\sqrt{f_{a,i}^{p-1}}=\operatorname{trace}\Big(V_{a}^{p}\sum_{i\in\mathcal{R}_{a}^{p-1}}f_{a,i}^{p-1}e_{a,i}e_{a,i}^{\top}\Big)=\operatorname{trace}\left(V_{a}^{p}(V_{a}^{p})^{\dagger}\right)=\operatorname{rank}\left(V_{a}^{p}\right)=d_{a}^{p}.\]
Since all eigenvalues of \(A\) are either 1 or 0 and \(\operatorname{trace}\left(A\right)=d_{a}^{p}\), there are exactly \(d_{a}^{p}\) eigenvalues equal to 1 and the rest are 0. Thus, \(\operatorname{rank}\left(A\right)=d_{a}^{p}\leqslant d\).
From the definition of \(\zeta\) and since \(A^{2}=A\), we have \(\Xi_{a,p}=\frac{1}{2}\zeta^{\top}A\zeta=\frac{1}{2}\|A\zeta\|^{2}\), where \(\zeta\) is a conditionally 1-sub-Gaussian random vector and \(A\) is an \(\mathcal{F}_{p}\)-measurable matrix. Therefore, for any \(\lambda\in(0,1/2)\), we have
\[\mathbb{P}\Big[\Big(\sqrt{\Xi_{a,p}}+\frac{2\ell}{\sqrt{10L}}\Big)^{2}\geqslant\alpha_{2}^{2}\;\Big|\;\mathcal{F}_{p-1}\Big]\leqslant\mathbb{P}\Big[2\Xi_{a,p}+\frac{4\ell^{2}}{5L}\geqslant\alpha_{2}^{2}\;\Big|\;\mathcal{F}_{p-1}\Big]\] \[=\mathbb{P}\Big[\|A\zeta\|^{2}+\frac{4\ell^{2}}{5L}\geqslant\alpha_{2}^{2}\;\Big|\;\mathcal{F}_{p-1}\Big]=\mathbb{P}\Big[e^{\lambda\|A\zeta\|^{2}+\frac{4\ell^{2}\lambda}{5L}}\geqslant e^{\lambda\alpha_{2}^{2}}\;\Big|\;\mathcal{F}_{p-1}\Big]\] \[\leqslant e^{-\lambda\alpha_{2}^{2}}\,\mathbb{E}\Big[e^{\lambda\|A\zeta\|^{2}+\frac{4\ell^{2}\lambda}{5L}}\;\Big|\;\mathcal{F}_{p-1}\Big] \tag{12}\] \[\leqslant e^{-\lambda\alpha_{2}^{2}+\frac{4\ell^{2}\lambda}{5L}}\sqrt{\frac{1}{\det\left(I_{d}-2\lambda A^{2}\right)}} \tag{13}\] \[=e^{-\lambda\alpha_{2}^{2}+\frac{4\ell^{2}\lambda}{5L}}\left(1-2\lambda\right)^{-d_{a}^{p}/2} \tag{14}\] \[\leqslant e^{-\lambda\alpha_{2}^{2}+\frac{4\ell^{2}\lambda}{5L}}\left(1-2\lambda\right)^{-d/2}\]
Eq. (12) follows from Markov's inequality, Eq. (13) follows from Lemma 2, Eq. (14) follows from the fact that the eigenvalues of \(A\) are either 1 or 0 and there are exactly \(d_{a}^{p}\) number of 1's.
By choosing \(\lambda=\frac{\alpha_{2}^{2}-d}{2\alpha_{2}^{2}}\in\left(0,\frac{1}{2}\right)\), we have
\[\mathbb{P}[\left(\sqrt{\Xi_{a,p}}+\frac{2\ell}{\sqrt{10L}}\right) ^{2}\geqslant\alpha_{2}^{2}\mid\mathcal{F}_{p-1}]\] \[\leqslant\left(\frac{\alpha_{2}^{2}}{d}\right)^{d/2}e^{-\frac{ \alpha_{2}^{2}-d}{2}+\frac{4\ell^{2}\lambda}{5L}}\leqslant\frac{\delta}{KH}\]
The last inequality follows by the following analysis:
\[\left(\frac{\alpha_{2}^{2}}{d}\right)^{d/2}e^{-\frac{\alpha_{2}^{2}-d}{2}+ \frac{4\ell^{2}\lambda}{5L}}\leqslant\frac{\delta}{KH}\] \[\Leftrightarrow\frac{d}{2}\left(2\log\alpha_{2}-\log d\right)+ \frac{d}{2}+\frac{4\ell^{2}\lambda}{5L}\] \[-\frac{1}{2}\left(\sqrt{2\log(\frac{KH}{\delta})+d\log\left(ke \right)}\right)^{2}\leqslant\log(\frac{\delta}{KH})\] \[\Leftrightarrow d\log\alpha_{2}\leqslant\frac{d\log\left(dk \right)}{2}-\frac{4\ell^{2}\lambda}{5L}\Leftrightarrow\alpha_{2}^{2}\leqslant dke^{- \frac{\delta\ell^{2}\lambda}{5L}}\Leftrightarrow\alpha_{2}^{2}\leqslant dk.\]
**Lemma 6**.: _If \(\mathcal{E}^{c}\left(\alpha\right)\) occurs, we must have \(a_{i}^{+}\in\mathcal{A}_{i}^{p}\), i.e., any optimal arm will never be eliminated._
Proof.: See Appendix of [24].
Using Lemma 6 we present the bound of the regret for the good events below.
**Lemma 7**.: _If \(\mathcal{E}^{c}\left(\alpha\right)\) occurs, the regret of Fed-PECD in phase \(p\) is upper bounded by \(\frac{4\sqrt{10}\alpha L}{\ell}\sqrt{dKM}\frac{f^{p}+K}{\sqrt{f^{p-1}}}\)._
Proof.: See Appendix of [24].
**Corollary 8**.: _Let \(\sigma_{a,i}^{p}=\frac{\sqrt{10}}{\ell}\left\|\psi_{a,\mu_{i}}\right\|_{V_{q} ^{p}}\). Then, under Fed-PECD, for any \(p\in[H],a\in\mathcal{A}_{i}^{p}\), we have \(\sum_{i\in[M]}\max_{a\in\mathcal{A}_{i}}\left(\sigma_{a,i}^{p}\right)^{2} \leq\frac{104KL^{2}}{\ell^{2}f^{p-1}}\)._
Proof.: Corollary 8 can be verified based on the fact that
\[\sum_{i\in[M]}\max_{a\in\mathcal{A}_{i}^{p}}\left(\sigma_{a,i}^{ p}\right)^{2} \leq\frac{10L^{2}}{\ell^{2}}\sum_{i\in[M]}\max_{a\in\mathcal{A}_{i }^{p}}\frac{\psi_{a,\mu_{i}}^{\top}}{\left\|\psi_{a,\mu_{i}}\right\|_{2}}V_{q} ^{p}\frac{\psi_{a,\mu_{i}}}{\left\|\psi_{a,\mu_{i}}\right\|_{2}}\] \[\leq\frac{104KL^{2}}{\ell^{2}f^{p-1}}.\]
The last inequality follows the same argument as in Lemma 7 from Eqs. (23)-(26).
Using the lemmas, we now present the proof of Theorem 1 for completeness. The proof follows from the earlier lemmas and uses similar steps as in [19]. The regret bound only differs by a scaling factor from that in [19]; hence our bound for unobservable contexts is order-wise the same as in the case where the contexts are observable.
_Proof of Theorem 1:_ Recall that the superscript \(p\) of \(n\) denotes the exponent. Then, with probability at least \(1-\delta\) and for \(n>1\), the total regret over the \(H\) phases can be bounded as:
\[R\left(T\right)=\sum_{p=1}^{H}R_{p}\leq\sum_{p=1}^{H}\frac{4\sqrt{10}\alpha L}{\ell}\sqrt{dKM}\,\frac{f^{p}+K}{\sqrt{f^{p-1}}}\] \[=\sum_{p=1}^{H}\frac{4\sqrt{10}\alpha L}{\ell}\sqrt{dKM}\left(\sqrt{c}\,n^{\frac{p+1}{2}}+\frac{K}{\sqrt{c}}\,n^{-\frac{p-1}{2}}\right) \tag{15}\] \[=\frac{4\sqrt{10}\alpha L}{\ell}\sqrt{dKM}\left[\sqrt{c}\left(\frac{1-\sqrt{n}^{\,H+1}}{1-\sqrt{n}}-1\right)\sqrt{n}+\frac{K}{\sqrt{c}}\left(\frac{1-\sqrt{n}^{\,-H}}{1-\sqrt{n}^{\,-1}}\right)\frac{1}{\sqrt{n}}\right] \tag{16}\] \[\leq\frac{4\sqrt{10}\alpha L}{\ell}\sqrt{dKM}\left(\sqrt{cn}\,\frac{n^{\frac{H+1}{2}}-\sqrt{n}}{\sqrt{n}-1}+\frac{K}{\sqrt{cn}-\sqrt{c}}\right)\] \[\leq\frac{4\sqrt{10}\alpha L}{\ell}\sqrt{dKM}\left(\frac{\sqrt{n}\sqrt{n-1}}{\sqrt{n}-1}\sqrt{cn\,\frac{n^{H}-1}{n-1}}+\frac{K}{\sqrt{cn}-\sqrt{c}}\right) \tag{17}\] \[\leq\frac{4\sqrt{10}\alpha L}{\ell}\sqrt{dKM}\left(\frac{\sqrt{n^{2}-n}}{\sqrt{n}-1}\sqrt{T}+\frac{K}{\sqrt{cn}-\sqrt{c}}\right) \tag{18}\]
Eq. (15) follows from \(f^{p}=cn^{p}\). Eq. (16) follows from \(\sum_{q=0}^{N-1}r^{q}=\frac{1-r^{N}}{1-r}\). Eq. (17) follows from \(n^{\frac{H}{2}}-1\leq\sqrt{n^{H}-1}\). Eq. (18) follows from \(T=\sum_{p=1}^{H}f^{p}+KH\geq\sum_{p=1}^{H}cn^{p}=cn\frac{n^{H}-1}{n-1}\).
Since
\[\alpha=O\left(\sqrt{\log\left(K\left(\log T\right)/\delta\right)+\min\{d,\log M \}}\right),\]
the regret scales in
\[O\left(\frac{L}{\ell}\sqrt{dKM\left(\log(K(\log T)/\delta)+\min\{d,\log M\}\right)}\left(\sqrt{T}+K\right)\right).\]
When \(K=O\left(\sqrt{T}\right)\), the cumulative regret scales as
\[O\left(\frac{L}{\ell}\sqrt{dKMT\left(\log(K(\log T)/\delta)+\min\{d,\log M\}\right)}\right).\]
We define communication cost as the number of scalars communicated between the server and the agents. The communication cost bound follows from Theorem 1 in [19].
## V Numerical Experiments
We validated the performance of our algorithm using both synthetic and real-world datasets. We considered two different settings: (i) _exact_, in which the actual feature vector \(c_{i}\) is known to all the agents, and (ii) _hidden_, in which the feature vector is unknown and the agents only observe a distribution \(\mu_{i}\). We also varied the number of agents to see its effect on the learning process. All experiments were conducted in Python. We set \(\delta=0.1\), \(T=2^{17}\), and \(f^{p}=2^{p}\) for \(p\in\{1,2,\cdots,17\}\), and ran 100 trials. To compare the performance for different numbers of agents, we set \(M\in\{50,100,150\}\).
**Synthetic data:** In this dataset we set \(K=10\) and \(d=3\). We generated the feature vectors \(\phi_{a,c}\) randomly from a uniform distribution. From \(\phi_{a,c}\), we constructed \(\psi_{a,c}\) by adding zero-mean noise. The suboptimal reward gap and \(\|\phi_{a,c}\|_{2}\) of the data lie in \([0.2,0.4]\) and \([0.5,1]\), respectively. We set \(\theta_{a}=[1,0,0]\). Figure 1 shows the variation of the cumulative regret with respect to the execution time for the different settings and for different numbers of agents; each point in the plot was averaged over 100 independent trials. The results in Figure 1(a) show that the exact setting outperforms the hidden setting, as expected, since the agents observe the actual contexts. We varied the number of agents as \(M=50,100,150\) and compared the results in Figure 1(b), which demonstrates that as the number of agents increases, the cumulative regret decreases. This is expected, since when more agents work collaboratively, each agent receives more information at the end of each phase, thus accelerating the learning process. We ran for a horizon of \(2^{17}\) and 100 independent trials.
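A sketch of how a synthetic instance matching the description above can be generated is given below; the noise scale used to build \(\psi\) from \(\phi\) and the exact normalization are our assumptions, since the paper only states that zero-mean noise was added and that \(\|\phi_{a,c}\|_{2}\in[0.5,1]\).

```python
import numpy as np

rng = np.random.default_rng(2)
K, d, M = 10, 3, 100
theta = np.zeros((K, d)); theta[:, 0] = 1.0          # theta_a = [1, 0, 0] for every arm

phi = rng.uniform(size=(M, K, d))                    # exact features phi_{a, c_i}
norms = np.linalg.norm(phi, axis=2, keepdims=True)
phi = phi / norms * rng.uniform(0.5, 1.0, size=norms.shape)   # ||phi_{a,c}|| in [0.5, 1]
psi = phi + rng.normal(scale=0.05, size=phi.shape)   # hidden setting: agents see a noisy version

def pull(i, a, noise=0.1):
    """Reward observed by agent i when pulling arm a (Gaussian observation noise)."""
    return float(phi[i, a] @ theta[a] + noise * rng.normal())
```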
**MovieLens data:** We used the MovieLens-100K dataset [25] to evaluate the performance of our algorithm. We first obtain the rating matrix \(r_{a,i}\in\mathbb{R}^{943\times 1682}\) using a non-negative matrix factorization \(r_{a,i}=WH\), where \(W\in\mathbb{R}^{943\times 3}\) and \(H\in\mathbb{R}^{3\times 1682}\).
We used the k-means clustering algorithm with \(k=30\) on \(H\) to cluster the action set into \(30\) clusters. Thus the number of actions is \(K=30\), and \(\theta_{a}\), for \(a\in[K]\), is the center of the respective cluster. We set \(M=100\) by randomly selecting \(100\) users from the dataset. For this experiment, we observed that the suboptimal reward gap of the data lies in \([0.01,0.8]\) and \(\|\phi_{a,c}\|_{2}^{2}\) lies in \([0.4,0.8]\). Figure 1 also shows the variation of the cumulative regret with respect to the execution time for the MovieLens data for the different settings and for different numbers of agents. The reward \(r(a_{i},c_{i})\) is bounded above by \(1\), and the observation noise \(\eta_{i}\) is Gaussian with zero mean and standard deviation \(10^{-3}\). In this experiment, as expected, the variant that does not observe the context (hidden) is outperformed by the variant that uses the context observation (exact) for the estimation, as shown in Figure 1(c). We varied the number of agents as \(M=50,100,150\) and ran for a horizon of \(2^{17}\); the plots are shown in Figure 1(d).
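The MovieLens preprocessing described above can be sketched with scikit-learn as follows; the file name and the NMF/k-means hyperparameters other than the rank 3, \(k=30\), and \(M=100\) are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

# ratings: a 943 x 1682 user-by-movie matrix built from MovieLens-100K (zeros for missing entries)
ratings = np.load("movielens_ratings.npy")            # hypothetical preprocessed file

nmf = NMF(n_components=3, init="nndsvda", max_iter=500)
W = nmf.fit_transform(ratings)                         # 943 x 3 user features (contexts)
H = nmf.components_                                    # 3 x 1682 movie features

kmeans = KMeans(n_clusters=30, n_init=10, random_state=0).fit(H.T)
theta = kmeans.cluster_centers_                        # 30 x 3: one theta_a per action cluster

agents = np.random.default_rng(3).choice(W.shape[0], size=100, replace=False)
contexts = W[agents]                                   # feature vectors c_i of the sampled users
```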
## VI Conclusion
In this work, we studied a distributed and federated contextual MAB problem with unknown contexts, where \(M\) agents face different bandit problems and the goal of the agents is to minimize the total cumulative regret. We considered a setting where the exact contexts are hidden and hence unobservable to the agents. In our model, each agent shares its local estimates with the central server, receives the aggregated global estimates from the server, and updates its local model based on this information. We proposed an elimination-based algorithm, Fed-PECD, and proved regret and communication bounds for linearly parametrized reward functions. We evaluated the performance of our algorithm using synthetic and MovieLens data and compared it with a baseline approach.
|
2309.02373
|
nanoT5: A PyTorch Framework for Pre-training and Fine-tuning T5-style
Models with Limited Resources
|
State-of-the-art language models like T5 have revolutionized the NLP
landscape, but their computational demands hinder a large portion of the
research community. To address this challenge, we present nanoT5, a
specially-optimized PyTorch framework for efficient pre-training and
fine-tuning of T5 models. Drawing on insights from optimizer differences and
prioritizing efficiency, nanoT5 allows a T5-Base model to be pre-trained on a
single GPU in just 16 hours, without any loss in performance. With the
introduction of this open-source framework, we hope to widen the accessibility
to language modelling research and cater to the community's demand for more
user-friendly T5 (Encoder-Decoder) implementations. We make our contributions,
including configurations, codebase, pre-training insights, and pre-trained
models, available to the public.
|
Piotr Nawrot
|
2023-09-05T16:35:41Z
|
http://arxiv.org/abs/2309.02373v2
|
# nanoT5: A PyTorch Framework for Pre-training and Fine-tuning T5-style Models with Limited Resources
###### Abstract
State-of-the-art language models like T5 have revolutionized the NLP landscape, but their computational demands hinder a large portion of the research community. To address this challenge, we present nanoT5, a specially-optimized PyTorch framework for efficient pre-training and fine-tuning of T5 models. Drawing on insights from optimizer differences and prioritizing efficiency, nanoT5 allows a T5-Base model to be pre-trained on a single GPU in just 16 hours, without any loss in performance. With the introduction of this open-source framework, we hope to widen the accessibility to language modelling research and cater to the community's demand for more user-friendly T5 (Encoder-Decoder) implementations. We make our contributions, including configurations, codebase, pre-training insights, and pre-trained models, available to the public.
## 1 Introduction
The transformative power of large pre-trained language models such as GPT-3 Brown et al. (2020), T5 Raffel et al. (2019), and PaLM Chowdhery et al. (2022) is undeniable. However, their massive computational requirements remain a barrier for many researchers. Notably, models like T5 require extensive datasets and significant computational resources for their pre-training Raffel et al. (2019). Furthermore, many open-source implementations lean heavily on TPU accelerators Shazeer (2020), which are not as available to the academic community as GPUs.
Recognizing this gap, we introduce nanoT5, a resource-efficient, open-source PyTorch framework designed for the pre-training and fine-tuning of T5 models. Inspired by pioneering efforts such as nanoGPT Karpathy (2021) and Cramming Geiping and Goldstein (2022), nanoT5 uniquely concentrates on enhancing the training pipeline specifically for T5 encoder-decoder models. Our framework includes optimized configurations and scripts, enabling researchers to pre-train a T5-Base model with 248M parameters on a single GPU in just 16 hours. Every facet, from data preprocessing and model architecture to the learning rate schedule, has been tuned for both efficiency and adaptability. With nanoT5, users can seamlessly initiate model pre-training within minutes of accessing our GitHub repository.
This paper underscores two main innovations: First, we delve into the nuances between the Adam and Adafactor optimizer performances as detailed in Havinga, suggesting a version of AdamW Loshchilov and Hutter (2017) augmented with matrix-wise learning rate scaling based on the root mean square of the parameters. This variant showcases better speed and robustness compared to the default Adafactor Shazeer and Stern (2018). Second, we demonstrate that T5 models trained with nanoT5, housing around 250M parameters, can achieve performance akin to the publicly-available checkpoints while requiring 150x less pre-training data.
Our primary motivation stems from the growing demand for reproducible and tuned baselines Kaddour et al. (2023), enabling fast and small-scale hypothesis validation in the evolving realm of large pre-trained Transformers. With nanoT5, we address a gap highlighted by community requests 123, providing an approachable platform for working with T5 (Encoder-Decoder) architecture. To our understanding, nanoT5 pioneers the effort to reproduce T5 v1.1 pre-training using PyTorch, deviating from prior Jax/Flax implementations. We invite the community to explore our training configurations, codebase, and pre-trained models, all of which are available at [https://github.com/PiotrNawrot/nanoT5](https://github.com/PiotrNawrot/nanoT5).
Footnote 1: [https://github.com/google-research/net-to-test-transfer-transformer/issues/172](https://github.com/google-research/net-to-test-transfer-transformer/issues/172)
Footnote 2: [https://github.com/google-research/net/learn/issues/1892](https://github.com/google-research/net/learn/issues/1892)
Footnote 3: [https://github.com/google-research/issues/2079](https://github.com/google-research/issues/2079)
## 2 Related Work
The landscape of open-source repositories tailored for efficient pre-training of Transformer language models is vast. Notably, nanoGPT (Karpathy, 2021) sheds light on decoder-only models, while Cramming (Geiping and Goldstein, 2022) homes in on the optimal pre-training of the encoder-only BERT architecture (Devlin et al., 2019). Contrastingly, with nanoT5, we sought to bridge the existing gap by providing a standalone research template tailored for the T5-style (Encoder-Decoder) models.
To expedite the training process of nanoT5 we incorporated various optimizations. These encompass mixed precision training (Micikevicius et al., 2017), compiled runtimes (Narang et al., 2021), and more. Additionally, we delved into the potential of efficient training methodologies such as recent optimizers (Chen et al., 2023; Liu et al., 2023), and fast attention mechanism (Dao et al., 2022), which are elaborated further in Section 4.3. It's crucial to note that while we evaluated various efficient algorithms, we consciously opted against those, such as (Nawrot et al., 2022; Shazeer et al., 2017), that would modify the core model structure. Instead, our intent with nanoT5 was to cultivate a straightforward baseline for further research endeavors. The standout contribution of our work in terms of efficient training algorithms is the AdamW variant, with the RMS matrix scaling, which improves T5 pre-training convergence.
## 3 Methodology
Our validation strategy seeks to replicate the T5-base pre-training outcomes detailed in (Shazeer, 2020) and the fine-tuning results of Tk-Instruct on the Super Natural-Instructions (SNI) meta-dataset (Wang et al., 2022).
### Training pipeline
We've devised a comprehensive training pipeline prioritizing efficient data management, low-level optimizations, and coding simplicity, all while preserving the core model and training logic:
* **Dataset Handling:** Given the extensive volume of the C4 dataset, which exceeds 300GB, our repository implements concurrent data downloading with model training. This optimization speeds up the commencement of T5 model pre-training to a few minutes.
* **Exposure and Simplicity:** Our methodology aims to strike a balance between adaptability and abstraction. With tools such as the HuggingFace Accelerator (Sylvain Gugger, 2022), we abstract tasks like checkpoint management and tensor operations. Experiment tracking is realized via neptune.ai (Neptune team, 2019), and we've employed hydra (Yadan, 2019) for coordinated hyperparameter handling.
* **Efficiency:** We've leveraged the optimizations of PyTorch 2.0 (Paszke et al., 2019), and employed mixed-precision training in line with established optimization guidelines 45. Footnote 4: [https://nuggingface.com/docs/learn/examples/pdf_train_gpu_core](https://nuggingface.com/docs/learn/examples/pdf_train_gpu_core)
* **Flexibility:** Our repository is designed with adaptability in mind, offering support for multi-GPU training, gradient accumulation, and gradient checkpointing (a minimal sketch of how mixed precision, compilation, and gradient accumulation combine follows this list). This ensures users can reproduce our results on a variety of GPUs beyond the A100 and can experiment with configurations larger than the T5-base size emphasized in this study. Additionally, we provide support for both CPUs and Apple's ARM M1 chips.
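A minimal sketch of how BF16 mixed precision, `torch.compile`, and gradient accumulation fit together in a single training step; the tiny linear model and synthetic batch stand in for the actual T5 module and dataloader, so this is an illustration rather than the nanoT5 code itself.

```python
import torch
import torch.nn as nn

model = torch.compile(nn.Linear(512, 512).cuda())       # stand-in for the T5 module
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
grad_acc_steps = 2                                       # e.g. 2 when the device batch must be halved

for step in range(100):
    x = torch.randn(64, 512, device="cuda")              # stand-in for a tokenized batch
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = model(x).pow(2).mean() / grad_acc_steps    # divide so accumulated grads average out
    loss.backward()
    if (step + 1) % grad_acc_steps == 0:
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)
```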
### Pre-training
Our experiments strictly follow the T5-v1.1-base training configuration (Shazeer, 2020), where the model itself comprises roughly 248M parameters. The C4 dataset (Raffel et al., 2019), sourced directly from Huggingface, undergoes tokenization via the Wordpiece tokenizer (Schuster and Nakajima, 2012), with the original model's vocabulary. During pre-processing, 15% of the input data is masked using sentinel tokens, setting the neural network's target as the prediction of these tokens, leveraging its decoder. Consistent with the original study, we've set the batch size at 128 examples, with inputs of 512 tokens and outputs of 114 tokens. Optimization is facilitated through the Adafactor optimizer (Shazeer and Stern, 2018), combined with the Inverse-Square-Root (ISR) learning rate schedule. The model is trained for \(2^{16}\) steps. For more details please refer to the original work.
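The Inverse-Square-Root (ISR) schedule mentioned above can be written in a few lines; the 10k-step warm-up used here is the value commonly paired with this schedule in T5-style training and is an assumption, not a number stated in this paper.

```python
def isr_lr(step, warmup=10_000):
    """Inverse-Square-Root schedule: constant during warm-up, then decays as 1/sqrt(step).

    The 10k warm-up is an assumption taken from common T5 practice, not from this paper.
    """
    return 1.0 / (max(step, warmup) ** 0.5)

# Example: the learning rate at a few points of a 2**16-step pre-training run
for s in (1, 10_000, 2**16):
    print(s, isr_lr(s))
```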
### Fine-tuning
Our fine-tuning employs the Super Natural-Instructions (SNI) meta-dataset (Wang et al., 2022), which has been previously used for fine-tuning
models like FlanT5 Chung et al. (2022), BLOOM Scao et al. (2022), and Tk-Instruct Wang et al. (2022). To assess the correctness of our fine-tuning setup, and the efficiency of our pre-training, we decided to reproduce the Tk-Instruct methodology.
### Reproducibility
Ensuring that our work can be reliably replicated is a core focus of our methodology. To facilitate this, we have taken the following measures:
* **Model Weights:** We make the model's weights available on the HuggingFace Hub. These can be downloaded and used for fine-tuning on the SNI dataset with nanoT5.
* **Loss Curves:** We openly share both the pre-training and fine-tuning loss curves to provide insight into the model's learning dynamics.
* **Hyperparameters:** All hyperparameters used in our experiments have been released.
* **Environment and Hardware:** In our repository we offer comprehensive instructions on how to recreate our environment, including detailed information about our hardware. This encompasses specifications of our CPU and GPU, as well as the relevant driver versions.
* **Statistical Robustness:** To ensure the validity of our results, each experiment was conducted three times with different random seeds.
## 4 Results
### Reproducing Pre-Training
By following the original experimental setup described above, we achieved a negative log-likelihood of \(1.995\) on the held-out set, which is slightly inferior to the reference.
In exploring alternative optimization methods, we tested the AdamW optimizer as a potential replacement for the original Adafactor. While AdamW theoretically promises greater training stability by directly estimating the second moment of the gradients (as opposed to Adafactor's low-rank approximation), our training with AdamW diverged. This behavior mirrors findings from a study on T5 pre-training (Havinga). Upon further investigation, we identified that matrix-wise learning rate (LR) scaling using its root mean square (RMS) 6 was the crucial element ensuring Adafactor's convergence. After augmenting AdamW with this extra LR scaling, which we will refer to as RMS scaling, it not only converged but also exhibited improved stability during pre-training and operated slightly faster, thanks to the direct retrieval of the second moment from memory instead of approximating it.
Footnote 6: For more details please refer to [18], Section 4.3.3, Section 4.4.3, Section 5.3.
Nonetheless, when combined with the Inverse-Square-Root LR schedule, AdamW's performance was still outpaced by Adafactor. By replacing the ISR schedule with a Cosine LR Schedule, we achieved a superior negative log-likelihood of 1.953 on the held-out set, significantly surpassing Adafactor with the ISR schedule. The specific results of these experiments can be found in Table 2. A comparison of the training loss curves using different optimizers (Adafactor vs. AdamW) and schedules (ISR vs. Cosine) is provided in Figure 2.
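A simplified sketch of the AdamW variant with RMS scaling described above: each parameter matrix gets its own parameter group, and before every step its learning rate is multiplied by the root mean square of that matrix, mirroring Adafactor's parameter-scaling heuristic. This is an illustration of the idea, not the exact nanoT5 implementation, and it omits the learning-rate schedule for brevity.

```python
import torch

def rms(t, eps=1e-8):
    """Root mean square of a parameter tensor, floored so the step size never collapses to zero."""
    return max(t.pow(2).mean().sqrt().item(), eps)

def make_optimizer(model, base_lr=1e-2, **kwargs):
    # one parameter group per tensor so each matrix can receive its own scaled learning rate
    groups = [{"params": [p], "lr": base_lr} for p in model.parameters()]
    return torch.optim.AdamW(groups, **kwargs)

def rms_scaled_step(optimizer, base_lr=1e-2):
    """Scale each matrix's learning rate by its RMS, then take an ordinary AdamW step."""
    for group in optimizer.param_groups:
        p = group["params"][0]
        group["lr"] = base_lr * rms(p.data)
    optimizer.step()
```

In practice the schedule (ISR or cosine) would multiply the same per-matrix factor; the sketch keeps only the scaling itself.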
### Fine-Tuning Performance Across Different Pre-Training Durations
Our fine-tuning configuration strictly aligns with that of Tk-Instruct. However, there remains some ambiguity regarding whether Tk-Instruct was initialized from a regular checkpoint (google/t5-v1_1-base) or from a version specifically tailored for Language Modelling (google/t5-base-lm-adapt).
Figure 1: Downstream performance of models across various pre-training durations, including existing T5-base variants accessible through Huggingface Hub.
Figure 2: Training loss curves contrasting different optimizers and learning rate schedules.
To cover all bases, we evaluated both, and successfully reproduced the original results.
Figure 1 presents a performance comparison of the model we trained in various time increments (ranging from 4 to 24 hours) against the original T5-base-v1.1 model weights from Huggingface Hub and its language modeling-adapted version. Notably, our model, trained for 16 hours on a single GPU, lagged by only 0.2 RougeL on average compared to the original T5-base-v1.1. This is an impressive result given the vast disparity in training data (the T5 paper indicates training on approximately 150x more data than we did). The language modeling-adapted checkpoint outperformed both the original T5-base-v1.1 model and ours, but this language modeling model adaptation extends beyond the scope of this study. A single fine-tuning step in our setup took approximately 0.18s, culminating in roughly an hour for the entire fine-tuning process.
### Efficiency Statistics
Table 1 showcases the efficiency metrics from our pre-training experiments. It details the time taken for a single pre-training step and the overall pre-training time based on our default configuration described in Section 3.2. A noteworthy observation is that, because of the large batch size (128) used for pre-training, numerical precisions other than BF16 require increasing the number of gradient accumulation steps from 1 to 2.
**Attempts at Boosting Efficiency** In our pursuit of efficiency, we experimented with various strategies, albeit with limited success:
* **Optimization Algorithms**: We assessed the performance of recent optimizers like Lion Chen et al. (2023) and Sophia Liu et al. (2023). However, neither outperformed the AdamW with RMS scaling.
* **Positional Embeddings**: We tried replacing T5's learned relative positional embeddings with ALiBi Press et al. (2021). Although such a switch had the potential to reduce the number of parameters, leading to faster training and inference rates, and paving the way for integrating Flash Attention Dao et al. (2022) (currently limited to non-parametric bias), our trials revealed that training with ALiBi was more volatile and yielded suboptimal pre-training loss.
* **FP16 Precision**: Unfortunately, all our attempts using FP16 precision consistently diverged.
## 5 Conclusions
In this study, we demonstrated the feasibility of pre-training a substantial model like T5 under resource constraints, specifically using a single A100 GPU within a 24-hour timeframe. Through selection of optimization methods and configurations, we achieved results comparable to large-scale training settings. Our intention in sharing the codebase, configurations, and training logs is to bridge the gap between research accessibility and computational resource limitations in the NLP domain. We invite and welcome community suggestions to further refine and enhance our approach.
Moving forward, we aim to enrich our codebase by incorporating additional training objectives, such as those suggested by Tworkowski et al. (2023); Tay et al. (2022), in hopes of further optimizing the training pipeline.
| **Mixed Precision** | **Torch 2.0 compile** | **Grad Acc** | **Time per 1 Pre-training step** | **Total Pre-training time** |
| --- | --- | --- | --- | --- |
| FP32 | No | 2 | ~4.10s | ~74.6h |
| TF32 | No | 2 | ~1.39s | ~25.3h |
| BF16 | No | 2 | ~1.30s | ~23.7h |
| TF32 | Yes | 2 | ~0.95s | ~17.3h |
| BF16 | Yes | 1 | ~0.56s | ~10.2h |

Table 1: Efficiency metrics across various configuration settings during pre-training, with the "Total Pre-training Time" column referencing \(2^{16}\) steps following the default config.
\begin{table}
\begin{tabular}{|c|c|c|} \hline & **Inverse-Square-Root** & **Cosine** \\ \hline
**Adafactor** & 1.995 & 1.993 \\ \hline
**AdamW** & 2.040 & **1.953** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of negative log-likelihood scores on the held-out set of C4 using different optimization methods and learning rate schedules.
## Acknowledgements
This work was supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences.
|
2304.13282
|
Machine Vision-Based Crop-Load Estimation Using YOLOv8
|
Labor shortages in fruit crop production have prompted the development of
mechanized and automated machines as alternatives to labor-intensive orchard
operations such as harvesting, pruning, and thinning. Agricultural robots
capable of identifying tree canopy parts and estimating geometric and
topological parameters, such as branch diameter, length, and angles, can
optimize crop yields through automated pruning and thinning platforms. In this
study, we proposed a machine vision system to estimate canopy parameters in
apple orchards and determine an optimal number of fruit for individual
branches, providing a foundation for robotic pruning, flower thinning, and
fruitlet thinning to achieve desired yield and quality. Using color and depth
information from an RGB-D sensor (Microsoft Azure Kinect DK), a YOLOv8-based
instance segmentation technique was developed to identify trunks and branches
of apple trees during the dormant season. Principal Component Analysis was
applied to estimate branch diameter (used to calculate limb cross-sectional
area, or LCSA) and orientation. The estimated branch diameter was utilized to
calculate LCSA, which served as an input for crop-load estimation, with larger
LCSA values indicating a higher potential fruit-bearing capacity. RMSE for
branch diameter estimation was 2.08 mm, and for crop-load estimation, 3.95.
Based on commercial apple orchard management practices, the target crop-load
(number of fruit) for each segmented branch was estimated with a mean absolute
error (MAE) of 2.99 (ground truth crop-load was 6 apples per LCSA). This study
demonstrated a promising workflow with high performance in identifying trunks
and branches of apple trees in dynamic commercial orchard environments and
integrating farm management practices into automated decision-making.
|
Dawood Ahmed, Ranjan Sapkota, Martin Churuvija, Manoj Karkee
|
2023-04-26T04:46:03Z
|
http://arxiv.org/abs/2304.13282v1
|
# Machine Vision-Based Crop-Load Estimation Using YOLOv8
###### Abstract
Shortage of labor in fruit crop production has become a significant challenge in recent years, and mechanized and automated machines have emerged as promising alternatives to labor-intensive orchard operations such as harvesting, pruning, and thinning. One of the key requirements for agricultural robots performing these tasks is the ability to identify tree canopy parts such as trunks and branches and to estimate their geometric and topological parameters, including branch diameter, branch length, and branch angles. With an estimate of the target crop-load, researchers can then develop automated pruning and thinning platforms that make more effective decisions to achieve optimal crop yields. In this study, we propose a machine vision system to estimate these canopy parameters in apple orchards. These parameters were then used to estimate an optimal number of fruit that individual branches could bear in a commercial orchard, which provides a basis for robotic pruning, flower thinning, and fruitlet thinning so that desired fruit yield and quality can be achieved. Utilizing color and depth information collected with an RGB-D sensor (Microsoft Azure Kinect DK), a YOLOv8-based instance segmentation technique was developed to identify trunks and branches of apple trees in the dormant season. We then applied a Principal Component Analysis technique to estimate branch diameter and orientation. The estimated branch diameter was used to calculate the limb cross-sectional area (LCSA), which was then used as an input for crop-load estimation, as a larger LCSA indicates a higher potential fruit-bearing capacity of the branch. RMSE for branch diameter estimation was calculated to be 2.08 mm and for crop-load estimation to be 3.95. Based on the established management practices in commercial apple orchards, we estimated the target crop-load (number of fruit) for each segmented branch with a mean absolute error (MAE) of 2.99 (the ground-truth target crop-load was 6 apples per cm\({}^{2}\) LCSA). Our study demonstrated a promising workflow with a high level of performance in identifying trunks and branches of apple trees in a dynamic commercial orchard environment and in integrating farm management practices into automated decision-making.
Keywords:Object detection,YOLOv8, deep learning, machine vision, agricultural automation and robotics
## 1 Introduction
Around 30-40% of the total value of United States (U.S.) crops comes from specialty crops (Fuchs et al., 2021), which clearly shows the importance of this industry to U.S. agriculture and economic activity. More than 200 thousand seasonal workers are invited each year from other countries to work in U.S. tree fruit orchards, performing field operations such as tree pruning, flower thinning, green fruit thinning, and harvesting. However, in recent decades, growers have faced challenges in finding enough farm labor to complete field operations in various specialty crops (Bogue, 2020). The farm labor crisis has recently worsened because of the global pandemic, as the reduction of agricultural labor inputs due to COVID-19 resulted in an estimated loss of $309 million in agricultural production over the first year of the pandemic (March 2020 to March 2021; Bochtis et al., 2020; Lusk and Chandra, 2021). Thus, the agricultural production system is in critical need of automated and robotic machines that can operate in orchard environments to perform various crop- and canopy-management operations (Sapkota et al., 2023; Q. Zhang et al., 2019).
Robots with robust machine vision and manipulation systems have the potential to reduce human labor by making decisions and acting in real time to perform specific or programmed tasks requiring repeatable accuracy (Bechar, 2021). Over the last two decades, researchers and engineers have been investigating the development of mechanized and automated platforms that could mimic human operation in the orchard environment, such as automated fruit picking (Hua et al., 2019; Huang et al., 2020; Kondo et al., 1996; Verbiest et al., 2021), automated tree pruning (He and Schupp, 2018; Liu et al., 2012), robotic shoot thinning (Majeed et al., 2020, 2021), and automated flower thinning (Nielsen et al., 2011; C. Zhang et al., 2022).
Despite the tremendous research and development efforts in recent years, no automated machines have so far been reported as commercially adopted for crop-load management operations such as selective tree pruning and fruit thinning in real-world environments. Due to the complex, unstructured/uncontrolled environment and unpredictable variability in lighting, landscape, and atmospheric conditions, the motions demanded of robots for automated crop-load management change frequently in time and space, making it an even more complex problem (Bechar and Vigneault, 2016). Crop-load management in U.S. orchards follows specific guidelines for operations such as tree pruning, flower thinning, and fruit thinning (Q. Zhang et al., 2019). Nonetheless, efficiently managing the desired amount of crop-load through an automated decision system remains a significant challenge for researchers and engineers, who are currently working on developing precise guidance systems for robots to manage the crop-load according to these guidelines.
Fruit tree pruning involves selectively cutting and removing certain branches of a tree following guidelines (provided by researchers or experienced growers) that allow the fruits to grow uniformly at desired locations. Additionally, tree pruning ensures proper penetration of sunlight and air into the canopies and regulates fruiting-site spacing to achieve better yield and crop quality. Likewise, the thinning operation involves removing a portion of the flower bloom and/or immature fruits (fruitlets) to ensure the desired quality (including size) of the remaining fruit. These crop-load management operations in tree fruit production are a balancing act between maximizing yield (crop-load) and optimizing fruit quality while ensuring adequate return bloom (Robinson et al., 2014). Thus, to effectively perform most of these crop-load management operations using a robotic system, it is critical to estimate the target crop-load on each branch, which is determined, in commercial apple farming, from tree trait information such as trunk size, branch diameter, and branch length. The branches support the growth of leaves, fruits, flowers, and buds, and their geometric properties can provide insights into growth, fruiting, and flowering because they are a substantial indicator of crop growth and yield (Nyambati and Kioko, 2018). Therefore, automated estimation of geometric parameters/traits (e.g., length and diameter) of fruit tree branches could provide crucial information to further advance robotic systems for various crop-load management operations.
Over the past decade, various 3D reconstruction and branch recognition techniques have been proposed and tested in commercial orchards. Karkee et al. (2014) used a time-of-flight-based 3D camera for the 3D reconstruction of apple trees and showed how such a geometrical representation could be useful in developing pruning rules. Tabb et al. (2017) used a robot vision system (Robotic System for Tree Shape Estimation, RoTSE) to reconstruct apple trees in the dormant season with a shape-from-silhouette method and used the structure to measure geometric attributes of apple trees such as branch structure, diameters, lengths, and branch angles. Likewise, Tong et al. (2022) and Zhang et al. (2017 and 2020) are recent studies detecting branches in apple trees using deep learning techniques such as Regions-Convolutional Neural Network (R-CNN), Faster R-CNN, Mask R-CNN, and Cascade Mask R-CNN, respectively. Song et al. (2021) recently developed a handheld device for measuring the diameter at breast height (DBH) of a tree using a digital camera, laser ranging, and image recognition. The handheld device was designed to perform instant, automated, and non-contact DBH measurements; however, it had a measurement bias of up to -1.78 mm. Likewise, Yuan et al. (2021) developed an Intelligent Electronic Device (IED) to measure DBH and tree height through high-precision Hall angle sensors and a dual in-line package (DIP) sensor; however, the authors reported an estimation bias of 1.6 mm compared with the caliper reading. Similarly, to measure tree DBH remotely, Fan et al. used RGB-D images collected with a mobile phone and simultaneous localization and mapping (SLAM), but a large measurement bias was reported during validation (Fan et al., 2018). Likewise, to perform outdoor measurement of tree stem DBH, McGlade et al. recently made use of an RGB-D sensor (Kinect V2), recording 51 individual urban trees from one viewing angle at distances of 1 m to 5 m using various Field of View (FOV) settings on the depth sensor (McGlade et al., 2020). The authors then implemented a circle-fitting approach on the resulting point clouds to estimate DBH. The study reported a relatively high RMSE compared to ground-truth stem measurements and was only feasible for trees after non-circular and irregular stems were removed.
Despite all these studies, extracting accurate 3D attributes of tree structures in unstructured and dynamic orchard environments remains challenging due to numerous obstacles such as occlusions, lighting variations, background noise, and inherent limitations of sensors (Akbar et al., 2016). Additionally, the existing studies regarding the tree traits identification (e.g. branch and trunk detection) could not provide any insights on how to use that detected branch/trunk information through an automated platform for crop-load management in commercial agriculture. Although studies have been reported on estimating crop-load in apple orchards using machine vision-based apple detection and counting (Aggelopoulou et al., 2011; Gongal et al., 2016; Ji et al., 2012; Linker et al., 2012), this
approach does not satisfy the need to estimate the target crop-load during the early growing season, because counting fruit later in the season cannot increase the crop if the orchard is under-cropped. Thus, to facilitate the development and implementation of various crop-load management strategies and rules (e.g., pruning and thinning), it is necessary to acquire information on the observable characteristics of the trees, such as their physical traits and features.
Regardless of numerous research efforts, the existing studies have been able to propose only a theoretical overview of estimating tree trunk and branch diameters, without practical application in commercial farming environments. Furthermore, the utilization of branch size information for crop-load management in real-world orchard settings has been largely unaddressed. Consequently, the primary objective of this study is to develop a robot vision system capable of estimating the target crop-load in a commercial apple orchard during the dormant season. This estimation will not only provide insights into the potential fruit-bearing capacity of individual branches but also assist in making informed automated pruning decisions, as farmers typically prune commercial trees during this period. By bridging the gap between theoretical understanding and practical application, this research aims to contribute to the development of efficient and accurate crop-load management strategies, enabling improved decision-making in commercial apple orchards. The following materials and methods section will detail the approach and techniques employed to achieve these objectives.
## 2 Materials and Methods
This study can be divided into four steps: i) acquire image data; ii) detect and segment the branches and trunks of apple trees using a deep learning approach; iii) extract the 3D point clouds from the segmented branch masks and estimate the limb cross-sectional area (LCSA); and iv) estimate the desired crop-load (number of fruit) for a particular branch section by integrating a commercial management approach. Figure 1 shows the functional workflow diagram of this study.
Figure 1: Workflow diagram of the study on machine vision system for precise crop load estimation in commercial orchard using deep learning method and branch diameter property
### Study site and data acquisition
This study was carried out in a commercial apple orchard (Allan Brothers Orchard, Figure 2a) located in Prosser, Washington, United States. The orchard was planted with an Envy variety trained to a V-trellis architecture. Trees were planted in 2009 with a row spacing of 9.0 ft and a plant spacing of 3.0 ft. A set of RGB-D images were acquired using an Intel RealSense 435i camera (Figure 2b) (Intel RealSense Technology, California, USA) and a Microsoft Azure Kinect DK AI camera (Figure 2c) (Microsoft Azure, Redmond, Washington) in December 2021 and November 2022 respectively. RGB images from the Intel RealSense camera were used to implement the deep learning method to identify the branches and trunk of apple trees whereas RGB-D images from Microsoft Azure camera were used to estimate the diameter and LCSA of the branch for target crop-load estimation.
Structured light (SL) and stereoscopy (SC) techniques were used by the Intel RealSense D435i camera to estimate 3D information of the scene, whereas the Microsoft Azure AI camera was based on the principle of Time-of-Flight (ToF) of light. Table 1 presents the additional specifications of the vision sensors used in this study.
Cameras were positioned at an approximate distance of 1 m from the tree trunk, and images were captured from various camera heights above the ground under both sunny and cloudy conditions.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Camera & Principle & Measuring Range (m) & Depth Resolution & RGB Max Resolution & Frame rate & FoV Depth & Price \\ \hline Realsense D435 & SL+SC & 0.11-10 & 1280 x 720 & 1920 x 1080 & 90 & 85.2\({}^{\circ}\times 58^{\circ}\) & 314 \\ Microsoft Azure & ToF & 0.5-5.4 & 640 x 576 & 4000 x 300 & 30 & 120\({}^{\circ}\times 120^{\circ}\) & 399 \\ \hline \end{tabular}
\end{table}
Table 1: Specification of the sensors/cameras used in this study (SL: Structured light; SC: Stereo Camera)
Figure 2:(a) Commercial apple orchard used in this study (Prosser, Washington). (b) Intel Realsense D435i RGB-D camera, (c) Microsoft Azure Kinect DK camera; and (d) Example image of apple trees used in the study.
### Detection and segmentation of trunk and branch using YOLOv8 model
YOLO (You Only Look Once) is a widely used object detection framework known for its high accuracy and performance (Redmon et al., 2016). YOLO was first introduced by Redmon et al. (2016) in the paper "You Only Look Once: Unified, Real-Time Object Detection" and is called a single-stage detector because it does everything in one step (Jiang et al., 2022). YOLOv8, the latest version of this framework, released in January 2023 by Ultralytics (Ultralytics, Maryland, USA), works by dividing an image into a grid of smaller regions and then predicting a bounding box and class probabilities for each object present in each region. The YOLOv8 algorithm uses the DarkNet-53 architecture to improve the feature extraction process, leading to more accurate object detection. DarkNet-53 is a convolutional neural network with 53 layers that can classify images into 1,000 object categories. The network is divided into smaller stages, which are then connected in a partial way, allowing for better feature reuse and gradient propagation.
One of the key improvements in YOLOv8 over previous versions is that it incorporates a technique called PS (Pseudo Ensemble or Pseudo Supervision), which involves the use of multiple models with different configurations during the training process. These models are trained on the same dataset, but with different hyperparameters, leading to a more diverse set of predictions. During inference, the predictions from different models are combined to produce a final prediction, leading to better accuracy and robustness. This technique is particularly useful when there is a limited amount of annotated training data available, as it allows the model to learn from its own predictions and generate a more diverse and accurate output. Figure 3 is the architecture of YOLOv8 algorithm implemented in this study for the detection of branch and trunk in apple tree images.
The earlier version, YOLOv5, included an efficient backbone network, a higher-resolution feature pyramid, and improved anchor boxes that adapt to different object shapes and sizes (Ge et al., 2021). YOLOv5 also incorporated advanced training techniques such as CutMix and mosaic augmentation, self-adversarial training, and a focal loss function to further improve its performance (X. Zhang et al., 2021). At its core, YOLOv5 works by first dividing the input image into a grid of cells and predicting the likelihood of an object being present within each cell (Ge et al., 2021). Each grid cell predicts a fixed number of bounding boxes, along with the class probabilities for each box. The network then refines the bounding box predictions based on the contents of the cell and the surrounding cells in the feature map. This allows YOLO models to accurately detect objects at different scales and locations within the image. In contrast to earlier YOLO versions, YOLOv8 combines the architecture of YOLOv4, DarkNet-53, and PS to improve the accuracy and robustness of object detection. There are five segmentation variants of YOLOv8: YOLOv8n-seg, YOLOv8s-seg, YOLOv8m-seg, YOLOv8l-seg, and YOLOv8x-seg.
In this study, we employed the YOLOv8-based deep learning approach (Figure 3) to detect and segment the trunks and branches of apple trees using RGB images from the Intel RealSense 435i camera. The input data consisted of 474 annotated images, partitioned into training, validation, and test sets in an 8:1:1 ratio. There were 1,141 labels for tree trunks and 2,369 labels for tree branches connected to the trunk, created in the COCO dataset format using Labelbox (Labelbox, San Francisco, US).
The YOLOv8 model was chosen due to its ability to efficiently detect objects while maintaining high accuracy. The model's architecture (Figure 4) was configured with DarkNet-53 for improved feature extraction and the Pseudo Ensemble (PS) technique for enhanced robustness in predictions. The model's output included bounding boxes and class probabilities for each detected trunk and branch, providing critical information for further analysis. The selected parameters, such as learning rate, batch size, and optimizer, were carefully chosen through multiple training and debugging trials to optimize the model's performance. The final hyperparameters were set to ensure a balance between model accuracy and computational efficiency. For instance, a batch size of 16 allowed for faster training while maintaining model stability, and an initial learning rate of 0.01 facilitated effective weight updates. Momentum and weight decay parameters were also included to accelerate convergence and prevent overfitting, respectively. The deep learning framework was implemented using PyTorch, and the YOLOv8 model was trained on a Linux System with an eight-core Intel i7 CPU and an RTX 3070 graphics card. By employing this state-of-the-art object detection method, we were able to accurately identify and segment trunks and branches of apple trees, which served as a crucial step towards estimating limb cross-section area and determining the desired crop-load on individual branch sections. Additional information about metrics applied to image during model training is presented in Table 2.
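As an illustration of the training setup described above, the sketch below shows how a YOLOv8 instance-segmentation model is typically fine-tuned with the Ultralytics Python API using hyperparameters similar to those reported here (batch size 16, initial learning rate 0.01, weight decay 0.0005). It is not the authors' exact script: the dataset YAML, test image path, and epoch count are placeholders, and the momentum value shown is the Ultralytics default.

```python
from ultralytics import YOLO

model = YOLO("yolov8x-seg.pt")           # pretrained instance-segmentation weights

model.train(
    data="apple_trunk_branch.yaml",      # hypothetical dataset config (image paths, 2 classes: trunk, branch)
    epochs=100,                          # illustrative value; not specified in this paragraph
    imgsz=640,
    batch=16,                            # batch size reported in the text
    lr0=0.01,                            # initial learning rate reported in the text
    momentum=0.937,                      # Ultralytics default, shown for completeness
    weight_decay=0.0005,                 # matches Table 2
)

# Inference returns bounding boxes, class probabilities, and per-instance masks.
result = model("tree_image.jpg")[0]      # placeholder image path
masks = result.masks                     # segmentation masks for detected trunks/branches
```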
Figure 4: Functional diagram of YOLOv8 (Ultralytics (Version 8.0.0) [Computer Software]. Https://Github.Com/Ultralytics/Ultralytics)
Figure 3:Processing image through YOLOv8 object detection model to detect the trunk and branch. The diagram shows the surface architecture of YOLOv8
### Principal Component Analysis for 3D branch orientation
Principal Component Analysis (PCA) is a widely utilized unsupervised machine learning technique that identifies patterns and relationships in data without relying on pre-existing labels or guidance. By reducing data dimensionality, PCA enhances interpretability while minimizing information loss (Abdi & Williams, 2010). It is instrumental in identifying the most significant features in specific datasets. In this study, PCA was employed to determine the principal components with the highest variation in the 3D point clouds extracted from the branch masks detected by the YOLOv8 model in RGB images. The 3D point clouds extracted from the branch masks were represented by the direction and magnitude of variation. For this analysis, data normalization was necessary, as unscaled data with different measurement units could distort the comparison of the magnitude of variance among various features. Data normalization was achieved by subtracting the mean and dividing it by the standard deviation for each variable, transforming all variables to the same scale, explained as equation 1.
\[\mathbf{Z}=\frac{\mathbf{x}\ -\ \mu}{\mathbf{\sigma}}\]
#### Equation 1
Following data standardization, the covariance matrix was computed. This computation estimated how data deviated from the mean across multiple dimensions. In this study, three dimensions of point clouds were considered as three unique characteristics, and a 3x3 covariance matrix was constructed to analyze the correlation between different dimensions. The diagonal elements of the matrix reflected the variance of each feature, while the non-diagonal elements represented the variance between two distinct features. This information was vital for feature set reduction, as it enabled the detection of redundant features and the assessment of the cumulative proportion of variance contributed by each feature.
\[X=\text{Point values in X-axis},\qquad Y=\text{Point values in Y-axis},\qquad Z=\text{Point values in Z-axis}\]
\[\begin{pmatrix}cov(X,X)&cov(X,Y)&cov(X,Z)\\ cov(Y,X)&cov(Y,Y)&cov(Y,Z)\\ cov(Z,X)&cov(Z,Y)&cov(Z,Z)\end{pmatrix}\]
#### Equation 2
To identify the principal components, the covariance matrix was first computed, followed by the calculation of eigenvalues and eigenvectors. Principal components are uncorrelated new variables that account for varying
\begin{table}
\begin{tabular}{l r} \hline
**Methods Applied** & **Value** \\ \hline Hue augmentation (fraction) & **0.015** \\ \hline Saturation augmentation (fraction) & 0.7 \\ \hline Value augmentation (fraction) & 0.4 \\ \hline Rotation & 0.0 \\ \hline Translation & 0.1 \\ \hline Scale & 0.5 \\ \hline Flip left-right (probability) & 0.5 \\ \hline Mosaic (probability) & 1.0 \\ \hline Weight decay & 0.0005 \\ \hline \end{tabular}
\end{table}
Table 2: Metrics and processes applied to the images
proportions of the variance. Three principal components were obtained from the three-dimensional data, with each principal component having one eigenvector representing direction and one eigenvalue representing variance magnitude. In this study, the 3D point clouds extracted from the branch mask had one eigenvector representing the direction of the 3D point clouds and another eigenvalue representing the diameter of the branch at that point. The second eigenvalue was obtained by drawing a perpendicular line from the branch's boundary. Since only the most significant direction vector for each branch was required, only the first principal component was utilized in this study.
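The NumPy sketch below summarizes the PCA steps described above: per-axis standardization (Equation 1), the 3x3 covariance matrix (Equation 2), and selection of the eigenvector with the largest eigenvalue as the dominant branch direction. The random point cloud is a placeholder for the 3D points extracted from a segmented branch mask.

```python
import numpy as np

points = np.random.rand(500, 3)                      # stand-in for a branch point cloud (x, y, z)

# Equation 1: standardize each coordinate axis.
z = (points - points.mean(axis=0)) / points.std(axis=0)

# Equation 2: 3x3 covariance matrix across the X, Y, Z axes.
cov = np.cov(z, rowvar=False)

# Eigendecomposition; sort components by decreasing explained variance.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
principal_axis = eigvecs[:, order[0]]                # first principal component = branch direction
print(eigvals[order], principal_axis)
```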
### Branch Diameter Estimation
The point cloud of a branch segment, extracted from the branch mask identified by the YOLOv8 model and processed through PCA, was utilized to estimate the branch diameter in 3D space. To determine the appropriate location for diameter measurement on the branches, a comprehensive review of the literature and consultations with multiple commercial apple growers were conducted. It was found that the common practice in commercial orchards involves measuring the diameter at a region approximately 3 cm away from the trunk-branch boundary. This approach is based on the understanding that taking measurements less than 3 cm away from the trunk boundary yields more accurate results for the associated branch. In accordance with this guideline, branch diameters (ground truth data) were collected, as demonstrated in Figure 6.
For the 3D point clouds, as demonstrated in figure 5, the point of measurement is the normal plane cutting the branch points extremely close to the trunk end of the branch. We locate the normal plane of the branch and estimate its
Figure 5: Principal Component Analysis samples on scattered 3D point cloud data of branch section samples segmented by YOLOv8 object detection and segmentation model.
diameter by measuring its length along the normal axis using the orientation information retrieved using PCA. This method works well if the depth camera produces an accurate depth profile of the branch. It also relies on the instance segmentation model to capture sufficient segmentation information.
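The function below is an illustrative sketch of this measurement, not the exact implementation used in the study: it keeps a thin slab of points near the trunk end of the branch, projects them onto the plane normal to the PCA axis, and reports twice the mean radial extent as the diameter. The slab thickness and the synthetic test cylinder are assumptions made only for demonstration.

```python
import numpy as np

def branch_diameter(points: np.ndarray, axis: np.ndarray, slab_thickness: float = 0.01) -> float:
    """points: (N, 3) branch point cloud in metres; axis: branch direction from PCA."""
    axis = axis / np.linalg.norm(axis)
    t = points @ axis                                   # position of each point along the branch axis
    slab = points[t < t.min() + slab_thickness]         # thin slab near the trunk end of the branch
    radial = slab - np.outer(slab @ axis, axis)         # project onto the plane normal to the axis
    centre = radial.mean(axis=0)
    return 2.0 * np.linalg.norm(radial - centre, axis=1).mean()  # diameter = 2 x mean radial distance

# Synthetic check: a cylinder of radius 1 cm (diameter 20 mm) along the x-axis.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
cyl = np.stack([rng.uniform(0, 0.3, 2000), 0.01 * np.cos(theta), 0.01 * np.sin(theta)], axis=1)
print(branch_diameter(cyl, np.array([1.0, 0.0, 0.0])))  # ~0.02 m, i.e. ~20 mm
```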
### Crop-Load Estimation
According to some recently reported studies, for the vast majority of apples, keeping 6 fruits per cm\({}^{2}\) of limb cross-sectional area (LCSA) has been found to be best in terms of improved fruit set, fruit weight, mineral composition, and return bloom. Sidhu et al. recently compared the efficacy of thinning based on artificial bud extinction (ABE) by keeping 3, 6, and 12 fruits per cm\({}^{2}\) LCSA on 'Scilate' apples and found 6 fruits per cm\({}^{2}\) to be the optimal approach [14]. Likewise, Anthony et al. performed a comparative study on 'W28' apples (a cross of 'Enterprise' and 'Honeycrisp' apples) by adjusting 2, 4, 6, and 8 fruits per cm\({}^{2}\) of trunk cross-sectional area (TCSA) and concluded that the optimal fruit quality and bloom occurred at 6 fruits per cm\({}^{2}\) TCSA [15].
Additionally, to estimate the desired crop-load for each branch, we took insights from the farm manager and growers of the Olsen Bros Ranches, Inc. which is a fruit-producing and wholesale company located in Prosser. The growers described the "6 apples per cm\({}^{2}\) (LCSA)" concept used in apple farming as a way to estimate the number of apples a particular limb on a tree can support while maintaining the overall health and productivity of the tree. The approach is based on the idea that each limb of an apple tree can only support a certain amount of fruit based on its cross-sectional area. The guideline suggests that each square centimeter of a limb's cross-sectional area can support up to six apples, i.e if a particular limb has a cross-sectional area of X square centimeters, it can support up to 6X apples.
\[\text{Desired Crop Load}=6\times\text{LCSA}\]
Equation 3
The LCSA of the branch section was calculated using the formula for the area of a circle, given as:
\[\textbf{LCSA}=\frac{\pi\textbf{d}^{2}}{\textbf{4}}\]
Equation 4
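As a hedged worked example of Equations 3 and 4 (the 20 mm diameter below is an arbitrary illustration, not a measurement from this study), a branch with \(d=20\) mm \(=2.0\) cm gives

\[\text{LCSA}=\frac{\pi(2.0\ \text{cm})^{2}}{4}\approx 3.14\ \text{cm}^{2},\qquad\text{Desired Crop Load}\approx 6\times 3.14\approx 19\ \text{fruits}.\]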
Figure 6: Ground truth collection for diameter of primary branch using digital calliper
Prior to adopting the "6 fruits per cm square LCSA" method for desired crop-load estimation in this study, a commercially available tool for crop load estimation was examined. This tool, as depicted in Figure 7, is utilized during manual pruning and thinning and offers guidance on the desired crop-load for each branch of an apple tree. The device, employed by Allan Bros Fruit Company in Washington state, measures the LCSA of each branch and provides target and desired crop-load values in numbers. This strategy, which suggests a specific number of fruits per cm2, allows growers to estimate the number of apples that each limb can support, enabling them to thin the fruit accordingly to achieve the desired crop-load. It is important to note that, while commercial operations may be referenced for the adoption of certain strategies, scientific findings should not solely rely on growers' practices or experiences, unless supported by a well-designed scientific user study.
### System Evaluation
The detection and segmentation performance of the YOLOv8 algorithm is assessed using three main evaluation indicators: Mean Intersection over Union (IoU), Average Precision (AP), and Mean Average Precision (mAP). The IoU, also referred to as the Jaccard index, quantifies the degree of overlap between the segmented mask and the target object, effectively gauging the accuracy of the segmentation process. On the other hand, AP measures the area enclosed by the recall rate, precision rate, and the horizontal axis, providing an evaluation of target detection performance. Lastly, the mAP serves as an aggregate metric, encompassing the performance of both target detection and instance segmentation, offering a comprehensive assessment of the YOLOv8 algorithm's efficacy. The calculation equations are as follows:
\[MIoU=\frac{\text{Area of Overlap}}{\text{Area of Union}}=\frac{TP}{FP+TP+FN}\]
\[\textit{Equation 5}\]
\[mAP=\frac{TP}{TP+FP}\]
\[\textit{Equation 6}\]
\[mAR=\frac{TP}{TP+FN}\]
Equation 7
Figure 7: Device being used by commercial growers to estimate desired crop load using branch LCSA property
\[f1-Score=\frac{2(Precision*Recall)}{Precision+Recall}\]
#### Equation 8
where TP, TN, FP and FN are true positive, true negative, false positive and false negative respectively.
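The snippet below is a minimal sketch of how these overlap-based quantities can be computed from a predicted and a ground-truth binary mask; the two 3x3 masks are synthetic placeholders used only to make the formulas concrete (the ratios labelled mAP and mAR in Equations 6 and 7 correspond to the per-mask precision and recall).

```python
import numpy as np

pred = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=bool)   # predicted mask (placeholder)
gt   = np.array([[1, 1, 0], [0, 0, 0], [1, 0, 0]], dtype=bool)   # ground-truth mask (placeholder)

tp = np.logical_and(pred, gt).sum()
fp = np.logical_and(pred, ~gt).sum()
fn = np.logical_and(~pred, gt).sum()

iou = tp / (tp + fp + fn)                               # Equation 5
precision = tp / (tp + fp)                              # Equation 6
recall = tp / (tp + fn)                                 # Equation 7
f1 = 2 * precision * recall / (precision + recall)      # Equation 8
print(iou, precision, recall, f1)
```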
For evaluation of the branch diameter estimation and crop-load estimation (desired), Root Mean Squared Error (RMSE) was considered which is given as:
\[RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(predicted_{i}-actual_{i})^{2}}\]
#### Equation 9
In this equation, 'n' represents the total number of samples, while 'predicted_i' and 'actual_i' denote the predicted and actual values, respectively, for each sample 'i'. The actual target in this case refers to the true value of branch diameter, and the desired crop-load estimation is derived from the branch diameter estimation. Consequently, the RMSE of the branch diameter estimation effectively captures the error associated with the number of fruits that can be supported by each branch. This measure provides a useful reference for growers, enabling them to understand the degree of error they may encounter when using this technique for crop-load estimation.
Additionally, the estimated branch diameter and crop-load were compared with the respective ground truth values using Mean Absolute Error (MAE) given by:
\[MAE=\frac{1}{n}\sum_{i=1}^{n}|predicted_{i}-actual_{i}|\]
#### Equation 10
In addition to the other evaluation metrics, the estimated branch diameter and crop-load were also assessed using the Mean Absolute Percentage Error (MAPE), which is an evaluation metric that quantifies the average percentage difference between predicted and actual values. The MAPE is calculated using the following formula:
\[MAPE=\frac{1}{n}\sum_{i=1}^{n}\left|\frac{predicted_{i}-actual_{i}}{actual_{i}}\right|\times 100\]
#### Equation 11
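The following sketch shows how the three error metrics of Equations 9-11 can be computed with NumPy; the predicted and actual arrays are illustrative placeholder values, not measurements from this study.

```python
import numpy as np

predicted = np.array([21.3, 14.8, 27.1, 18.0])   # e.g. estimated branch diameters in mm (placeholder)
actual    = np.array([20.0, 16.0, 25.5, 19.2])   # corresponding ground-truth values (placeholder)

rmse = np.sqrt(np.mean((predicted - actual) ** 2))                 # Equation 9
mae  = np.mean(np.abs(predicted - actual))                         # Equation 10
mape = np.mean(np.abs((predicted - actual) / actual)) * 100.0      # Equation 11
print(rmse, mae, mape)
```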
## 3 Results and Discussion
### YOLOv8 performance evaluation
Figure 8 presents representative examples of the tree trunk and branch detection and segmentation achieved using the
YOLOv8-based model. These results demonstrate the effectiveness of the model in accurately identifying and delineating the tree structures within the images. The visual outputs of the model not only highlight the potential of this approach in the context of the study but also offer insight into its practical application for tree analysis in orchard settings. Tables 4 and 5 show the precision, recall, and F1-score achieved by the model in segmenting tree trunks and branches, respectively. Likewise, overall performances for both trunks and branches are presented in Table 6.
The table displays the quantitative results for precision, recall, F1 score, and [email protected] values for the five YOLOv8 models in segmenting trunks and branches. For trunk detection, YOLOv8x-seg achieved the highest precision (0.88) and recall (0.95), while YOLOv8l-seg demonstrated the lowest precision (0.80). In terms of branch detection, YOLOv8x-seg displayed the highest precision (0.81) and recall (0.83), whereas YOLOv8m-seg and YOLOv8s-seg had slightly lower precision values (0.78 and 0.80, respectively). Based on these findings, YOLOv8x-seg was identified as the optimal model for both trunk and branch detection, exhibiting superior performance across all evaluation metrics. Consequently, the YOLOv8x-seg model was selected for further analysis in estimating branch diameter and crop-load, allowing for a consistent and focused evaluation of its practical application in achieving the study's objectives. Moving forward, the remaining analysis will be conducted using the best/optimal model selected in this early stage of the results and discussion.
The goal of this study is not merely to evaluate the YOLOv8 models but to identify the best model for addressing the problem at hand, which is estimating branch diameter. The various YOLOv8 models possess different characteristics, as described in the methods section, which result in distinct strengths and weaknesses for object segmentation tasks. Therefore, it is crucial to evaluate their performance based on various tasks and metrics. From the results, it is evident that YOLOv8m-seg and YOLOv8l-seg models are highly accurate in identifying trunk-related objects, while YOLOv8m-seg demonstrates superior performance for branch-related objects.
The F1-score offers a balanced measure between precision and recall, indicating that the YOLOv8x-seg model demonstrated a favorable equilibrium for detecting tree trunks, while the YOLOv8m-seg model exhibited a desirable balance for detecting tree branches. These insights can be instrumental in choosing the most suitable model for the specific task of detecting tree trunks or branches.
The results suggested that the model correctly identified 95% of instances for both trunk and branch classes, although some false positives were generated. The mean average precision (mAP) values of 0.95 and 0.74 in Figure 9 (c) corresponded to the trunk and branch class detection, respectively, at a threshold of 0.5. The overall mAP of 0.85 for all classes combined at the same threshold indicated the model's overall performance in object detection. In the F1-confidence curve (Figure 9d), an F1-score of 0.83 at a confidence threshold of 0.58 demonstrated the trade-off between precision and recall for both trunk and branch classes at that specific threshold. A higher F1-score signified better model performance in terms of precision and recall. These findings provided insights into the model's performance and could be used to assess its suitability for detecting and segmenting trunks and branches in orchard images.
The performance of the YOLOv8 family of models for object detection and segmentation in an apple orchard setting has been evaluated using the relationship between [email protected] and FPS (Figure 10). The results show that YOLOv8x-seg model has the highest mAP of 0.88 for branch and 0.81 for trunk, while YOLOv8s-seg has the lowest mAP of 0.80 for trunk and 0.71 for branch among the models.
\begin{table}
\begin{tabular}{l c c c c c}
**Model** & **Precision** & **Recall** & **F1 Score** & **[email protected]** & **[email protected]:0.95** \\ \hline YOLOv8n-seg & 0.72 & 0.72 & 0.72 & 0.70 & 0.26 \\ YOLOv8s-seg & 0.71 & 0.72 & 0.72 & 0.73 & 0.28 \\ YOLOv8m-seg & 0.70 & **0.79** & **0.74** & 0.74 & **0.30** \\ YOLOv8l-seg & **0.73** & 0.70 & 0.71 & **0.75** & 0.29 \\ YOLOv8x-seg & **0.73** & 0.70 & 0.71 & 0.71 & 0.27 \\ \end{tabular}
\end{table}
Table 4: YOLOv8 model overall performance for both object classes, Trunk and Branch
Figure 10: Model accuracy VS speed
Figure 9: Trunk and branch segmentation results achieved with YOLOv8 model; (a) Precision-confidence curve; (b) Recall-confidence curve; (c) Precision-Recall Curve; and (d) F1-Confidence curve
For branch detection and segmentation, the YOLOv8l-seg model outperformed all other models in terms of mAP and FPS, while YOLOv8n-seg had the lowest mAP and FPS values for branch detection and segmentation. However, it is noted that all models perform relatively worse for branch segmentation, with mAP around 0.75 or lower. When interpreting the [email protected] vs FPS graph, it is important to consider the trade-off between accuracy and speed. In this case, the YOLOv8x-seg and YOLOv8l-seg models are the most accurate and fastest for trunk and branch detection and segmentation, respectively. However, it is also important to note that the branch segmentation task is comparatively challenging because of the branches' smaller dimensions and complex structure in the image space, so the relatively worse performance compared with trunks was expected.
The results of this study suggest that the YOLOv8 family of models is effective for object detection and segmentation in an apple orchard setting. The [email protected] vs FPS graph provided a useful tool for evaluating model performance, as it allowed for a trade-off between accuracy and speed to be considered. It is important, however, to carefully consider the specific use case and task requirements when selecting a model. While we have suggested the YOLOv8x-seg and YOLOv8l-seg models for trunk and branch detection and segmentation, respectively, it is worth noting that the use of two separate models may not be practical, and further research is needed to find an optimal model for the specific task at hand. Regarding the examples of detection and segmentation in Figures 11 and 12, it is important to discuss them separately. Figure 11 provides examples of trunk and branch detection and segmentation using the YOLOv8 algorithm, while Figure 12 presents some instance segmentation results. We can observe from Figure 12 that the model was not able to accurately identify branches in some cases, indicating a need for further improvement in branch segmentation performance.
Figure 11: Detection and segmentation of tree trunks and branches in variable lighting condition and variable locations; (a, d, and f) Robust detection and segmentation of trunks and branches in the presence of complex background and low light at different tree heights; (b) detection and segmentation with a noisy background in the lower part of tree; (c) segmentation in cloudy and low light condition and in top part of the tree; (e) segmentation in low light and middle part of the tree
In spite of the high accuracy achieved in trunk and branch detection and segmentation, the YOLOv8 model still generated some false positives and false negatives in some cases. For instance, Figure 12 (a) illustrates a case where the model failed to identify a branch segment, indicated by the yellow circle. This failure to detect the branch was most likely caused by limitations in the model training. Similarly, the circled region in Figure 12 (b) shows where YOLOv8 failed to identify the branch due to the limited number of samples used in this study for training. To address this issue, (Verma et al., 2018) suggested that training the model with a larger dataset, containing more input features, can significantly improve the model's generalization ability to new and unseen data. Moreover, a larger dataset can enable the model to capture the subtle differences in branch structures, such as those present in different tree species, ages, and environmental conditions during the dormant season. Additionally, a larger dataset can help to mitigate the risk of overfitting, where the model becomes too specialized to the training dataset and performs poorly on new samples.
Previous studies have utilized Mask R-CNN, a two-stage deep learning technique, for branch detection during the dormant season of apple orchards. However, the results achieved in our study show superior performance in terms of both detection accuracy and speed. Further optimization of the feature extraction process and exploration of more sophisticated machine learning algorithms can help to improve the model's accuracy. It is important to validate the model's predictions on new data to ensure generalization to unseen examples of branch position in natural orchard environments. To achieve this, a larger number of training samples are required.
### Branch diameter estimation
The study showed robust capability in terms of estimating the diameter of apple tree branches. The approach of applying PCA on 3D point clouds of the segmented branch mask and drawing a perpendicular line (normal) to the point orientation was an effective method to estimate branch diameter in a natural orchard environment. To validate the diameter estimation method applied in this study, 43 samples of ground truth were compared with the predicted diameter value. Figure 13 shows the relationship between the actual value and predicted value of the branch diameters using the proposed machine vision system. The model achieved an RMSE of 2.08 mm, which means that on average, the machine vision system's predictions for branch diameter are off by 2.08 mm compared to the ground truth measurements. Furthermore, a correlation coefficient of 0.82 was achieved, which indicated a strong positive linear relationship between the predicted and actual diameter measurements. This means that as the predicted diameter measurements increase, the actual diameter measurements also tend to increase, and vice versa. This suggests that the study can generate more accurate results upon increasing the input dataset sample. The strength of the relationship suggests that the machine vision system's predictions for diameter estimation are generally reliable and consistent with the actual field measurements.
Figure 12: Examples of unsuccessful detection of branches; (a) Caused by low light condition; and (b) Caused by complex branch sample due to low light and shadow condition
Likewise, over-masking and under-masking were noticed during segmentation with the YOLOv8 model. When the model over-masks a region, the estimated diameter is greater than the actual diameter, since nearby secondary branch points are masked as primary branch points. When it under-masks a region, the estimated diameter is smaller than the actual diameter because the diameter is calculated only for the masked area. Figures 14 (a) and (b) illustrate two scenarios where the YOLOv8 model over-masked and under-masked the branch region, respectively, which affected the diameter estimation accuracy in those regions.
After converting the diameter value into limb cross-sectional area (LCSA) using the formula for the area of a circle (A = \(\pi\)d\({}^{2}\)/4, where A is the area and d is the branch diameter), the crop-load estimated for each branch by the machine vision system was compared with the ground truth value. Figure 15 shows the normalized crop-load deviations for two diameter categories, i.e., 10-20 mm (smaller range) and 20-30 mm (larger range), and Figure 16 shows the data pattern of the 43 branch samples for target crop-load estimation.
Figure 14: (a) over-masking of the branch segment (b) under-masking of the branch segment
Figure 13: Scatter plot for branch diameter estimation of 43 branch samples
The proposed machine vision-based crop-load estimation system achieved an RMSE of 3.95, indicating that, on average, the estimated crop-load may differ from the actual crop-load by up to 3.95 units. In the context of desired crop-load estimation during dormant-season application, this means that the machine vision system can estimate the number of fruit with a certain degree of accuracy, but some error remains in the estimation. Thus, the estimated crop-load should be used as a guide, and farmers should also rely on their expertise and experience to make appropriate management decisions. A mean absolute error (MAE) of 2.99 was achieved for the desired crop-load estimation, which indicates that, on average, the estimated crop-load values produced by the machine vision system are off by 2.99 units from the true values. For example, if the true crop-load value of a particular plant is 10, the estimated value by the machine vision system may be anywhere between 7.01 and 12.99. This level of error is relatively low and suggests that the machine vision system is performing well for crop-load estimation.
In terms of application, crop-load estimation is a critical aspect of fruit and crop management. Accurate crop-load estimation helps farmers determine the optimal time for harvesting, manage crop-load and yield, and allocate resources effectively. With a machine vision system that can provide accurate crop-load estimates, farmers can save time and resources, increase efficiency, and improve overall crop quality and productivity. Moreover, for complex objects such as branches in apple orchards, this level of performance is a significant achievement.
By using this approach, growers can optimize their yields by producing high-quality fruit while also maintaining the overall health and productivity of the tree. However, it's important to note that the "6 apples per cm limb cross-section area" guideline is just a rough estimate, and growers may need to adjust the number of apples per limb based on a variety of factors, including the variety of apple, the age and health of the tree, and the growing conditions. Recently, Ranadeep et al (Sidhu et al., 2022b) have described the effects of different crop-load and thinning methods on the yield, nutrient content, fruit quality, and physiological disorders in 'Scilate' apples by studying plant physiology under 3 crop-load management approaches 3, 6, and 12 fruits per cm\({}^{2}\) LCSA. The study found that 6 fruits per cm\({}^{2}\) LCSA was the most effective method of managing crop-load and optimizing the fruit quality.
Figure 15: Target crop load deviation plots (Normalized) for branch diameter estimation
This kind of research is significant because it addresses a crucial need in orchard agriculture. Accurate estimation of crop-load is essential for optimal fruit yield, quality, and profitability in commercial orchards. Traditionally, crop-load estimation has been performed manually, which is time-consuming, labor-intensive, and prone to errors. The proposed machine vision system offers a more efficient and accurate alternative to manual crop-load estimation. By automating the process, the system can provide timely and accurate information about the crop-load, allowing farmers to adjust their management practices accordingly to optimize fruit yield and quality. Additionally, the system can reduce labor costs and improve productivity, making it a valuable tool for commercial orchard management. Therefore, this research is significant in advancing agricultural technology and improving the efficiency and profitability of commercial orchards.
One technical benefit of this system is its ability to accurately estimate crop-load without causing any damage to the fruit or the tree. Traditional methods of crop-load estimation involve physically counting fruits, which can be labor-intensive and can cause damage to the fruit or the tree. The machine vision-based system eliminates the need for physical counting and provides non-invasive crop-load estimation. This can lead to improved fruit quality, reduced tree damage, and increased productivity.
Another benefit of the proposed machine vision system is that LCSA-based crop-load estimation can also support automated pruning in orchards. During the dormant season, when the trees have shed their leaves, the machine vision system can accurately estimate the number of fruit each branch can carry based on the LCSA, without the need for human intervention. This information can then be used to determine which branches need to be pruned to achieve the desired crop-load. Automated pruning systems that use machine vision can greatly increase efficiency and accuracy compared to manual pruning (Elfiky et al., 2015; You et al., 2022). By accurately estimating the desired crop-load on each branch, the machine vision system can optimize the pruning process to maximize yield and minimize waste. This can lead to increased profitability for orchard farmers and can also reduce the environmental impact of orchard agriculture by minimizing the use of chemicals and other inputs. Overall, the combination of
Figure 16: Comparison of crop-load estimation accuracy of the proposed machine vision system with ground truth
machine vision-based crop-load estimation and automated pruning has the potential to revolutionize the way orchard agriculture is practiced.
Additionally, the proposed machine vision system can also be helpful in green fruitlet thinning as it provides an indirect measure of the potential fruiting capacity of the branch. The number of fruiting positions on a branch is directly related to the branch's cross-sectional area. Therefore, by estimating the branch's LCSA using a machine vision system, it is possible to estimate the potential number of fruiting positions on that branch. This information can be used to optimize fruit set and yield by selectively removing excess fruitlets during the green fruitlet thinning process. By removing fruitlets from branches with a higher estimated LCSA, growers can ensure that the remaining fruitlets have sufficient resources to develop into high-quality fruits. This approach can help to improve fruit size, reduce biennial bearing, and increase overall orchard productivity.
Furthermore, the advantage of this kind of system is its ability to optimize the use of resources such as water, fertilizer, and labor. By estimating the optimal crop-load for each branch, the system can help farmers adjust their management practices accordingly, which can lead to improved resource efficiency and reduced costs. This can also help in reducing the environmental impact of orchard agriculture by minimizing the use of resources such as water and fertilizer. Additionally, the system can provide highly accurate information on the crop-load, allowing farmers to make timely and informed decisions about their management practices. Overall, the machine vision-based target crop-load estimation system can provide significant benefits to orchard agriculture by improving productivity, reducing costs, and minimizing environmental impact.
## 4 Conclusion
In this study, a machine vision system was developed for desired crop-load estimation in branches of commercial orchards. The performance of the system was evaluated using the YOLOv8 model for trunk and branch segmentation, achieving high precision, recall, and f1 scores for trunk segmentation up to 0.888, 0.974, and 0.922 and for branch segmentation up to 0.739, 0.789, and 0.742, respectively. The machine vision system also demonstrated good accuracy for the diameter estimation of branches and LCSA calculation with an RMSE of 2.08. The crop-load estimation was achieved using branch LCSA, which enabled the machine vision system to estimate the desired or target number of crops with an MAE value of 2.99 by employing the farm management approach of "6 fruits per cm square LCSA". Thus, the proposed machine vision system provides a promising solution for efficient and accurate crop-load estimation in commercial orchards. To improve the accuracy of the system, further research can focus on developing deep learning models with larger datasets and exploring different image processing techniques. Additionally, future work can investigate the feasibility of integrating the machine vision system with automated pruning systems to optimize orchard management practices. Furthermore, the use of other crop-load estimation methods, such as fruit count, can be explored to enhance the accuracy of the system. Overall, the application of machine vision technology in orchard agriculture can lead to more efficient and sustainable orchard management practices.
**Acknowledgment:** The research is funded by AgAID Institute - Agricultural AI for Transforming Workforce and Decision Support, and United States Department of Agriculture National Institute of Food and Agriculture (USDA NIFA). The authors gratefully acknowledge Prof. Dr. Matthew Whiting for the guidance and Dave Allan (Allan Bros., Inc.) for providing access to the orchard during the data collection and field evaluation.
**Author Contributions**
D.A and R.S conceptualized, designed and performed the investigation, data analysis and wrote the manuscript. M.C designed the integrated camera system and provided critical reviews. M.K provided critical reviews and edited the manuscript.
Note: All authors reviewed and approved this archived version of the study, which is under submission to the _Computers and Electronics in Agriculture_ journal.
|
2310.18248
|
Regenerations and applications
|
Chen-Gounelas-Liedtke recently introduced a powerful regeneration technique,
a process opposite to specialization, to prove existence results for rational
curves on projective $K3$ surfaces. We show that, for projective irreducible
holomorphic symplectic manifolds, an analogous regeneration principle holds and
provides a very flexible tool to prove existence of uniruled divisors,
significantly improving known results.
|
Giovanni Mongardi, Gianluca Pacienza
|
2023-10-27T16:34:40Z
|
http://arxiv.org/abs/2310.18248v1
|
# Regenerations and Applications
###### Abstract.
Chen-Gounelas-Liedtke recently introduced a powerful regeneration technique, a process opposite to specialization, to prove existence results for rational curves on projective \(K3\) surfaces. We show that, for projective irreducible holomorphic symplectic manifolds, an analogous regeneration principle holds and provides a very flexible tool to prove existence of uniruled divisors, significantly improving known results.
Key words and phrases:rational curves, irreducible holomorphic symplectic manifolds 2020 Mathematics Subject Classification: 14H45, 14J42 (primary)
## 1. Introduction
Rational curves on \(K3\) surfaces have now been studied for decades, with motivations also coming from arithmetic geometry, (non-)hyperbolicity questions, and general conjectures on \(0\)-cycles. A natural generalization of \(K3\) surfaces is given by irreducible holomorphic symplectic (IHS) manifolds, which are compact, simply connected Kahler manifolds with \(H^{2,0}\) generated by a symplectic form. In any even dimension \(2n\), \(n\geq 2\), there are two known deformation classes (cf. [1]), one is given by Hilbert schemes of points on \(K3\) surfaces and their deformations (called varieties of \(K3^{[n]}\) type), and the other is given by deformations of an analogous construction using abelian surfaces (called varieties of generalized Kummer type). Two more deformation classes discovered by O'Grady exist in dimension \(6\) and \(10\) (cf. [11, 12]). For the basic theory of IHS manifolds we refer the reader to e.g. [1, 2].
In recent years rational curves on projective IHS manifolds have been actively investigated with different objectives and techniques, cf. e.g. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] and the references therein. Rational curves covering a divisor on an IHS manifold behave very well with respect to deformation theory, i.e. they deform in their Hodge locus inside the parameter space of deformations of the IHS manifold and keep covering a divisor. This has been one of the main properties used to prove existence results and, at the same time, one of the main limitations. Indeed, to produce a uniruled divisor in an ample linear system of a polarized IHS \((X,H)\) one would try and exhibit such an example on a special point \((X_{0},H_{0})\) in the same connected component of the corresponding moduli space. As proved in [10] in some cases it is impossible to do it with primitive rational curves. On the other hand in [10, 11, 12] this approach was successfully implemented to prove
that outside at most a finite number of connected components (precisely those not satisfying the necessary conditions given in [10]) of the moduli spaces of projective IHS manifolds of \(K3^{[n]}\) or generalized Kummer type, for all the corresponding points \((X,H)\) there exists a positive integer \(m\) such that the linear system \(|mH|\) contains a uniruled divisor covered by rational curves of primitive homology class Poincare-dual to that of \(|H|\). For a completely different proof (based on Gromov-Witten theory) of the existence of uniruled divisors covered by primitive rational curves on deformations of \(K3^{[n]}\) see [10, Theorem 0.1]. Due to the cases left out by [10, 10], respectively [11, 12], one could reasonably wonder whether uniruled divisors on such manifolds do always exist.
More recently Chen-Gounelas-Liedtke introduced in [13] a new viewpoint to prove existence results for rational curves on projective \(K3\) surfaces: regeneration, a process opposite to specialization. In this article we show that, for projective irreducible holomorphic symplectic manifolds, an analogous regeneration principle holds for uniruled divisors and provides a new and flexible tool to prove existence results. Combining this new viewpoint with results from [10, 12, 13] we are able to improve significantly the available results, in some cases passing from no known existence result at all to density of uniruled divisors in the classical topology.
To state our results we start with the following.
**Definition 1.1**.: Let \(\mathcal{X}\xrightarrow{}B\) be a family of IHS manifolds over a connected base. Let \(0\in B\) and let \(X_{0}\) be the corresponding fibre. Let \(D_{0}\subset X_{0}\) be an integral uniruled divisor. A _regeneration_\(\mathcal{D}\subset\mathcal{X}\) of \(D_{0}\) is a flat family of uniruled and generically integral divisors \(\mathcal{D}\xrightarrow{}B\) such that \(D_{0}\) is a component of the fiber \(\mathcal{D}_{0}\) of \(\mathcal{D}\) over \(0\).
A reducible divisor is called uniruled if all of its components are.
**Hypothesis 1.2**.: Let \(X\) be a projective IHS manifold. There exists a constant \(d_{0}\geq 0\) such that all primitive ample curve classes \([C]\in H_{2}(X,\mathbb{Z})\) satisfying \(q(C)>d_{0}\) have a representative \(R\in[C]\) such that \(R\) rules a prime divisor of class proportional to \([C]^{\vee}\).
Here, \([C]^{\vee}\) denotes the divisor \([D]\in\operatorname{NS}(X)\otimes\mathbb{Q}\) such that \(C\cdot E=q(D,E)\) for all divisors \(E\), where \(q\) is the Beauville-Bogomolov-Fujiki form on \(X\). A curve is said to be ample if its dual divisor is ample. Analogously, we define the curve dual to a divisor.
The above hypothesis, which may look slightly unnatural, is the higher-dimensional analogue of [13, Theorem A.1] and, as we will see below, can be shown to hold for IHS manifolds of \(K3^{[n]}\) and generalized Kummer type, thanks to previous work done in [10, 12, 13]. Our main novel contribution is the following result which, despite the simplicity of its proof, seems to provide the right viewpoint to tackle this kind of question.
**Regeneration principle 1.3**.: _Let \(\mathcal{X}\xrightarrow{}B\) be a family of projective IHS manifolds with a central fibre \(\mathcal{X}_{0}\) satisfying hypothesis 1.2. Let \(D_{0}\subset\mathcal{X}_{0}\) be an integral uniruled divisor on the central fibre. Then \(D_{0}\) admits a regeneration._
The regeneration principle works perfectly on IHS manifolds of \(K3^{[n]}\) or generalized Kummer type.
**Theorem 1.4**.: _Any integral uniruled divisor in a fiber of any family of projective IHS manifolds of \(K3^{[n]}\) or generalized Kummer type admits a regeneration._
Our first application is to show existence of ample uniruled divisors also for the connected components of the moduli spaces left out by [10, 11, 12].
**Theorem 1.5**.: _Let \((X,H)\) be a polarized IHS manifold of \(K3^{[n]}\) or generalized Kummer type, then there exists \(m\in\mathbb{N}\) and a uniruled divisor in \(|mH|\)._
In particular the applications to zero-cycles pointed out in [10, Theorems 1.7 and 1.8] now hold for all polarized IHS manifolds of \(K3^{[n]}\) or generalized Kummer type.
At the very general point in the \(K3^{[n]}\)-case we can drastically improve Theorem 1.5.
**Theorem 1.6**.: _Let \(\mathcal{M}\) be an irreducible component of the moduli space of polarized IHS manifolds of \(K3^{[n]}\)-type. Then any polarized IHS manifold \(X\) outside a possibly countable union of subvarieties of \(\mathcal{M}\) verifies the following: any pair of points \(x_{1},x_{2}\in X\) can be arbitrarily approximated by a chain of at most \(2n\) irreducible rational curves, each of which deforms in a family covering a divisor._
The above result can be seen as an effective non-hyperbolicity statement. The study of non-hyperbolicity of IHS manifolds dates back to Campana [1], with more recent important contributions by Verbitsky [14] and Kamenova-Lu-Verbitsky [15]. We refer the interested reader to [15] for a thorough discussion and a complete list of references.
We can also show the following less strong but more precise result, which was previously known only in dimension \(2\) by [1, Theorem 4.10].
**Theorem 1.7**.: _Let \(X\) be a projective IHS manifold of \(K3^{[n]}\) or Kummer type such that \(Bir(X)\) is infinite. Then \(X\) has infinitely many uniruled divisors._
We hope that this new viewpoint via regenerations could also lead to progress towards the existence of higher codimension algebraically coisotropic subvarieties.
**Acknowledgements.** We thank Claire Voisin for suggesting to apply the regeneration principle to non-hyperbolicity questions and G. Ancona, Ch. Lehn and K. O'Grady for useful comments on a preliminary version. G.M. was supported by PRIN2020 research grant "2020KKWT53", by PRIN2022 research grant "2022PEKYBJ" and is a member of the INDAM-GNSAGA. G.P. was supported by the CNRS International Emerging Actions (IEA) project "Birational and arithmetic aspects of orbifolds".
## 2. Regenerations
Proof of the Regeneration principle 1.3.: We can suppose that \(\mathcal{X}_{0}\) has Picard rank at least two and that \(D_{0}\) is not proportional to the polarization, otherwise by [10,
Corollary 3.5], we can deform a curve ruling \(D_{0}\) over all of \(B\), and obtain in this way a regeneration of \(D_{0}\).
Let \(C_{0}\) be the class of a minimal curve ruling \(D_{0}\). Let \(\mathcal{H}\in\operatorname{Pic}(\mathcal{X})\) be a relative polarization and \(H_{0}\) its restriction to the central fibre \(\mathcal{X}_{0}\). Let \(H_{0}^{\vee}\) be the (ample) class of a curve dual to \(H_{0}\). We can choose \(m\in\mathbb{N}\) big enough so that \(mH_{0}^{\vee}-C_{0}\) is ample, primitive and of square bigger than \(d_{0}\). Therefore, by Hypothesis 1.2, we have a rational curve \(R_{0}\in[mH_{0}^{\vee}-C_{0}]\) which rules an ample divisor \(F_{0}\) inside \(\mathcal{X}_{0}\).
As the divisor \(F_{0}\) is ample we have \(C_{0}\cdot F_{0}>0\). Hence, we can fix a point in \(C_{0}\cap F_{0}\) and pick a curve \(R_{0}\) in the ruling of \(F_{0}\) passing through this point. Notice that \(C_{0}\) cannot coincide with the ruling of \(F_{0}\), as \(C_{0}\) and \(R_{0}\) are not proportional (because the divisors they rule are not). In this way we obtain a connected rational curve of class \([C_{0}+R_{0}]\). By abuse of notation, we denote this curve by \(C_{0}+R_{0}\). By [1, Corollary 6.3], which generalizes [1, Corollary 3.5] to the reducible case, the curve \(C_{0}+R_{0}\) deforms in its Hodge locus \(\operatorname{Hdg}_{[\operatorname{C_{0}+R_{0}}]}\) of the class \([C_{0}+R_{0}]=[mH_{0}^{\vee}]\) and keeps ruling a divisor on each point of \(\operatorname{Hdg}_{[\operatorname{C_{0}+R_{0}}]}\). By construction, this Hodge locus coincides with \(B\), as \(C_{0}+R_{0}\) is a multiple of \(H_{0}^{\vee}\), and the result follows.
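One way to make the choice of \(m\) at the beginning of the proof explicit (a sketch of the standard argument, not spelled out above): since \(q\) extends to a \(\mathbb{Q}\)-valued quadratic form on curve classes, we have
\[q(mH_{0}^{\vee}-C_{0})=m^{2}q(H_{0}^{\vee})-2m\,q(H_{0}^{\vee},C_{0})+q(C_{0}),\]
and \(q(H_{0}^{\vee})>0\) because \(H_{0}\) is ample, so the square grows quadratically in \(m\) and exceeds \(d_{0}\) for all \(m\gg 0\); moreover the divisor dual to \(mH_{0}^{\vee}-C_{0}\) is \(mH_{0}-C_{0}^{\vee}\), which is ample for \(m\gg 0\) since \(H_{0}\) is ample.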
The following can be seen as a concentration of some of the main contributions of [1, 2, 2], namely the study of the monodromy orbits, constructions of examples and deformation theory.
**Proposition 2.1**.: _Hypothesis 1.2 holds for any family of manifolds of \(K3^{[n]}\) and Kummer type, and the constant \(d_{0}\) is \((2n-2)^{2}(n-1)\) and \((2n+2)^{2}(n+1)\) respectively._
Proof.: Let \((S,h_{S})\) be a polarized K3 of genus \(p\) and \((A,h_{A})\) a polarized abelian surface of type \((1,p-1)\). We denote by \(r_{n}\) the class of an exceptional rational curve which is the general fiber of the Hilbert-Chow morphism \(S^{[n]}\to S^{(n)}\) (resp. \(K_{n}(A)\subset A^{[n+1]}\to A^{(n+1)}\)) and by \(h_{S}\in H_{2}(S^{[n]},\mathbb{Z})\) (resp. \(h_{A}\in H_{2}(K_{n}(A),\mathbb{Z})\) ) the image of the class \(h_{S}\in H_{2}(S,\mathbb{Z})\) (resp. \(h_{A}\in H_{2}(A,\mathbb{Z})\)) under the inclusion \(H_{2}(S,\mathbb{Z})\hookrightarrow H_{2}(S^{[n]},\mathbb{Z})\) (resp. \(H_{2}(A,\mathbb{Z})\hookrightarrow H_{2}(K_{n}(A),\mathbb{Z})\)). Recall that \(q(h_{S})=2p-2=q(h_{A})\) and \(q(r_{n})\) equals \(1/(2n-2)\) in the \(K3^{[n]}\) case and \(1/(2n+2)\) in the Kummer case.
We take a primitive ample curve class \(C\in H_{2}(X,\mathbb{Z})\) such that \(q(C)>n-1\) (resp. \(n+1\) for Kummer type). By [1, Corollary 2.8] and [2, Theorem 4.2], the pair \((X,C)\) is deformation equivalent to the pair \((S^{[n]},h_{S}-2gr_{n})\) with \(2g\leq n-1\) or \((S^{[n]},h_{S}-(2g-1)r_{n})\) with \(2g\leq n\) (resp. \((K_{n}(A),h_{A}-2gr_{n})\) or \((K_{n}(A),h_{a}-(2g-1)r_{n})\) with \(2g\leq n-1\) ).
If \(p\leq g\), we would get a contradiction since
\[n-1\leq q(C)=q(h_{S})-4g^{2}\frac{1}{2(n-1)}=2(p-1)-4g^{2}\frac{1}{2(n-1)}\leq 2 (g-1)-4g^{2}\frac{1}{2(n-1)}\leq n-2\]
Therefore, \(p\geq g\) and by [1, Section 4.1] and [2, Proof of Proposition 2.1], the curves we obtain in \(S^{[n]}\) (resp. in \(K_{n}(A)\)) have a rational representative which covers a divisor by [1, Proposition 4.1] and [2, Proposition 1.1]. Such a divisor
then deforms in its Hodge locus by [13, Proposition 3.1], and the proposition follows.
Proof of Theorem 1.4.: The result follows immediately from the combination of Proposition 2.1 and the Regeneration principle 1.3.
## 3. Applications
In this section we provide the proofs of the applications of the Regeneration principle to IHS manifolds of \(K3^{[n]}\)-type or generalized Kummer type.
Proof of Theorem 1.5.: Again the result follows from the combination of Proposition 2.1 and the Regeneration principle 1.3. Indeed, suppose that \((X,H)\) is a polarized IHS manifold of \(K3^{[n]}\)-type and let us consider a connected component \(\mathcal{M}\) of the moduli space of polarized IHS manifolds containing \((X,H)\). By [13, Theorem 2.5], there exists a point in \(\mathcal{M}\) which parametrizes the Hilbert scheme over a very general projective \(K3\)\((S,H_{S})\). Let us choose any rational curve \(C\) in \(S\), whose existence is guaranteed by Bogomolov-Mumford [12], see also [1, Section VIII.23], and let us consider the uniruled divisor \(D_{C}=\{Z\in S^{[n]}\text{ such that }supp(Z)\cap C\neq\emptyset\}\). We then apply the Regeneration principle 1.3 to \(D_{C}\), and obtain a regeneration of it on all IHS manifolds corresponding to points of \(\mathcal{M}\). As the very general element of \(\mathcal{M}\) has Picard rank one, the class of this regeneration is proportional to this unique class, hence our regeneration has class \(mH\) on \(X\), for some \(m\). For the generalized Kummer type we proceed the same way, by using [14, Theorem 4.2] and [15, Theorem 1.1] instead of the analogous results in the \(K3^{[n]}\)-type case.
More generally, we have the following result.
**Proposition 3.1**.: _Let \((X,H)\) be a projective IHS manifold of \(K3^{[n]}\) or Kummer type, and let \(D\in\operatorname{Pic}(X)\) be a divisor with \(q(D)\geq 0\) and \((D,H)>0\). Then there exists a uniruled divisor in \(|mD|\) for some \(m\in\mathbb{N}\)._
Proof.: The proof is analogous to Theorem 1.5, with an extension to the case of square zero classes. If \(D\) has positive square, instead of the moduli space of polarized IHS manifolds we consider the moduli space \(\mathcal{M}\) of lattice polarized IHS manifolds such that \(\operatorname{Pic}(X)\) contains a divisor of square \(q(D)\), and pick the connected component containing \((X,D)\). Let us choose a parallel transport operator \(\gamma\) on \(\mathcal{M}\) such that \(\gamma(X)\) has Picard rank \(1\). Therefore, \(\gamma(D)\) is ample on \(\gamma(X)\). By Theorem 1.5, a multiple of \(\gamma(D)\) is uniruled by a rational curve \(\gamma(C)\), which has class proportional to \(\gamma(D)^{\vee}\). Therefore by [13, Proposition 3.1], \(\gamma(C)\) deforms in its Hodge locus, which by construction contains \((X,D)\) and we obtain a rational curve \(C\) covering a multiple of \(D\). If \(q(D)=0\), we can suppose that \(D\) is nef by [16, Proposition 5.6], otherwise we follow the same reasoning as above to reduce to the nef case. As \(X\) is projective, we have an ample divisor \(H\in\operatorname{Pic}(X)\). Let \(L\) be the saturated lattice generated by \(D\) and \(H\), and let us consider the component \(\mathcal{M}\) of the moduli space of \(L\) lattice
polarized IHS manifolds containing \((X,L)\). Inside of \(\mathcal{M}\), by [13, Theorem 3.13] we can pick a point \(\gamma(X)\) such that \(\gamma(D)\) stays nef and there exists a prime exceptional divisor \(E\) on \(\gamma(X)\) such that \(q(\gamma(D),E)>0\)1. Let \(R\) be a curve ruling \(E\). As \(\gamma(D)\) is nef, there exists an \(m\in\mathbb{N}\) such that \(m\gamma(D)^{\vee}-R\) is an ample curve. Therefore, by Proposition 2.1, we produce a rational curve \(C\) of class \(m\gamma(D)^{\vee}-R\) which rules an ample divisor, and attach to it a rational tail \(R\), so that the connected curve \(C+R\) of class \(m\gamma(D)^{\vee}\) rules a divisor and deforms in its Hodge locus by [1, Corollary 6.3]. By construction, this Hodge locus contains \((X,D^{\vee})\), and the result follows.
Footnote 1: By the above cited theorem, the locus where a given extra class is algebraic is dense in \(\mathcal{M}\), and the locus where this class \(E\) has a fixed intersection with \(\gamma(D)\) is a proper Zariski closed subset of \(\mathcal{M}\), therefore the locus where the intersection is positive is non-empty.
To prove Theorem 1.6, we will use the following result of Chen and Lewis on \(K3\) surfaces. Let \(\mathcal{F}_{g}\) be the moduli space of polarized genus \(g\) K3 surfaces, and let \(\mathcal{S}_{g}\) be the universal surface over \(\mathcal{F}_{g}\). Let \(\mathcal{C}_{g,n}\) be the scheme of relative dimension one whose fibre over a point \((S,L)\in\mathcal{F}_{g}\) consists of all irreducible rational curves contained in \(|nL|\). Recall the following result.
**Theorem 3.2** (Theorem 1.1, [13]).: _The set \(\cup_{n\in\mathbb{N}}\mathcal{C}_{g,n}\) is dense in the strong topology inside \(\mathcal{S}_{g}\), for all \(g\geq 2\)._
From this one easily obtains the following.
**Corollary 3.3**.: _Let \(S\) be a general projective \(K3\) surface. Then any pair of points on \(S^{[n]}\) can be arbitrarily approximated by a chain of at most \(2n\) rational curves, each of which deforms in a family covering a divisor._
Proof.: Without loss of generality, we can suppose that the two points \(\xi_{i},\,i\in\{1,2\}\) correspond to reduced subschemes, and that \(\operatorname{supp}(\xi_{1})\cap\operatorname{supp}(\xi_{2})=\emptyset\) otherwise we can take arbitrarily close approximations by reduced subschemes with such property. Therefore we write
\[\xi_{i}=p_{1}^{i}+\ldots+p_{n}^{i},\]
with \(p_{1}^{i},\ldots,p_{n}^{i}\) distinct points on \(S\) for \(i=1,2\). By Theorem 3.2, we have two ample irreducible curves \(R_{1}^{1},R_{1}^{2}\) arbitrarily near \(p_{1}^{1}\) and \(p_{1}^{2}\) respectively. As these curves are ample, the rational curve \(R_{1}=R_{1}^{1}\cup R_{1}^{2}\) is connected. Let us consider the rational curve \(R_{1}+p_{2}^{1}+\ldots+p_{n}^{1}\) inside \(S^{[n]}\): this can be used to approximate the subschemes \(p_{1}^{1}+p_{2}^{1}+\ldots+p_{n}^{1}\) and \(p_{1}^{2}+p_{2}^{1}+\ldots+p_{n}^{1}\). Iterating the argument, one obtains a rational curve (union of two irreducible ample curves) \(R_{j}\) for all \(j\in\{1,\ldots,n\}\) which approximates the two points \(p_{j}^{1}\) and \(p_{j}^{2}\). Considering the curve \(p_{1}^{2}+\cdots+p_{j-1}^{2}+R_{j}+p_{j+1}^{1}+\cdots+p_{n}^{1}\) one can approximate the points \(p_{1}^{2}+\cdots+p_{j-1}^{2}+p_{j}^{1}+p_{j+1}^{1}+\cdots+p_{n}^{1}\) and \(p_{1}^{2}+\cdots+p_{j-1}^{2}+p_{j}^{2}+p_{j+1}^{1}+\cdots+p_{n}^{1}\). Therefore, by taking the union of these curves we obtain a chain of \(2n\) rational irreducible curves which approximate the two points \(\xi_{1}\) and \(\xi_{2}\). By construction, each of these rational curves \(C\) deforms in a family which covers the divisor \(\{Z\in S^{[n]},\text{ such that }\operatorname{supp}(Z)\cap C\neq\emptyset\}\) and the corollary follows.
Proof of Theorem 1.6.: Let \(X\) be a very general IHS manifold in \(\mathcal{M}\). Let \(x_{1},x_{2}\in X\) be two points on it. Thanks to [10, Corollary 1.2] we can pick a point in \(\mathcal{M}\) which parametrizes the punctual Hilbert scheme of a very general projective \(K3\)\((S,H)\) arbitrarily close to \(X\) and two points \(\xi_{1},\xi_{2}\in S^{[n]}\) approximating \(x_{1}\) and \(x_{2}\) respectively. We take the chain \(R\) of \(2n\) rational curves approximating \(\xi_{1}\) and \(\xi_{2}\) given by Corollary 3.3. We can now apply the Regeneration principle 1.3 to regenerate the union of the divisors ruled by the deformations of the irreducible components of \(R\) to obtain a chain of rational curves on \(X\) satisfying the statement.
_Remark 3.4_.: Actually, using [1, Theorem 5.5] and the Regeneration principle, a simpler version of the proof above yields the existence of infinitely many uniruled divisors for the very general point of _any_ family \(\mathcal{X}\to B\) of projective IHS manifolds such that one of the fibres is the Hilbert scheme over a \(K3\) of odd Picard rank.
Proof of Theorem 1.7.: To prove the theorem we will show the existence of an ample uniruled divisor with infinite \(\operatorname{Bir}(X)\)-orbit. By [1, Theorem 1.1], as \(\operatorname{Bir}(X)\) is infinite, there exists an element \(g\in\operatorname{Bir}(X)\) of infinite order. Let \(D\) be an ample uniruled divisor, whose existence is granted by Theorem 1.5. We claim that the orbit of \(D\) via \(g\) is infinite, as otherwise a multiple of \(g\) would give an isometry of the lattice \(D^{\perp}\subset\operatorname{NS}(X)\). The latter is negative definite as \(D\) is ample and has therefore finite isometry group. Hence \(g\) would act with finite order on both \(D\) and \(D^{\perp}\), which is absurd and the claim follows.
We recall now the following well-known result for the reader's convenience. This tells us that Theorem 1.7 yields its conclusion only for a codimension at least one locus in the moduli space of projective IHS manifolds.
**Lemma 3.5**.: _Let \(X\) be a projective IHS manifold with \(\rho(X)=1\). Then \(\operatorname{Aut}(X)=\operatorname{Bir}(X)\) and it is a finite group._
Proof.: First of all recall that a birational map between two IHS manifolds sending an ample class into an ample class can be extended to an isomorphism. As such, when \(\rho(X)=1\), we have \(\operatorname{Aut}(X)=\operatorname{Bir}(X)\). By [10, Theorem 4.8] the group of automorphisms of a compact Kahler manifold that fix a Kahler class has only finitely many connected components. On the other hand the group of automorphisms of an IHS manifold \(X\) is discrete, since \(h^{0}(X,T_{X})=h^{0}(X,\Omega^{1}_{X})=0\). Hence \(\operatorname{Aut}(X)\) must be finite.
|
2310.07368
|
Probing the horizon of black holes with gravitational waves
|
Gravitational waves open the possibility to investigate the nature of compact
objects and probe the horizons of black holes. Some models of modified gravity
predict the presence of horizonless and singularity-free compact objects. Such
dark compact objects would emit a gravitational-wave signal which differs from
the standard black hole scenario. In this chapter, we overview the
phenomenology of dark compact objects by analysing their characteristic
frequencies in the ringdown and the emission of gravitational-wave echoes in
the postmerger signal. We show that future gravitational-wave detectors will
allow us to perform model-independent tests of the black hole paradigm.
|
Elisa Maggio
|
2023-10-11T10:36:38Z
|
http://arxiv.org/abs/2310.07368v1
|
# Probing the horizon of black holes with gravitational waves
###### Abstract
Gravitational waves open the possibility to investigate the nature of compact objects and probe the horizons of black holes. Some models of modified gravity predict the presence of horizonless and singularity-free compact objects. Such dark compact objects would emit a gravitational-wave signal which differs from the standard black hole scenario. In this chapter, we overview the phenomenology of dark compact objects by analysing their characteristic frequencies in the ringdown and the emission of gravitational-wave echoes in the postmerger signal. We show that future gravitational-wave detectors will allow us to perform model-independent tests of the black hole paradigm.
## 1 Tests of the black hole paradigm
Black holes (BHs) are the end result of the gravitational collapse and the most compact objects in the Universe. According to the no-hair theorems of general relativity (GR), any compact object heavier than a few solar masses is well described by the Kerr geometry [1; 2]. Kerr BHs are determined uniquely by two parameters, i.e., their mass \(M\) and angular momentum \(J\) defined through the dimensionless spin parameter \(\chi\equiv J/M^{2}\)[3]. Therefore, any observation of deviation from the properties of Kerr BHs would be an indication of departure from GR.
Gravitational waves (GWs) provide a unique channel for probing the nature of astrophysical sources. The GW signal emitted by the coalescence of compact binaries is characterized by three main stages: the _inspiral_, when the two bodies spiral in towards each other as they lose energy into gravitational radiation; the _merger_, when the two bodies coalesce; and the _ringdown_, when the final remnant relaxes to
|
2301.09404
|
$\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-Additive Hadamard Codes
|
The $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-additive codes are subgroups of
$\mathbb{Z}_2^{\alpha_1} \times \mathbb{Z}_4^{\alpha_2} \times
\mathbb{Z}_8^{\alpha_3}$, and can be seen as linear codes over $\mathbb{Z}_2$
when $\alpha_2=\alpha_3=0$, $\mathbb{Z}_4$-additive or $\mathbb{Z}_8$-additive
codes when $\alpha_1=\alpha_3=0$ or $\alpha_1=\alpha_2=0$, respectively, or
$\mathbb{Z}_2\mathbb{Z}_4$-additive codes when $\alpha_3=0$. A
$\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard code is a Hadamard code
which is the Gray map image of a
$\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-additive code. In this paper, we
generalize some known results for $\mathbb{Z}_2\mathbb{Z}_4$-linear Hadamard
codes to $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard codes with
$\alpha_1 \neq 0$, $\alpha_2 \neq 0$, and $\alpha_3 \neq 0$. First, we give a
recursive construction of $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-additive
Hadamard codes of type $(\alpha_1,\alpha_2, \alpha_3;t_1,t_2, t_3)$ with
$t_1\geq 1$, $t_2 \geq 0$, and $t_3\geq 1$. Then, we show that in general the
$\mathbb{Z}_4$-linear, $\mathbb{Z}_8$-linear and
$\mathbb{Z}_2\mathbb{Z}_4$-linear Hadamard codes are not included in the family
of $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard codes with $\alpha_1
\neq 0$, $\alpha_2 \neq 0$, and $\alpha_3 \neq 0$. Actually, we point out that
none of these nonlinear $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard
codes of length $2^{11}$ is equivalent to a
$\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard code of any other type,
a $\mathbb{Z}_2\mathbb{Z}_4$-linear Hadamard code, or a
$\mathbb{Z}_{2^s}$-linear Hadamard code, with $s\geq 2$, of the same length
$2^{11}$.
|
Dipak K. Bhunia, Cristina Fernández-Córdoba, Mercè Villanueva
|
2023-01-23T12:56:26Z
|
http://arxiv.org/abs/2301.09404v1
|
# \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-Additive Hadamard Codes
###### Abstract
The \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive codes are subgroups of \(\mathbb{Z}_{2}^{\alpha_{1}}\times\mathbb{Z}_{4}^{\alpha_{2}}\times\mathbb{Z}_{8 }^{\alpha_{3}}\), and can be seen as linear codes over \(\mathbb{Z}_{2}\) when \(\alpha_{2}=\alpha_{3}=0\), \(\mathbb{Z}_{4}\)-additive or \(\mathbb{Z}_{8}\)-additive codes when \(\alpha_{1}=\alpha_{3}=0\) or \(\alpha_{1}=\alpha_{2}=0\), respectively, or \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-additive codes when \(\alpha_{3}=0\). A \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard code is a Hadamard code which is the Gray map image of a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code. In this paper, we generalize some known results for \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard codes to \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes with \(\alpha_{1}\neq 0\), \(\alpha_{2}\neq 0\), and \(\alpha_{3}\neq 0\). First, we give a recursive construction of \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive Hadamard codes of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},t_{3})\) with \(t_{1}\geq 1\), \(t_{2}\geq 0\), and \(t_{3}\geq 1\). Then, we show that in general the \(\mathbb{Z}_{4}\)-linear, \(\mathbb{Z}_{8}\)-linear and \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard codes are not included in the family of \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes with \(\alpha_{1}\neq 0\), \(\alpha_{2}\neq 0\), and \(\alpha_{3}\neq 0\). Actually, we point out that none of these nonlinear \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes of length \(2^{11}\) is equivalent to a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard code of any other type, a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard code, or a \(\mathbb{Z}_{2^{s}}\)-linear Hadamard code, with \(s\geq 2\), of the same length \(2^{11}\).
## 1 Introduction
Let \(\mathbb{Z}_{2^{s}}\) be the ring of integers modulo \(2^{s}\) with \(s\geq 1\). The set of \(n\)-tuples over \(\mathbb{Z}_{2^{s}}\) is denoted by \(\mathbb{Z}_{2^{s}}^{n}\). In this paper, the elements of \(\mathbb{Z}_{2^{s}}^{n}\) will also be called vectors. A code over \(\mathbb{Z}_{2}\) of length \(n\) is a nonempty subset of \(\mathbb{Z}_{2}^{n}\), and
it is linear if it is a subspace of \(\mathbb{Z}_{2}^{n}\). Similarly, a nonempty subset of \(\mathbb{Z}_{2^{s}}^{n}\) is a \(\mathbb{Z}_{2^{s}}\)-additive code if it is a subgroup of \(\mathbb{Z}_{2^{s}}^{n}\). A \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code is a subgroup of \(\mathbb{Z}_{2}^{\alpha_{1}}\times\mathbb{Z}_{4}^{\alpha_{2}}\times\mathbb{Z}_{8}^{\alpha_{3}}\). Note that a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code is a linear code over \(\mathbb{Z}_{2}\) when \(\alpha_{2}=\alpha_{3}=0\), a \(\mathbb{Z}_{4}\)-additive or \(\mathbb{Z}_{8}\)-additive code when \(\alpha_{1}=\alpha_{3}=0\) or \(\alpha_{1}=\alpha_{2}=0\), respectively, and a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-additive code when \(\alpha_{3}=0\). The order of a vector \(u\in\mathbb{Z}_{2^{s}}^{n}\), denoted by \(o(u)\), is the smallest positive integer \(m\) such that \(mu=(0,\ldots,0)\). Also, the order of a vector \(\mathbf{u}\in\mathbb{Z}_{2}^{\alpha_{1}}\times\mathbb{Z}_{4}^{\alpha_{2}}\times\mathbb{Z}_{8}^{\alpha_{3}}\), denoted by \(o(\mathbf{u})\), is the smallest positive integer \(m\) such that \(m\mathbf{u}=(0,\ldots,0\mid 0,\ldots,0\mid 0,\ldots,0)\).
The Hamming weight of a vector \(u\in\mathbb{Z}_{2}^{n}\), denoted by \(\mathrm{wt}_{H}(u)\), is the number of nonzero coordinates of \(u\). The Hamming distance of two vectors \(u,v\in\mathbb{Z}_{2}^{n}\), denoted by \(d_{H}(u,v)\), is the number of coordinates in which they differ. Note that \(d_{H}(u,v)=\mathrm{wt}_{H}(u-v)\). The minimum distance of a code \(C\) over \(\mathbb{Z}_{2}\) is \(d(C)=\min\{d_{H}(u,v):u,v\in C,u\neq v\}\).
In [1], a Gray map from \(\mathbb{Z}_{4}\) to \(\mathbb{Z}_{2}^{2}\) is defined as \(\phi(0)=(0,0)\), \(\phi(1)=(0,1)\), \(\phi(2)=(1,1)\) and \(\phi(3)=(1,0)\). There exist different generalizations of this Gray map, which go from \(\mathbb{Z}_{2^{s}}\) to \(\mathbb{Z}_{2}^{2^{s-1}}\)[2, 3, 4, 5, 6]. The one given in [5] can be defined in terms of the elements of a Hadamard code [6], and Carlet's Gray map [2] is a particular case of the one given in [6] satisfying \(\sum\lambda_{i}\phi(2^{i})=\phi(\sum\lambda_{i}2^{i})\)[7]. In this paper, we focus on Carlet's Gray map [2], from \(\mathbb{Z}_{2^{s}}\) to \(\mathbb{Z}_{2}^{2^{s-1}}\), which is also a particular case of the one given in [8]. Specifically,
\[\phi_{s}(u)=(u_{s-1},u_{s-1},\ldots,u_{s-1})+(u_{0},\ldots,u_{s-2})Y_{s-1}, \tag{1}\]
where \(u\in\mathbb{Z}_{2^{s}}\); \([u_{0},u_{1},\ldots,u_{s-1}]_{2}\) is the binary expansion of \(u\), that is, \(u=\sum_{i=0}^{s-1}u_{i}2^{i}\) with \(u_{i}\in\{0,1\}\); and \(Y_{s-1}\) is the matrix of size \((s-1)\times 2^{s-1}\) whose columns are all the vectors in \(\mathbb{Z}_{2}^{s-1}\). Without loss of generality, we assume that the columns of \(Y_{s-1}\) are ordered in ascending order, by considering the elements of \(\mathbb{Z}_{2}^{s-1}\) as the binary expansions of the elements of \(\mathbb{Z}_{2^{s-1}}\). Note that \(\phi_{1}\) is the identity map, and \((u_{s-1},\ldots,u_{s-1})\) and \((u_{0},\ldots,u_{s-2})Y_{s-1}\) are binary vectors of length \(2^{s-1}\), and that the rows of \(Y_{s-1}\) form a basis of a first order Reed-Muller code after adding the all-one row. We define \(\Phi_{s}:\mathbb{Z}_{2^{s}}^{n}\rightarrow\mathbb{Z}_{2}^{n2^{s-1}}\) as the component-wise extended map of \(\phi_{s}\). We can also define a Gray map \(\Phi\) from \(\mathbb{Z}_{2}^{\alpha_{1}}\times\mathbb{Z}_{4}^{\alpha_{2}}\times\mathbb{Z}_{8}^{\alpha_{3}}\) to \(\mathbb{Z}_{2}^{n}\), where \(n=\alpha_{1}+2\alpha_{2}+4\alpha_{3}\), as follows:
\[\Phi(u_{1}\mid u_{2}\mid u_{3})=(u_{1},\Phi_{2}(u_{2}),\Phi_{3}(u_{3})),\]
for any \(u_{i}\in\mathbb{Z}_{2^{i}}^{\alpha_{i}}\), where \(1\leq i\leq 3\).
Let \(\mathcal{C}\subseteq\mathbb{Z}_{2^{s}}^{n}\) be a \(\mathbb{Z}_{2^{s}}\)-additive code of length \(n\). We say that its Gray map image \(C=\Phi_{s}(\mathcal{C})\) is a \(\mathbb{Z}_{2^{s}}\)-linear code of length \(n2^{s-1}\). Since \(\mathcal{C}\) is a subgroup of \(\mathbb{Z}_{2^{s}}^{n}\), it is isomorphic to an abelian structure \(\mathbb{Z}_{2^{s}}^{t_{1}}\times\mathbb{Z}_{2^{s-1}}^{t_{2}}\times\cdots\times\mathbb{Z}_{2}^{t_{s}}\), and we say that \(\mathcal{C}\), or equivalently \(C=\Phi_{s}(\mathcal{C})\), is of type \((n;t_{1},\ldots,t_{s})\). Note that \(|\mathcal{C}|=2^{st_{1}}2^{(s-1)t_{2}}\cdots 2^{t_{s}}\). Similarly, if \(\mathcal{C}\subseteq\mathbb{Z}_{2}^{\alpha_{1}}\times\mathbb{Z}_{4}^{\alpha_{2}}\times\mathbb{Z}_{8}^{\alpha_{3}}\) is a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code, we say that its Gray map image \(C=\Phi(\mathcal{C})\) is a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear code of length \(\alpha_{1}+2\alpha_{2}+4\alpha_{3}\). Since \(\mathcal{C}\) can be seen as a subgroup of \(\mathbb{Z}_{8}^{\alpha_{1}+\alpha_{2}+\alpha_{3}}\), it is isomorphic to an abelian structure \(\mathbb{Z}_{8}^{t_{1}}\times\mathbb{Z}_{4}^{t_{2}}\times\mathbb{Z}_{2}^{t_{3}}\), and we say that \(\mathcal{C}\), or equivalently \(C=\Phi(\mathcal{C})\), is of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},t_{3})\). Note that \(|\mathcal{C}|=8^{t_{1}}4^{t_{2}}2^{t_{3}}\). Unlike linear codes over finite fields, linear codes over rings do not have a basis, but there exists a generator matrix for these codes having a minimum number of rows, that is, \(t_{1}+\cdots+t_{s}\) rows. If \(\alpha_{1}=\alpha_{3}=0\) (respectively, \(\alpha_{1}=\alpha_{2}=0\)), then they coincide with \(\mathbb{Z}_{4}\)-additive codes (respectively, \(\mathbb{Z}_{8}\)-additive codes). If \(\alpha_{3}=0\), then they are also known as \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-additive codes, and their Gray map images as \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear codes. In the last case, we also say that the code, or equivalently the Gray map image of the code, is of type \((\alpha_{1},\alpha_{2};t_{1},t_{2})\). Note that there are no \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes with only \(\alpha_{1}=0\) or with only \(\alpha_{2}=0\) [6, 8].
Two structural properties of codes over \(\mathbb{Z}_{2}\) are the rank and dimension of the kernel. The rank of a code \(C\) over \(\mathbb{Z}_{2}\) is simply the dimension of the linear span, \(\langle C\rangle\), of \(C\). The kernel of a code \(C\) over \(\mathbb{Z}_{2}\) is defined as \(\mathrm{K}(C)=\{\mathbf{x}\in\mathbb{Z}_{2}^{n}:\mathbf{x}+C=C\}\)[9, 10]. If the all-zero vector belongs to \(C\), then \(\mathrm{K}(C)\) is a linear subcode of \(C\). Note also that if \(C\) is linear, then \(K(C)=C=\langle C\rangle\). We denote the rank of \(C\) as \(\mathrm{rank}(C)\) and the dimension of the kernel as \(\mathrm{ker}(C)\). These parameters can be used to distinguish between nonequivalent codes, since equivalent ones have the same rank and dimension of the kernel.
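As a concrete illustration of these two invariants, the following naive sketch (ours, not from the paper, and suitable only for small codes) computes \(\mathrm{rank}(C)\) and \(\mathrm{ker}(C)\) for a binary code given explicitly as a list of 0/1 tuples:

```python
# A naive illustration (ours, not from the paper) of the two invariants for a small
# binary code C given as a list of 0/1 tuples.
def gf2_rank(vectors):
    """Dimension of the GF(2) span, via Gaussian elimination on bitmasks."""
    pivots = {}                                   # leading-bit position -> reduced vector
    for v in vectors:
        x = int("".join(map(str, v)), 2) if v else 0
        while x:
            lead = x.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = x
                break
            x ^= pivots[lead]
    return len(pivots)

def kernel_dimension(code):
    """dim K(C) for a binary code C containing the all-zero word."""
    C = set(map(tuple, code))
    K = [v for v in C
         if {tuple((a + b) % 2 for a, b in zip(v, w)) for w in C} == C]
    return gf2_rank(K)                            # K(C) is linear, so its rank is its dimension

# Example: the linear code {000, 011, 101, 110} has rank 2 and kernel dimension 2.
C = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
assert gf2_rank(C) == 2 and kernel_dimension(C) == 2
```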
A binary code of length \(n\), \(2n\) codewords and minimum distance \(n/2\) is called a Hadamard code. Hadamard codes can be constructed from Hadamard matrices [11, 12]. Note that linear Hadamard codes are in fact first order Reed-Muller codes, or equivalently, the dual of extended Hamming codes [12, Ch.13 SS3]. The \(\mathbb{Z}_{2^{s}}\)-additive codes such that after the Gray map \(\Phi_{s}\) give Hadamard codes are called \(\mathbb{Z}_{2^{s}}\)-additive Hadamard codes and the corresponding images are called \(\mathbb{Z}_{2^{s}}\)-linear Hadamard codes. Similarly, the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive codes such that after the Gray map \(\Phi\) give Hadamard codes are called \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive Hadamard codes and the corresponding images are called \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes.
It is known that \(\mathbb{Z}_{4}\)-linear Hadamard codes (that is, \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard
code with \(\alpha_{1}=0\)) and \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard codes with \(\alpha_{1}\neq 0\) can be classified by using either the rank or the dimension of the kernel [13, 14]. Moreover, in [15], it is shown that each \(\mathbb{Z}_{4}\)-linear Hadamard code is equivalent to a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard code with \(\alpha_{1}\neq 0\). Later, in [7, 16, 17, 18], an iterative construction for \(\mathbb{Z}_{p^{s}}\)-linear Hadamard codes is described, the linearity is established, and a partial classification by using the dimension of the kernel is obtained, giving the exact amount of nonequivalent such codes for some parameters. In [19], a complete classification of \(\mathbb{Z}_{8}\)-linear Hadamard codes by using the rank and dimension of the kernel is provided, giving the exact amount of nonequivalent such codes. For any \(t\geq 2\), the full classification of \(\mathbb{Z}_{p}\mathbb{Z}_{p^{2}}\)-linear Hadamard codes of length \(p^{t}\), with \(\alpha_{1}\neq 0\), \(\alpha_{2}\neq 0\), and \(p\geq 3\) prime, is given in [20, 21, 22], by using just the dimension of the kernel.
This paper is focused on \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes with \(\alpha_{1}\neq 0\), \(\alpha_{2}\neq 0\), and \(\alpha_{3}\neq 0\), generalizing some results given for \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard codes with \(\alpha_{1}\neq 0\) in [14, 23] related to the construction, linearity, kernel and classification of such codes. These codes are also compared with the \(\mathbb{Z}_{4}\)-linear, \(\mathbb{Z}_{8}\)-linear, and in general \(\mathbb{Z}_{2^{s}}\)-linear Hadamard codes with \(s\geq 2\). This paper is organized as follows. In Section 2, we recall the definition of the Gray map considered in this paper and some of its properties. In Section 3, we describe the construction of \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},t_{3})\) with \(\alpha_{1}\neq 0\), \(\alpha_{2}\neq 0\), and \(\alpha_{3}\neq 0\). We see that they are included neither in the family of \(\mathbb{Z}_{4}\)-linear Hadamard codes, nor in the family of \(\mathbb{Z}_{8}\)-linear Hadamard codes, nor in the family of \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard codes with \(\alpha_{1}\neq 0\). Indeed, we see that none of the nonlinear \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes of length \(2^{t}\) with \(\alpha_{1}\neq 0\), \(\alpha_{2}\neq 0\), and \(\alpha_{3}\neq 0\) is equivalent to a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard code of any other type, a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard code, or a \(\mathbb{Z}_{2^{s}}\)-linear Hadamard code, with \(s\geq 2\), of the same length \(2^{t}\).
## 2 Preliminary results on the Gray map
In this section, we focus on the generalized Gray maps considered in this paper for elements of \(\mathbb{Z}_{4}\) and \(\mathbb{Z}_{8}\), and in general of \(\mathbb{Z}_{2^{s}}\), \(s\geq 2\). We include some of their properties used in the paper.
We consider Carlet's Gray map from \(\mathbb{Z}_{2^{s}}\) to \(\mathbb{Z}_{2}^{2^{s-1}}\)[2] given in (1). For \(s=2\) and \(s=3\), the Gray maps \(\phi_{2}\) and \(\phi_{3}\) considered in the paper for the
elements of \(\mathbb{Z}_{4}\) and \(\mathbb{Z}_{8}\), respectively, are the following:
\[\begin{array}{llll}\phi_{2}:\ \mathbb{Z}_{4}\longrightarrow\mathbb{Z}_{2}^{2}&&\phi_{3}:\ \mathbb{Z}_{8}\longrightarrow\mathbb{Z}_{2}^{4}&\\ 0\mapsto(0,0)&&0\mapsto(0,0,0,0)&4\mapsto(1,1,1,1)\\ 1\mapsto(0,1)&&1\mapsto(0,1,0,1)&5\mapsto(1,0,1,0)\\ 2\mapsto(1,1)&&2\mapsto(0,0,1,1)&6\mapsto(1,1,0,0)\\ 3\mapsto(1,0)&&3\mapsto(0,1,1,0)&7\mapsto(1,0,0,1).\end{array}\]
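The following short Python sketch (function names are ours) implements Carlet's Gray map (1) directly from the binary expansion and the matrix \(Y_{s-1}\); for \(s=2\) and \(s=3\) it reproduces the tables of \(\phi_{2}\) and \(\phi_{3}\) above, and \(\Phi\) is its blockwise extension:

```python
# A short sketch (names are ours) of Carlet's Gray map (1).  For s = 2 and s = 3 it
# reproduces the tables of phi_2 and phi_3 displayed above; Phi is its blockwise extension.
def phi(u, s):
    """Carlet's Gray map phi_s : Z_{2^s} -> Z_2^{2^(s-1)}."""
    bits = [(u >> i) & 1 for i in range(s)]        # binary expansion [u_0, ..., u_{s-1}]_2
    image = []
    for j in range(2 ** (s - 1)):                  # j-th column of Y_{s-1}: expansion of j
        col = [(j >> i) & 1 for i in range(s - 1)]
        dot = sum(b * c for b, c in zip(bits, col)) % 2
        image.append((bits[s - 1] + dot) % 2)      # (u_{s-1},...,u_{s-1}) + (u_0,...,u_{s-2}) Y_{s-1}
    return tuple(image)

def Phi(u1, u2, u3):
    """Mixed Gray map on Z_2^{a1} x Z_4^{a2} x Z_8^{a3}."""
    out = list(u1)
    for x in u2:
        out += phi(x, 2)
    for x in u3:
        out += phi(x, 3)
    return tuple(out)

assert phi(3, 2) == (1, 0) and phi(6, 3) == (1, 1, 0, 0)
assert Phi((0, 1), (2,), (5,)) == (0, 1, 1, 1, 1, 0, 1, 0)
```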
From [16], we have the following results:
**Corollary 2.1**: _[_16_]_ _Let \(\lambda,\mu\in\mathbb{Z}_{2}\). Then, \(\phi_{s}(\lambda\mu 2^{s-1})=\lambda\phi_{s}(\mu 2^{s-1})=\lambda\mu\phi_{s}(2^{s-1})\)._
**Corollary 2.2**: _[_16_]_ _Let \(u,v\in\mathbb{Z}_{2^{s}}\). Then, \(\phi_{s}(2^{s-1}u+v)=\phi_{s}(2^{s-1}u)+\phi_{s}(v)\)._
**Proposition 2.1**: _[_16_]_ _Let \(u,v\in\mathbb{Z}_{2^{s}}\). Then,_
\[d_{H}(\phi_{s}(u),\phi_{s}(v))=\mathrm{wt}_{H}(\phi_{s}(u-v)).\]
By Proposition 2.1, the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear codes obtained from the Gray map \(\Phi\) are distance invariant, that is, the Hamming weight distribution is invariant under translation by a codeword. Therefore, their minimum distance coincides with the minimum weight.
## 3 Construction of \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive Hadamard codes
The description of generator matrices having a minimum number of rows for \(\mathbb{Z}_{4}\)-additive, \(\mathbb{Z}_{2^{s}}\)-additive, and in general for \(\mathbb{Z}_{p^{s}}\)-additive Hadamard codes, with \(s\geq 2\) and \(p\) prime, is given in [13], [7], and [16], respectively. Similarly, generator matrices having a minimum number of rows for \(\mathbb{Z}_{p}\mathbb{Z}_{p^{2}}\)-additive Hadamard codes with \(\alpha_{1}\neq 0,\alpha_{2}\neq 0\) and \(p\) prime, as well as an iterative construction of these matrices, are given in [14, 23] when \(p=2\) and in [20, 21, 24] when \(p\geq 3\). In this section, we generalize these results to \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive
Hadamard codes with \(\alpha_{1}\neq 0\), \(\alpha_{2}\neq 0\), and \(\alpha_{3}\neq 0\). Specifically, we define an iterative construction for the generator matrices of these codes and establish that they generate \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive Hadamard codes.
Let \(\mathbf{0},\mathbf{1},\mathbf{2},\ldots,\mathbf{7}\) be the vectors having the elements \(0,1,2,\ldots,7\) repeated in each coordinate, respectively. If \(A\) is a generator matrix of a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code, that is, a subgroup of \(\mathbb{Z}_{2}^{\alpha_{1}}\times\mathbb{Z}_{4}^{\alpha_{2}}\times\mathbb{Z}_ {8}^{\alpha_{3}}\) for some integers \(\alpha_{1},\alpha_{2},\alpha_{3}\geq 0\), then we denote by \(A_{1}\) the submatrix of \(A\) with the first \(\alpha_{1}\) columns over \(\mathbb{Z}_{2}\), \(A_{2}\) the submatrix with the next \(\alpha_{2}\) columns over \(\mathbb{Z}_{4}\), and \(A_{3}\) the submatrix with the last \(\alpha_{3}\) columns over \(\mathbb{Z}_{8}\). We have that \(A=(A_{1}\mid A_{2}\mid A_{3})\), where the number of columns of \(A_{i}\) is \(\alpha_{i}\) for \(i\in\{1,2,3\}\).
Let \(t_{1}\geq 1\), \(t_{2}\geq 0\), and \(t_{3}\geq 1\) be integers. Now, we construct recursively matrices \(A^{t_{1},t_{2},t_{3}}\) having \(t_{1}\) rows of order \(8\), \(t_{2}\) rows of order \(4\), and \(t_{3}\) rows of order \(2\) as follows. First, we consider the following matrix:
\[A^{1,0,1}=\left(\begin{array}{cc|c|c}1&1&2&4\\ 0&1&1&1\end{array}\right). \tag{2}\]
Then, we apply the following constructions. If we have a matrix \(A^{\ell-1,0,1}=(A_{1}\mid A_{2}\mid A_{3})\), with \(\ell\geq 2\), we may construct the matrix
\[A^{\ell,0,1}=\left(\begin{array}{cc|ccccc|ccccc}A_{1}&A_{1}&M_{1}&A_{2}&A_{2}&A_{2}&A_{2}&M_{2}&A_{3}&A_{3}&\cdots&A_{3}\\ \mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{0}&\mathbf{1}&\mathbf{2}&\mathbf{3}&\mathbf{1}&\mathbf{0}&\mathbf{1}&\cdots&\mathbf{7}\end{array}\right), \tag{3}\]
where \(M_{1}=\{\mathbf{z}^{T}:\mathbf{z}\in\{2\}\times\{0,2\}^{\ell-1}\}\) and \(M_{2}=\{\mathbf{z}^{T}:\mathbf{z}\in\{4\}\times\{0,2,4,6\}^{\ell-1}\}\). We perform construction (3) until \(\ell=t_{1}\). If we have a matrix \(A^{t_{1},\ell-1,1}=(A_{1}\mid A_{2}\mid A_{3})\), with \(t_{1}\geq 1\) and \(\ell\geq 1\), we may construct the matrix
\[A^{t_{1},\ell,1}=\left(\begin{array}{cc|ccccc|cccc}A_{1}&A_{1}&M_{1}&A_{2}&A_{2}&A_{2}&A_{2}&A_{3}&A_{3}&A_{3}&A_{3}\\ \mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{0}&\mathbf{1}&\mathbf{2}&\mathbf{3}&\mathbf{0}&\mathbf{2}&\mathbf{4}&\mathbf{6}\end{array}\right), \tag{4}\]
where \(M_{1}=\{\mathbf{z}^{T}:\mathbf{z}\in\{2\}\times\{0,2\}^{t_{1}+\ell-1}\}\). We repeat construction (4) until \(\ell=t_{2}\). Finally, if we have a matrix \(A^{t_{1},t_{2},\ell-1}=(A_{1}\mid A_{2}\mid A_{3})\), with \(t_{1}\geq 1\), \(t_{2}\geq 0\), and \(\ell\geq 2\), we may construct the matrix
\[A^{t_{1},t_{2},\ell}=\left(\begin{array}{cc|cc|cc}A_{1}&A_{1}&A_{2}&A_{2}&A_{3}&A_{3}\\ \mathbf{0}&\mathbf{1}&\mathbf{0}&\mathbf{2}&\mathbf{0}&\mathbf{4}\end{array}\right). \tag{5}\]
We repeat construction (5) until \(\ell=t_{3}\). Thus, in this way, we obtain \(A^{t_{1},t_{2},t_{3}}\).
Summarizing, in order to achieve \(A^{t_{1},t_{2},t_{3}}\) from \(A^{1,0,1}\), first we add \(t_{1}-1\) rows of order \(8\) by applying construction (3) \(t_{1}-1\) times, starting from \(A^{1,0,1}\),
up to obtain \(A^{t_{1},0,1}\); then we add \(t_{2}\) rows of order \(4\) by applying construction (4) \(t_{2}\) times, up to generate \(A^{t_{1},t_{2},1}\); and, finally, we add \(t_{3}-1\) rows of order \(2\) by applying construction (5) \(t_{3}-1\) times to achieve \(A^{t_{1},t_{2},t_{3}}\). Note that in the first row there is always the row \(({\bf 1}\mid{\bf 2}\mid{\bf 4})\).
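As a sanity check of the recursive constructions (3), (4), and (5), the following Python sketch (data layout and function names are ours) builds the matrices \(A^{t_{1},t_{2},t_{3}}\) and checks, for a small example, the resulting numbers of columns over \(\mathbb{Z}_{2}\), \(\mathbb{Z}_{4}\), and \(\mathbb{Z}_{8}\) against the values obtained in Proposition 3.1 below:

```python
from itertools import product

# A generator matrix is stored as a list of rows; each row is a triple of lists:
# (entries over Z2, entries over Z4, entries over Z8).  Layout and names are ours.
A101 = [([1, 1], [2], [4]),
        ([0, 1], [1], [1])]

def block(lead, alphabet, nrows):
    """Rows of the block whose columns are {lead} x alphabet^(nrows-1)."""
    cols = [(lead,) + t for t in product(alphabet, repeat=nrows - 1)]
    return [[c[i] for c in cols] for i in range(nrows)]

def const(value, length):
    return [value] * length

def c3(A):
    """Construction (3): append a row of order 8."""
    M1, M2 = block(2, (0, 2), len(A)), block(4, (0, 2, 4, 6), len(A))
    a1, a2, a3 = (len(A[0][j]) for j in range(3))
    B = [(z2 * 2, M1[i] + z4 * 4, M2[i] + z8 * 8)
         for i, (z2, z4, z8) in enumerate(A)]
    B.append((const(0, a1) + const(1, a1),
              const(1, len(M1[0])) + sum([const(j, a2) for j in range(4)], []),
              const(1, len(M2[0])) + sum([const(j, a3) for j in range(8)], [])))
    return B

def c4(A):
    """Construction (4): append a row of order 4."""
    M1 = block(2, (0, 2), len(A))
    a1, a2, a3 = (len(A[0][j]) for j in range(3))
    B = [(z2 * 2, M1[i] + z4 * 4, z8 * 4) for i, (z2, z4, z8) in enumerate(A)]
    B.append((const(0, a1) + const(1, a1),
              const(1, len(M1[0])) + sum([const(j, a2) for j in range(4)], []),
              sum([const(j, a3) for j in (0, 2, 4, 6)], [])))
    return B

def c5(A):
    """Construction (5): append a row of order 2."""
    a1, a2, a3 = (len(A[0][j]) for j in range(3))
    B = [(z2 * 2, z4 * 2, z8 * 2) for (z2, z4, z8) in A]
    B.append((const(0, a1) + const(1, a1),
              const(0, a2) + const(2, a2),
              const(0, a3) + const(4, a3)))
    return B

def A_matrix(t1, t2, t3):
    """A^{t1,t2,t3}: start from A^{1,0,1}, then apply (3), (4), (5) as in the text."""
    A = A101
    for _ in range(t1 - 1):
        A = c3(A)
    for _ in range(t2):
        A = c4(A)
    for _ in range(t3 - 1):
        A = c5(A)
    return A

A = A_matrix(1, 1, 2)
a1, a2, a3 = (len(A[0][j]) for j in range(3))
# alpha1 = 2^(t1+t2+t3-1), alpha1+2*alpha2 = 4^(t1+t2)*2^(t3-1), etc.
assert (a1, a1 + 2 * a2, a1 + 2 * a2 + 4 * a3) == (2 ** 3, 4 ** 2 * 2, 8 * 4 * 2)
```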
**Example 3.1**: _By using the constructions described in (3), (4), and (5), we obtain the following matrices \(A^{2,0,1}\), \(A^{1,1,1}\) and \(A^{1,1,2}\), respectively, starting from the matrix \(A^{1,0,1}\) given in (2):_
\[A^{2,0,1}=\left(\begin{array}{cc|cc|cc}11&11&22&2222&4444&44444444\\ 01&01&02&1111&0246&11111111\\ 00&11&11&0123&1111&01234567\end{array}\right), \tag{6}\]
\[A^{1,1,1}=\left(\begin{array}{cc|cc|c}11&11&22&2222&4444\\ 01&01&02&1111&1111\\ 00&11&11&0123&0246\end{array}\right), \tag{7}\]
\[A^{1,1,2}=\left(\begin{array}{cc|cc|cc}1111&1111&222222&222222&4444&4444\\ 0101&0101&021111&021111&1111&1111\\ 0011&0011&110123&110123&0246&0246\\ 0000&1111&000000&222222&0000&4444\end{array}\right).\]
In order to obtain \(A^{2,1,1}\), we start with \(A^{1,0,1}\), we apply construction (3) to obtain \(A^{2,0,1}=(A_{1}\mid A_{2}\mid A_{3})\) given in (6), and then we apply (4) to obtain
\[A^{2,1,1}=\left(\begin{array}{cc|ccccc|cccc}A_{1}&A_{1}&M_{1}&A_{2}&A_{2}&A_{2}&A_{2}&A_{3}&A_{3}&A_{3}&A_{3}\\ \mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{0}&\mathbf{1}&\mathbf{2}&\mathbf{3}&\mathbf{0}&\mathbf{2}&\mathbf{4}&\mathbf{6}\end{array}\right),\quad\text{where}\quad M_{1}=\left(\begin{array}{c}2222\\ 0022\\ 0202\end{array}\right).\]
The \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code generated by \(A^{t_{1},t_{2},t_{3}}\) is denoted by \(\mathcal{H}^{t_{1},t_{2},t_{3}}\), and the corresponding \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear code \(\Phi(\mathcal{H}^{t_{1},t_{2},t_{3}})\) by \(H^{t_{1},t_{2},t_{3}}\).
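As a quick illustration (ours; it hard-codes \(A^{1,0,1}\) and the Gray tables of \(\phi_{2}\) and \(\phi_{3}\) from Section 2), one can enumerate \(\mathcal{H}^{1,0,1}\), apply \(\Phi\), and check that the image \(H^{1,0,1}\) has the Hadamard parameters: length \(n=8\), \(2n=16\) codewords, and minimum weight \(n/2=4\):

```python
from itertools import product

# A quick illustration (ours): enumerate the additive code generated by A^{1,0,1},
# apply the Gray map Phi (using the tables of phi_2 and phi_3 from Section 2), and
# check that the image H^{1,0,1} has Hadamard parameters (length 8, 16 codewords,
# minimum weight 4).
PHI2 = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}
PHI3 = {0: (0, 0, 0, 0), 1: (0, 1, 0, 1), 2: (0, 0, 1, 1), 3: (0, 1, 1, 0),
        4: (1, 1, 1, 1), 5: (1, 0, 1, 0), 6: (1, 1, 0, 0), 7: (1, 0, 0, 1)}

ROWS = [((1, 1), (2,), (4,)),      # rows of A^{1,0,1}, split into Z2 | Z4 | Z8 blocks
        ((0, 1), (1,), (1,))]

def codeword(coeffs):
    """Z-linear combination of the generator rows, reduced blockwise mod 2, 4, 8."""
    combine = lambda k, mod: tuple(
        sum(c * r[k][i] for c, r in zip(coeffs, ROWS)) % mod
        for i in range(len(ROWS[0][k])))
    return combine(0, 2), combine(1, 4), combine(2, 8)

def gray(cw):
    z2, z4, z8 = cw
    return tuple(list(z2) + [b for x in z4 for b in PHI2[x]]
                          + [b for x in z8 for b in PHI3[x]])

# The first row of A^{1,0,1} has order 2 and the second has order 8.
H = {gray(codeword((a, b))) for a, b in product(range(2), range(8))}
n = 2 + 2 * 1 + 4 * 1
assert len(H) == 2 * n
assert min(sum(c) for c in H if any(c)) == n // 2
```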
**Lemma 3.1**: _Let \(t_{1}\geq 1\) and \(t_{2}\geq 0\) be integers. Let \(\mathcal{H}^{t_{1},t_{2},1}\) be the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},1)\) generated by the matrix \(A^{t_{1},t_{2},1}\). Then, \(2^{t_{1}+t_{2}}=\alpha_{1}\), \(4^{t_{1}+t_{2}}=\alpha_{1}+2\alpha_{2}\) and \(8^{t_{1}}4^{t_{2}}=\alpha_{1}+2\alpha_{2}+4\alpha_{3}\)._
**Proof.** First, we prove this lemma for the code \(\mathcal{H}^{t_{1},0,1}\) by induction on \(t_{1}\geq 1\). Note that the lemma is true for the code \(\mathcal{H}^{1,0,1}\) of type \((2,1,1;1,0,1)\). Assume that the lemma is true for the code \(\mathcal{H}^{t_{1},0,1}\) of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},0,1)\), that is,
\[2^{t_{1}}=\alpha_{1},4^{t_{1}}=\alpha_{1}+2\alpha_{2}\ and\ 8^{t_{1}}= \alpha_{1}+2\alpha_{2}+4\alpha_{3}. \tag{8}\]
By using construction (3), the type of \({\cal H}^{t_{1}+1,0,1}\) is \((\alpha^{\prime}_{1},\alpha^{\prime}_{2},\alpha^{\prime}_{3};t_{1}+1,0,1)\), where
\[\alpha^{\prime}_{1}=2\alpha_{1},\alpha^{\prime}_{2}=2^{t_{1}}+4 \alpha_{2}\ and\ \alpha^{\prime}_{3}=4^{t_{1}}+8\alpha_{3}. \tag{9}\]
Thus, from (8) and (9), \(2^{t_{1}+1}=2\alpha_{1}=\alpha^{\prime}_{1}\), \(4^{t_{1}+1}=4\alpha_{1}+8\alpha_{2}=2\alpha_{1}+2\alpha_{1}+8\alpha_{2}= \alpha^{\prime}_{1}+2\alpha^{\prime}_{2}\) and \(8^{t_{1}+1}=8\alpha_{1}+16\alpha_{2}+32\alpha_{3}=2\alpha_{1}+(2\alpha_{1}+8 \alpha_{2})+(4\alpha_{1}+8\alpha_{2}+32\alpha_{3})=2\alpha_{1}+(2^{t_{1}+1}+8 \alpha_{2})+(4^{t_{1}+1}+32\alpha_{3})=\alpha^{\prime}_{1}+2\alpha^{\prime}_{2 }+4\alpha^{\prime}_{3}\). Therefore, the lemma is true for the code \({\cal H}^{t_{1},0,1}\).
Next, we prove this lemma for the code \({\cal H}^{t_{1},t_{2},1}\) by induction on \(t_{2}\geq 0\). Assume that the lemma holds for the code \({\cal H}^{t_{1},t_{2},1}\) of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},1)\), that is,
\[2^{t_{1}+t_{2}}=\alpha_{1},4^{t_{1}+t_{2}}=\alpha_{1}+2\alpha_{2},\ and\ 8^{t_{1}}4^{t_{2}}=\alpha_{1}+2\alpha_{2}+4\alpha_{3}. \tag{10}\]
By using construction (4), the type of \({\cal H}^{t_{1},t_{2}+1,1}\) is \((\alpha^{\prime}_{1},\alpha^{\prime}_{2},\alpha^{\prime}_{3};t_{1},t_{2}+1,1)\), where
\[\alpha^{\prime}_{1}=2\alpha_{1},\alpha^{\prime}_{2}=2^{t_{1}+t_{2 }}+4\alpha_{2}\ and\ \alpha^{\prime}_{3}=4\alpha_{3}. \tag{11}\]
Thus, from (10) and (11), \(2^{t_{1}+(t_{2}+1)}=2\alpha_{1}=\alpha^{\prime}_{1}\), \(4^{t_{1}+(t_{2}+1)}=4\alpha_{1}+8\alpha_{2}=2\alpha_{1}+2\alpha_{1}+8\alpha_{2 }=\alpha^{\prime}_{1}+2^{t_{1}+t_{2}+1}+8\alpha_{2}=\alpha^{\prime}_{1}+2 \alpha^{\prime}_{2}\) and \(8^{t_{1}}4^{t_{2}+1}=4\alpha_{1}+8\alpha_{2}+16\alpha_{3}=2\alpha_{1}+(2 \alpha_{1}+8\alpha_{2})+16\alpha_{3}=\alpha^{\prime}_{1}+(2^{t_{1}+t_{2}+1}+8 \alpha_{2})+4\alpha^{\prime}_{3}=\alpha^{\prime}_{1}+2\alpha^{\prime}_{2}+4 \alpha^{\prime}_{3}\). Therefore, the lemma is true for the code \({\cal H}^{t_{1},t_{2}+1,1}\). This completes the proof.
**Proposition 3.1**: _Let \(t_{1}\geq 1\), \(t_{2}\geq 0\), and \(t_{3}\geq 1\) be integers. Let \({\cal H}^{t_{1},t_{2},t_{3}}\) be the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},t_{3})\) generated by the matrix \(A^{t_{1},t_{2},t_{3}}\). Then,_
\[\begin{array}{l}\alpha_{1}=2^{t_{1}+t_{2}+t_{3}-1},\\ \alpha_{1}+2\alpha_{2}=4^{t_{1}+t_{2}}2^{t_{3}-1},\\ \alpha_{1}+2\alpha_{2}+4\alpha_{3}=8^{t_{1}}4^{t_{2}}2^{t_{3}-1}.\end{array} \tag{12}\]
**Proof.** We prove this result for the code \({\cal H}^{t_{1},t_{2},t_{3}}\) by induction on \(t_{3}\geq 1\). By Lemma 3.1, the proposition is true for \(t_{3}=1\), that is, for the code \({\cal H}^{t_{1},t_{2},1}\). Assume that it holds for the code \({\cal H}^{t_{1},t_{2},t_{3}}\) of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},t_{3})\), that is, (12) holds. By using construction (5), the type of \({\cal H}^{t_{1},t_{2},t_{3}+1}\) is \((\alpha^{\prime}_{1},\alpha^{\prime}_{2},\alpha^{\prime}_{3};t_{1},t_{2},t_{3}+1)\), where
\[\alpha^{\prime}_{1}=2\alpha_{1},\alpha^{\prime}_{2}=2\alpha_{2}, \ \mbox{and}\ \alpha^{\prime}_{3}=2\alpha_{3}. \tag{13}\]
Thus, from (12) and (13), \(2^{t_{1}+t_{2}+t_{3}}=2\alpha_{1}=\alpha_{1}^{\prime}\), \(4^{t_{1}+t_{2}}2^{t_{3}}=2\alpha_{1}+4\alpha_{2}=\alpha_{1}^{\prime}+2\alpha_{2}^ {\prime}\) and \(8^{t_{1}}4^{t_{2}}2^{t_{3}}=2\alpha_{1}+4\alpha_{2}+8\alpha_{3}=\alpha_{1}^{ \prime}+2\alpha_{2}^{\prime}+4\alpha_{3}^{\prime}\). Therefore, the proposition is true for the code \(\mathcal{H}^{t_{1},t_{2},t_{3}+1}\). This completes the proof.
**Corollary 3.1**: _Let \(t_{1}\geq 1\), \(t_{2}\geq 0\), and \(t_{3}\geq 1\) be integers. Let \(\mathcal{H}^{t_{1},t_{2},t_{3}}\) be the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},t_{3})\) generated by the matrix \(A^{t_{1},t_{2},t_{3}}\). Then,_
\[\alpha_{1}=2^{t_{1}+t_{2}+t_{3}-1},\] \[\alpha_{2}=4^{t_{1}+t_{2}}2^{t_{3}-2}-2^{t_{1}+t_{2}+t_{3}-2},\] \[\alpha_{3}=8^{t_{1}}4^{t_{2}-1}2^{t_{3}-1}-4^{t_{1}+t_{2}-1}2^{t_ {3}-1}.\]
**Remark 3.1**: _By Corollary 3.1, we have that the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive codes \(\mathcal{H}^{t_{1},t_{2},t_{3}}\) of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},t_{3})\) generated by the matrix \(A^{t_{1},t_{2},t_{3}}\), so constructed recursively from (3), (4), and (5), satisfy that \(\alpha_{1}\neq 0\), \(\alpha_{2}\neq 0\), and \(\alpha_{3}\neq 0\)._
**Remark 3.2**: _We can see the construction of the generator matrices \(A^{t_{1},t_{2},t_{3}}\) as a generalization of the recursive construction of the generator matrices of the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-additive Hadamard codes of type \((\alpha_{1},\alpha_{2};t_{1},t_{2})\) with \(\alpha_{1}\neq 0\) and \(\alpha_{2}\neq 0\), given in [23]. Note that if we do not consider the coordinates over \(\mathbb{Z}_{8}\) in constructions (3), (4), and (5), we have that (3) and (4) become_
\[A^{\ell,1}=\left(\begin{array}{cc|ccccc}A_{1}&A_{1}&M_{1}&A_{2}&A_{2}&A_{2}&A_{2}\\ \mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{0}&\mathbf{1}&\mathbf{2}&\mathbf{3}\end{array}\right), \tag{14}\]
_where \(A^{\ell-1,1}=(A_{1}\mid A_{2})\) and \(M_{1}=2A_{1}=\{\mathbf{z}^{T}:\mathbf{z}\in\{2\}\times\{0,2\}^{\ell-1}\}\) (up to a column permutation); and construction (5) becomes_
\[A^{t_{1},\ell}=\left(\begin{array}{cc|cc}A_{1}&A_{1}&A_{2}&A_{2}\\ \mathbf{0}&\mathbf{1}&\mathbf{0}&\mathbf{2}\end{array}\right), \tag{15}\]
_where \(A^{t_{1},\ell-1}=(A_{1}\mid A_{2})\). Then, starting from the following matrix:_
\[A^{1,1}=\left(\begin{array}{cc|c}1&1&2\\ 0&1&1\end{array}\right), \tag{16}\]
_and applying (14) and (15) in the same way as above, we obtain the generator matrices \(A^{t_{1},t_{2}}\) of the known \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-additive Hadamard codes of type \((\alpha_{1},\alpha_{2};t_{1},t_{2})\) with \(\alpha_{1}\neq 0\) and \(\alpha_{2}\neq 0\)[14, 23]. The \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-additive code generated by \(A^{t_{1},t_{2}}\) is denoted by \(\mathcal{H}^{t_{1},t_{2}}\), and the corresponding \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear code \(\Phi(\mathcal{H}^{t_{1},t_{2}})\) by \(H^{t_{1},t_{2}}\)._
When we include all the elements of \(\mathbb{Z}_{2^{i}}\), where \(1\leq i\leq 3\), as coordinates of a vector, we place them in increasing order. For a set \(S\subseteq\mathbb{Z}_{2^{i}}\) and \(\lambda\in\mathbb{Z}_{2^{i}}\), where \(i\in\{1,2,3\}\), we define \(\lambda S=\{\lambda j:j\in S\}\) and \(S+\lambda=\{j+\lambda:j\in S\}\). As before, when including all the elements in those sets as coordinates of a vector, we place them in increasing order. For example, \(2\mathbb{Z}_{8}=\{0,2,4,6\}\), \((\mathbb{Z}_{4},\mathbb{Z}_{4})=(0,1,2,3,0,1,2,3)\in\mathbb{Z}_{4}^{8}\) and \((\mathbb{Z}_{2}\mid\mathbb{Z}_{4}\mid 2\mathbb{Z}_{8},4\mathbb{Z}_{8})=(0,1\mid 0,1,2,3\mid 0,2,4,6,0,4)\in\mathbb{Z}_{2}^{2}\times\mathbb{Z}_{4}^{4}\times\mathbb{Z}_{8}^{6}\).
**Lemma 3.2**: _Let \(1\leq i\leq 3\) and \(j\in\{0,1,\ldots,i-1\}\)._
1. _If_ \(\mu\in 2^{j}\mathbb{Z}_{2^{i}}\)_, then_ \(2^{j}\mathbb{Z}_{2^{i}}+\mu=2^{j}\mathbb{Z}_{2^{i}}\)_._
2. _If_ \(\mu\in 2^{j}\mathbb{Z}_{2^{i}}\)_, then_ \((2^{j}\mathbb{Z}_{2^{i}},\stackrel{{ m}}{{\ldots}},2^{j}\mathbb{Z}_{2 ^{i}})+\mu\mathbf{1}\)_, where_ \(m\geq 1\)_, is a permutation of the vector_ \((2^{j}\mathbb{Z}_{2^{i}},\stackrel{{ m}}{{\ldots}},2^{j}\mathbb{Z}_{2 ^{i}})\)_._
3. _If_ \(\mu\in 2\mathbb{Z}_{2^{i}}\)_, then_ \((\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}})+\mu=\mathbb{Z}_{2^{i}} \backslash 2\mathbb{Z}_{2^{i}}\)_._
4. _If_ \(\mu\in\mathbb{Z}_{2^{i}}\)_, then_ \((\mathbf{0},\ldots,\mathbf{2^{i}}-\mathbf{1})+(\mu,\stackrel{{ \ell\cdot 2^{i}}}{{\ldots}},\mu)\)_, where_ \(\ell\geq 1\) _and_ \(\mathbf{k}=(k,\stackrel{{\ell}}{{\ldots}},k)\) _for_ \(k\in\mathbb{Z}_{2^{i}}\)_, is a permutation of_ \((\mathbb{Z}_{2^{i}},\stackrel{{\ell\cdot}}{{\ldots}},\mathbb{Z}_{2 ^{i}})\)_._
**Proof.** Item 1 follows from the fact that \(\mathbb{Z}_{2^{i}}\) is a ring and \(2^{j}\mathbb{Z}_{2^{i}}\) is an ideal of \(\mathbb{Z}_{2^{i}}\). Item 2 follows from Item 1.
For Item 3, if \(x\in(\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}})+\mu\), then \(x-\mu\in\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}}\). Assume that \(x\notin\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}}\), so \(x\in 2\mathbb{Z}_{2^{i}}\). Since \(2\mathbb{Z}_{2^{i}}\) is an ideal of \(\mathbb{Z}_{2^{i}}\), we have that \(x-\mu\in 2\mathbb{Z}_{2^{i}}\), which is a contradiction. Thus, \(x\in\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}}\) and hence \((\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}})+\mu\subseteq\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}}\). In the same way, \((\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}})-\mu\subseteq\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}}\). Hence, \(\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}}\subseteq(\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}})+\mu\) and therefore \((\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}})+\mu=\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}}\).
For Item 4, note that \((\mathbf{0},\ldots,\mathbf{2^{i}}-\mathbf{1})+(\mu,\stackrel{{ \ell\cdot 2^{i}}}{{\ldots}},\mu)\) is a permutation of
\[(\mathbb{Z}_{2^{i}},\stackrel{\ell}{\ldots},\mathbb{Z}_{2^{i}})+(\mu,\stackrel{\ell\cdot 2^{i}}{\ldots},\mu). \tag{17}\]
Since \(\mathbb{Z}_{2^{i}}+\mu=\mathbb{Z}_{2^{i}}\), (17) is a permutation of \((\mathbb{Z}_{2^{i}},\stackrel{\ell}{\ldots},\mathbb{Z}_{2^{i}})\).
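Since Lemma 3.2 only involves the finite rings \(\mathbb{Z}_{2}\), \(\mathbb{Z}_{4}\) and \(\mathbb{Z}_{8}\), its items can also be confirmed mechanically. The following Python sketch (an informal sanity check, not part of the formal development; all helper names are ours) verifies Items 1 and 3 by exhausting every admissible \(i\), \(j\) and \(\mu\); Items 2 and 4 then follow coordinatewise, as in the proof above.

```python
# A quick brute-force check of Lemma 3.2, Items 1 and 3, over Z_{2^i} for i = 1, 2, 3.

for i in (1, 2, 3):
    n = 2 ** i
    evens = {2 * x % n for x in range(n)}                 # 2 Z_{2^i}
    odds = set(range(n)) - evens                          # Z_{2^i} \ 2 Z_{2^i}
    for j in range(i):
        ideal = {(2 ** j) * x % n for x in range(n)}      # 2^j Z_{2^i}
        # Item 1: 2^j Z_{2^i} + mu = 2^j Z_{2^i} for every mu in 2^j Z_{2^i}.
        assert all({(x + mu) % n for x in ideal} == ideal for mu in ideal)
    # Item 3: (Z_{2^i} \ 2 Z_{2^i}) + mu is the same set for every even mu.
    assert all({(x + mu) % n for x in odds} == odds for mu in evens)
print("Lemma 3.2, Items 1 and 3: verified for i = 1, 2, 3")
```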
**Lemma 3.3**: _Let \(1\leq i\leq 3\), \(\lambda\in\mathbb{Z}_{2^{i}}\backslash 2\mathbb{Z}_{2^{i}}\), and \(u\in\mathbb{Z}_{2^{i}}^{n}\). Then,_
\[(u,\stackrel{{ 2^{i}}}{{\ldots}},u)+\lambda(\mathbf{0},\ldots, \mathbf{2^{i}}-\mathbf{1})\]
_is a permutation of \((\mathbb{Z}_{2^{i}},\stackrel{{ n}}{{\ldots}},\mathbb{Z}_{2^{i}})\)._
**Proof.** Since \(\lambda\in{\mathbb{Z}}_{2^{i}}\backslash{2\mathbb{Z}}_{2^{i}}\), \(\lambda({\bf 0},\ldots,{\bf 2^{i}-1})\) is a permutation of \(({\bf 0},\ldots,{\bf 2^{i}-1})\) and we may consider \(\lambda=1\). Then, \((u,\ldots,u)+({\bf 0},\ldots,{\bf 2^{i}-1})\) is a permutation of \((u_{1}+{\mathbb{Z}}_{2^{i}},\ldots,u_{n}+{\mathbb{Z}}_{2^{i}})=({\mathbb{Z}}_{2 ^{i}},\stackrel{{ n}}{{\ldots}},{\mathbb{Z}}_{2^{i}})\), where \(u=(u_{1},\ldots,u_{n})\).
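The same kind of exhaustive check applies to Lemma 3.3. The sketch below (illustrative only; names are ours) confirms the statement for \(i\in\{2,3\}\) and all short vectors \(u\): after the shift, every element of \(\mathbb{Z}_{2^{i}}\) occurs exactly \(n\) times.

```python
# Brute-force check of Lemma 3.3 for i = 2, 3 and all u in Z_{2^i}^n with n <= 2.
from collections import Counter
from itertools import product

for i in (2, 3):
    q = 2 ** i
    for lam in (x for x in range(q) if x % 2 == 1):        # lambda in Z_{2^i} \ 2Z_{2^i}
        for n in (1, 2):
            for u in product(range(q), repeat=n):
                # Concatenate the blocks u + lam*j*1 for j = 0, ..., 2^i - 1.
                word = [(u[k] + lam * j) % q for j in range(q) for k in range(n)]
                # Every element of Z_{2^i} must occur exactly n times.
                assert Counter(word) == Counter({x: n for x in range(q)})
print("Lemma 3.3: verified for i in {2, 3} and n in {1, 2}")
```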
**Lemma 3.4**: _Let \(u=(\mu,\stackrel{{ m}}{{\ldots}},\mu,2\mathbb{Z}_{4},\stackrel{{ n}}{{\ldots}},2\mathbb{Z}_{4},\mathbb{Z}_{4}\backslash{2 \mathbb{Z}}_{4},\stackrel{{ r}}{{\ldots}},\mathbb{Z}_{4} \backslash{2\mathbb{Z}}_{4})\in{\mathbb{Z}}_{4}^{m+2n+2r}\), where \(m,n,r\geq 0\) and \(\mu\in{\mathbb{Z}}_{4}\backslash{2\mathbb{Z}}_{4}=\{1,3\}\). Then,_
\[(u,u,u,u)+({\bf 0},{\bf 2},{\bf 0},{\bf 2})\]
_is a permutation of \((2\mathbb{Z}_{4},\stackrel{{ 4n}}{{\ldots}},2\mathbb{Z}_{4}, \mathbb{Z}_{4}\backslash{2\mathbb{Z}}_{4},\stackrel{{ 4r+2m}}{{\ldots}}, \mathbb{Z}_{4}\backslash{2\mathbb{Z}}_{4})\)._
**Proof.** By Items 1 and 3 of Lemma 3.2, \(u+{\bf 2}\) is a permutation of \((\mu+2,\stackrel{m}{\ldots},\mu+2,2\mathbb{Z}_{4},\stackrel{n}{\ldots},2\mathbb{Z}_{4},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4},\stackrel{r}{\ldots},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4})\). Let \({\bf k}=(\mu,\stackrel{m}{\ldots},\mu)\). Since \(\mu\in\{1,3\}\), we have that \(({\bf k},{\bf k},{\bf k},{\bf k})+({\bf 0},{\bf 2},{\bf 0},{\bf 2})\) is a permutation of \((\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4},\stackrel{2m}{\ldots},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4})\). Therefore, \((u,u,u,u)+({\bf 0},{\bf 2},{\bf 0},{\bf 2})\) is a permutation of \((2\mathbb{Z}_{4},\stackrel{4n}{\ldots},2\mathbb{Z}_{4},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4},\stackrel{4r+2m}{\ldots},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4})\).
**Lemma 3.5**: _Let \(u=(\mu^{\prime},\stackrel{m^{\prime}}{\ldots},\mu^{\prime},\mu^{\prime\prime},\stackrel{m^{\prime}}{\ldots},\mu^{\prime\prime},2\mathbb{Z}_{8},\stackrel{n^{\prime}}{\ldots},2\mathbb{Z}_{8},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8},\stackrel{r^{\prime}}{\ldots},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8})\in\mathbb{Z}_{8}^{2m^{\prime}+4n^{\prime}+4r^{\prime}}\), where \(m^{\prime},n^{\prime},r^{\prime}\geq 0\) and \(\mu^{\prime},\mu^{\prime\prime}\in\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8}=\{1,3,5,7\}\). Then,_
1. \((u,u,u,u)+({\bf 0},{\bf 2},{\bf 4},{\bf 6})\) _is a permutation of_ \((2\mathbb{Z}_{8},\stackrel{{ 4n^{\prime}}}{{\ldots}},2 \mathbb{Z}_{8},\mathbb{Z}_{8}\backslash{2\mathbb{Z}}_{8},\stackrel{{ 4r^{\prime}+2m^{\prime}}}{{\ldots}},\mathbb{Z}_{8}\backslash{2\mathbb{Z}}_{8})\)_;_
2. \((u,u,u,u)+({\bf 0},{\bf 4},{\bf 0},{\bf 4})\) _is a permutation of_ \((\mu^{\prime},\stackrel{4m^{\prime}}{\ldots},\mu^{\prime},\mu^{\prime}+4,\stackrel{4m^{\prime}}{\ldots},\mu^{\prime}+4,2\mathbb{Z}_{8},\stackrel{4n^{\prime}}{\ldots},2\mathbb{Z}_{8},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8},\stackrel{4r^{\prime}}{\ldots},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8})\) _if_ \(\mu^{\prime}=\mu^{\prime\prime}\) _or_ \(\mu^{\prime}=\mu^{\prime\prime}+4\)_, or a permutation of_ \((2\mathbb{Z}_{8},\stackrel{4n^{\prime}}{\ldots},2\mathbb{Z}_{8},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8},\stackrel{4r^{\prime}+2m^{\prime}}{\ldots},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8})\) _otherwise._
**Proof.** For Item 1, by Items 1 and 3 of Lemma 3.2, if \(j\in\{0,2,4,6\}\), then \(u+{\bf j}\) is a permutation of \((\mu^{\prime}+j,\stackrel{m^{\prime}}{\ldots},\mu^{\prime}+j,\mu^{\prime\prime}+j,\stackrel{m^{\prime}}{\ldots},\mu^{\prime\prime}+j,2\mathbb{Z}_{8},\stackrel{n^{\prime}}{\ldots},2\mathbb{Z}_{8},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8},\stackrel{r^{\prime}}{\ldots},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8})\). Let \({\bf k}^{\prime}=(\mu^{\prime},\stackrel{m^{\prime}}{\ldots},\mu^{\prime},\mu^{\prime\prime},\stackrel{m^{\prime}}{\ldots},\mu^{\prime\prime})\). Since \(\mu^{\prime},\mu^{\prime\prime}\in\{1,3,5,7\}\), we have that \(({\bf k}^{\prime},\stackrel{4}{\ldots},{\bf k}^{\prime})+({\bf 0},{\bf 2},{\bf 4},{\bf 6})\) is a permutation of \((\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8},\stackrel{2m^{\prime}}{\ldots},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8})\) and hence \((u,u,u,u)+({\bf 0},{\bf 2},{\bf 4},{\bf 6})\) is a permutation of \((2\mathbb{Z}_{8},\stackrel{4n^{\prime}}{\ldots},2\mathbb{Z}_{8},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8},\stackrel{4r^{\prime}+2m^{\prime}}{\ldots},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8})\).
For Item 2, we have that \(({\bf k}^{\prime},\stackrel{4}{\ldots},{\bf k}^{\prime})+({\bf 0},{\bf 4},{\bf 0},{\bf 4})\) is a permutation of \((\mu^{\prime},\stackrel{4m^{\prime}}{\ldots},\mu^{\prime},\mu^{\prime}+4,\stackrel{4m^{\prime}}{\ldots},\mu^{\prime}+4)\) if \(\mu^{\prime}=\mu^{\prime\prime}\) or \(\mu^{\prime}=\mu^{\prime\prime}+4\), or a permutation of \((\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8},\stackrel{2m^{\prime}}{\ldots},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8})\) otherwise. Therefore, \((u,u,u,u)+({\bf 0},{\bf 4},{\bf 0},{\bf 4})\) is a permutation of \((\mu^{\prime},\stackrel{4m^{\prime}}{\ldots},\mu^{\prime},\mu^{\prime}+4,\stackrel{4m^{\prime}}{\ldots},\mu^{\prime}+4,2\mathbb{Z}_{8},\stackrel{4n^{\prime}}{\ldots},2\mathbb{Z}_{8},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8},\stackrel{4r^{\prime}}{\ldots},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8})\) if \(\mu^{\prime}=\mu^{\prime\prime}\) or \(\mu^{\prime}=\mu^{\prime\prime}+4\), or a permutation of \((2\mathbb{Z}_{8},\stackrel{4n^{\prime}}{\ldots},2\mathbb{Z}_{8},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8},\stackrel{4r^{\prime}+2m^{\prime}}{\ldots},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8})\) otherwise.
**Lemma 3.6**: _Let \(u=(\mu,\stackrel{{ m}}{{\ldots}},\mu,4\mathbb{Z}_{8},\stackrel{{ n}}{{\ldots}},4\mathbb{Z}_{8},2\mathbb{Z}_{8}\backslash 4 \mathbb{Z}_{8},\stackrel{{ r}}{{\ldots}},2\mathbb{Z}_{8} \backslash 4\mathbb{Z}_{8})\in\mathbb{Z}_{8}^{m+2n+2r}\), where \(m,n,r\geq 0\) and \(\mu\in 2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8}=\{2,6\}\). Then,_
1. \((u,u,u,u)+(\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\) _is a permutation of_ \((2\mathbb{Z}_{8},\stackrel{{ 2r+2n+m}}{{\ldots}},2\mathbb{Z}_{8})\)_;_
2. \((u,u,u,u)+(\mathbf{0},\mathbf{4},\mathbf{0},\mathbf{4})\) _is a permutation of_ \((4\mathbb{Z}_{8},\stackrel{{ 4n}}{{\ldots}},4\mathbb{Z}_{8},2\mathbb{Z}_{8} \backslash 4\mathbb{Z}_{8},\stackrel{{ 4r+2m}}{{\ldots}},2\mathbb{Z}_{8} \backslash 4\mathbb{Z}_{8})\)_._
**Proof.** By Item 1 of Lemma 3.2, if \(j\in\{0,4\}\), then \(u+\mathbf{j}\) is a permutation of \((\mu+j,\stackrel{m}{\ldots},\mu+j,4\mathbb{Z}_{8},\stackrel{n}{\ldots},4\mathbb{Z}_{8},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8},\stackrel{r}{\ldots},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8})\). Similarly, if \(j\in\{2,6\}\), then \(u+\mathbf{j}\) is a permutation of \((\mu+j,\stackrel{m}{\ldots},\mu+j,4\mathbb{Z}_{8},\stackrel{r}{\ldots},4\mathbb{Z}_{8},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8},\stackrel{n}{\ldots},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8})\). Let \(\mathbf{k}=(\mu,\stackrel{m}{\ldots},\mu)\).
For Item 1, since \(\mu\in\{2,6\}\), we have that \((\mathbf{k},\stackrel{4}{\ldots},\mathbf{k})+(\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\) is a permutation of \((2\mathbb{Z}_{8},\stackrel{m}{\ldots},2\mathbb{Z}_{8})\), and hence \((u,u,u,u)+(\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\) is a permutation of \((2\mathbb{Z}_{8},\stackrel{2r+2n+m}{\ldots},2\mathbb{Z}_{8})\).
For Item 2, we have that \((\mathbf{k},\stackrel{4}{\ldots},\mathbf{k})+(\mathbf{0},\mathbf{4},\mathbf{0},\mathbf{4})\) is a permutation of \((2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8},\stackrel{2m}{\ldots},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8})\). Therefore, \((u,u,u,u)+(\mathbf{0},\mathbf{4},\mathbf{0},\mathbf{4})\) is a permutation of \((4\mathbb{Z}_{8},\stackrel{4n}{\ldots},4\mathbb{Z}_{8},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8},\stackrel{4r+2m}{\ldots},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8})\).
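Lemmas 3.4–3.6 are finite statements about vectors being equal up to a permutation of coordinates, that is, about equality of multisets, so they too can be double-checked by brute force. The sketch below (illustrative only; helper names are ours) does this for Lemma 3.4 with small \(m\), \(n\), \(r\); the same pattern applies to Lemmas 3.5 and 3.6 over \(\mathbb{Z}_{8}\).

```python
# Brute-force multiset check of Lemma 3.4 for mu in {1,3} and small m, n, r.
from collections import Counter
from itertools import product

EVEN, ODD = (0, 2), (1, 3)            # 2Z_4 and Z_4 \ 2Z_4, in increasing order

for mu, m, n, r in product((1, 3), range(3), range(3), range(3)):
    u = (mu,) * m + EVEN * n + ODD * r                    # u in Z_4^{m+2n+2r}
    shift = ((0,) * len(u) + (2,) * len(u)) * 2           # the vector (0, 2, 0, 2)
    lhs = [(x + s) % 4 for x, s in zip(u * 4, shift)]     # (u,u,u,u) + (0,2,0,2)
    target = EVEN * (4 * n) + ODD * (4 * r + 2 * m)
    # Equal multisets <=> equal up to a permutation of coordinates.
    assert Counter(lhs) == Counter(target)
print("Lemma 3.4: verified for mu in {1, 3} and m, n, r in {0, 1, 2}")
```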
Let \(t_{1}\geq 1,t_{2}\geq 0\), and \(t_{3}\geq 1\) be integers. Let \({\cal G}^{t_{1},t_{2},t_{3}}\) be the set of all codewords of the code generated by the matrix obtained from \(A^{t_{1},t_{2},t_{3}}\) after removing the row \((\mathbf{1}\mid\mathbf{2}\mid\mathbf{4})\).
**Lemma 3.7**: _Let \(t_{1}\geq 1\) be an integer. Let_
\[\mathbf{z}=(u_{1},u_{1}\mid x_{1},u_{2},u_{2},u_{2},u_{2}\mid x_{2},u_{3}, \stackrel{{ 8}}{{\ldots}},u_{3})\in{\cal G}^{t_{1}+1,0,1},\]
_where \(\mathbf{u}=(u_{1}\mid u_{2}\mid u_{3})\in{\cal G}^{t_{1},0,1}\) and \(x_{i-1}\in(2\mathbb{Z}_{2^{i}})^{2^{(i-1)t_{1}}}\) for \(i\in\{2,3\}\). Then,_
1. _if_ \(o(\mathbf{z})=8\)_, then_ \(x_{i-1}\) _is a permutation of_ \((2\mathbb{Z}_{2^{i}},\stackrel{{ 2(i-1)(t_{1}-1)}}{{\ldots}},2 \mathbb{Z}_{2^{i}})\) _for_ \(i\in\{2,3\}\)_._
2. _if_ \(o(\mathbf{z})=4\)_, then_ \(x_{1}=\mathbf{0}\) _and_ \(x_{2}\) _is a permutation of_ \((4\mathbb{Z}_{8},\stackrel{{ 2\cdot 4^{t_{1}-1}}}{{\ldots}},4 \mathbb{Z}_{8})\)_._
3. _if_ \(o(\mathbf{z})=2\)_, then_ \(x_{1}=\mathbf{0}\) _and_ \(x_{2}=\mathbf{0}\)_._
**Proof.** Let \(\mathbf{w}_{j}\), where \(j\in\{1,\ldots,t_{1}+2\}\), be the \(j\)th row of the matrix \(A^{t_{1}+1,0,1}\). Note that \(\mathbf{w}_{1}=(\mathbf{1}\mid\mathbf{2}\mid\mathbf{4})\), and \(\mathbf{w}_{2},\ldots,\mathbf{w}_{t_{1}+2}\) are the rows of order \(8\), where \(\mathbf{w}_{t_{1}+2}=(\mathbf{0},\mathbf{1}\mid\mathbf{1},\mathbf{0},\mathbf{1 },\mathbf{2},\mathbf{3}\mid\mathbf{1},\mathbf{0},\ldots,\mathbf{7})\). Since any element of
\(\mathcal{G}^{t_{1}+1,0,1}\) can be written as \(\mathbf{z}+\lambda\mathbf{w}_{t_{1}+2}\), where \(\lambda\in\mathbb{Z}_{8}\), then \(\mathbf{z}=\sum_{j=2}^{t_{1}+1}r_{j}\mathbf{w}_{j}=(u_{1},u_{1}\mid x_{1},u_{2},u_{2},u_{2},u_{2}\mid x_{2},u_{3},\stackrel{8}{\ldots},u_{3})\), where \(r_{j}\in\mathbb{Z}_{8}\). By construction, \(x_{1}\) and \(x_{2}\) are generated by the rows of \(M^{\prime}_{1}=\{\mathbf{z}^{T}:\mathbf{z}\in\{0,2\}^{t_{1}}\}\) and \(M^{\prime}_{2}=\{\mathbf{z}^{T}:\mathbf{z}\in\{0,2,4,6\}^{t_{1}}\}\), respectively. Thus, \(x_{1}=\mathbf{0}\) or \(x_{1}\) is a permutation of \((2\mathbb{Z}_{4},\stackrel{2^{t_{1}-1}}{\ldots},2\mathbb{Z}_{4})\), and \(x_{2}=\mathbf{0}\) or \(x_{2}\) is a permutation of \((2\mathbb{Z}_{8},\stackrel{4^{t_{1}-1}}{\ldots},2\mathbb{Z}_{8})\) or \((4\mathbb{Z}_{8},\stackrel{2\cdot 4^{t_{1}-1}}{\ldots},4\mathbb{Z}_{8})\).
For Item 1, we have that there exists at least one \(j\in\{2,\dots,t_{1}+1\}\) such that \(r_{j}\in\{1,3,5,7\}\). Therefore, by Item 1 of Lemma 3.2, \(x_{i-1}\) is a permutation of \((2\mathbb{Z}_{2^{i}},\stackrel{{ 2^{(i-1)(t_{1}-1)}}}{{\dots}},2 \mathbb{Z}_{2^{i}})\) for \(i\in\{2,3\}\).
For Item 2, we have that \(r_{j}\in 2\mathbb{Z}_{8}\) for all \(j\in\{2,\dots,t_{1}+1\}\) and there exists at least one \(j\in\{2,\dots,t_{1}+1\}\) such that \(r_{j}\in\{2,6\}\). Therefore, \(x_{1}=\mathbf{0}\) and, by Item 1 of Lemma 3.2, \(x_{2}\) is a permutation of \((4\mathbb{Z}_{8},\stackrel{2\cdot 4^{t_{1}-1}}{\ldots},4\mathbb{Z}_{8})\).

For Item 3, we have that \(r_{j}\in 4\mathbb{Z}_{8}\) for all \(j\in\{2,\dots,t_{1}+1\}\) and there exists at least one \(j\in\{2,\dots,t_{1}+1\}\) such that \(r_{j}=4\). Therefore, \(x_{1}=\mathbf{0}\) and \(x_{2}=\mathbf{0}\).
**Lemma 3.8**: _Let \(t_{1}\geq 1\) and \(t_{2}\geq 0\) be integers. Let_
\[\mathbf{z}=(u_{1},u_{1}\mid x_{1},u_{2},u_{2},u_{2},u_{2}\mid u_{3},u_{3},u_{3 },u_{3})\in\mathcal{G}^{t_{1},t_{2}+1,1},\]
_where \(\mathbf{u}=(u_{1}\mid u_{2}\mid u_{3})\in\mathcal{G}^{t_{1},t_{2},1}\) and \(x_{1}\in(2\mathbb{Z}_{4})^{2^{t_{1}+t_{2}}}\). Then,_
1. _if_ \(o(\mathbf{z})=8\)_, then_ \(x_{1}\) _is a permutation of_ \((2\mathbb{Z}_{4},\stackrel{{ 2^{t_{1}+t_{2}-1}}}{{\dots}},2 \mathbb{Z}_{4})\)_._
2. _if_ \(o(\mathbf{z})=4\)_, then_ \(x_{1}=\mathbf{0}\) _if_ \(u_{1}=\mathbf{0}\)_, and_ \(x_{1}\) _is a permutation of_ \((2\mathbb{Z}_{4},\stackrel{{ 2^{t_{1}+t_{2}-1}}}{{\dots}},2 \mathbb{Z}_{4})\) _otherwise._
3. _if_ \(o(\mathbf{z})=2\)_, then_ \(x_{1}=\mathbf{0}\)_._
**Proof.** Let \(\mathbf{w}_{i}\), where \(i\in\{1,\dots,t_{1}+t_{2}+2\}\), be the \(i\)th row of the matrix \(A^{t_{1},t_{2}+1,1}\). Note that \(\mathbf{w}_{1}=(\mathbf{1}\mid\mathbf{2}\mid\mathbf{4})\), \(\mathbf{w}_{2},\dots,\mathbf{w}_{t_{1}+1}\) are the rows of order \(8\), and \(\mathbf{w}_{t_{1}+2},\dots,\mathbf{w}_{t_{1}+t_{2}+2}\) are the rows of order \(4\), where \(\mathbf{w}_{t_{1}+t_{2}+2}=(\mathbf{0},\mathbf{1}\mid\mathbf{1},\mathbf{0}, \mathbf{1},\mathbf{2},\mathbf{3}\mid\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\). Since any element of \(\mathcal{G}^{t_{1},t_{2}+1,1}\) can be written as \(\mathbf{z}+\lambda\mathbf{w}_{t_{1}+t_{2}+2}\), where \(\lambda\in\{0,1,2,3\}\), then \(\mathbf{z}=\sum_{i=2}^{t_{1}+t_{2}+1}r_{i}\mathbf{w}_{i}=(u_{1},u_{1}\mid x_{1},u_{2},u_{2},u_{2},u_{2}\mid u_{3},u_{3},u_{3},u_{3})\), where \(r_{i}\in\mathbb{Z}_{8}\) for \(i\in\{2,\dots,t_{1}+1\}\) and \(r_{i}\in\{0,1,2,3\}\) for \(i\in\{t_{1}+2,\dots,t_{1}+t_{2}+1\}\). By construction, \(x_{1}\) is generated by the rows of \(M^{\prime}_{1}=\{\mathbf{z}^{T}:\mathbf{z}\in\{0,2\}^{t_{1}+t_{2}}\}\). Thus, \(x_{1}=\mathbf{0}\) or \(x_{1}\) is a permutation of \((2\mathbb{Z}_{4},\stackrel{{ 2^{t_{1}+t_{2}-1}}}{{\dots}},2 \mathbb{Z}_{4})\).
For Item 1, we have that there exists at least one \(i\in\{2,\dots,t_{1}+1\}\) such that \(r_{i}\in\{1,3,5,7\}\). Therefore, since \(x_{1}\) is of order at most two, \(x_{1}\neq\mathbf{0}\).
For Item 2, we have that \(r_{i}\in 2\mathbb{Z}_{8}\) for all \(i\in\{2,\ldots,t_{1}+1\}\) and \(r_{i}\in\{0,1,2,3\}\) for all \(i\in\{t_{1}+2,\ldots,t_{1}+t_{2}+1\}\). Note that, since \(x_{1}\) and \(u_{1}\) are of order at most two, \(x_{1}\neq\mathbf{0}\) if and only if there exists at least one \(i\) for \(i\in\{t_{1}+2,\ldots,t_{1}+t_{2}+1\}\) such that \(r_{i}\in\{1,3\}\), or equivalently, if and only if \(u_{1}\neq\mathbf{0}\).
For Item 3, we have that \(r_{i}\in 4\mathbb{Z}_{8}=\{0,4\}\) for all \(i\in\{2,\ldots,t_{1}+1\}\) and \(r_{i}\in\{0,2\}\) for all \(i\in\{t_{1}+2,\ldots,t_{1}+t_{2}+1\}\). Therefore, since \(x_{1}\) is of order at most two, \(x_{1}=\mathbf{0}\).
**Lemma 3.9**: _Let \(t_{1}\geq 1\) be an integer. Let \(\mathcal{H}^{t_{1},0,1}\) be the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code of type \((\alpha_{1},\alpha_{2},\,\alpha_{3};\,t_{1},0,1)\) generated by the matrix \(A^{t_{1},0,1}\). Let \(\mathbf{u}=(u_{1}\mid u_{2}\mid u_{3})\in\mathcal{G}^{t_{1},0,1}\). Then,_
1. _if_ \(o(\mathbf{u})=8\)_, then_ \(u_{1}\) _contains every element of_ \(\mathbb{Z}_{2}\) _the same number of times,_ \(u_{2}\) _is a permutation of_ \((\mu,\stackrel{m}{\ldots},\mu,2\mathbb{Z}_{4},\stackrel{n}{\ldots},2\mathbb{Z}_{4},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4},\stackrel{r}{\ldots},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4})\) _for some integers_ \(m,n,r\geq 0\) _and_ \(\mu\in\{1,3\}\)_, and_ \(u_{3}\) _is a permutation of_ \((\mu^{\prime},\stackrel{m^{\prime}}{\ldots},\mu^{\prime},\mu^{\prime\prime},\stackrel{m^{\prime}}{\ldots},\mu^{\prime\prime},2\mathbb{Z}_{8},\stackrel{n^{\prime}}{\ldots},2\mathbb{Z}_{8},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8},\stackrel{r^{\prime}}{\ldots},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8})\) _for some integers_ \(m^{\prime},n^{\prime},r^{\prime}\geq 0\) _and_ \(\mu^{\prime},\mu^{\prime\prime}\in\{1,3,5,7\}\)_._
2. _if_ \(o(\mathbf{u})=4\)_, then_ \(u_{1}=\mathbf{0}\)_,_ \(u_{2}\) _contains the element in_ \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) _exactly_ \(\frac{1}{2}(\frac{\alpha_{1}}{2}+\alpha_{2})=4^{t_{1}-1}\) _times and_ \(\frac{\alpha_{2}}{2}-\frac{\alpha_{1}}{4}=4^{t_{1}-1}-2^{t_{1}-1}\) _times the element_ \(0\)_, and_ \(u_{3}\) _is a permutation of_ \((\mu,\stackrel{{ m}}{{\ldots}},\mu,4\mathbb{Z}_{8},\stackrel{{ n}}{{\ldots}},4\mathbb{Z}_{8},2\mathbb{Z}_{8}\backslash 4 \mathbb{Z}_{8},\stackrel{{ r}}{{\ldots}},2\mathbb{Z}_{8} \backslash 4\mathbb{Z}_{8})\) _for some integers_ \(m,n,r\geq 0\) _and_ \(\mu\in\{2,6\}\)_._
3. _if_ \(o(\mathbf{u})=2\)_, then_ \(u_{1}=\mathbf{0}\)_,_ \(u_{2}=\mathbf{0}\)_, and_ \(u_{3}\) _contains the element in_ \(4\mathbb{Z}_{8}\backslash\{0\}=\{4\}\) _exactly_ \(\frac{1}{4}(\frac{\alpha_{1}}{2}+\alpha_{2}+2\alpha_{3})=8^{t_{1}-1}\) _times and_ \(\frac{\alpha_{3}}{2}-\frac{1}{4}(\frac{\alpha_{1}}{2}+\alpha_{2})=8^{t_{1}-1}- 4^{t_{1}-1}\) _times the element_ \(0\)_._
**Proof.** We prove this lemma by induction on \(t_{1}\geq 1\). If \(t_{1}=1\), then by Lemma 3.1, \(\alpha_{1}=2\), \(\alpha_{2}=1\), \(\alpha_{3}=1\), and \(\mathcal{G}^{1,0,1}=\langle(0,1\mid 1\mid 1)\rangle\). Let \(\mathbf{u}=(u_{1}\mid u_{2}\mid u_{3})\in\mathcal{G}^{1,0,1}\). Then, \(\mathbf{u}=\lambda(0,1\mid 1\mid 1)\), where \(\lambda\in\mathbb{Z}_{8}\). Thus, we have that \(u_{1}=\lambda(0,1)\), \(u_{2}=(\lambda)\), and \(u_{3}=(\lambda)\). If \(o(\mathbf{u})=8\), then \(\lambda\in\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8}\). Therefore, \(\mathbf{u}\) satisfies property 1. If \(o(\mathbf{u})=4\), then \(\lambda\in\{2,6\}\). In this case, \(u_{1}=(0,0)\), \(u_{2}=(2)\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) exactly \(1=\frac{1}{2}(\frac{\alpha_{1}}{2}+\alpha_{2})\) time and \(0=\frac{\alpha_{2}}{2}-\frac{\alpha_{1}}{4}\) times the element \(0\), and \(u_{3}=(\lambda)\). Thus, \(\mathbf{u}\) satisfies property 2. If \(o(\mathbf{u})=2\), then \(\lambda=4\). In this case, \(u_{1}=(0,0)\), \(u_{2}=(0)\), and \(u_{3}=(4)\) contains the element in \(4\mathbb{Z}_{8}\backslash\{0\}=\{4\}\) exactly \(1=\frac{1}{4}(\frac{\alpha_{1}}{2}+\alpha_{2}+2\alpha_{3})\) time and \(0=\frac{\alpha_{3}}{2}-\frac{1}{4}(\frac{\alpha_{1}}{2}+\alpha_{2})\) times the element \(0\). Thus, \(\mathbf{u}\) satisfies property 3. Therefore, the lemma is true for \(t_{1}=1\).
Assume that the lemma holds for the code \({\cal H}^{t_{1},0,1}\) of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},0,1)\) with \(t_{1}\geq 1\). By Lemma 3.1, we have that
\[2^{t_{1}}=\alpha_{1},4^{t_{1}}=\alpha_{1}+2\alpha_{2},\mbox{ and }8^{t_{1}}= \alpha_{1}+2\alpha_{2}+4\alpha_{3}. \tag{18}\]
Now, we have to show that the lemma is also true for the code \({\cal H}^{t_{1}+1,0,1}\).
Let \({\bf v}=(v_{1}\mid v_{2}\mid v_{3})\in{\cal G}^{t_{1}+1,0,1}\). We can write
\[{\bf v}={\bf z}+\lambda{\bf w},\]
where \({\bf z}=(u_{1},u_{1}\mid x_{1},u_{2},u_{2},u_{2},u_{2}\mid x_{2},u_{3},.^{8}.,u_{3})\), \({\bf w}=({\bf 0},{\bf 1}\mid{\bf 1},{\bf 0},{\bf 1},{\bf 2},{\bf 3}\mid{\bf 1 },{\bf 0},\ldots,{\bf 7})\), \({\bf u}=(u_{1}\mid u_{2}\mid u_{3})\in{\cal G}^{t_{1},0,1}\), \(\lambda\in{\mathbb{Z}}_{8}\), \(x_{1}\in(2{\mathbb{Z}}_{4})^{2^{t_{1}}}\) such that either \(x_{1}={\bf 0}\) or \(x_{1}\) is a permutation of \((2{\mathbb{Z}}_{4},{}^{2^{t_{1}-1}_{\cdot}}_{\cdot},2{\mathbb{Z}}_{4})\), and \(x_{2}\in(2{\mathbb{Z}}_{8})^{4^{t_{1}}}\) such that either \(x_{2}={\bf 0}\) or \(x_{2}\) is a permutation of \((2{\mathbb{Z}}_{8},{}^{4^{t_{1}-1}_{\cdot}}_{\cdot},2{\mathbb{Z}}_{8})\) or \((4{\mathbb{Z}}_{8},{}^{2\cdot 4^{t_{1}-1}_{\cdot}}_{\cdot},4{\mathbb{Z}}_{8})\). Then, \(v_{1}=(u_{1},u_{1})+\lambda({\bf 0},{\bf 1})\) and, for \(i\in\{2,3\}\),
\[v_{i}=(x_{i-1},u_{i},.^{2^{i}}_{\cdot}.,u_{i})+\lambda({\bf 1},{\bf 0}, \ldots,{\bf 2^{i}}-{\bf 1}). \tag{19}\]
If \({\bf z}={\bf 0}\), then \({\bf v}=\lambda{\bf w}\) and it is easy to see that \({\bf v}\) satisfies property 1 if \(\lambda\in{\mathbb{Z}}_{8}\backslash 2{\mathbb{Z}}_{8}=\{1,3,5,7\}\), property 2 if \(\lambda\in\{2,6\}\), and property 3 if \(\lambda=4\). Therefore, we focus on the case when \({\bf z}\neq{\bf 0}\).
Case 1: Assume that \(o({\bf v})=8\). We have two subcases: when \(o({\bf z})\) is arbitrary and \(\lambda\in{\mathbb{Z}}_{8}\backslash 2{\mathbb{Z}}_{8}\), and when \(o({\bf z})=8\) and \(\lambda\in 2{\mathbb{Z}}_{8}\). In both subcases, note that \(v_{1}\) contains every element of \({\mathbb{Z}}_{2}\) the same number of times. For the first subcase, we have that \((u_{i},.^{2^{i}}_{\cdot}.,u_{i})+\lambda({\bf 0},\ldots,{\bf 2^{i}}-{\bf 1})\), for \(i\in\{2,3\}\), is a permutation of \(({\mathbb{Z}}_{2^{i}},.^{\alpha_{i}}_{\cdot}.,{\mathbb{Z}}_{2^{i}})\) by Lemma 3.3. Thus, from (19), \(v_{i}\) is a permutation of \((x_{i-1}+\lambda{\bf 1},{\mathbb{Z}}_{2^{i}},.^{\alpha_{i}}_{\cdot},{\mathbb{Z}}_{2^{i}})\). Since either \(x_{i-1}+\lambda{\bf 1}=\lambda{\bf 1}\), or \(x_{i-1}+\lambda{\bf 1}\) is a permutation of \(({\mathbb{Z}}_{2^{i}}\backslash 2{\mathbb{Z}}_{2^{i}},{}^{2^{(i-1)(t_{1}-1)}}_{ \cdot},{\mathbb{Z}}_{2^{i}}\backslash 2{\mathbb{Z}}_{2^{i}})\), \({\bf v}\) satisfies property 1.
For the second subcase when \(o({\bf v})=8\), that is, when \(o({\bf z})=8\) and \(\lambda\in 2{\mathbb{Z}}_{8}\), we have that \(o({\bf u})=8\) and, by Item 1 of Lemma 3.7, \(x_{i-1}\) is a permutation of \((2{\mathbb{Z}}_{2^{i}},{}^{2^{(i-1)(t_{1}-1)}}_{\cdot},2{\mathbb{Z}}_{2^{i}})\) for \(i\in\{2,3\}\). By induction hypothesis, \({\bf u}\) satisfies property 1 and then \(u_{2}\) is a permutation of
\[(\mu,.^{m}_{\cdot}.,\mu,2{\mathbb{Z}}_{4},.^{n}_{\cdot}.,2{\mathbb{Z}}_{4},{ \mathbb{Z}}_{4}\backslash 2{\mathbb{Z}}_{4},.^{r}_{\cdot}.,{\mathbb{Z}}_{4}\backslash 2{ \mathbb{Z}}_{4}),\]
where \(m,n,r\geq 0\) and \(\mu\in\{1,3\}\), and \(u_{3}\) is a permutation of
\[(\mu^{\prime},.^{m^{\prime}}_{\cdot},\mu^{\prime},\mu^{\prime\prime},.^{m^{ \prime}}_{\cdot},\mu^{\prime\prime},2{\mathbb{Z}}_{8},.^{n^{\prime}}_{\cdot}., 2{\mathbb{Z}}_{8},{\mathbb{Z}}_{8}\backslash 2{\mathbb{Z}}_{8},.^{r^{\prime}}_{\cdot}.,{\mathbb{Z}}_{8} \backslash 2{\mathbb{Z}}_{8}),\]
where \(m^{\prime},n^{\prime},r^{\prime}\geq 0\) and \(\mu^{\prime},\mu^{\prime\prime}\in\{1,3,5,7\}\). From (19), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+\lambda(\mathbf{1},\mathbf{0},\mathbf{1}, \mathbf{2},\mathbf{3}).\) If \(\lambda\in\{0,4\}\), then \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})\) in \(\mathbf{v}\) satisfies the same property as \(u_{2}\) in \(\mathbf{u}\); that is, property 1. If \(\lambda\in\{2,6\}\), then \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+(\mathbf{2},\mathbf{0},\mathbf{2}, \mathbf{0},\mathbf{2})\). By Item 1 of Lemma 3.2, we have that \(x_{1}+\mathbf{2}\) is a permutation of \((2\mathbb{Z}_{4},\stackrel{{ 2^{t_{1}-1}}}{{\dots}},2\mathbb{Z}_{4})\). Thus, by Lemma 3.4, \(v_{2}\) is a permutation of
\[(2\mathbb{Z}_{4},\stackrel{{ 4n+2^{t_{1}-1}}}{{\dots}},2 \mathbb{Z}_{4},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4},\stackrel{{ 4r+2m}}{{\dots}}, \mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4}).\]
Therefore, for \(\lambda\in 2\mathbb{Z}_{8}\), \(v_{2}\) satisfies property 1. Now, we consider the coordinates in \(v_{3}\). From (19), \(v_{3}=(x_{2},u_{3},\stackrel{{ 8}}{{\dots}},u_{3})+\lambda(\mathbf{1}, \mathbf{0},\dots,\mathbf{7}).\) By Item 1 of Lemma 3.2, we have that, for \(\lambda\in 2\mathbb{Z}_{8}\), \(x_{2}+\lambda\mathbf{1}\) is a permutation of \((2\mathbb{Z}_{8},\stackrel{{ 4^{t_{1}-1}}}{{\dots}},2\mathbb{Z}_{8})\). If \(\lambda=0\), it is easy to see that \(v_{3}\) satisfies property 1. Note that \(\lambda(\mathbf{0},\dots,\mathbf{7})\) is a permutation of \((\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6},\mathbf{0},\mathbf{2},\mathbf{4}, \mathbf{6})\) if \(\lambda\in\{2,6\}\), and a permutation of \((\mathbf{0},\mathbf{4},\mathbf{0},\mathbf{4},\mathbf{0},\mathbf{4},\mathbf{0}, \mathbf{4})\) if \(\lambda=4\). Thus, by Lemma 3.5, \(v_{3}\) satisfies property 1. Therefore, if \(o(\mathbf{v})=8\), then \(\mathbf{v}\) satisfies property 1.
Case 2: Assume that \(o(\mathbf{v})=4\). We have two subcases: when \(o(\mathbf{z})=4\) and \(\lambda\in 2\mathbb{Z}_{8}\), and when \(o(\mathbf{z})=2\) and \(\lambda\in\{2,6\}\). For the first subcase, since \(o(\mathbf{z})=4\), we have that \(o(\mathbf{u})=4\). Moreover, \(x_{1}=\mathbf{0}\) and \(x_{2}\) is a permutation of \((4\mathbb{Z}_{8},\stackrel{{ 2^{t_{1}-1}}}{{\dots}},4\mathbb{Z}_{8})\) by Item 2 of Lemma 3.7. By induction hypothesis, \(\mathbf{u}\) satisfies property 2. Then, \(u_{1}=\mathbf{0}\), \(u_{2}\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) exactly \(4^{t_{1}-1}\) times and \(4^{t_{1}-1}-2^{t_{1}-1}\) times the element \(0\), and \(u_{3}\) is a permutation of
\[(\mu,\stackrel{{ m}}{{\dots}},\mu,4\mathbb{Z}_{8},\stackrel{{ n}}{{\dots}},4\mathbb{Z}_{8},2\mathbb{Z}_{8}\backslash 4 \mathbb{Z}_{8},\stackrel{{ r}}{{\dots}},2\mathbb{Z}_{8}\backslash 4 \mathbb{Z}_{8})\]
for some integers \(m,n,r\geq 0\) and \(\mu\in\{2,6\}\). Since \(v_{1}=(u_{1},u_{1})+\lambda(\mathbf{0},\mathbf{1})\), \(u_{1}=\mathbf{0}\), and \(\lambda\in 2\mathbb{Z}_{8}\), we have that \(v_{1}=\mathbf{0}\). From (19), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+\lambda(\mathbf{1},\mathbf{0},\mathbf{1},\mathbf{2},\mathbf{3}).\) If \(\lambda\in\{0,4\}\), then \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})\). Since \(x_{1}=\mathbf{0}\) is of length \(2^{t_{1}}\), it is easy to see that \(v_{2}\) in \(\mathbf{v}\) satisfies the same property as \(u_{2}\) in \(\mathbf{u}\); that is, property 2. If \(\lambda\in\{2,6\}\), then \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+(\mathbf{2},\mathbf{0},\mathbf{2}, \mathbf{0},\mathbf{2})\), where \(x_{1}=\mathbf{0}\) is of length \(2^{t_{1}}\). Note that \(u_{2}+\mathbf{2}\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) as many times as \(u_{2}\) contains the element \(0\), and the element \(0\) as many times as \(u_{2}\) contains the element \(2\). Thus, \(v_{2}\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) exactly \(2^{t_{1}}+2(4^{t_{1}-1})+2(4^{t_{1}-1}-2^{t_{1}-1})=4^{t_{1}}\) times and \(2(4^{t_{1}-1})+2(4^{t_{1}-1}-2^{t_{1}-1})=4^{t_{1}}-2^{t_{1}}\) times the element \(0\). Therefore, for \(\lambda\in 2\mathbb{Z}_{8}\), \(v_{2}\) satisfies property 2. Now, we consider the coordinates in \(v_{3}\). From (19), \(v_{3}=(x_{2},u_{3},\stackrel{{ 8}}{{\dots}},u_{3})+\lambda(\mathbf{1}, \mathbf{0},\dots,\mathbf{7})\).
If \(\lambda=0\), it is easy to see that \(v_{3}\) satisfies property 2. For \(\lambda=4\), \(x_{2}+\lambda{\bf 1}\) is a permutation of \((4\mathbb{Z}_{8},\stackrel{2\cdot 4^{t_{1}-1}}{\ldots},4\mathbb{Z}_{8})\), and for \(\lambda\in\{2,6\}\), it is a permutation of
\[(2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8},\stackrel{{ 2\cdot 4^{t_{1}-1} }}{{\cdots}},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8}).\]
Note that \(\lambda({\bf 0},\ldots,{\bf 7})\) is a permutation of \(({\bf 0},{\bf 2},{\bf 4},{\bf 6},{\bf 0},{\bf 2},{\bf 4},{\bf 6})\) if \(\lambda\in\{2,6\}\), and a permutation of \(({\bf 0},{\bf 4},{\bf 0},{\bf 4},{\bf 0},{\bf 4},{\bf 0},{\bf 4})\) if \(\lambda=4\). Hence, by Lemma 3.6, \(v_{3}\) also satisfies property 2, and so does \({\bf v}\).
Now, we consider the second subcase, that is, when \(o({\bf z})=2\) and \(\lambda\in\{2,6\}\). Since \(o({\bf z})=2\), we have that \(o({\bf u})=2\). Then, by Item 3 of Lemma 3.7, \(x_{1}={\bf 0}\) and \(x_{2}={\bf 0}\). By induction hypothesis, \({\bf u}\) satisfies property 3, so \(u_{1}={\bf 0}\), \(u_{2}={\bf 0}\), and \(u_{3}\) contains the element in \(4\mathbb{Z}_{8}\backslash\{0\}=\{4\}\) exactly \(m=8^{t_{1}-1}\) times and \(m^{\prime}=8^{t_{1}-1}-4^{t_{1}-1}\) times the element \(0\). Since \(v_{1}=(u_{1},u_{1})+\lambda({\bf 0},{\bf 1})\), \(u_{1}={\bf 0}\), and \(\lambda\in\{2,6\}\), we have that \(v_{1}={\bf 0}\). From (19), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+({\bf 2},{\bf 0},{\bf 2},{\bf 0},{\bf 2})\). Since \(x_{1}={\bf 0}\) and \(u_{2}={\bf 0}\), of length \(\alpha_{1}\) and \(\alpha_{2}\), respectively, we have that \(v_{2}=({\bf 2},{\bf 0},{\bf 2},{\bf 0},{\bf 2}).\) Therefore, \(v_{2}\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) exactly \(\alpha_{1}+2\alpha_{2}=4^{t_{1}}\) times and \(2\alpha_{2}=4^{t_{1}}-2^{t_{1}}\) times the element \(0\), by (18). Therefore, \(v_{2}\) satisfies property 2. Now, we consider the coordinates in \(v_{3}\). From (19), \(v_{3}=(x_{2},u_{3},\stackrel{{ 8}}{{\cdot}},\stackrel{{ }}{{\cdot}},u_{3})+\lambda({\bf 1},{\bf 0},\ldots,{\bf 7}).\) Since \(x_{2}={\bf 0}\), \(x_{2}+\lambda{\bf 1}=(\lambda,\stackrel{{ 4^{t_{1}}}}{{\cdots}},\lambda)\). Note that \(u_{3}\) is a permutation of
\[(4,\stackrel{{ m-m^{\prime}}}{{\cdots}},4,4\mathbb{Z}_{8}, \stackrel{{ m^{\prime}}}{{\cdots}},4\mathbb{Z}_{8}).\]
Moreover, since \(\lambda\in\{2,6\}\), \(\lambda({\bf 0},\ldots,{\bf 7})\) is a permutation of \(({\bf 0},{\bf 2},{\bf 4},{\bf 6},{\bf 0},{\bf 2},{\bf 4},{\bf 6})\). Thus, by Item 1 of Lemma 3.2, \((u_{3},\stackrel{{ 8}}{{\cdot}},u_{3})+({\bf 0},{\bf 2},{\bf 4},{\bf 6 },{\bf 0},{\bf 2},{\bf 4},{\bf 6})\) is a permutation of
\[(2\mathbb{Z}_{8},\stackrel{{ 2(m-m^{\prime})+4m^{\prime}}}{{ \cdots}},2\mathbb{Z}_{8}).\]
Thus, \(v_{3}\) is a permutation of \((\lambda,\stackrel{{ 4^{t_{1}}}}{{\cdots}},\lambda,2\mathbb{Z}_{8}, \stackrel{{ 2(m-m^{\prime})+4m^{\prime}}}{{\cdots}},2\mathbb{Z}_{8})\) with \(\lambda\in\{2,6\}\), and hence \(v_{3}\) also satisfies property 2 and so does \({\bf v}\). Therefore, if \(o({\bf v})=4\), then \({\bf v}\) satisfies property 2.
Case 3: Assume that \(o({\bf v})=2\). Then, \(o({\bf z})=2\) and \(\lambda\in\{0,4\}\). Since \(o({\bf z})=2\), then \(o({\bf u})=2\). Moreover, \(x_{1}={\bf 0}\) and \(x_{2}={\bf 0}\) by Item 3 of Lemma 3.7. By induction hypothesis, \({\bf u}\) satisfies property 3, and then \(u_{1}={\bf 0}\), \(u_{2}={\bf 0}\), and \(u_{3}\) contains the element in \(4\mathbb{Z}_{8}\backslash\{0\}=\{4\}\) exactly \(8^{t_{1}-1}\) times and \(8^{t_{1}-1}-4^{t_{1}-1}\) times the element \(0\). Since \(v_{1}=(u_{1},u_{1})+\lambda({\bf 0},{\bf 1})\), \(v_{1}={\bf 0}\). From (19), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+\lambda({\bf 1},{\bf 0},{\bf 1},{\bf 2},{\bf 3})\), where \(x_{1}={\bf 0}\) and \(u_{2}={\bf 0}\), so \(v_{2}={\bf 0}\). From (19), \(v_{3}=(x_{2},u_{3},\stackrel{{ 8}}{{\cdot}},u_{3})+\lambda({\bf 1},{\bf 0},\ldots,{\bf 7})\), where \(x_{2}={\bf 0}\) is of length \(4^{t_{1}}\). If \(\lambda=0\), it is easy to see that \(v_{3}\) satisfies property 3. If \(\lambda=4\)
, then \(v_{3}=(x_{2},u_{3},\stackrel{8}{\ldots},u_{3})+4(\mathbf{1},\mathbf{0},\ldots,\mathbf{7})\), where \(x_{2}+4\cdot\mathbf{1}=(4,\stackrel{4^{t_{1}}}{\ldots},4)\) and \(4(\mathbf{0},\ldots,\mathbf{7})=(\mathbf{0},\mathbf{4},\mathbf{0},\mathbf{4},\mathbf{0},\mathbf{4},\mathbf{0},\mathbf{4})\). Note that \(u_{3}+\mathbf{4}\) contains the element \(4\) as many times as \(u_{3}\) contains the element \(0\), and the element \(0\) as many times as \(u_{3}\) contains the element \(4\). Thus, \(v_{3}\) contains the element in \(4\mathbb{Z}_{8}\backslash\{0\}=\{4\}\) exactly \(4^{t_{1}}+4(8^{t_{1}-1})+4(8^{t_{1}-1}-4^{t_{1}-1})=8^{t_{1}}\) times and \(4(8^{t_{1}-1})+4(8^{t_{1}-1}-4^{t_{1}-1})=8^{t_{1}}-4^{t_{1}}\) times the element \(0\), so \(v_{3}\) satisfies property 3, and so does \(\mathbf{v}\). Therefore, if \(o(\mathbf{v})=2\), then \(\mathbf{v}\) satisfies property 3, and the lemma holds for the code \(\mathcal{H}^{t_{1}+1,0,1}\).

The next lemma extends Lemma 3.9 to the codes \(\mathcal{H}^{t_{1},t_{2},1}\) with \(t_{2}\geq 0\); for orders \(4\) and \(2\), two shapes may now occur.

**Lemma 3.10**: _Let \(t_{1}\geq 1\) and \(t_{2}\geq 0\) be integers. Let \(\mathcal{H}^{t_{1},t_{2},1}\) be the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},1)\) generated by the matrix \(A^{t_{1},t_{2},1}\). Let \(\mathbf{u}=(u_{1}\mid u_{2}\mid u_{3})\in\mathcal{G}^{t_{1},t_{2},1}\). Then,_

1. _if_ \(o(\mathbf{u})=8\)_, then (property 1a)_ \(u_{1}\) _contains every element of_ \(\mathbb{Z}_{2}\) _the same number of times,_ \(u_{2}\) _is a permutation of_ \((\mu,\stackrel{m}{\ldots},\mu,2\mathbb{Z}_{4},\stackrel{n}{\ldots},2\mathbb{Z}_{4},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4},\stackrel{r}{\ldots},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4})\) _for some integers_ \(m,n,r\geq 0\) _and_ \(\mu\in\{1,3\}\)_, and_ \(u_{3}\) _is a permutation of_ \((\mu^{\prime},\stackrel{m^{\prime}}{\ldots},\mu^{\prime},\mu^{\prime\prime},\stackrel{m^{\prime}}{\ldots},\mu^{\prime\prime},2\mathbb{Z}_{8},\stackrel{n^{\prime}}{\ldots},2\mathbb{Z}_{8},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8},\stackrel{r^{\prime}}{\ldots},\mathbb{Z}_{8}\backslash 2\mathbb{Z}_{8})\) _for some integers_ \(m^{\prime},n^{\prime},r^{\prime}\geq 0\) _and_ \(\mu^{\prime},\mu^{\prime\prime}\in\{1,3,5,7\}\)_._

2. _if_ \(o(\mathbf{u})=4\)_, then either (property 2a)_ \(u_{1}=\mathbf{0}\)_,_ \(u_{2}\) _contains the element in_ \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) _exactly_ \(4^{t_{1}+t_{2}-1}\) _times and_ \(4^{t_{1}+t_{2}-1}-2^{t_{1}+t_{2}-1}\) _times the element_ \(0\)_, and_ \(u_{3}\) _is a permutation of_ \((\mu,\stackrel{m}{\ldots},\mu,4\mathbb{Z}_{8},\stackrel{n}{\ldots},4\mathbb{Z}_{8},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8},\stackrel{r}{\ldots},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8})\) _for some integers_ \(m,n,r\geq 0\) _and_ \(\mu\in\{2,6\}\)_; or (property 2b)_ \(u_{1}\) _contains every element of_ \(\mathbb{Z}_{2}\) _the same number of times,_ \(u_{2}\) _is a permutation of_ \((\mu,\stackrel{m}{\ldots},\mu,2\mathbb{Z}_{4},\stackrel{n}{\ldots},2\mathbb{Z}_{4},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4},\stackrel{r}{\ldots},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4})\) _for some integers_ \(m,n,r\geq 0\) _and_ \(\mu\in\{1,3\}\)_, and_ \(u_{3}\) _is a permutation of_ \((4\mathbb{Z}_{8},\stackrel{t}{\ldots},4\mathbb{Z}_{8},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8},\stackrel{t^{\prime}}{\ldots},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8})\) _for some integers_ \(t,t^{\prime}\geq 0\)_._

3. _if_ \(o(\mathbf{u})=2\)_, then either (property 3a)_ \(u_{1}=\mathbf{0}\)_,_ \(u_{2}=\mathbf{0}\)_, and_ \(u_{3}\) _contains the element in_ \(4\mathbb{Z}_{8}\backslash\{0\}=\{4\}\) _exactly_ \(8^{t_{1}-1}4^{t_{2}}\) _times and_ \(8^{t_{1}-1}4^{t_{2}}-4^{t_{1}+t_{2}-1}\) _times the element_ \(0\)_; or (property 3b)_ \(u_{1}=\mathbf{0}\)_,_ \(u_{2}\) _contains the element in_ \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) _exactly_ \(4^{t_{1}+t_{2}-1}\) _times and_ \(4^{t_{1}+t_{2}-1}-2^{t_{1}+t_{2}-1}\) _times the element_ \(0\)_, and_ \(u_{3}\) _is a permutation of_ \((4\mathbb{Z}_{8},\stackrel{m}{\ldots},4\mathbb{Z}_{8})\) _for some integer_ \(m\geq 0\)_._
**Proof.** We prove this lemma by induction on \(t_{2}\geq 0\). The lemma is true for the code \({\cal H}^{t_{1},0,1}\) by Lemma 3.9. Assume that the lemma holds for the code \({\cal H}^{t_{1},t_{2},1}\) of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},1)\) with \(t_{1}\geq 1\) and \(t_{2}\geq 0\). By Lemma 3.1, we have that
\[2^{t_{1}+t_{2}}=\alpha_{1},4^{t_{1}+t_{2}}=\alpha_{1}+2\alpha_{2},\mbox{ and }8^{t_{1}}4^{t_{2}}=\alpha_{1}+2\alpha_{2}+4\alpha_{3}. \tag{20}\]
Now, we have to show that the lemma is also true for the code \({\cal H}^{t_{1},t_{2}+1,1}\).
Let \({\bf v}=(v_{1}\mid v_{2}\mid v_{3})\in{\cal G}^{t_{1},t_{2}+1,1}\). We can write
\[{\bf v}={\bf z}+\lambda{\bf w},\]
where \({\bf z}=(u_{1},u_{1}\mid x_{1},u_{2},u_{2},u_{2},u_{2}\mid u_{3},u_{3},u_{3}, u_{3})\), \({\bf w}=({\bf 0},{\bf 1}\mid{\bf 1},{\bf 0},{\bf 1},{\bf 2},{\bf 3}\mid{\bf 0},{\bf 2 },{\bf 4},{\bf 6})\), \({\bf u}=(u_{1}\mid u_{2}\mid u_{3})\in{\cal G}^{t_{1},t_{2},1}\), \(\lambda\in\{0,1,2,3\}\), and \(x_{1}\in(2\mathbb{Z}_{4})^{2^{t_{1}+t_{2}}}\) such that either \(x_{1}={\bf 0}\) or a permutation of \((2\mathbb{Z}_{4},{}^{2^{t_{1}+t_{2}-1}},2\mathbb{Z}_{4})\). Then,
\[\begin{split} v_{1}&=(u_{1},u_{1})+\lambda({\bf 0},{ \bf 1}),\\ v_{2}&=(x_{1},u_{2},u_{2},u_{2},u_{2})+\lambda({\bf 1},{ \bf 0},{\bf 1},{\bf 2},{\bf 3}),\\ v_{3}&=(u_{3},u_{3},u_{3},u_{3})+\lambda({\bf 0},{ \bf 2},{\bf 4},{\bf 6}).\end{split} \tag{21}\]
If \({\bf z}={\bf 0}\), then \({\bf v}=\lambda{\bf w}\). It is easy to see that \({\bf v}\) satisfies property 2b if \(\lambda\in\{1,3\}\) and property 3b if \(\lambda=2\). Therefore, we focus on the case when \({\bf z}\neq{\bf 0}\).
Case 1: Assume that \(o({\bf v})=8\). Then, \(o({\bf z})=8\) and \(\lambda\in\{0,1,2,3\}\). We have that \(o({\bf u})=8\) and, by Item 1 of Lemma 3.8, \(x_{1}\) is a permutation of \((2\mathbb{Z}_{4},{}^{2^{t_{1}+t_{2}-1}},2\mathbb{Z}_{4})\). By induction hypothesis, \({\bf u}\) satisfies property 1a. Then, \(u_{1}\) contains every element of \(\mathbb{Z}_{2}\) the same number of times, \(u_{2}\) is a permutation of
\[(\mu,.{}^{m}_{\cdot\cdot\cdot},\mu,2\mathbb{Z}_{4},.{}^{n}_{\cdot\cdot\cdot},2 \mathbb{Z}_{4},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4},.{}^{r}_{\cdot\cdot\cdot}, \mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4}),\]
where \(m,n,r\geq 0\) and \(\mu\in\{1,3\}\), and \(u_{3}\) is a permutation of
\[(\mu^{\prime},.{}^{m^{\prime}}_{\cdot\cdot\cdot},\mu^{\prime},\mu^{\prime \prime},.{}^{m^{\prime}}_{\cdot\cdot\cdot},\mu^{\prime\prime},2\mathbb{Z}_{8},.{}^{n^{\prime}}_{\cdot\cdot\cdot},2\mathbb{Z}_{8},\mathbb{Z}_{8}\backslash 2 \mathbb{Z}_{8},.{}^{r^{\prime}}_{\cdot\cdot\cdot},\mathbb{Z}_{8}\backslash 2 \mathbb{Z}_{8}),\]
where \(m^{\prime},n^{\prime},r^{\prime}\geq 0\) and \(\mu^{\prime},\mu^{\prime\prime}\in\{1,3,5,7\}\). First, since \(v_{1}=(u_{1},u_{1})+\lambda({\bf 0},{\bf 1})\), \(v_{1}\) contains every element of \(\mathbb{Z}_{2}\) the same number of times, for any \(\lambda\in\{0,1,2,3\}\). Second, from (21), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+\lambda({\bf 1},{\bf 0},{\bf 1},{\bf 2},{\bf 3})\). If \(\lambda=0\), then \(v_{2}\) clearly satisfies 1a. If \(\lambda\in\{1,3\}\), then we have that \((u_{2},u_{2},u_{2},u_{2})+\lambda({\bf 0},{\bf 1},{\bf 2},{\bf 3})\) is a permutation of \((\mathbb{Z}_{4},{}^{\alpha_{2}\cdot\cdot},\mathbb{Z}_{4})\) by Lemma 3.3. For \(\lambda\in\{1,3\}\), since \(x_{1}+\lambda{\bf 1}\) is a permutation of \((\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4},{}^{2^{t_{1}+t_{2}-1}},\mathbb{Z}_{4} \backslash 2\mathbb{Z}_{4})\)
by Item 3 of Lemma 3.2, we have that \(v_{2}\) satisfies property 1a. If \(\lambda=2\), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+(\mathbf{2},\mathbf{0},\mathbf{2}, \mathbf{0},\mathbf{2}).\) By Item 1 of Lemma 3.2, we have that \(x_{1}+\mathbf{2}\) is a permutation of \((2\mathbb{Z}_{4},\begin{smallmatrix}2^{t_{1}+t_{2}-1},2\mathbb{Z}_{4}\\ \cdots\end{smallmatrix},2\mathbb{Z}_{4})\). Therefore, by Lemma 3.4, \(v_{2}\) is a permutation of \((2\mathbb{Z}_{4},\begin{smallmatrix}4n+2^{t_{1}+t_{2}-1},2\mathbb{Z}_{4}, \mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4},\end{smallmatrix},\begin{smallmatrix}4r_{+}2 m,\mathbb{Z}_{4}\\ \cdots\end{smallmatrix},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4})\) and then \(v_{2}\) satisfies property 1a. Finally, we consider the coordinates in \(v_{3}\). From (21), \(v_{3}=(u_{3},u_{3},u_{3},u_{3})+\lambda(\mathbf{0},\mathbf{2},\mathbf{4}, \mathbf{6}).\) If \(\lambda=0\), then \(v_{3}\) clearly satisfies 1a. Note that \(\lambda(\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})=(\mathbf{0},\mathbf{4}, \mathbf{0},\mathbf{4})\) if \(\lambda=2\) and \(\lambda(\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\) is a permutation of \((\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\) if \(\lambda\in\{1,3\}\). Therefore, by Lemma 3.5, \(v_{3}\) satisfies property 1a, and so does \(\mathbf{v}\).
Case 2: Assume that \(o(\mathbf{v})=4\). We have two subcases: when \(o(\mathbf{z})=4\) and \(\lambda\in\{0,1,2,3\}\), and when \(o(\mathbf{z})=2\) and \(\lambda\in\{1,3\}\). For the first subcase, since \(o(\mathbf{z})=4\), \(o(\mathbf{u})=4\). By induction hypothesis, \(\mathbf{u}\) satisfies property 2a or 2b. Assume that \(\mathbf{u}\) satisfies property 2a. Then, \(u_{1}=\mathbf{0}\), \(u_{2}\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) exactly \(4^{t_{1}+t_{2}-1}\) times and \(4^{t_{1}+t_{2}-1}-2^{t_{1}+t_{2}-1}\) times the element \(0\), and \(u_{3}\) is a permutation of
\[(\mu,\begin{smallmatrix}m\\ \cdots\end{smallmatrix},\mu,4\mathbb{Z}_{8},\begin{smallmatrix}n\\ \cdots\end{smallmatrix},4\mathbb{Z}_{8},2\mathbb{Z}_{8}\backslash 4 \mathbb{Z}_{8},\begin{smallmatrix}r\\ \cdots\end{smallmatrix},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8})\]
for some integers \(m,n,r\geq 0\) and \(\mu\in\{2,6\}\). Note that, in this case, \(x_{1}=\mathbf{0}\) by Item 2 of Lemma 3.8. If \(\lambda=0\), then it is easy to see that \(\mathbf{v}\) satisfies property 2a. If \(\lambda=2\), we show that \(\mathbf{v}\) satisfies property 2a. Since \(v_{1}=(u_{1},u_{1})+\lambda(\mathbf{0},\mathbf{1})\), \(u_{1}=\mathbf{0}\), and \(\lambda=2\), we have that \(v_{1}=\mathbf{0}\). From (21), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+(\mathbf{2},\mathbf{0},\mathbf{2}, \mathbf{0},\mathbf{2})\), where \(x_{1}=\mathbf{0}\) is of length \(2^{t_{1}+t_{2}}\). Note that \(u_{2}+\mathbf{2}\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) as many times as \(u_{2}\) contains the element \(0\), and the element \(0\) as many times as \(u_{2}\) contains the element \(2\). Thus, \(v_{2}\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) exactly \(2^{t_{1}+t_{2}}+2(4^{t_{1}+t_{2}-1})+2(4^{t_{1}+t_{2}-1}-2^{t_{1}+t_{2}-1})=4 ^{t_{1}+t_{2}}\) times and \(2(4^{t_{1}+t_{2}-1})+2(4^{t_{1}+t_{2}-1}-2^{t_{1}+t_{2}-1})=4^{t_{1}+t_{2}}-2 ^{t_{1}+t_{2}}\) times the element \(0\), so \(v_{2}\) satisfies property 2a. From (21), \(v_{3}=(u_{3},u_{3},u_{3},u_{3})+(\mathbf{0},\mathbf{4},\mathbf{0},\mathbf{4})\). By Item 2 of Lemma 3.6, \(v_{3}\) is a permutation of
\[(4\mathbb{Z}_{8},\begin{smallmatrix}4n\\ \cdots\end{smallmatrix},4\mathbb{Z}_{8},2\mathbb{Z}_{8}\backslash 4 \mathbb{Z}_{8},\begin{smallmatrix}4r_{+}2m,2\mathbb{Z}_{8}\backslash 4 \mathbb{Z}_{8}).\]
Therefore, for \(\lambda=2\), \(\mathbf{v}\) satisfies property 2a. Finally, if \(\lambda\in\{1,3\}\), we show that \(\mathbf{v}\) satisfies property 2b. Since \(v_{1}=(u_{1},u_{1})+\lambda(\mathbf{0},\mathbf{1})\), \(u_{1}=\mathbf{0}\), and \(\lambda\in\{1,3\}\), we have that \(v_{1}\) contains every element of \(\mathbb{Z}_{2}\) the same number of times. From (21), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+\lambda(\mathbf{1},\mathbf{0},\mathbf{1 },\mathbf{2},\mathbf{3})\), where \(x_{1}=\mathbf{0}\) is of length \(2^{t_{1}+t_{2}}\). Since \(\lambda\in\{1,3\}\), by Lemma 3.3, we have that \(v_{2}\) is a permutation of \((\lambda,\begin{smallmatrix}2^{t_{1}+t_{2}},\cdots\end{smallmatrix},\lambda, \mathbb{Z}_{4},\begin{smallmatrix}\alpha_{2}\\ \cdots\end{smallmatrix},\mathbb{Z}_{4})\). From (21), \(v_{3}=(u_{3},u_{3},u_{3},u_{3})+\lambda(\mathbf{0},\mathbf{2},\mathbf{4}, \mathbf{6}).\) Note
that, for \(\lambda\in\{1,3\}\), \(\lambda(\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\) is a permutation of \((\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\). Thus, by Item 1 of Lemma 3.6, \(v_{3}\) satisfies property 2b, and so does \(\mathbf{v}\). Therefore, if \(o(\mathbf{u})=4\) and \(\mathbf{u}\) satisfies property 2a, we have that \(\mathbf{v}\) satisfies either property 2a or 2b.
We continue with the first subcase, when \(o(\mathbf{z})=4\) and \(\lambda\in\{0,1,2,3\}\). Again, we have that \(o(\mathbf{u})=4\). Now, we assume that \(\mathbf{u}\) satisfies property 2b. Then, \(u_{1}\) contains every element of \(\mathbb{Z}_{2}\) the same number of times, \(u_{2}\) is a permutation of
\[(\mu,\overset{m}{\dots},\mu,2\mathbb{Z}_{4},\overset{n}{\dots},2\mathbb{Z}_{4 },\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4},\overset{r}{\dots},\mathbb{Z}_{4} \backslash 2\mathbb{Z}_{4})\]
for some integers \(m,n,r\geq 0\) and \(\mu\in\{1,3\}\), and \(u_{3}\) is a permutation of \((4\mathbb{Z}_{8},\overset{t}{\dots},4\mathbb{Z}_{8},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8},\overset{t^{\prime}}{\dots},2\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8})\) for some integers \(t,t^{\prime}\geq 0\). Note that, in this case, \(x_{1}\) is a permutation of \((2\mathbb{Z}_{4},\overset{2^{t_{1}+t_{2}-1}}{\dots},2\mathbb{Z}_{4})\) by Item 2 of Lemma 3.8. Now, we show that \(\mathbf{v}\) satisfies property 2b. Since \(v_{1}=(u_{1},u_{1})+\lambda(\mathbf{0},\mathbf{1})\) and \(u_{1}\) contains every element of \(\mathbb{Z}_{2}\) the same number of times, we have that \(v_{1}\) contains every element of \(\mathbb{Z}_{2}\) the same number of times, for any \(\lambda\in\{0,1,2,3\}\). From (21), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+\lambda(\mathbf{1},\mathbf{0},\mathbf{1},\mathbf{2},\mathbf{3})\). If \(\lambda=0\), it is clear that \(v_{2}\) satisfies property 2b. Note that \(x_{1}+\lambda\mathbf{1}\) is a permutation of \((2\mathbb{Z}_{4},\overset{2^{t_{1}+t_{2}-1}}{\dots},2\mathbb{Z}_{4})\) if \(\lambda=2\), and a permutation of \((\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4},\overset{2^{t_{1}+t_{2}-1}}{\dots},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4})\) if \(\lambda\in\{1,3\}\). If \(\lambda=2\), then by Lemma 3.4, \((u_{2},u_{2},u_{2},u_{2})+(\mathbf{0},\mathbf{2},\mathbf{0},\mathbf{2})\) is a permutation of
\[(2\mathbb{Z}_{4},\overset{4n}{\dots},2\mathbb{Z}_{4},\mathbb{Z}_{4}\backslash 2 \mathbb{Z}_{4},\overset{4r+2m}{\dots},\mathbb{Z}_{4}\backslash 2\mathbb{Z}_{4}).\]
If \(\lambda\in\{1,3\}\), then by Lemma 3.3, \((u_{2},u_{2},u_{2},u_{2})+\lambda(\mathbf{0},\mathbf{1},\mathbf{2},\mathbf{3})\) is a permutation of \((\mathbb{Z}_{4},\overset{\alpha_{2}}{\dots},\mathbb{Z}_{4})\). Therefore, \(v_{2}\) satisfies property 2b. From (21), \(v_{3}=(u_{3},u_{3},u_{3},u_{3})+\lambda(\mathbf{0},\mathbf{2},\mathbf{4}, \mathbf{6}).\) If \(\lambda=0\), it is clear that \(v_{3}\) satisfies property 2b. Note that \(\lambda(\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\) is a permutation of \((\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\) if \(\lambda\in\{1,3\}\), and \(\lambda(\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})=(\mathbf{0},\mathbf{4}, \mathbf{0},\mathbf{4})\) if \(\lambda=2\). Therefore, by Lemma 3.6, \(v_{3}\) satisfies property 2b, and so does \(\mathbf{v}\).
Now, we consider the second subcase when \(o(\mathbf{v})=4\), that is, when \(o(\mathbf{z})=2\) and \(\lambda\in\{1,3\}\). Since \(o(\mathbf{z})=2\), \(o(\mathbf{u})=2\). By induction hypothesis, \(\mathbf{u}\) satisfies property 3a or 3b. Assume that \(\mathbf{u}\) satisfies property 3a. Then, \(u_{1}=\mathbf{0}\), \(u_{2}=\mathbf{0}\), and \(u_{3}\) contains the element in \(4\mathbb{Z}_{8}\backslash\{0\}=\{4\}\) exactly \(m=8^{t_{1}-1}4^{t_{2}}\) times and \(m^{\prime}=8^{t_{1}-1}4^{t_{2}}-4^{t_{1}+t_{2}-1}\) times the element \(0\). By Item 3 of Lemma 3.8, we have that \(x_{1}=\mathbf{0}\). Since \(v_{1}=(u_{1},u_{1})+\lambda(\mathbf{0},\mathbf{1})\), \(u_{1}=\mathbf{0}\), and \(\lambda\in\{1,3\}\), we have that \(v_{1}\) contains every element of \(\mathbb{Z}_{2}\) the same number of times. From (21), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+\lambda(\mathbf{1},\mathbf{0},\mathbf{1},\mathbf{2},\mathbf{3})\), where
\(x_{1}=\mathbf{0}\) is of length \(2^{t_{1}+t_{2}}\). By Lemma 3.3, we have that \(v_{2}\) is a permutation of
\[(\lambda,\overset{2^{t_{1}+t_{2}}}{\cdots},\lambda,\mathbb{Z}_{4},\overset{ \alpha_{2}}{\cdots},\mathbb{Z}_{4}),\]
where \(\lambda\in\{1,3\}\). From (21), \(v_{3}=(u_{3},u_{3},u_{3},u_{3})+\lambda(\mathbf{0},\mathbf{2},\mathbf{4}, \mathbf{6})\). Note that \(u_{3}\) is a permutation of \((4,\overset{m-m^{\prime}}{\cdots},4,4\mathbb{Z}_{8},\overset{m^{\prime}}{ \cdots},4\mathbb{Z}_{8})\) and, since \(\lambda\in\{1,3\}\), \(\lambda(\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\) is a permutation of \((\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\). Thus, by Item 1 of Lemma 3.2, \(v_{3}=(u_{3},u_{3},u_{3},u_{3})+(\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\) is a permutation of \((2\mathbb{Z}_{8},\overset{m+m^{\prime}}{\cdots},2\mathbb{Z}_{8})\), so \(v_{3}\) satisfies property 2b, and so does \(\mathbf{v}\). Therefore, if \(o(\mathbf{u})=2\) and \(\mathbf{u}\) satisfies property 3a, we have that \(\mathbf{v}\) satisfies property 2b.
We continue with the second subcase, when \(o(\mathbf{z})=2\) and \(\lambda\in\{1,3\}\). Again, we have that \(o(\mathbf{u})=2\). Now, we assume that \(\mathbf{u}\) satisfies property 3b. Then, \(u_{1}=\mathbf{0}\), \(u_{2}\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) exactly \(4^{t_{1}+t_{2}-1}\) times and \(4^{t_{1}+t_{2}-1}-2^{t_{1}+t_{2}-1}\) times the element \(0\), and \(u_{3}\) is a permutation of \((4\mathbb{Z}_{8},\overset{m}{\cdots},4\mathbb{Z}_{8})\) for some \(m\geq 0\). By Item 3 of Lemma 3.8, we have that \(x_{1}=\mathbf{0}\). Since \(v_{1}=(u_{1},u_{1})+\lambda(\mathbf{0},\mathbf{1})\), \(u_{1}=\mathbf{0}\), and \(\lambda\in\{1,3\}\), we have that \(v_{1}\) contains every element of \(\mathbb{Z}_{2}\) the same number of times. From (21), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+\lambda(\mathbf{1},\mathbf{0},\mathbf{1 },\mathbf{2},\mathbf{3})\), where \(x_{1}=\mathbf{0}\) is of length \(2^{t_{1}+t_{2}}\). By Lemma 3.3, we have that \(v_{2}\) is a permutation of
\[(\lambda,\overset{2^{t_{1}+t_{2}}}{\cdots},\lambda,\mathbb{Z}_{4},\overset{ \alpha_{2}}{\cdots},\mathbb{Z}_{4}),\]
where \(\lambda\in\{1,3\}\). From (21), \(v_{3}=(u_{3},u_{3},u_{3},u_{3})+\lambda(\mathbf{0},\mathbf{2},\mathbf{4}, \mathbf{6})\). Since \(\lambda\in\{1,3\}\), \(\lambda(\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\) is a permutation of \((\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\). Thus, by Item 1 of Lemma 3.2, \(v_{3}=(u_{3},u_{3},u_{3},u_{3})+(\mathbf{0},\mathbf{2},\mathbf{4},\mathbf{6})\) is a permutation of \((2\mathbb{Z}_{8},\overset{2m}{\cdots},2\mathbb{Z}_{8})\). Therefore, \(v_{3}\) satisfies property 2b, and so does \(\mathbf{v}\).
Case 3: Assume that \(o(\mathbf{v})=2\). Then, \(o(\mathbf{z})=2\) and \(\lambda\in\{0,2\}\). Since \(o(\mathbf{z})=2\), we have that \(o(\mathbf{u})=2\) and, by Item 3 of Lemma 3.8, \(x_{1}=\mathbf{0}\). By induction hypothesis, \(\mathbf{u}\) satisfies property 3a or 3b. Assume that \(\mathbf{u}\) satisfies property 3a. Then, \(u_{1}=\mathbf{0}\), \(u_{2}=\mathbf{0}\), and \(u_{3}\) contains the element in \(4\mathbb{Z}_{8}\backslash\{0\}=\{4\}\) exactly \(m=8^{t_{1}-1}4^{t_{2}}\) times and \(m^{\prime}=8^{t_{1}-1}4^{t_{2}}-4^{t_{1}+t_{2}-1}\) times the element \(0\). If \(\lambda=0\), then \(\mathbf{v}=(\mathbf{0}\mid\mathbf{0}\mid v_{3})\) satisfies property 3a, since \(v_{3}\) contains \(4m\) times the element \(4\) and \(4m^{\prime}\) the element \(0\). Now, we assume that \(\lambda=2\). Since \(v_{1}=(u_{1},u_{1})+\lambda(\mathbf{0},\mathbf{1})\), \(u_{1}=\mathbf{0}\), and \(\lambda=2\), we have that \(v_{1}=\mathbf{0}\). From (21), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+(\mathbf{2},\mathbf{0},\mathbf{2}, \mathbf{0},\mathbf{2})\), where \(x_{1}=\mathbf{0}\) is of length \(2^{t_{1}+t_{2}}\) and \(u_{2}=\mathbf{0}\). Therefore, \(v_{2}\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) exactly \(\alpha_{1}+2\alpha_{2}=4^{t_{1}+t_{2}}\) times and \(2\alpha_{2}=4^{t_{1}+t_{2}}-2^{t_{1}+t_{2}}\) times the element \(0\), by (20). From (21), \(v_{3}=(u_{3},u_{3},u_{3},u_{3})+(\mathbf{0},\mathbf{4},\mathbf{0},\mathbf{4}).\) Note that \(u_{3}\) is a permutation of
\[(4,\overset{m-m^{\prime}}{\cdots},4,4\mathbb{Z}_{8},\overset{m^{\prime}}{ \cdots},4\mathbb{Z}_{8}).\]
Thus, by Item 1 of Lemma 3.2, \(v_{3}\) is a permutation of \((4\mathbb{Z}_{8},\stackrel{{ 2m+2m^{\prime}}}{{\cdots}},4\mathbb{Z}_{8})\), so \(v_{3}\) satisfies property 3b, and so does \(\mathbf{v}\). Therefore, if \(o(\mathbf{u})=2\) and \(\mathbf{u}\) satisfies property 3a, we have that \(\mathbf{v}\) satisfies property 3b.
We continue with the case when \(o(\mathbf{z})=2\) and \(\lambda\in\{0,2\}\). Again, we have that \(o(\mathbf{u})=2\) and \(x_{1}=\mathbf{0}\). Now, we assume that \(\mathbf{u}\) satisfies property 3b. Then, \(u_{1}=\mathbf{0}\), \(u_{2}\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) exactly \(4^{t_{1}+t_{2}-1}\) times and \(4^{t_{1}+t_{2}-1}-2^{t_{1}+t_{2}-1}\) times the element \(0\), and \(u_{3}\) is a permutation of \((4\mathbb{Z}_{8},\stackrel{{ m}}{{\cdots}},4\mathbb{Z}_{8})\) for some \(m\geq 0\). If \(\lambda=0\), then it is easy to see that \(\mathbf{v}\) satisfies property 3b. Now, we assume that \(\lambda=2\). Since \(v_{1}=(u_{1},u_{1})+\lambda(\mathbf{0},\mathbf{1})\), \(u_{1}=\mathbf{0}\), and \(\lambda=2\), we have that \(v_{1}=\mathbf{0}\). From (21), \(v_{2}=(x_{1},u_{2},u_{2},u_{2},u_{2})+(\mathbf{2},\mathbf{0},\mathbf{2}, \mathbf{0},\mathbf{2})\), where \(x_{1}=\mathbf{0}\) is of length \(2^{t_{1}+t_{2}}\). Note that \(u_{2}+\mathbf{2}\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) as many times as \(u_{2}\) contains the element \(0\), and the element \(0\) as many times as \(u_{2}\) contains the element \(2\). Therefore, \(v_{2}\) contains the element in \(2\mathbb{Z}_{4}\backslash\{0\}=\{2\}\) exactly \(2^{t_{1}+t_{2}}+2(4^{t_{1}+t_{2}-1})+2(4^{t_{1}+t_{2}-1}-2^{t_{1}+t_{2}-1})=4^ {t_{1}+t_{2}}\) times and \(2(4^{t_{1}+t_{2}-1})+2(4^{t_{1}+t_{2}-1}-2^{t_{1}+t_{2}-1})=4^{t_{1}+t_{2}}-2^ {t_{1}+t_{2}}\) times the element \(0\). From (21), \(v_{3}=(u_{3},u_{3},u_{3},u_{3})+(\mathbf{0},\mathbf{4},\mathbf{0},\mathbf{4}).\) By Item 1 of Lemma 3.2, \(v_{3}\) is a permutation of \((4\mathbb{Z}_{8},\stackrel{{ 4m}}{{\cdots}},4\mathbb{Z}_{8})\). Therefore, \(v_{3}\) satisfies property 3b, and so does \(\mathbf{v}\). This completes the proof.
**Proposition 3.2**: _Let \(t_{1}\geq 1\) and \(t_{2}\geq 0\) be integers. The \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code \(\mathcal{H}^{t_{1},t_{2},1}\), generated by the matrix \(A^{t_{1},t_{2},1}\), is a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive Hadamard code._
**Proof.** Let \(\mathcal{H}^{t_{1},t_{2},1}\) be the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},1)\) and \(H^{t_{1},t_{2},1}\) be the corresponding \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear code of length \(N\). We have that \(N=\alpha_{1}+2\alpha_{2}+4\alpha_{3}\). The cardinality of \(H^{t_{1},t_{2},1}\) is \(8^{t_{1}}\cdot 4^{t_{2}}\cdot 2=2(\alpha_{1}+2\alpha_{2}+4\alpha_{3})=2N\) by Lemma 3.1. By Proposition 2.1, the minimum distance of \(H^{t_{1},t_{2},1}\) is equal to the minimum weight of \(H^{t_{1},t_{2},1}\). Therefore, we just need to prove that the minimum weight of \(H^{t_{1},t_{2},1}\) is \(N/2\).
We can write that \(\mathcal{H}^{t_{1},t_{2},1}=\mathcal{G}^{t_{1},t_{2},1}\cup(\mathcal{G}^{t_{1},t_{2},1}+(\mathbf{1}\mid\mathbf{2}\mid\mathbf{4}))\). By Corollary 2.2, \(H^{t_{1},t_{2},1}=\Phi(\mathcal{G}^{t_{1},t_{2},1})\cup(\Phi(\mathcal{G}^{t_{1},t_{2},1})+\mathbf{1})\). Let \(\mathbf{u}=(u_{1}\mid u_{2}\mid u_{3})\in\mathcal{H}^{t_{1},t_{2},1}\backslash\{\mathbf{0},(\mathbf{1}\mid\mathbf{2}\mid\mathbf{4})\}\). We show that \(\mathrm{wt}_{H}(\Phi(\mathbf{u}))=N/2\). First, consider \(\mathbf{u}\in\mathcal{G}^{t_{1},t_{2},1}\backslash\{\mathbf{0}\}\). If \(o(\mathbf{u})=8\), then by Lemma 3.10, \(u_{1}\) contains every element of \(\mathbb{Z}_{2}\) the same number of times, and for \(i\in\{2,3\}\), \(u_{i}\) contains every element of \(2^{i-1}\mathbb{Z}_{2^{i}}\) exactly \(s_{i}\) times, \(s_{i}\geq 0\), and the remaining \(\alpha_{i}-2s_{i}\) coordinates of \(u_{i}\) are from \(\mathbb{Z}_{2^{i}}\backslash 2^{i-1}\mathbb{Z}_{2^{i}}\). Thus, from the definition of \(\Phi\), we have that \(\mathrm{wt}_{H}(\Phi(\mathbf{u}))=\alpha_{1}/2+2s_{2}+(\alpha_{2}-2s_{2})\cdot 1+4s_{3}+(\alpha_{3}-2s_{3})\cdot 2=\)
\(\alpha_{1}/2+\alpha_{2}+2\alpha_{3}=N/2\). If \(o({\bf u})=4\), then \({\bf u}\) satisfies property 2a or 2b given in Lemma 3.10. If \({\bf u}\) satisfies property 2a, then \(u_{3}\) contains every element of \(4\mathbb{Z}_{8}\) exactly \(m\) times, \(m\geq 0\), and the remaining coordinates of \(u_{3}\) are from \(\mathbb{Z}_{8}\backslash 4\mathbb{Z}_{8}\). Thus, \({\rm wt}_{H}(\Phi({\bf u}))=\alpha_{1}/2+\alpha_{2}+4m+(\alpha_{3}-2m)\cdot 2=\alpha_{1}/2+\alpha_{2}+2\alpha_{3}=N/2\). Otherwise, if \({\bf u}\) satisfies property 2b, then \({\rm wt}_{H}(\Phi({\bf u}))=\alpha_{1}/2+2n+(\alpha_{2}-2n)\cdot 1+4t+(\alpha_{3}-2t)\cdot 2=N/2\). If \(o({\bf u})=2\), then \({\bf u}\) satisfies property 3a or 3b given in Lemma 3.10. If \({\bf u}\) satisfies property 3a, then \({\rm wt}_{H}(\Phi({\bf u}))=\frac{1}{4}(\alpha_{1}/2+\alpha_{2}+2\alpha_{3})\cdot 4=N/2\). Otherwise, if \({\bf u}\) satisfies property 3b, then \({\rm wt}_{H}(\Phi({\bf u}))=2\cdot\frac{1}{2}(\alpha_{1}/2+\alpha_{2})+4m+(\alpha_{3}-2m)\cdot 2=N/2\).
Finally, note that \({\rm wt}_{H}(\Phi({\bf u})+{\bf 1})=N/2\). Therefore, we have that the weight of every element of \(H^{t_{1},t_{2},1}\backslash\{{\bf 0},{\bf 1}\}\) is \(N/2\), that is, the minimum weight of \(H^{t_{1},t_{2},1}\) is \(N/2\).
**Proposition 3.3**: _Let \(t_{1}\geq 1\), \(t_{2}\geq 0\), and \(t_{3}\geq 1\) be integers. If \({\cal H}^{t_{1},t_{2},t_{3}}\) is a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive Hadamard code of type \((\alpha_{1},\alpha_{2},\alpha_{3};t_{1},t_{2},t_{3})\), then, by applying construction (5), \({\cal H}^{t_{1},t_{2},t_{3}+1}\) is a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive Hadamard code of type \((2\alpha_{1},2\alpha_{2},2\alpha_{3};t_{1},t_{2},t_{3}+1)\)._
**Proof.** By construction (5), \({\cal H}^{t_{1},t_{2},t_{3}+1}\) is a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-additive code of type \((\alpha^{\prime}_{1},\alpha^{\prime}_{2},\alpha^{\prime}_{3};t_{1},t_{2},t_{3}+1)\), where \(\alpha^{\prime}_{1}=2\alpha_{1}\), \(\alpha^{\prime}_{2}=2\alpha_{2}\), and \(\alpha^{\prime}_{3}=2\alpha_{3}\).
Since \(H^{t_{1},t_{2},t_{3}}\) is a Hadamard code of length \(N=\alpha_{1}+2\alpha_{2}+4\alpha_{3}\), then its minimum distance is \(N/2\) and \(|H^{t_{1},t_{2},t_{3}}|=2N\). Note that \(H^{t_{1},t_{2},t_{3}+1}\) is a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear code of length \(N^{\prime}=\alpha^{\prime}_{1}+2\alpha^{\prime}_{2}+4\alpha^{\prime}_{3}=2N\) and \(|H^{t_{1},t_{2},t_{3}+1}|=8^{t_{1}}4^{t_{2}}2^{t_{3}+1}=2|H^{t_{1},t_{2},t_{3} }|=2\cdot 2N=2N^{\prime}\). By Proposition 2.1, the minimum distance of \(H^{t_{1},t_{2},t_{3}+1}\) is equal to the minimum weight of \(H^{t_{1},t_{2},t_{3}+1}\). Now, we only have to prove that the minimum weight of \(H^{t_{1},t_{2},t_{3}+1}\) is \(N^{\prime}/2\). Let \({\cal H}^{t_{1},t_{2},t_{3}}=({\cal H}_{1}\mid{\cal H}_{2}\mid{\cal H}_{3})\). Note that
\[{\cal H}^{t_{1},t_{2},t_{3}+1}=\bigcup_{\lambda\in\{0,1\}}(({\cal H}_{1},{\cal H }_{1}\mid{\cal H}_{2},{\cal H}_{2}\mid{\cal H}_{3},{\cal H}_{3})+\lambda({\bf 0},{\bf 1} \mid{\bf 0},{\bf 2}\mid{\bf 0},{\bf 4})).\]
By Corollaries 2.1 and 2.2,
\[H^{t_{1},t_{2},t_{3}+1} =\bigcup_{\lambda\in\{0,1\}}(\Phi({\cal H}_{1},{\cal H}_{1}\mid{ \cal H}_{2},{\cal H}_{2}\mid{\cal H}_{3},{\cal H}_{3})+\lambda({\bf 0},{\bf 1},{\bf 0},{ \bf 1},{\bf 0},{\bf 1}))\] \[=A_{0}\cup A_{1}, \tag{22}\]
where \(A_{\lambda}=\Phi({\cal H}_{1},{\cal H}_{1}\mid{\cal H}_{2},{\cal H}_{2}\mid{\cal H}_{3},{\cal H}_{3})+\lambda({\bf 0},{\bf 1},{\bf 0},{\bf 1},{\bf 0},{\bf 1}),\lambda\in\{0,1\}\). Next, we show that the minimum weight of \(A_{\lambda}\) is \(N^{\prime}/2\). Any element in \(A_{\lambda}\)
is of the form \(\Phi(u_{1},u_{1}\mid u_{2},u_{2}\mid u_{3},u_{3})+\lambda({\bf 0},{\bf 1},{\bf 0},{ \bf 1},{\bf 0},{\bf 1})\), for \({\bf u}=(u_{1}\mid u_{2}\mid u_{3})\in({\cal H}_{1}\mid{\cal H}_{2}\mid{\cal H}_{ 3})\). Let \({\bf u}=(u_{1}\mid u_{2}\mid u_{3})\in({\cal H}_{1}\mid{\cal H}_{2}\mid{\cal H}_ {3})\backslash\{{\bf 0}\}\). When \(\lambda=0\), we have that \({\rm wt}_{H}(\Phi(u_{1},u_{1}\mid u_{2},u_{2}\mid u_{3},u_{3}))=2{\rm wt}_{H}( \Phi({\bf u}))\). Thus, the minimum weight of \(A_{0}\) is \(2\cdot N/2=N^{\prime}/2\). Otherwise, when \(\lambda=1\), we have that \({\rm wt}_{H}(\Phi(u_{1},u_{1}\mid u_{2},u_{2}\mid u_{3},u_{3})+({\bf 0},{\bf 1 },{\bf 0},{\bf 1},{\bf 0},{\bf 1}))={\rm wt}_{H}(\Phi({\bf u}))+\alpha_{1}-{\rm wt }_{H}(u_{1})+2\alpha_{2}-{\rm wt}_{H}(\Phi_{2}(u_{2}))+4\alpha_{3}-{\rm wt}_{H }(\Phi_{3}(u_{3}))={\rm wt}_{H}(\Phi({\bf u}))+\alpha_{1}+2\alpha_{2}+4\alpha_{ 3}-{\rm wt}_{H}(\Phi({\bf u}))=N=N^{\prime}/2\). Thus, the minimum weight of \(A_{1}\) is \(N^{\prime}/2\). Therefore, from (22), the minimum weight of \(H^{t_{1},t_{2},t_{3}+1}\) is \(N^{\prime}/2\).
**Theorem 3.1**: _Let \(t_{1}\geq 1\), \(t_{2}\geq 0\), and \(t_{3}\geq 1\) be integers. The \({\mathbb{Z}}_{2}{\mathbb{Z}}_{4}{\mathbb{Z}}_{8}\)-additive code \({\cal H}^{t_{1},t_{2},t_{3}}\), generated by the matrix \(A^{t_{1},t_{2},t_{3}}\), is a \({\mathbb{Z}}_{2}{\mathbb{Z}}_{4}{\mathbb{Z}}_{8}\)-additive Hadamard code._
**Proof.** It follows from Propositions 3.2 and 3.3.
**Example 3.2**: _The \({\mathbb{Z}}_{2}{\mathbb{Z}}_{4}{\mathbb{Z}}_{8}\)-additive code \({\cal H}^{1,0,1}\) generated by the matrix \(A^{1,0,1}\), given in (2), is a \({\mathbb{Z}}_{2}{\mathbb{Z}}_{4}{\mathbb{Z}}_{8}\)-additive Hadamard code of type \((2,1,1;1,0,1)\). We can write \({\cal H}^{1,0,1}=\bigcup_{\alpha\in{\mathbb{Z}}_{2}}({\cal A}+\alpha\,{\bf 1})\), where \({\cal A}=\{\lambda(0,1\mid 1\mid 1):\lambda\in{\mathbb{Z}}_{8}\}\). Thus, \(H^{1,0,1}=\Phi({\cal H}^{1,0,1})=\bigcup_{\alpha\in{\mathbb{Z}}_{2}}(\Phi({ \cal A})+\alpha\,{\bf 1})\), where \(\Phi({\cal A})\) consists of all the rows of the Hadamard matrix_
\[H(2,4)=\left(\begin{array}{cccccccc}0&0&0&0&0&0&0&0\\ 0&1&0&1&0&1&0&1\\ 0&0&1&1&0&0&1&1\\ 0&1&1&0&0&1&1&0\\ 0&0&0&0&1&1&1&1\\ 0&1&0&1&1&0&1&0\\ 0&0&1&1&1&1&0&0\\ 0&1&1&0&1&0&0&1\end{array}\right).\]
_Note that \(\Phi({\cal A})\) is linear and the minimum distance of \(\Phi({\cal A})\) is \(4\), so \(H^{1,0,1}\) is a binary linear Hadamard code of length \(8\)._
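As an illustrative check of Example 3.2 (a minimal sketch, not part of the original construction), the binary code \(H^{1,0,1}\) can be rebuilt numerically: the matrix \(H(2,4)\) above coincides with the binary image of the order-8 Sylvester-type Hadamard matrix, so appending the complements of its rows and checking all pairwise Hamming distances recovers the stated minimum distance of 4.

```python
import numpy as np
from itertools import combinations

# Rows of the binary Hadamard matrix H(2,4) from Example 3.2
# (0/1 image of the order-8 Sylvester-type matrix).
H = (1 - np.array([[(-1) ** bin(i & j).count("1") for j in range(8)]
                   for i in range(8)])) // 2

# H^{1,0,1}: the rows of H(2,4) together with their complements.
code = np.vstack([H, 1 - H])

# Verify length 8, |code| = 16 and minimum Hamming distance 4.
dists = [np.sum(a != b) for a, b in combinations(code, 2)]
print(code.shape, min(dists))   # expected: (16, 8) 4
```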
**Proposition 3.4**: _Let \(t_{1}\geq 1\), \(t_{2}\geq 0\), and \(t_{3}\geq 1\) be integers. Let \(H^{t_{1},t_{2},t_{3}}\) be a \({\mathbb{Z}}_{2}{\mathbb{Z}}_{4}{\mathbb{Z}}_{8}\)-linear Hadamard code of length \(2^{t}\). Then, \(t+1=3t_{1}+2t_{2}+t_{3}\)._
**Proof.** Since \(H^{t_{1},t_{2},t_{3}}\) is a binary Hadamard code of length \(2^{t}\), we have that \(|H^{t_{1},t_{2},t_{3}}|=2\cdot 2^{t}=2^{t+1}\). Note that \(|H^{t_{1},t_{2},t_{3}}|=2^{3t_{1}+2t_{2}+t_{3}}\), and hence \(t+1=3t_{1}+2t_{2}+t_{3}\).
Now, we recall the following theorem in order to compare the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes (with \(\alpha_{1}\neq 0\), \(\alpha_{2}\neq 0\) and \(\alpha_{3}\neq 0\)) with the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard codes (with \(\alpha_{1}\neq 0\), \(\alpha_{2}\neq 0\)).
**Theorem 3.2**: _[_14_]_ _Let \(t\geq 3\) and \(t_{2}\in\{0,\ldots,\lfloor t/2\rfloor\}\). Let \(H^{t_{1},t_{2}}\) be the nonlinear \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard code of length \(2^{t}\) and type \((\alpha_{1},\alpha_{2};t_{1},t_{2})\), where \(\alpha_{1}=2^{t-t_{1}}\), \(\alpha_{2}=2^{t-1}-2^{t-t_{1}-1}\), and \(t_{2}=t+1-2t_{1}\). Then,_
\[\mathrm{rank}(H^{t_{1},t_{2}})=t_{2}+2t_{1}+\binom{t_{1}}{2}\ \ \text{and}\ \ \ \ker(H^{t_{1},t_{2}})=t_{1}+t_{2}.\]
Also, we recall the construction of the \(\mathbb{Z}_{2^{s}}\)-linear Hadamard codes with \(s\geq 2\) studied in [7], and the following theorem given in [17], in order to compare these codes with the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes having \(\alpha_{1}\neq 0\), \(\alpha_{2}\neq 0\), and \(\alpha_{3}\neq 0\). Let \(T_{i}=\{j\cdot 2^{i-1}\,:\,j\in\{0,1,\ldots,2^{s-i+1}-1\}\}\) for all \(i\in\{1,\ldots,s\}\). Note that \(T_{1}=\{0,\ldots,2^{s}-1\}\). Let \(t_{1}\), \(t_{2}\),\(\ldots\),\(t_{s}\) be non-negative integers with \(t_{1}\geq 1\). Consider the matrix \(\bar{A}^{t_{1},\ldots,t_{s}}\) whose columns are exactly all the vectors of the form \(\mathbf{z}^{T}\), \(\mathbf{z}\in\{1\}\times T_{1}^{t_{1}-1}\times T_{2}^{t_{2}}\times\cdots\times T _{s}^{t_{s}}\). Let \(\bar{\mathcal{H}}^{t_{1},\ldots,t_{s}}\) be the \(\mathbb{Z}_{2^{s}}\)-additive code of type \((n;t_{1},\ldots,t_{s})\) generated by the matrix \(\bar{A}^{t_{1},\ldots,t_{s}}\). Let \(\bar{H}^{t_{1},\ldots,t_{s}}=\Phi(\bar{\mathcal{H}}^{t_{1},\ldots,t_{s}})\) be the corresponding \(\mathbb{Z}_{2^{s}}\)-linear Hadamard code.
**Theorem 3.3**: _[_17_]_ _Let \(\bar{H}^{t_{1},\ldots,t_{s}}\) be the \(\mathbb{Z}_{2^{s}}\)-linear Hadamard code, with \(s\geq 2\) and \(t_{s}\geq 1\). Then, for all \(\ell\in\{1,\ldots,t_{s}\}\), \(\bar{H}^{t_{1},\ldots,t_{s}}\) is permutation equivalent to the \(\mathbb{Z}_{2^{s+\ell}}\)-linear Hadamard code \(\bar{H}^{1,0^{\ell-1},t_{1}-1,t_{2},\ldots,t_{s-1},t_{s}-\ell}\)._
For \(5\leq t\leq 11\), Tables 1 and 3 given in [7] show all possible values of \((t_{1},\ldots,t_{s})\) corresponding to nonlinear \(\mathbb{Z}_{2^{s}}\)-linear Hadamard codes, with \(s\geq 2\), of length \(2^{t}\). For each of them, the values \((r,k)\) are shown, where \(r\) is the rank and \(k\) is the dimension of the kernel. Note that if two codes have different values \((r,k)\), they are not equivalent. The following example shows that all the nonlinear \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes of length \(2^{11}\), with \(\alpha_{1}\neq 0\), \(\alpha_{2}\neq 0\), and \(\alpha_{3}\neq 0\), are not equivalent to any \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes of any other type, any \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard code, with \(\alpha_{1}\neq 0\) and \(\alpha_{2}\neq 0\), and any \(\mathbb{Z}_{2^{s}}\)-linear Hadamard code, with \(s\geq 2\), of the same length \(2^{11}\).
**Example 3.3**: _Consider \(t=11\). By solving the equation \(t+1=3t_{1}+2t_{2}+t_{3}\) given in Proposition 3.4, all \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes of length \(2^{11}\) are the ones in_
\[T=\{H^{1,0,9},H^{1,1,7},H^{1,2,5},H^{1,3,3},H^{1,4,1},H^{2,0,6},H^{2,1,4},H^{2,2,2 },H^{3,0,3},H^{3,1,1}\}.\]
_By using Magma, their corresponding values of \((r,k)\), where \(r\) is the rank and \(k\) is the dimension of the kernel, are \((12,12)\), \((14,9)\), \((17,8)\), \((21,7)\), \((26,6)\), \((17,8)\), \((22,7)\), \((28,6)\), \((28,6)\), and \((37,5)\), respectively. The code \(H^{1,0,9}\) is the only linear code in \(T\) since its rank coincides with the dimension of its kernel. By using Magma, we can check that the following codes in each pair are nonequivalent to each other: \((H^{1,2,5},H^{2,0,6})\), \((H^{2,2,2},H^{3,0,3})\). Therefore, none of the \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes of length \(2^{11}\) is equivalent to another \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard code of any other type._
_Let \(\bar{T}=T\setminus\{H^{1,0,9}\}\). Similarly, by solving equation \(t+1=2t_{1}+t_{2}\) given in Theorem 3.2, all nonlinear \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard codes of length \(2^{11}\) are \(H^{2,8}\), \(H^{3,6}\), \(H^{4,4}\) and \(H^{5,2}\), and by Theorem 3.2, their corresponding values of \((r,k)\) are \((13,10)\), \((15,9)\), \((18,8)\), and \((22,7)\), respectively. Note that if two codes have different values \((r,k)\), they are not equivalent. By using Magma, we can check that \(H^{2,1,4}\) and \(H^{5,2}\) are nonequivalent. Therefore, all the codes in \(\bar{T}\) are nonequivalent to any \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear Hadamard code of length \(2^{11}\)._
_Finally, note that all the codes in \(\bar{T}\), except \(H^{1,1,7}\) and \(H^{2,1,4}\), are not equivalent to any \(\mathbb{Z}_{2^{s}}\)-linear Hadamard code, with \(s\geq 2\), of length \(2^{11}\), since they have different values of \((r,k)\). The \(\mathbb{Z}_{2^{s}}\)-linear Hadamard codes of length \(2^{11}\), having the same values \((r,k)=(14,9)\) as \(H^{1,1,7}\), are \(\bar{H}^{2,0,6}\), \(\bar{H}^{1,1,0,5}\), \(\bar{H}^{1,0,1,0,4}\), \(\bar{H}^{1,0,0,0,1,0,2}\), and \(\bar{H}^{1,0,0,0,0,1,0,0}\), which are equivalent to each other by Theorem 3.3. The \(\mathbb{Z}_{4}\)-linear Hadamard code \(\bar{H}^{6,0}\) is the only \(\mathbb{Z}_{2^{s}}\)-linear Hadamard code of length \(2^{11}\), having the same values \((r,k)=(22,7)\) as \(H^{2,1,4}\). However, by using Magma, we can check that the following codes in each pair are nonequivalent to each other: \((H^{1,1,7},\bar{H}^{2,0,6})\), \((H^{2,1,4},\bar{H}^{6,0})\)._
_Therefore, none of the nonlinear \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard codes of length \(2^{11}\) is equivalent to a \(\mathbb{Z}_{2}\mathbb{Z}_{4}\mathbb{Z}_{8}\)-linear Hadamard code of any other type, nor to any \(\mathbb{Z}_{2}\mathbb{Z}_{4}\)-linear or \(\mathbb{Z}_{2^{s}}\)-linear Hadamard code, with \(s\geq 2\), of length \(2^{11}\)._
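The enumeration of admissible types in Example 3.3 can be reproduced mechanically from Proposition 3.4; the following sketch (an illustration, not part of the original text) lists all triples \((t_{1},t_{2},t_{3})\) with \(t_{1}\geq 1\), \(t_{2}\geq 0\), \(t_{3}\geq 1\), and \(3t_{1}+2t_{2}+t_{3}=12\).

```python
# Enumerate all (t1, t2, t3) with t1 >= 1, t2 >= 0, t3 >= 1 satisfying
# t + 1 = 3*t1 + 2*t2 + t3 (Proposition 3.4) for t = 11.
t = 11
types = [(t1, t2, t3)
         for t1 in range(1, (t + 1) // 3 + 1)
         for t2 in range(0, (t + 1 - 3 * t1) // 2 + 1)
         for t3 in [t + 1 - 3 * t1 - 2 * t2]
         if t3 >= 1]
print(len(types), types)
# expected: the 10 types listed in Example 3.3, from (1, 0, 9) to (3, 1, 1)
```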
|
2310.10614
|
Understanding an Acquisition Function Family for Bayesian Optimization
|
Bayesian optimization (BO) developed as an approach for the efficient
optimization of expensive black-box functions without gradient information. A
typical BO paper introduces a new approach and compares it to some alternatives
on simulated and possibly real examples to show its efficacy. Yet on a
different example, this new algorithm might not be as effective as the
alternatives. This paper looks at a broader family of approaches to explain the
strengths and weaknesses of algorithms in the family, with guidance on what
choices might work best on different classes of problems.
|
Jiajie Kong, Tony Pourmohamad, Herbert K. H. Lee
|
2023-10-16T17:37:02Z
|
http://arxiv.org/abs/2310.10614v1
|
# Understanding an Acquisition Function Family for Bayesian Optimization
###### Abstract
Bayesian optimization (BO) developed as an approach for the efficient optimization of expensive black-box functions without gradient information. A typical BO paper introduces a new approach and compares it to some alternatives on simulated and possibly real examples to show its efficacy. Yet on a different example, this new algorithm might not be as effective as the alternatives. This paper looks at a broader family of approaches to explain the strengths and weaknesses of algorithms in the family, with guidance on what choices might work best on different classes of problems.
**Keywords:** Black-box function, expected improvement, improvement function, Gaussian processes
*Corresponding author(s). E-mail(s): [email protected];
Contributing authors: [email protected]; [email protected];
\({}^{\dagger}\)These authors contributed equally to this work.
## 1 Introduction
Expensive black-box functions arise in many scientific disciplines where computer models are needed to model complex physical systems (Gramacy, 2020; Pourmohamad and Lee, 2021). These computer models (or black-box computer codes) are typically deployed when direct experimentation of the physical system under study is prohibitive. For example, geological phenomena, such as earthquakes and volcanic eruptions, are not reproducible physical experiments, and so computer models based
on seismology and volcanology are sometimes used to study these events. Typically, the black-box functions describing the complex system under study are highly complex, multi-modal, and difficult to understand, which makes optimizing these black-box functions a challenging problem. The optimization problem becomes even more difficult when the black-box functions are computationally expensive to evaluate and no gradient information is available. Given the computational expense of evaluating these types of black-box functions, there is a clear need for efficient sequential optimization algorithms that do not require many function evaluations.
In the context of optimizing expensive black-box functions, a popular solution for this type of problem is to use Bayesian optimization (BO) (Mockus et al., 1978). BO is an efficient sequential design strategy for optimizing expensive black-box functions, in few steps, that does not require gradient information (Brochu et al., 2010). More precisely, BO is well suited for solving optimization problems of the following form
\[x^{*}\in\operatorname*{argmin}_{x\in\mathcal{X}}f(x) \tag{1}\]
where \(\mathcal{X}\subset\mathbb{R}^{d}\) is a known, bounded region such that \(f:\mathcal{X}\rightarrow\mathbb{R}\) denotes a scalar-valued objective function. Here, we regard \(f(x)\) as the output of evaluating the objective function at input \(x\). Furthermore, we treat \(f(x)\) as a black-box function that only returns function evaluations of the objective function \(f\) and does not provide any gradient information about it, i.e., we focus on the case of derivative free optimization (Conn et al., 2009). BO proceeds in solving (1) by iteratively developing a "cheap-to-compute" model, or _surrogate model_(Gramacy, 2020), of the objective function \(f\), and at each step of this iterative process, using predictions from the surrogate model to maximize an acquisition (or utility) function, \(a(x)\), that measures how promising each location, \(x\in\mathcal{X}\), in the input space is if it were to be the next chosen point to evaluate.
Clearly, the success of the BO algorithm is heavily tied to the efficiency of the acquisition function for guiding the search (Schonlau et al., 1998; Taddy et al., 2009; Srinivas et al., 2010; Snoek et al., 2012; Henning and Schuler, 2012; Hernandez-Lobato et al., 2014, to name a few). A good acquisition function should accurately reflect our beliefs about which is the best next input to evaluate, while also striking a balance between exploration (global search) and exploitation (local search). With this in mind, one of the most widely used acquisition functions in BO was developed by Jones et al. (1998), namely the expected improvement (EI) acquisition function (Section 3.1). As the name suggests, a new candidate input, \(x\), is chosen such that it maximizes the expected improvement (i.e., reduction) in the solution to the optimization problem in (1) over other possible candidate inputs. Here, expected is synonymous with average and so the EI acquisition function can be viewed as a point estimate of the average improvement. A natural extension would then be to think about quantifying the uncertainty (say with a confidence or credible interval, for example) for the point estimate, yet, little work in this direction has been done (Noe and Husmeier, 2019; Marisu and Pun, 2023). To this end, this paper proposes a new BO acquisition function that naturally, and efficiently, incorporates the associated uncertainty for the point estimate of the EI acquisition function. The development of this new acquisition function will
also lead to a general framework for constructing an acquisition function family for dealing with uncertainty in the EI acquisition function.
We emphasize that the primary goal of this paper is to provide intuition for parameters in a family of acquisition functions, to better explain how different acquisition functions work and on what types of problems each one will work best. This is not intended to be another paper that introduces a new acquisition function and attempts to show that it is better than existing functions. We are interested in understanding existing and new functions. As no acquisition function is optimal on every possible problem, it is helpful to know which acquisition function is best suited for different types of problems.
The remainder of this article is organized as follows. In Section 2, we introduce the two integral components for understanding the inner working of a BO algorithm. Section 3 outlines the concept of expected improvement, how and why we might quantify the uncertainty in our improvement, and establishes an acquisition function family based on improvement functions. Section 4 details the strengths and weaknesses of different algorithm settings in the acquisition function family based on simulated examples. Lastly, Section 5 finishes with some discussion.
## 2 Components of Bayesian Optimization
Section 2 introduces the two main components that are essential for conducting Bayesian optimization.
### Acquisition Functions
The performance of any BO algorithm is inherently tied to its acquisition function. An acquisition function, \(a_{n}(x)\), encodes a measure of belief of how promising an input \(x\) is at minimizing the objective function \(f(x)\) in (1). At the \(n^{th}\) iteration of the BO algorithm, the best next input \(x_{n}\) to evaluate is chosen such that
\[x_{n}=\operatorname*{argmax}_{x\in\mathcal{X}}a_{n-1}(x). \tag{2}\]
Here, the strategy in (2) is to choose the input that maximizes the acquisition function, since maximizing the acquisition function is akin to maximizing our belief about where the best next input to evaluate lies for minimizing the problem in (1).
Obviously, different choices (i.e., functional forms) of acquisition function will inherently lead to different beliefs in which is the best next input to evaluate, and many are suggested in the literature (for example, Schonlau et al., 1998; Jones et al., 1998; Frazier et al., 2008; Srinivas et al., 2010; Henning and Schuler, 2012; Kandasamy et al., 2018). However, all useful acquisition functions have one feature in common which is that they make use of an exploitation-exploration trade-off. The exploitation-exploration trade-off says that a good acquisition function should trade-off between searching the input space globally (exploration) and searching the input space locally (exploitation). Too much exploration and the BO algorithm will likely not converge to a solution for (1), and too much exploitation leads to BO algorithms that tend to
get stuck in local modes (i.e., local minima) of the input space and thus never find the global solution to (1).
A popular choice of acquisition function that directly enforces the exploitation-exploration trade-off is the EI acquisition function (Jones et al., 1998). This acquisition function provides a basis for exploration of a family of other acquisition functions, and it also serves as a benchmark for the effectiveness of other functions. The finer details of the EI acquisition function are discussed in Section 3.1.
### Gaussian Process Surrogate Modeling
Once an acquisition function for BO has been chosen, the next step is to develop a strategy for maximizing the acquisition function. For the optimization problem in (2), the BO algorithm essentially embeds an optimization problem inside of an already challenging and computationally expensive optimization problem in (1). Given these computational challenges associated with (1), it is necessary that the optimization problem in (2) be a much easier and faster problem to solve. With the expensive nature of evaluating inputs \(x\), the BO algorithm relies on developing a surrogate model (Gramacy, 2020; Pourmohamad and Lee, 2021) that relates the inputs \(x\) to the outputs \(f(x)\) and can be used to make predictions of the outputs, say \(f(x^{*})\), at untried inputs, \(x^{*}\). The typical choice of surrogate model for BO has been the Gaussian process (GP) (Santner et al., 2003). GPs are typically viewed as a highly flexible nonparametric regression model which, when acting as a surrogate model, are much faster/cheaper to predict untried inputs when compared to evaluating the actual objective function \(f\). Moreover, GP surrogate models allow for uncertainty quantification in the prediction of the objective function at untried inputs which tends to be a critical component of most acquisition functions.
Fundamentally, GPs are distributions over functions such that the joint distribution at any finite set of points is a multivariate Gaussian distribution, and are defined by a mean function and a covariance function. Let \(\{x^{(i)},y^{(i)}\}_{i=1}^{n}\) denote the input-output pairs of data after \(n\) input evaluations of the objective function \(f\). The GP, \(Y(x)\), serves as the surrogate model for the data \(\{x^{(i)},y^{(i)}\}_{i=1}^{n}\) and its predictive equations can be obtained as a straightforward consequence of conditioning for multivariate normal joint distributions, that is, the predictive distribution \(Y(x)|\{x^{(i)},y^{(i)}\}_{i=1}^{n}\) at a new input \(x\) follows another Gaussian process \(Y(x)|\{x^{(i)},y^{(i)}\}_{i=1}^{n}\sim N(\mu(x),\sigma^{2}(x))\). The choice of surrogate model that we use for the remainder of this paper is the GP.
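For concreteness, the predictive quantities \(\mu_{n}(x)\) and \(\sigma_{n}^{2}(x)\) used throughout can be computed as in the sketch below. The squared-exponential kernel, fixed hyperparameters, and zero prior mean are illustrative assumptions only, not necessarily the surrogate settings used in the experiments of Section 4.

```python
import numpy as np

def gp_predict(X, y, Xstar, lengthscale=0.2, signal_var=1.0, noise_var=1e-8):
    """Posterior mean mu(x) and variance sigma^2(x) of a zero-mean GP surrogate."""
    def k(A, B):
        # Squared-exponential covariance between row-wise input sets A and B.
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

    K = k(X, X) + noise_var * np.eye(len(X))   # training covariance (jittered)
    Ks = k(Xstar, X)                           # cross covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha                            # posterior mean at Xstar
    v = np.linalg.solve(L, Ks.T)
    var = signal_var - np.sum(v ** 2, axis=0)  # posterior variance at Xstar
    return mu, np.maximum(var, 0.0)
```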
## 3 Quantifying the Improvement
### Expected Improvement
Originally introduced by Jones et al. (1998) in the computer modeling literature, the improvement function, \(I(x)=\max\{0,f_{\min}^{n}-Y(x)\}\), measures the amount of improvement of an untried input, \(x\), over the current observed minimum value \(f_{\min}^{n}=\min\{f(x_{1}),...,f(x_{n})\}\) after \(n\) runs of the computer model. Since the untried input \(x\) has not yet been observed, both \(Y(x)\) and \(I(x)\) are unknown and can be regarded as random variables. Here, the usual approach is to model \(Y(x)\), conditional
on the observed inputs \(x_{1},...,x_{n}\), using a Gaussian process surrogate model. Under this assumption, one can calculate the expectation of the improvement function, or rather the expected improvement acquisition function, i.e.,
\[\text{EI}(x)=\mathbb{E}(I(x))=(f_{\min}^{n}-\mu_{n}(x))\Phi\left(\frac{f_{\min}^ {n}-\mu_{n}(x)}{\sigma_{n}(x)}\right)+\sigma_{n}(x)\phi\left(\frac{f_{\min}^{n }-\mu_{n}(x)}{\sigma_{n}(x)}\right) \tag{3}\]
where \(\mu_{n}(x)\) and \(\sigma_{n}(x)\) are the mean and standard deviation of the predictive distribution of \(Y(x)\), and \(\Phi(\cdot)\) and \(\phi(\cdot)\) are the standard normal cdf and pdf, respectively.
Conceptually, the EI acquisition function makes a lot of sense. We should intuitively favor trying new candidate inputs \(x\) where we expect, on average, the improvement over the current best solution to (1) to be high. Moreover, the form of the EI acquisition function in (3) provides a combined measure of how promising a candidate input is by trading off between local search (\(\mu_{n}(x)\) below \(f_{\min}^{n}\)) and global search (large \(\sigma_{n}(x)\)).
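Equation (3) translates directly into code. The sketch below assumes the GP posterior mean and standard deviation are already available, for example from the predictive equations sketched earlier.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    """EI(x) from Eq. (3); mu, sigma are posterior mean/sd arrays, f_min the current best."""
    sigma = np.maximum(sigma, 1e-12)        # guard against zero predictive sd
    z = (f_min - mu) / sigma
    return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```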
### Variance of the Improvement
The EI acquisition function grew organically out of the intuitive notion that candidate inputs should be chosen based on where we should expect, on average, for the improvement to be high. However, when recalling the definition of EI in (3), the EI is the expectation of the improvement function, or rather a point estimate of the random function \(I(x)\) and so there is a quantifiable amount of uncertainty associated with our point estimate as well. In order to understand the variability (or uncertainty) associated with the EI acquisition function, one needs to calculate the variance of the improvement function \(I(x)\). Fortunately, the variance of the improvement function, under the assumption of a GP surrogate model, has the following closed form expression:
\[\text{VI}(x)=\mathbb{V}\text{ar}(I(x))=\sigma_{n}^{2}(x)\Bigg{[} \left(\left(\frac{f_{\min}^{n}-\mu_{n}(x)}{\sigma_{n}(x)}\right)^{2}+1\right) \Phi\left(\frac{f_{\min}^{n}-\mu_{n}(x)}{\sigma_{n}(x)}\right)+ \tag{4}\] \[\left(\frac{f_{\min}^{n}-\mu_{n}(x)}{\sigma_{n}(x)}\right)\phi \left(\frac{f_{\min}^{n}-\mu_{n}(x)}{\sigma_{n}(x)}\right)\Bigg{]}-(\text{EI} (x))^{2}.\]
The details of the derivation of \(\text{VI}(x)\) can be found in Schonlau et al. (1998). Interestingly, most works in the BO literature have focused mainly on expected improvement as is, with no regard to direct uncertainty quantification in the improvement function. Schonlau et al. (1998) were the first to calculate the variance of the improvement function, although quantifying the variability in the improvement was not the main goal of that paper; rather, it was a result that fell out of their methodology for calculating the power expected improvement (PEI), i.e., \(\mathbb{E}(I^{g}(x))\) for \(g>0\). The case of \(g=2\) leads to the derivation of the variance of the improvement function, \(\text{VI}(x)\), since \(\text{VI}(x)=\mathbb{E}(I^{2}(x))-[\mathbb{E}(I(x))]^{2}\).
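Equation (4) is equally direct to implement; the following sketch reuses `expected_improvement` from above, and the clipping of tiny negative values is purely a floating-point safeguard, not part of the derivation.

```python
import numpy as np
from scipy.stats import norm

def improvement_variance(mu, sigma, f_min):
    """VI(x) from Eq. (4); uses expected_improvement() from the sketch above."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_min - mu) / sigma
    ei = expected_improvement(mu, sigma, f_min)
    vi = sigma ** 2 * ((z ** 2 + 1.0) * norm.cdf(z) + z * norm.pdf(z)) - ei ** 2
    return np.maximum(vi, 0.0)   # VI is a variance; clip numerical round-off
```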
It was not until recently though that any BO algorithms made any attempt to consider incorporating the variance of the improvement function into the acquisition
function. In particular, Noe and Husmeier (2019) and Marisu and Pun (2023) take two different approaches to incorporating the uncertainty in the improvement function into their respective acquisition functions. Noe and Husmeier (2019) introduced the concept of the scaled expected improvement (SEI) acquisition function as the following:
\[\text{SEI}(x)=\frac{\text{EI}(x)}{\sqrt{\text{VI}(x)}}. \tag{5}\]
The acquisition function in (5) scales the expected improvement by the reciprocal of the standard deviation of the improvement function, and by doing so, attempts to create an acquisition function that corresponds to selecting inputs where the improvement is expected to be high with high certainty. However, it should be noted that the SEI acquisition function may lead to a BO algorithm that overly favors local search since SEI will be maximized at or near points where the variance of the improvement function is close to 0, which commonly occurs at or around inputs that have already been previously evaluated (i.e., areas of high exploitation rather than exploration). On the other hand, Marisu and Pun (2023) introduced an acquisition function (which we refer to as VEI) as a linear combination of the expectation and variance of the improvement function, i.e.,
\[\text{VEI}(x)=\text{EI}(x)-\frac{\xi}{2}\text{VI}(x). \tag{6}\]
Here, \(\xi>0\) can be thought of as a tuning parameter that controls how much uncertainty to penalize the expected improvement by, as well as the rate of convergence of the BO algorithm (similar to the tuning parameter found in the upper confidence bound acquisition function of Srinivas et al. (2010)). Likewise, the choice of \(\xi\) may change during the course of running the BO algorithm, say, as a function of the total number of inputs evaluated. However, based upon empirical evidence, Marisu and Pun (2023) recommend setting \(\xi=1\) given their experience working with the VEI acquisition function. Note that there is no theoretical guarantee that VEI has to be greater than 0. In fact, it is easy to see that when \((\xi/2)\text{VI}(x)>\text{EI}(x)\) for all \(x\), VEI will either be a negative number, or VEI will be 0 at inputs that have already been evaluated. Under this very real scenario, maximizing the VEI acquisition function will lead to choosing inputs that are close in proximity to previously evaluated inputs, which can lead to a local search algorithm that will likely get stuck in local minima of the objective function. With these problems in mind, the next section introduces a new acquisition function intended to address these issues.
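In code, (5) and (6) are thin wrappers around the EI and VI sketches above; the small-variance guard in SEI is an implementation choice rather than part of the definition.

```python
import numpy as np

def scaled_ei(mu, sigma, f_min):
    """SEI(x) from Eq. (5); reuses the EI and VI sketches above."""
    vi = improvement_variance(mu, sigma, f_min)
    return expected_improvement(mu, sigma, f_min) / np.sqrt(np.maximum(vi, 1e-16))

def variance_penalized_ei(mu, sigma, f_min, xi=1.0):
    """VEI(x) from Eq. (6) with tuning parameter xi > 0."""
    ei = expected_improvement(mu, sigma, f_min)
    return ei - 0.5 * xi * improvement_variance(mu, sigma, f_min)
```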
### Accounting for Uncertainty
Although Noe and Husmeier (2019) and Marisu and Pun (2023) describe separately different ways to account for the uncertainty in the improvement function when using expected improvement, as previously discussed, there are some clear deficiencies in both their SEI and VEI acquisition functions. More specifically, there are cases where
SEI and VEI break down due to small or large variances in the improvement function, respectively. In order to overcome some of these issues, we propose a new acquisition function which accounts for the uncertainty in the expected improvement without encountering these local search issues. Here we define an acquisition function based on uncertainty in the expected improvement (referred to as UEI) as follows:
\[\text{UEI}(x)=\text{EI}(x)+\gamma\sqrt{\text{VI}(x)}, \tag{7}\]
where \(\gamma>0\). Given that we desire uncertainty quantification around the expected improvement, the form of the UEI acquisition function in (7) is a natural choice because it resembles the upper endpoint of a credible (or confidence) interval for the point estimate of the expected improvement. That is to say, for appropriate choices of \(\gamma\), the UEI acquisition function can be viewed as maximizing the upper quantile of a \(100(1-\alpha)\%\) credible (or confidence) interval, \(\alpha\in(0,1)\), for the EI.
The individual components of the UEI acquisition function may not look drastically different from those of the SEI and VEI acquisition functions; however, the UEI acquisition function incorporates the variance of the improvement function in a very different manner. Both the SEI and VEI acquisition functions penalize the EI for having high variability in the improvement function, which sounds natural if we are interested in choosing new inputs based on high EI and high certainty in the EI. However, this penalization ultimately leads to BO algorithms which may display a higher degree of local, rather than global, search when optimizing the objective function. On the other hand, the UEI acquisition function rewards variability in the improvement function, which is beneficial for two reasons. First, since \(\gamma\sqrt{\text{VI}(x)}\geq 0\) for all \(x\), the UEI acquisition function will always be non-negative and so it will not suffer from the same local search issues that plague VEI. Likewise, when the variability in the improvement function is large, UEI will tend to favor global search. As the variability in the improvement goes to 0, the UEI will also not suffer from the local search issues encountered by SEI, since the UEI acquisition function converges to the original EI acquisition function in (3) as VI(\(x\)) goes to 0. Secondly, the form of the UEI acquisition function suggests treating the search for the best next input to evaluate less pessimistically than SEI and VEI. By this we mean that the UEI acquisition function suggests picking the best next input as the one that will give the highest potential expected improvement as measured by the upper credible interval of the EI, as opposed to focusing on areas of high expected improvement with high certainty. The reward for embracing uncertainty in this way is that UEI will function as an acquisition function that can still efficiently balance the exploration-exploitation trade-off.
As shown in Noe and Husmeier (2019) and Marisu and Pun (2023), the SEI and VEI acquisition functions are not without their merits. In fact, we believe that there does not exist a single best acquisition function for every scenario, but that the SEI and VEI acquisition functions perform better under certain scenarios. With this in mind, we envision that there is a general acquisition function family that encapsulates the class of expected improvement based acquisition functions, and that the general
form for the family of acquisition functions may provide insights into when one EI-based acquisition function is preferable to another. And so, we define an acquisition function family using the following acquisition function:
\[a(x)=\frac{\mathbb{E}(I^{w}(x))}{[\mathbb{V}\text{ar}(I(x))]^{u}}+\beta[\mathbb{ V}\text{ar}(I(x))]^{v} \tag{8}\]
with \(\beta\in\mathbb{R}\), and \(u,v,w\geq 0\). The acquisition function in (8) incorporates components of both the expectation and variance of the improvement function, while the parameters \(u,v,\) and \(w\) govern how much of each component to use, and whether the acquisition function should be a linear combination or a scaling of the uncertainty in the improvement function, or both. It is easy to see that for certain choices of \(u\), \(v\), \(w\), and \(\beta\), the acquisition function in (8) recovers the EI, PEI, SEI, VEI, and UEI acquisition functions exactly (Table 1). For PEI, any positive \(w\) could be used, and \(w=2\) is the most commonly used value, so we use that for the rest of this paper.
We discuss the roles that each value \(u\), \(v\), \(w\), and \(\beta\) play in the general form for the acquisition function family in the next section.
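Before turning to the experiments, note that the entire family (8) can be written as a single routine. The sketch below builds on the EI and VI functions above and, for the parameter settings of Table 1, reproduces the corresponding acquisition functions; only \(w\in\{1,2\}\) is handled, which covers every row of the table.

```python
import numpy as np

def acquisition_family(mu, sigma, f_min, u=0.0, v=0.5, w=1.0, beta=2.0):
    """General acquisition (8): E[I^w] / Var(I)^u + beta * Var(I)^v.
    Only w in {1, 2} is supported here: E[I] = EI and E[I^2] = EI^2 + VI."""
    ei = expected_improvement(mu, sigma, f_min)
    vi = np.maximum(improvement_variance(mu, sigma, f_min), 1e-16)
    if w == 1:
        num = ei
    elif w == 2:
        num = ei ** 2 + vi        # power expected improvement (PEI)
    else:
        raise NotImplementedError("only w = 1 or 2 implemented in this sketch")
    return num / vi ** u + beta * vi ** v

# Settings mirroring Table 1: EI -> (u=0, w=1, beta=0); SEI -> (u=0.5, w=1, beta=0);
# VEI -> (u=0, v=1, w=1, beta=-0.5); UEI -> (u=0, v=0.5, w=1, beta=2).
```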
## 4 Illustrative Examples
To demonstrate the performance, strengths, and weaknesses of the different acquisition functions, we solve several well-known optimization problems in Section 4.1 using the EI, PEI, SEI, VEI, and UEI acquisition functions, and compare and contrast their respective results. Section 4.2 explores the values of the parameters in the functional family. For VEI, \(\beta\) is a free parameter, and we focus primarily on the recommended value of \(\beta=-1/2\). For UEI, we have tried a variety of values and found that \(\beta=2\) generally works well.
### Optimization Test Problems
Given the popularity of the EI acquisition function, its performance on optimization test functions is well-known, yet the effect on this performance given the addition of quantifying the uncertainty in the improvement function through its variance is much less known. In this section, we seek to minimize six different optimization test functions taken from the optimization community (Surjanovic and Bingham, 2023) via the five different acquisition functions. Our choice of optimization test functions
\begin{table}
\begin{tabular}{l c c c c c} \hline Source & Acquisition Function & \(u\) & \(v\) & \(w\) & \(\beta\) \\ \hline Jones et al. (1998) & EI & 0 & – & 1 & 0 \\ Schonlau et al. (1998) & PEI & 0 & – & 2 & 0 \\ Noe and Husmeier (2019) & SEI & 1/2 & – & 1 & 0 \\ Marisu and Pun (2023) & VEI & 0 & 1 & 1 & \(<0\) \\ – & UEI & 0 & 1/2 & 1 & \(>0\) \\ \hline \end{tabular}
\end{table}
Table 1: The parameter settings of \(u,v,w\) and \(\beta\) for recovering the different acquisition functions from the general form for the acquisition function family.
was based on selecting optimization problems whose objective functions exhibit many local minima, solutions that lie along the boundary of the input space, valley shapes, steep ridges or drops, or any combination of these qualities, in order to induce different levels of difficulty for each of the acquisition functions. Further characteristics of the optimization test functions can be found in Table 2, and the exact forms of the test functions are given in Appendix A.
For a given acquisition function, we solve each test problem by starting with an initial random sample of 10 inputs from a Latin hypercube design (McKay et al., 1979) over the input space, and then sequentially choosing 490 more inputs based on our BO strategy. For each of the acquisition functions, we conduct 100 repetitions of a Monte Carlo experiment in order to quantify the robustness and distribution of the solutions, as well as to understand under which scenarios a given acquisition function may (or may not) have difficulties or shortcomings in finding the global solution to the optimization problem. Table 3 and Figure 1 capture the results of these different Monte Carlo experiments.
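A stripped-down version of this experimental loop might look as follows. The random candidate pool used to maximize the acquisition function and the fixed-hyperparameter GP from the earlier sketch are simplifications, not the exact implementation used for the results reported here.

```python
import numpy as np
from scipy.stats import qmc

def bayes_opt(f, bounds, n_init=10, n_iter=490, acq=expected_improvement, seed=0):
    """Minimal BO loop: LHS initial design, then sequential acquisition maximization."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    # Initial space-filling design (Latin hypercube).
    X = qmc.scale(qmc.LatinHypercube(d, seed=seed).random(n_init), lo, hi)
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        cand = lo + (hi - lo) * rng.random((2000, d))   # random candidate pool
        mu, var = gp_predict(X, y, cand)                # GP sketch from Section 2.2
        a = acq(mu, np.sqrt(var), y.min())
        x_next = cand[np.argmax(a)]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[np.argmin(y)], y.min()
```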
In general, for all six of the test functions, it appears that each acquisition function converges to the global solution of the optimization problem at least once (see the best final solution column of Table 3), although some acquisition functions tended to find the global solution more reliably than others. Interestingly, the UEI acquisition function tended to do the best in all of the different performance categories of Table 3, i.e., it had the best average final solution (3 out of 6 times), the smallest standard deviation of the final solution (3 out of 6 times), the best final solution (5 out of 6 times), and the smallest worst final solution (3 out of 6 times), over all of the different test functions. Furthermore, even when not the best acquisition function for a given test function, the UEI acquisition function tended to be as competitive as all of the other acquisition functions. These empirical results are indicative of the benefit of incorporating the variance of the improvement function into the acquisition function, but they also highlight the importance of incorporating that extra information efficiently. The pitfalls of incorporating the variance of the improvement function into the acquisition function in the way that the SEI and VEI acquisition functions do were discussed in Section 3.2, and these issues readily manifest themselves in 4 of the 6 optimization test functions (the GRL, ROS, MOT, and RAS test functions). Investigating the shapes of the objective function surfaces (see Appendix A), there does not appear to be a single distinguishing quality that leads to this poor performance,
\begin{table}
\begin{tabular}{l c c c c} \hline Test Function & Abbreviation & Number of & Number of & Global Solution \\ & & Dimensions & Local Minima & \(f(x)\) \\ \hline Gramacy and Lee & GRL & 1 & 10 & -0.869 \\ Rosenbrock & ROS & 2 & 1 & 0 \\ Modified Townsend & MOT & 2 & 6 & -2.969 \\ Ackley & ACY & 2 & 25 & 0 \\ Rastrigin & RAS & 2 & 25 & 0 \\ Hartman & HTN & 6 & 6 & -3.322 \\ \hline \end{tabular}
\end{table}
Table 2: The different optimization test functions used to evaluate the performance of the different acquisition functions.
Figure 1: The results of running 100 Monte Carlo repetitions, with random starting inputs, for the different test and acquisition functions. The plots show the average best objective function values found over 300 black-box iterations.
but rather, that the penalization for the variance in the improvement function leads the SEI and VEI acquisition functions to insufficiently explore the input space. For example, if we investigate the individual Monte Carlo solutions associated with the MOT optimization function for the SEI, VEI, and UEI acquisition functions (Figure 2), we see that the SEI and VEI acquisition functions tended to have several runs that get stuck exploring around a local minimum of the surface, while this does not tend to occur for the UEI acquisition function. The ROS function is unimodal but with a very shallow slope in a narrow valley around the minimum, and so being too focused on a local search can lead to slow movement within the valley and worse results for a fixed number of iterations.
On the other hand, there are clearly instances when the SEI and VEI acquisition functions do bring added value to the BO algorithm. For example, from Figure 1, we see that for the ACY and RAS optimization test functions, the VEI and SEI acquisition functions were able to minimize the objective function with fewer runs than the other acquisition functions. Furthermore, the totality of the results in
\begin{table}
\begin{tabular}{l c c c c c} \hline Test & Acquisition & Average Final & SD of Final & Best Final & Worst Final \\ Function & Function & Solution & Solution & Solution & Solution \\ \hline \multirow{6}{*}{GRL} & EI & **-0.867** & **0.021** & **-0.869** & **-0.663** \\ & PEI & -0.862 & 0.051 & **-0.869** & -0.489 \\ & SEI & -0.666 & 0.189 & **-0.869** & -0.232 \\ & VEI & -0.858 & 0.056 & **-0.869** & -0.527 \\ & UEI & -0.860 & 0.055 & **-0.869** & -0.402 \\ \hline \multirow{6}{*}{ROS} & EI & 0.002 & 0.005 & 2e-08 & 0.028 \\ & PEI & 0.002 & 0.006 & 1e-08 & 0.039 \\ & SEI & 0.103 & 0.136 & 8e-08 & 0.654 \\ & VEI & 0.125 & 0.326 & 1e-06 & 2.83 \\ & UEI & **0.001** & **0.002** & **1e-10** & **0.009** \\ \hline \multirow{6}{*}{MOT} & EI & -2.871 & 0.279 & -2.969 & -1.660 \\ & PEI & -2.927 & 0.162 & -2.969 & -1.659 \\ & SEI & -2.080 & 0.574 & -2.969 & -1.639 \\ & VEI & -2.822 & 0.339 & -2.969 & -1.640 \\ & UEI & **-2.969** & **3e-06** & **-2.969** & **-2.969** \\ \hline \multirow{6}{*}{ACY} & EI & 0.009 & 0.010 & 6e-05 & 0.044 \\ & PEI & **0.008** & **0.010** & 8e-05 & 0.041 \\ & SEI & 0.009 & 0.010 & 6e-05 & 0.044 \\ & VEI & 0.012 & 0.011 & 0.000 & 0.054 \\ & UEI & 0.008 & 0.010 & **4e-06** & **0.039** \\ \hline \multirow{6}{*}{RAS} & EI & 0.094 & 0.488 & 1e-12 & **4.000** \\ & PEI & 0.040 & 0.398 & 1e-10 & 3.981 \\ \cline{1-1} & SEI & 2.103 & 1.481 & 2e-07 & 4.995 \\ \cline{1-1} & VEI & 1.602 & 1.149 & 7e-07 & 4.995 \\ \cline{1-1} & UEI & **0.057** & **0.431** & **3e-12** & 4.000 \\ \hline \multirow{6}{*}{HTN} & EI & **-3.280** & 0.059 & -3.322 & -3.137 \\ \cline{1-1} & PEI & -3.279 & 0.058 & -3.322 & -3.193 \\ \cline{1-1} \cline{1-1} & SEI & -3.281 & **0.057** & -3.322 & -3.196 \\ \cline{1-1} \cline{1-1} & VEI & -3.279 & 0.058 & **-3.322** & **-3.201** \\ \hline \end{tabular}
\end{table}
Table 3: The average, standard deviation (SD), best, and worst solutions found at the end of the 100 Monte Carlo experiments by each acquisition function on each optimization test function. Bolded values signify the best outcome in a given category for each test function.
Table 3 and Figure 1 seem to indicate that there that are potential strengths to try to borrow across all of the acquisition functions, further suggesting that a general family of acquisition functions is likely appropriate.
### Acquisition Function Family Performance
In this section, we tested the performance of BO using acquisition functions with different parameter values. The functions involved in the test include ROS, MOT, and RAS, with 1, 6, and 25 local minima, respectively. We use these functions to explore the behavior of various parameter values on problems of different difficulty. For the selection of parameter sets, we try to keep some parameters fixed while varying others, because the parameters \(w\), \(u\), and \(v\) all affect the influence of VI(\(x\)) on the acquisition function. There are three groups of parameters involved in the test. The first group tests only \(w\). Here, \(u\) and \(\beta\) are fixed to 0 (noting that the value of \(v\) does not affect the setting in this case), and \(w\) is set to 0, 1, 2, and 3, respectively. The second group of parameters tests how the values of \(\beta\) and \(v\) influence the performance of VEI and UEI. Here, \(w\) and \(u\) are set to 1 and 0, respectively, and the values of \(\beta\) and \(v\) take values from the Cartesian product \(\{-0.5,0,2\}\times\{0,0.5,1\}\). The last group of tests simultaneously adjusts \(u\) and \(v\), with \(w\) and \(\beta\) fixed to 1 and 2. The values of \(u\) and \(v\) take values from the Cartesian product \(\{0,0.5,1\}\times\{0,0.5,1\}\). Notice that each set of test parameters contains EI as a reference baseline.
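For bookkeeping, the three parameter groups can be generated as follows; this is an illustrative sketch only, with dictionaries that simply mirror the parameters of (8).

```python
from itertools import product

# Group 1: vary w only, with u = beta = 0 (v is then irrelevant).
group1 = [dict(u=0, v=0, w=w, beta=0) for w in (0, 1, 2, 3)]

# Group 2: vary (beta, v) with w = 1 and u = 0.
group2 = [dict(u=0, v=v, w=1, beta=b) for b, v in product((-0.5, 0, 2), (0, 0.5, 1))]

# Group 3: vary (u, v) with w = 1 and beta = 2.
group3 = [dict(u=u, v=v, w=1, beta=2) for u, v in product((0, 0.5, 1), (0, 0.5, 1))]
```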
Figure 2: A view of the performance of the SEI, VEI, and UEI acquisition functions for the 100 Monte Carlo experiments associated with the MOT test function. Here, each grey line represents the best value found over the search by the BO algorithm during a single run of the Monte Carlo experiment, while the black line represents the average of the grey lines.

For the first set of parameters, only the value of \(w\) is varied. The results of this part are shown in the first row of Figure 3. When \(w=1\), it becomes the classic EI, which can be used as a reference; when \(w=0\), the acquisition function becomes the probability of improvement (PI) (Kushner, 1964), and at each iteration the algorithm simply looks for the location most likely to yield any positive improvement, regardless of the size of that improvement. It can be seen that PI drops faster at the beginning in the ROS and RAS test functions, but does not converge in all the tests. When \(w=2\), it becomes the classic PEI; since \(\mathbb{E}(I^{2}(x))=\text{EI}(x)^{2}+\text{VI}(x)\), PEI favors locations with higher VI(\(x\)). In the ROS and RAS function tests, it can be seen that PEI drops more slowly than EI at the beginning, but is more likely to find the global minimum at the end. In the MOT function test, PEI dominates EI after multiple iterations. In other words, PEI finds the global minimum more easily than EI, and does not get stuck in local minima as much. When \(w=3\), it can be seen in the ROS function that the acquisition function drops more slowly than PEI, but finally finds the global minimum. Interestingly, in the MOT function test, the value of the lowest point does not go down as it does for PEI. Instead, the search gets stuck somewhere after about 50 iterations, and it is not clear that the global minimum will always be found. That is to say, the algorithm is not guaranteed to converge just because the value of \(w\) is increased, and too much weight on VI will weaken the ability of the algorithm. See Schonlau et al. (1998) for more discussion of the form of \(\mathbb{E}(I^{w})\).
The second set of parameters explores how \(\beta\) and \(v\) influence the performance of VEI and UEI. The results are shown in the second row of Figure 3. We let \(\beta\) and \(v\) take values from the Cartesian product \(\{-0.5,0,2\}\times\{0,0.5,1\}\); the acquisition function is equivalent to the classic EI when \(\beta=0\) or \(v=0\) (in the latter case the additive term is a constant and does not change the maximizer). Thus the tested parameter sets are effectively \(\{0,0\}\), \(\{-0.5,0.5\}\), \(\{-0.5,1\}\), \(\{2,0.5\}\) and \(\{2,1\}\). The algorithm is VEI when \(\beta=-0.5\). VEI penalizes locations with higher variance. In the test on the simple ROS function, the algorithm approaches the lowest point faster than EI when \(v=0.5\), but facing the more complex MOT and RAS functions, VEI quickly becomes stuck at a local minimum; it is worth noting that when \(v=1\), the algorithm is less likely to get stuck at a local minimum, because after some iterations, there are more and more points used to generate the surrogate process, resulting in smaller and smaller VI\((x)\). When \(0<\text{VI}(x)<1\), we have that \(\text{VI}(x)<\text{VI}(x)^{1/2}\), resulting in a reduction in the penalty of VI\((x)\) for the acquisition function, which decreases the tendency of the algorithm to become stuck in local minima. The acquisition function is UEI when \(\beta=2\). It can be seen from the unimodal ROS function that for either \(v=0.5\) or \(v=1\), UEI converges slower than the alternatives, as it puts more emphasis on global search than these comparators. However, when facing the more complex MOT and RAS functions, UEI is more capable of finding global minima. Interestingly, when \(v=0.5\), the performance in the MOT function test is significantly better than for any other parameter choice and is the only setting that converges every time. Compared with \(v=1\), the performance of \(v=0.5\) is better mainly because the reward for VI\((x)\) is greater in the long run as VI\((x)\) decreases below 1.
The last comparison fixes \(w=1\) and \(\beta=2\) and explores the influence of adjusting \(u\) and \(v\). When \(u=0\) and \(v=0.5\) we again obtain UEI. Meanwhile, when \(u=0.5\) and \(v=0\), the acquisition function is reduced to SEI. When both \(u\) and \(v\) are 0, the acquisition function returns to the classic EI. In these examples, SEI (\(v=0\) and \(u=0.5\)) displays similar behavior to VEI, converging the fastest for the simple ROS function, but quickly becoming stuck in local minima for the more complex MOT and RAS functions. It is worth noting that \(u=1\) makes SEI converge even slower, because VI\((x)>1\) in the beginning, making the acquisition function quite risk averse. At each iteration, the algorithm will prefer points that are very close to the known points because of the VI\((x)\) penalty, making searches extremely localized. When \(u\neq 0\), all parameter sets perform poorly on the complex function tests. \(u\neq 0\) penalizes risk, while \(v\neq 0\) rewards risk, and when they are both non-zero, the dynamic can be complicated and less effective.
Overall we see the best performance when \(u=0\), especially when paired with \(v=0.5\), which results in the UEI acquisition function when \(\beta=2\). Most of the acquisition functions in this family succeed in finding the global minimum for simple functions such as ROS. Acquisition functions that penalize VI\((x)\), such as VEI and SEI, usually require fewer iterations to converge on simpler functions. For objective
functions that contain multiple local minima, the acquisition functions that reward VI(\(x\)) are more likely to find the global minimum.
## 5 Discussion
This paper considers a range of acquisition functions as part of a larger family, in order to explore the behavior of functions in the family as parameter values shift. This family includes the existing acquisition functions EI, PEI, SEI, and VEI, and it also includes a newly proposed UEI. A simulation study illustrates the effect of tuning different parameters for this family.
The simulation results show that penalization of the variance results in faster convergence to local minima, while rewarding variance improves global search, resulting in improved convergence to the global minimum on complex multimodal objective functions. Thus, the best choice of acquisition function will depend on the particular objective function. If some information about the objective function is available, analyses such as the one in this paper can be used to guide a more effective choice of acquisition function.
## Statements and Declarations
The authors declare that there are no conflicts of interests nor competing interests. Additionally, there is no data associated with this publication.
## Appendix A Test Functions for Optimization
Further descriptions and implementations of the test functions in this section can be found in Surjanovic and Bingham (2023). Here we list the functional forms of the six test functions we examined in this paper, along with their respective input domains, and plots when applicable. Recall that the objective here is to minimize \(f(x)\) subject to its input domain \(x\).
**Gramacy and Lee Function**
\[f(x)=\frac{\sin(10\pi x)}{2x}+(x-1)^{4}\] (A1) \[x\in[0.5,2.5]\]
**Rosenbrock Function**
\[f(x)=100(x_{2}-x_{1}^{2})^{2}+(x_{1}-1)^{2}\] (A2) \[x_{1},x_{2}\in[-2,2]\]
**Modified Townsend Function**
\[f(x)=-[\cos((x_{1}-.1)x_{2})]^{2}-x_{1}\sin(3x_{1}+x_{2})\] (A3)
\[x_{1},x_{2}\in[-2,2]\]
**Ackley Function**
\[f(x)=-20\exp\left(-0.2\sqrt{\frac{1}{2}\sum_{i=1}^{2}x_{i}^{2}} \right)-\exp\left(\frac{1}{2}\sum_{i=1}^{2}\cos(2\pi x_{i})\right)+20+\exp(1)\] (A4) \[x_{1},x_{2}\in[-2,2]\]
**Rastrigin Function**
\[f(x)=20+\sum_{i=1}^{2}\left(x_{i}^{2}-10\cos(2\pi x_{i})\right)\] (A5) \[x_{1},x_{2}\in[-2,2]\]
**Hartman Function**
\[f(x)=-\sum_{i=1}^{4}\alpha_{i}\exp\left(-\sum_{j=1}^{6}A_{ij}(x_{j}-P_{ij})^{2 }\right),\] (A6)
where \(\alpha=(1,1.2,3,3.2)^{T}\),
\[A=\left(\begin{array}{cccccc}10&3&17&3.5&1.7&8\\ 0.05&10&17&0.1&8&14\\ 3&3.5&1.7&10&17&8\\ 17&8&0.05&10&0.1&14\end{array}\right),\] (A7)
and
\[P=10^{-4}\begin{pmatrix}1312&1696&5569&124&8283&5886\\ 2329&4135&8307&3736&1004&9991\\ 2348&1451&3522&2883&3047&6650\\ 4047&8828&8732&5743&1091&381\end{pmatrix},\] (A8)
where \(x_{i}\in(0,1)\) for all \(i=1,...,6\).
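For reference, the lower-dimensional test functions above translate directly to code; this is a sketch, with the input domains being those listed with each function.

```python
import numpy as np

def gramacy_lee(x):                      # (A1), x in [0.5, 2.5]
    return np.sin(10 * np.pi * x) / (2 * x) + (x - 1) ** 4

def rosenbrock(x1, x2):                  # (A2), x1, x2 in [-2, 2]
    return 100 * (x2 - x1 ** 2) ** 2 + (x1 - 1) ** 2

def ackley(x1, x2):                      # (A4), x1, x2 in [-2, 2]
    s = x1 ** 2 + x2 ** 2
    c = np.cos(2 * np.pi * x1) + np.cos(2 * np.pi * x2)
    return -20 * np.exp(-0.2 * np.sqrt(0.5 * s)) - np.exp(0.5 * c) + 20 + np.e

def rastrigin(x1, x2):                   # (A5), x1, x2 in [-2, 2]
    return 20 + (x1 ** 2 - 10 * np.cos(2 * np.pi * x1)) \
              + (x2 ** 2 - 10 * np.cos(2 * np.pi * x2))
```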
|
2303.08163
|
Entropic equilibrium for the lattice Boltzmann method: Hydrodynamics and
numerical properties
|
The entropic lattice Boltzmann framework proposed the construction of the
discrete equilibrium by taking into consideration minimization of a discrete
entropy functional. The effect of this form of the discrete equilibrium on
properties of the resulting solver has been the topic of discussions in the
literature. Here we present a rigorous analysis of the hydrodynamics and
numerics of the entropic equilibrium. In doing so we demonstrate that the entropic
equilibrium features unconditional linear stability, in contrast to the
conventional polynomial equilibrium. We reveal the mechanisms through which
unconditional linear stability is guaranteed, most notable of which are the
adaptive normal modes propagation velocity and the positive-definite nature of
the dissipation rates of all eigen-modes. We further present a simple local
correction to considerably reduce the deviations in the effective bulk
viscosity.
|
S. A. Hosseini, I. V. Karlin
|
2023-03-14T18:23:10Z
|
http://arxiv.org/abs/2303.08163v1
|
# Entropic equilibrium for the lattice Boltzmann method: Hydrodynamics and numerical properties
###### Abstract
The entropic lattice Boltzmann framework proposed the construction of the discrete equilibrium by taking into consideration minimization of a discrete entropy functional. The effect of this form of the discrete equilibrium on properties of the resulting solver has been the topic of discussions in the literature. Here we present a rigorous analysis of the hydrodynamics and numerics of the entropic equilibrium. In doing so we demonstrate that the entropic equilibrium features unconditional linear stability, in contrast to the conventional polynomial equilibrium. We reveal the mechanisms through which unconditional linear stability is guaranteed, most notable of which are the adaptive normal modes propagation velocity and the positive-definite nature of the dissipation rates of all eigen-modes. We further present a simple local correction to considerably reduce the deviations in the effective bulk viscosity.
pacs: 47.11.-j
## I Introduction
The lattice Boltzmann method is a numerical method developed in the late 80's/early 90's, as an alternative to classical solvers for the Navier-Stokes equations, initially, in the incompressible flow limit [1]. This approach finds its roots in the kinetic theory of gases and is essentially a solver for the phase-space discretized Boltzmann equation with the linear Bhatnagar-Gross-Krook (BGK) approximation for the collision term [2]. The relatively uncomplicated algorithm, low cost of discrete operations in the time/space evolution equations, their locality stemming from the purely hyperbolic nature of the corresponding equations -as compared to elliptic-hyperbolic alternatives such as the Poisson-Navier-Stokes equations, and properties such as low numerical dissipation and strictly conservative nature have pushed the lattice Boltzmann method to the forefront of computational fluid dynamics.
Early on after its first appearance in the literature, numerical stability became a major point of discussion and research. In its original form, i.e. the single relaxation time BGK model with a quadratic-in-velocity discrete equilibrium, the lattice Boltzmann solver has been reported to be sensitive to the viscosity non-dimensionalized by the time-step and grid sizes, also referred to as the Fourier number, and the maximum non-dimensional velocity [3]. Note that such limitations are not proper to the lattice Boltzmann method and come with any other numerical approximation to a system of hyperbolic/parabolic partial differential equations, as illustrated by the Lax equivalence theorem for finite difference methods [4; 5] and the Lax-Richtmyer stability condition [6]. Plethora of modifications to the original lattice BGK (LBGK) have been proposed since, the majority of which can be categorized as some form of multiple relaxation times putting in effect the burden of instabilities onto relaxation rates of individual independent moments of the distribution function [7; 8; 9; 10]. While such schemes have been successful, to different and limited extent, in extending the operation range of the LBGK, none of them has actually resulted in a scheme that is unconditionally stable in the linear regime. Based on that observation one might be tempted to ask whether focus should not be put on other components of the LBGK algorithm, for instance the discrete equilibrium. While the continuous equilibrium distribution function, i.e. Maxwell-Boltzmann distribution, is a minimizer of entropy and the Boltzmann-BGK equation complies with the H-theorem, it is not guaranteed that truncated expansions of the distribution function in a given projection space will abide by these conditions, i.e. non-commutativity of entropy minimization and discretization in phase-space. The entropic lattice Boltzmann method [11] proposed a lattice Boltzmann realization guaranteeing stability based on a construction of the equilibrium attractor taking into account minimization of the discrete entropy. While the entropic lattice Boltzmann method has been successfully used for a wide variety of large Reynolds number simulations, see for instance [12; 13; 14], the question of its stability as compared to other lattice Boltzmann realizations has been a topic of debates in the literature, see for instance [15; 16; 17; 18], as best illustrated by the following quote from [16]: "... it is unclear theoretically how the ELBE with a constant relaxation parameter \(\tau\) can improve the numerical stability of the LBGK scheme."
This sentence summarizes, in a nutshell, the question that will be considered in the present manuscript. Does the form of the discrete equilibrium distribution function, for a fixed relaxation scheme, affect the linear stability of the solver? Can the specific form of the discrete equilibrium derived by minimization of the discrete entropy guarantee unconditional linear stability? And if so, what are the mechanisms by which the entropic equilibrium guarantees linear stability? The present work will address these questions with a detailed analysis of different forms of the discrete equilibrium distribution function. Note that the present work only addresses the question of linear stability, as non-linear stability in the entropic lattice Boltzmann method is brought about by another ingredient, namely the dynamic restriction of the maximum path-length in the relaxation process.
The manuscript is organized as follows: after a brief review of basic concepts of the lattice Boltzmann method and different discrete equilibria in section II, the continuum-level dynamics of the equivalent macroscopic balance equations will be analyzed. First, dispersion properties will be dissected by looking into the Euler-level equations in section III. Dissipation at the Navier-Stokes level will then be discussed in section IV. In section V the analyses will be extended to the entire wave-number space through the von Neumann method, and the continuum-limit results of sections III and IV will be confirmed via numerical simulations. In addition, a simple and local correction is proposed to considerably reduce the deviations in the effective bulk viscosity for all equilibria. The manuscript will end with section VI summarizing all observations and discussing the larger impact of the present study.
## II Basic concepts
In the remainder of the present article, unless stated otherwise, we will consider single relaxation time lattice Boltzmann models with the following discrete time-evolution equation:
\[f_{i}(\mathbf{r}+\mathbf{c}_{i}\delta t,t+\delta t)-f_{i}(\mathbf{r},t)=2\beta\left(f_{i} ^{\text{eq}}(\rho,\mathbf{u})-f_{i}(\mathbf{r},t)\right). \tag{1}\]
Here \(f_{i}\) are the discrete distribution functions, \(\mathbf{r}\) the position in space, \(t\) time, \(\delta t\) the time-step size, \(\rho\) the fluid density, \(\mathbf{u}\) the velocity and \(\beta\) the relaxation frequency defined as [19; 20]:
\[\beta=\frac{\delta t}{2\nu/\varsigma^{2}+\delta t}, \tag{2}\]
where \(\nu\) is the fluid kinematic viscosity and \(\varsigma\) the lattice sound speed:
\[\varsigma=\frac{\delta r}{\sqrt{3}\delta t}, \tag{3}\]
and \(\delta r\) the grid size. \(f_{i}^{\text{eq}}\) are the equilibrium distribution functions which will be the focus of this work. Here all analyses and discussions will consider first-neighbor lattices, in 1-D,
\[c_{i}\in\frac{\delta r}{\delta t}\{-1,0,1\}, \tag{4}\]
and corresponding tensorial products in 2- and 3-D. The weights associated with this lattice are, in 1-D,
\[w_{i}\in\left\{\frac{1}{6},\frac{2}{3},\frac{1}{6}\right\}. \tag{5}\]
Weights for 2- and 3-D lattices can be obtained as products of 1-D weights.
Details about the equilibrium distribution functions, starting with the entropic construction are given in the next subsections.
### Discrete entropic equilibrium construction
In the context of the entropic lattice Boltzmann method the discrete equilibrium state is found as the minimizer of a Lyapunov functional \(H\)[11; 21]:
\[H=\sum_{i=1}^{Q}h_{i}(f_{i}), \tag{6}\]
where \(h_{i}\) are convex functions, under mass and momentum conservation constraints:
\[\sum_{i=1}^{Q}f_{i}^{\text{eq}}=\rho, \tag{7}\] \[\sum_{i=1}^{Q}\mathbf{c}_{i}f_{i}^{\text{eq}}=\rho\mathbf{u}, \tag{8}\]
which is different from the polynomial construction which also imposes a constraint on the second-order moment of the equilibrium distribution function. Introducing the corresponding Lagrange multipliers, \(\lambda_{0},\lambda_{\alpha}\):
\[\delta\sum_{i=1}^{Q}\left(h_{i}(f_{i})-\lambda_{0}f_{i}-\lambda_{\alpha}c_{i \alpha}f_{i}\right)=0, \tag{9}\]
which yields
\[h_{i}^{\prime}(f_{i}^{\text{eq}})=\lambda_{0}+\lambda_{\alpha}c_{i\alpha}. \tag{10}\]
Defining the inverse of \(h_{i}^{\prime}(f_{i}^{\text{eq}})\) as \(\mu_{i}=\left[h_{i}^{\prime}(f_{i}^{\text{eq}})\right]^{-1}\), which must exist due to the convexity of \(h_{i}\), the formal solution of the minimization problem reads:
\[f_{i}^{\text{eq}}=\mu_{i}\left(\lambda_{0}+\lambda_{\alpha}c_{i\alpha}\right). \tag{11}\]
Taking the \(H\)-function to be [22]:
\[H=\sum_{i=1}^{Q}f_{i}\ln(f_{i}/w_{i}), \tag{12}\]
and considering stencils with \(3^{D}\) discrete velocities the corresponding equilibria can be expressed as:
\[f_{i}^{\text{eq}}=w_{i}\exp\left(\lambda_{0}\right)\prod_{\alpha=1}^{D}\exp \left(c_{i\alpha}\lambda_{\alpha}\right). \tag{13}\]
Introducing the following changes of variables, \(X=\exp\left(-\lambda_{0}\right)\) and \(Z_{\alpha}=\exp\left(\lambda_{\alpha}\right)\) the equilibrium can be re-written as:
\[f_{i}^{\text{eq}}=w_{i}X^{-1}\prod_{\alpha=1}^{D}Z_{\alpha}^{c_{i\alpha}}. \tag{14}\]
Here \(X\) and \(Z_{\alpha}\), i.e. the corresponding Lagrange multipliers, can be obtained by writing the constraints on the moments:
\[\rho X =\sum_{i=1}^{Q}w_{i}\prod_{\alpha=1}^{D}Z_{\alpha}^{c_{i\alpha}}, \tag{15a}\] \[\rho u_{\beta}X =\sum_{i=1}^{Q}w_{i}c_{i\beta}\prod_{\alpha=1}^{D}Z_{\alpha}^{c_{ i\alpha}}. \tag{15b}\]
Solving this system of equations results in:
\[Z_{\alpha}=\frac{2u_{\alpha}+\sqrt{\left(\frac{u_{\alpha}}{\varsigma}\right)^{ 2}+1}}{1-u_{\alpha}}, \tag{16}\]
\[X^{-1}=\rho\prod_{\alpha=1}^{D}\left(2-\sqrt{\left(\frac{u_{\alpha}}{\varsigma }\right)^{2}+1}\right), \tag{17}\]
leading to the following final expression for the discrete entropic equilibrium [22]:
\[f_{i}^{\rm eq}=w_{i}\rho\prod_{\alpha=1}^{D}\left(2-\sqrt{\left( \frac{u_{\alpha}}{\varsigma}\right)^{2}+1}\right)\left(\frac{2u_{\alpha}+\sqrt {\left(\frac{u_{\alpha}}{\varsigma}\right)^{2}+1}}{1-u_{\alpha}}\right)^{c_{ i\alpha}}. \tag{18}\]
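To make the construction above concrete, the following minimal sketch (ours, not part of the original work) evaluates the D1Q3 entropic equilibrium of Eq. (18) in lattice units (\(\delta r=\delta t=1\), so \(\varsigma=1/\sqrt{3}\)) and numerically checks the conservation constraints of Eqs. (7) and (8).

```python
import numpy as np

# D1Q3 lattice in lattice units (delta_r = delta_t = 1), Eqs. (4)-(5)
c = np.array([-1.0, 0.0, 1.0])                     # discrete velocities
w = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])    # lattice weights
cs2 = 1.0 / 3.0                                    # varsigma^2, Eq. (3)

def f_eq_entropic(rho, u):
    """Entropic equilibrium of Eq. (18) for the D1Q3 lattice (lattice units)."""
    s = np.sqrt(u * u / cs2 + 1.0)                 # sqrt((u/varsigma)^2 + 1)
    z = (2.0 * u + s) / (1.0 - u)                  # Eq. (16)
    return w * rho * (2.0 - s) * z ** c            # Eqs. (14) and (17) combined

# Mass and momentum constraints of Eqs. (7)-(8) hold for any |u| < 1
for u in np.linspace(-0.9, 0.9, 7):
    feq = f_eq_entropic(1.0, u)
    assert np.isclose(feq.sum(), 1.0)
    assert np.isclose((c * feq).sum(), u)
```

The same product structure extends to 2- and 3-D by multiplying the per-direction factors of Eq. (18).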
### Polynomial discrete equilibria
Another class of discrete equilibria, widely used in discrete velocity approaches, are the polynomial equilibria. The most well-known realization of polynomial equilibria for isothermal flows is the second-order polynomial equilibrium defined as [3]:
\[f_{i}^{\rm eq}=w_{i}\rho\left(1+\frac{\mathbf{c}_{i}\cdot\mathbf{u}}{\varsigma^{2}}+ \frac{(\mathbf{c}_{i}\cdot\mathbf{u})^{2}}{2\varsigma^{4}}-\frac{\mathbf{u}^{2}}{2\varsigma ^{2}}\right). \tag{19}\]
While initially derived via a second-order Taylor expansion of the Maxwell-Boltzmann distribution around Ma=0, it can also be obtained as a second-order Hermite expansion of the Maxwell-Boltzmann distribution function.
This form of the equilibrium is known to lead to Galilean-variant errors in the off-diagonal components of the viscous stress tensor, which scale with order three in Mach number when the local temperature coincides with the lattice reference temperature and order one when the local temperature is different from the lattice reference. Attempts at improving upon the second-order polynomial equilibrium, especially for compressible flows, have led to the product form equilibrium:
\[f_{i}^{\rm eq}=\rho\prod_{\alpha=1}^{D}\left(1-\varsigma^{2}-u_{\alpha}^{2} \right)^{1-|c_{i\alpha}|}\!\left(\frac{c_{i\alpha}u_{\alpha}+\varsigma^{2}+u_{ \alpha}^{2}}{2}\right)^{|c_{i\alpha}|}. \tag{20}\]
This form of the equilibrium was obtained in [23] by _guiding_ the minimization problem of the previous section, i.e. by explicitly adding a constraint on the diagonal components of the second-order equilibrium moment tensor. It can also be obtained, in 2-D, via a fourth-order Hermite expansion [24], or via the moment matching method [25].
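For comparison, the sketch below (again ours, in lattice units with \(\varsigma^{2}=1/3\)) evaluates the second-order polynomial equilibrium of Eq. (19) and the product form of Eq. (20) on the D1Q3 lattice and checks that both reproduce the second-order moment \(\rho\varsigma^{2}+\rho u^{2}\) exactly, the property that the entropic equilibrium does not enforce.

```python
import numpy as np

c = np.array([-1.0, 0.0, 1.0])
w = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])
cs2 = 1.0 / 3.0  # varsigma^2 in lattice units

def f_eq_poly2(rho, u):
    """Second-order polynomial equilibrium, Eq. (19)."""
    cu = c * u
    return w * rho * (1.0 + cu / cs2 + cu**2 / (2.0 * cs2**2) - u**2 / (2.0 * cs2))

def f_eq_product(rho, u):
    """Product-form equilibrium, Eq. (20), written out for D1Q3."""
    f = np.empty(3)
    f[1] = rho * (1.0 - cs2 - u**2)           # c_i = 0
    f[0] = rho * (-u + cs2 + u**2) / 2.0      # c_i = -1
    f[2] = rho * (+u + cs2 + u**2) / 2.0      # c_i = +1
    return f

for f_eq in (f_eq_poly2, f_eq_product):
    for u in np.linspace(-0.4, 0.4, 5):
        feq = f_eq(1.0, u)
        # Both polynomial equilibria recover rho*varsigma^2 + rho*u^2 exactly
        assert np.isclose((c**2 * feq).sum(), cs2 + u**2)
```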
## III Euler-level dynamics: dispersion
In this section we will discuss Euler level dynamics of the hydrodynamic equations corresponding to each discrete equilibrium distribution function.
### Conservation equations
Applying the Chapman-Enskog analysis to the entropic lattice Boltzmann equations, at order \(\varepsilon\) one finds the following conservation laws:
\[\partial_{t}^{(1)}\rho+\mathbf{\nabla}\cdot\rho\mathbf{u} =0, \tag{21a}\] \[\partial_{t}^{(1)}\rho\mathbf{u}+\mathbf{\nabla}\cdot\rho\mathbf{u}\otimes\bm {u}+\mathbf{\nabla}\cdot\rho\varsigma^{2}\mathbf{I}+\mathbf{\nabla}\cdot\mathbf{P}^{*} =0. \tag{21b}\]
Here \(\mathbf{P}^{*}\) is a diagonal matrix defined as:
\[P^{*}_{\alpha\alpha}=\widetilde{\Pi}^{\rm eq}_{\alpha\alpha}-\rho\varsigma^{2}=\sum_{i=1}^{Q}\left(c_{i\alpha}-u_{\alpha}\right)^{2}f_{i}^{\rm eq}-\rho\varsigma^{2}, \tag{22}\]
Using Eq. (18), \(P^{*}_{\alpha\alpha}\) is readily computed:
\[P^{*}_{\alpha\alpha}=-\rho\varsigma^{2}\left[\left(u_{\alpha}/\varsigma\right)^ {2}-2\sqrt{\left(u_{\alpha}/\varsigma\right)^{2}+1}+2\right], \tag{23}\]
while for Eqs. (19) and (20) \(P^{*}_{\alpha\alpha}=0\). As defined here, \(P^{*}_{\alpha\alpha}\) is the deviation of the pressure from the Maxwell-Boltzmann pressure, which is both flow-dependent and anisotropic for the entropic equilibrium. Note that in the limit of \(u_{\alpha}\to\pm\delta r/\delta t\) the deviation, \(P^{*}_{\alpha\alpha}\) goes to \(-\rho\varsigma^{2}\) resulting in a vanishing total thermodynamic pressure. The trace of the deviation is illustrated in Fig. 1 via the following normalized variable:
\[\delta=\frac{P^{*}_{xx}+P^{*}_{yy}}{2\rho\varsigma^{2}}. \tag{24}\]
Based on Fig. 1, as is well known, the deviations of the pressure implied by the entropic equilibrium are immaterial for nearly-incompressible flow simulations. However, as we shall see in the analysis below, precisely these deviations have crucial implications for the well-posedness and stability of the LBGK scheme with the entropic equilibrium.
### Characteristics analysis: Eigen-modes
In the case of polynomial equilibria of order equal to or larger than two the second-order central moment, representing pressure, is only a function of local density and temperature:
\[\widetilde{\Pi}_{\alpha\alpha}=\rho\varsigma^{2}, \tag{25}\]
and the corresponding sound speed, \(c_{s}\), can be readily computed via:
\[c_{s}=u_{\alpha}\pm\sqrt{\frac{\partial\widetilde{\Pi}_{\alpha\alpha}}{ \partial\rho}\bigg{|}_{T=\text{cst}}}=u_{\alpha}\pm\varsigma. \tag{26}\]
In the case of the entropic equilibrium the equilibrium pressure is also a function of the local velocity, making computation of the sound speed less evident. To compute an analytical expression for the sound speed with the entropic equilibrium, we use the method of characteristics, briefly introduced in the next section.
#### iii.2.1 Characteristics analysis formalism
Consider a system of conservation equations consisting of first-order partial differential equations of the following form [26]:
\[\partial_{t}\Phi+\partial_{x}\mathcal{F}+\mathcal{C}=0, \tag{27}\]
where \(\Phi\) is the vector of conserved variables, \(\mathcal{F}\) the vector of fluxes and \(\mathcal{C}\) represents non-homogeneous terms in the equations. This system, which is represented above with the conservative form, can also be represented using the primitive form as:
\[\partial_{t}\phi+\mathcal{A}\partial_{x}\phi+\mathcal{C}^{\prime}=0, \tag{28}\]
where \(\phi\) is the vector of primitive variables and \(\mathcal{A}\) is a matrix of size \(n\times n\) (for a system with \(n\) independent variables). Note that, unlike the conserved variables, the choice of primitive variables is not unique. The two descriptions are related to each other via:
\[\partial_{t}\Phi =\mathcal{P}\partial_{t}\phi, \tag{29a}\] \[\partial_{x}\mathcal{F} =\mathcal{Q}\partial_{x}\phi, \tag{29b}\]
with:
\[\mathcal{P}_{mn} =\frac{\partial\Phi_{m}}{\partial\phi_{n}} \tag{30a}\] \[\mathcal{Q}_{mn} =\frac{\partial\mathcal{F}_{m}}{\partial\phi_{n}}, \tag{30b}\]
and:
\[\mathcal{A} =\mathcal{P}^{-1}\mathcal{Q}, \tag{31a}\] \[\mathcal{C}^{\prime} =\mathcal{P}^{-1}\mathcal{C}. \tag{31b}\]
Let us now introduce the left, \(\mathbf{l}\), and right, \(\mathbf{r}\), eigenvectors of \(\mathcal{A}\) such that:
\[l_{m}\mathcal{A} =\lambda_{m}l_{m}, \tag{32a}\] \[\mathcal{A}r_{m} =\lambda_{m}r_{m}. \tag{32b}\]
where \(\lambda_{m}\) are the eigen-values of \(\mathcal{A}\). Then a diagonal matrix \(\Lambda\) can be obtained via:
\[\Lambda=\mathcal{SAS}^{-1}, \tag{33}\]
where the rows of \(\mathcal{S}\) are the left eigen-vectors and the columns of \(\mathcal{S}^{-1}\) are the right eigen-vectors. Multiplying Eq. (28) by \(\mathcal{S}\) one obtains the characteristics form of the system:
\[l_{m}\partial_{t}\phi+\lambda_{m}l_{m}\partial_{x}\phi+l_{m}\mathcal{C}^{ \prime}=0, \tag{34}\]
which upon introduction of a new variable \(d\mathcal{V}_{m}=l_{m}d\phi+l_{m}\mathcal{C}^{\prime}dt\) reduces to a system of wave equations with velocities \(\lambda_{m}\):
\[\partial_{t}\mathcal{V}_{m}+\lambda_{m}\partial_{x}\mathcal{V}_{m}=0. \tag{35}\]
Figure 1: Illustration of deviations in trace of pressure tensor for the entropic equilibrium. Left: Normalized deviation in 2-D. Right: Normalized error in 1-D.
This analysis can be readily extended to multiple physical dimensions, considering propagation along the \(\alpha\)-axis, by re-writing Eq. (28) as:
\[\partial_{t}\phi+\mathcal{A}_{\alpha}\partial_{\alpha}\phi+\mathcal{C}^{\prime }=0, \tag{36}\]
where \(\alpha\in\{x,y,z\}\).
#### iii.2.2 Characteristics: 1-D system
Let us first apply this analysis to the system of hyperbolic conservation equations recovered by the D1Q3 lattice, shown in Eqs. (21a) and (21b). The primitive form of this system results in:
\[\phi=\begin{bmatrix}\rho\\ u_{x}\end{bmatrix} \tag{37}\]
and:
\[\mathcal{A}=\begin{bmatrix}u_{x}&\rho\\ \varsigma^{2}+\frac{\partial_{\rho}P_{xx}^{*}}{\rho}&u_{x}+\frac{\partial_{u_ {x}}P_{xx}^{*}}{\rho}\end{bmatrix}. \tag{38}\]
Applying the characteristics analysis to this system the following eigen-values are recovered:
\[c_{s}^{+} =u_{x}+\frac{\partial_{u_{x}}P_{xx}^{*}}{2\rho}+\sqrt{\left( \frac{\partial_{u_{x}}P_{xx}^{*}}{2\rho}\right)^{2}+\varsigma^{2}+\partial_{ \rho}P_{xx}^{*}}, \tag{39a}\] \[c_{s}^{-} =u_{x}+\frac{\partial_{u_{x}}P_{xx}^{*}}{2\rho}-\sqrt{\left( \frac{\partial_{u_{x}}P_{xx}^{*}}{2\rho}\right)^{2}+\varsigma^{2}+\partial_{ \rho}P_{xx}^{*}}. \tag{39b}\]
Using the left eigen-vectors, readily obtained as right eigen-vectors of \(\mathcal{A}^{\dagger}\), the following wave system is recovered:
\[\left[\frac{c_{s}^{+}-u_{x}}{\rho}\partial_{t}\rho+\partial_{t}u_ {x}\right]+c_{s}^{+}\left[\frac{c_{s}^{+}-u_{x}}{\rho}\partial_{x}\rho+ \partial_{x}u_{x}\right] =0, \tag{40a}\] \[\left[\frac{c_{s}^{-}-u_{x}}{\rho}\partial_{t}\rho+\partial_{t}u_ {x}\right]+c_{s}^{-}\left[\frac{c_{s}^{-}-u_{x}}{\rho}\partial_{x}\rho+ \partial_{x}u_{x}\right] =0. \tag{40b}\]
Note that for polynomial equilibria of Eqs. (19) and (20), i.e. \(P_{xx}^{*}=0\), one recovers, in accord with (26):
\[c_{s}^{+} =u_{x}+\varsigma, \tag{41a}\] \[c_{s}^{-} =u_{x}-\varsigma, \tag{41b}\]
and
\[\left[\frac{\varsigma}{\rho}\partial_{t}\rho+\partial_{t}u_{x} \right]+\left(u_{x}+\varsigma\right)\left[\frac{\varsigma}{\rho}\partial_{x} \rho+\partial_{x}u_{x}\right] =0, \tag{42a}\] \[\left[\frac{\varsigma}{\rho}\partial_{t}\rho-\partial_{t}u_{x} \right]+\left(u_{x}-\varsigma\right)\left[\frac{\varsigma}{\rho}\partial_{x} \rho-\partial_{x}u_{x}\right] =0. \tag{42b}\]
We will discuss the specific case of the entropic equilibrium in the next paragraphs.
#### iii.2.3 Characteristics: multi-dimensional systems
Now, for systems of dimension larger than one, illustrated here with a 2-D system:
\[\phi=\begin{bmatrix}\rho\\ u_{x}\\ u_{y}\end{bmatrix} \tag{43}\]
and:
\[\mathcal{A}_{x}=\begin{bmatrix}u_{x}&\rho&0\\ \varsigma^{2}+\frac{\partial_{\rho}P_{xx}^{*}}{\rho}&u_{x}+\frac{\partial_{u_ {x}}P_{xx}^{*}}{\rho}&\frac{\partial_{u_{y}}P_{xx}^{*}}{\rho}\\ 0&0&u_{x}\end{bmatrix}. \tag{44}\]
and:
\[\mathcal{A}_{y}=\begin{bmatrix}u_{y}&0&\rho\\ 0&u_{y}&0\\ \varsigma^{2}+\frac{\partial_{\rho}P_{yy}^{*}}{\rho}&\frac{\partial_{u_{x}}P_{yy}^{*}}{\rho}&u_{y}+\frac{\partial_{u_{y}}P_{yy}^{*}}{\rho}\end{bmatrix}. \tag{45}\]
and considering propagation along the \(x-\)axis one recovers the two acoustic modes of Eqs. (40a) and (40b) and an additional _shear_ mode propagating at speed \(u_{x}\). Applying these eigen-values and the left side eigen-vectors the following wave system is recovered:
\[\partial_{t}u_{y}+u_{x}\partial_{x}u_{y} =0, \tag{46a}\] \[\left[\frac{c_{s}^{+}-u_{x}}{\rho}\partial_{t}\rho+\partial_{t}u_ {x}\right]+c_{s}^{+}\left[\frac{c_{s}^{+}-u_{x}}{\rho}\partial_{x}\rho+ \partial_{x}u_{x}\right] =0,\] (46b) \[\left[\frac{c_{s}^{-}-u_{x}}{\rho}\partial_{t}\rho+\partial_{t}u_ {x}\right]+c_{s}^{-}\left[\frac{c_{s}^{-}-u_{x}}{\rho}\partial_{x}\rho+ \partial_{x}u_{x}\right] =0. \tag{46c}\]
Along the \(y\)-axis the wave system changes into:
\[\partial_{t}u_{x}+u_{y}\partial_{y}u_{x} =0, \tag{47a}\] \[\left[\frac{c_{s}^{+}-u_{y}}{\rho}\partial_{t}\rho+\partial_{t}u_ {y}\right]+c_{s}^{+}\left[\frac{c_{s}^{+}-u_{y}}{\rho}\partial_{y}\rho+ \partial_{y}u_{y}\right] =0,\] (47b) \[\left[\frac{c_{s}^{-}-u_{y}}{\rho}\partial_{t}\rho+\partial_{t}u_ {y}\right]+c_{s}^{-}\left[\frac{c_{s}^{-}-u_{y}}{\rho}\partial_{y}\rho+ \partial_{y}u_{y}\right] =0. \tag{47c}\]
where the acoustic propagation modes in \(y-\)direction are now:
\[c_{s}^{+} =u_{y}+\frac{\partial_{u_{y}}P_{yy}^{*}}{2\rho}+\sqrt{\left(\frac{ \partial_{u_{y}}P_{yy}^{*}}{2\rho}\right)^{2}+\varsigma^{2}+\partial_{\rho}P_{ yy}^{*}}, \tag{48a}\] \[c_{s}^{-} =u_{y}+\frac{\partial_{u_{y}}P_{yy}^{*}}{2\rho}-\sqrt{\left(\frac {\partial_{u_{y}}P_{yy}^{*}}{2\rho}\right)^{2}+\varsigma^{2}+\partial_{\rho}P_ {yy}^{*}}. \tag{48b}\]
Analysis of the propagation speed of different eigenmodes shows that acoustic modes will propagate isotropically only in the limit of:
\[\partial_{\rho}P_{xx}^{*}=\partial_{\rho}P_{yy}^{*}, \tag{49}\]
and
\[\partial_{u_{x}}P_{xx}^{*}=\partial_{u_{y}}P_{yy}^{*}. \tag{50}\]
A special solution of these conditions is \(P_{xx}^{*}=P_{yy}^{*}=0\) which corresponds to the polynomial equilibria explicitly enforcing the second-order moment of the equilibrium distribution function, i.e. Eqs. (19) and (20). However these equilibria come with limitations that the entropic equilibrium does not have. This point will be discussed in the next paragraph.
### Stabilization of entropic equilibrium via sound speed
Plugging the entropic equilibrium into the eigenvalues obtained for the general formulation, two non-symmetrical sound propagation speeds are recovered:
\[c_{s}^{e+} =\frac{u_{x}+\varsigma\sqrt{2\sqrt{\left(\frac{u_{x}}{\varsigma }\right)^{2}+1}-1}}{\sqrt{\left(\frac{u_{x}}{\varsigma}\right)^{2}+1}}, \tag{51a}\] \[c_{s}^{e-} =\frac{u_{x}-\varsigma\sqrt{2\sqrt{\left(\frac{u_{x}}{\varsigma }\right)^{2}+1}-1}}{\sqrt{\left(\frac{u_{x}}{\varsigma}\right)^{2}+1}}. \tag{51b}\]
The propagation speeds of these two eigen-modes are shown in Fig. 2 as a function of the flow velocity. The behavior of the entropic sound speed reveals an interesting property of the entropic model, already pointing to (potentially) unconditional linear stability. Having re-written the equivalent Euler system in terms of coupled wave equations, note that for the solver to access the information required to form the solution in time, the numerical domain of dependence of any point in space and time must include the analytical domain of dependence, i.e. the set of initial conditions that have an effect on the exact value of the solution at that point [27]. Simply put, the fastest eigen-modes in the system _cannot_ propagate faster than the lattice links. Considering only the three _physical_ eigen-modes, i.e. \(u_{x}\), \(c_{s}^{+}\) and \(c_{s}^{-}\), one arrives at the following condition on linear stability:
\[\max(|u_{x}|,|c_{s}^{+}|,|c_{s}^{-}|)\leq\frac{\delta r}{\delta t}. \tag{52}\]
For the polynomial equilibria, given that sound speed is constant one recovers the following maximum tolerable velocity:
\[|u_{x}^{\max}|=\frac{\delta r}{\delta t}-\varsigma=0.4226\frac{\delta r}{ \delta t}, \tag{53}\]
which, as will be shown in the next section through stability analyses, is indeed the maximum reachable velocity. For the entropic equilibrium, on the other hand, it is observed that the sound speed self-adjusts as a function of the local velocity to guarantee that Eq. (52) is always satisfied, leading to:
\[|u_{x}^{\max}|=\frac{\delta r}{\delta t}. \tag{54}\]
At the higher/lower end of the velocity spectrum,
\[\lim_{u_{x}\rightarrow\delta r/\delta t}c_{s}^{e+} =\frac{\delta r}{\delta t}, \tag{55a}\] \[\lim_{u_{x}\rightarrow-\delta r/\delta t}c_{s}^{e+} =0, \tag{55b}\] \[\lim_{u_{x}\rightarrow\delta r/\delta t}c_{s}^{e-} =0, \tag{55c}\] \[\lim_{u_{x}\rightarrow-\delta r/\delta t}c_{s}^{e-} =-\frac{\delta r}{\delta t}. \tag{55d}\]
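These limits, and the bound of Eq. (52), can be verified numerically; the short sketch below (ours) evaluates Eqs. (51a) and (51b) in lattice units and confirms that the fastest eigen-mode never exceeds the lattice-link speed \(\delta r/\delta t=1\).

```python
import numpy as np

cs = 1.0 / np.sqrt(3.0)   # lattice sound speed, delta_r = delta_t = 1

def entropic_sound_speeds(u):
    """Acoustic eigen-mode speeds of Eqs. (51a)-(51b)."""
    s = np.sqrt((u / cs) ** 2 + 1.0)
    root = cs * np.sqrt(2.0 * s - 1.0)
    return (u + root) / s, (u - root) / s

u = np.linspace(-0.999, 0.999, 2001)
cp, cm = entropic_sound_speeds(u)

# Condition of Eq. (52): no physical eigen-mode faster than the lattice links
assert np.all(np.maximum.reduce([np.abs(u), np.abs(cp), np.abs(cm)]) <= 1.0 + 1e-12)

# Limits of Eqs. (55a)-(55d)
print(entropic_sound_speeds(np.array([0.999999])))    # ~( 1, 0)
print(entropic_sound_speeds(np.array([-0.999999])))   # ~( 0, -1)
```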
Another point worth noting is that if one were to define the Mach number as the ratio of the flow speed to the propagation speed of the normal modes relative to the flow, i.e.
\[\mathrm{Ma}^{+} =\frac{u_{x}}{c_{s}^{+}-u_{x}}, \tag{56a}\] \[\mathrm{Ma}^{-} =\frac{u_{x}}{c_{s}^{-}-u_{x}}, \tag{56b}\]
the Mach number would evolve as shown in Fig. 3, which in effect means that the entropic equilibrium leads to a scheme that is stable in the limit of \(\mathrm{Ma}\rightarrow\infty\). Note that the deviation in the propagation speed of the acoustic modes scales with the cube of the non-dimensional velocity. This behavior is illustrated in Fig. 4.
Overall the following points could be made based on the Euler-level analyses:
* The entropic equilibrium pressure tensor admits deviations from the Maxwell-Boltzmann pressure tensor that are not isotropic.
* Deviations are negligible for non-dimensional velocities as large as 0.5, which is well above the domain of validity of the weakly compressible flow assumption.
* The deviations scale as \(\propto\left(\frac{u\delta t}{\delta r}\right)^{4}\), see Fig. 1.
* The entropic equilibrium leads to a non-isotropic sound speed.
* In the limit of low non-dimensional flow velocities, this non-isotropic deviation scales out with \(\propto\left(\frac{u\delta t}{\delta r}\right)^{3}\), see Fig. 4.
* The _flow velocity-aware_ nature of the sound speed allows for a system that adjusts its fastest propagating eigen-mode speed so as to always guarantee the condition of Eq. (52) is satisfied, see Fig. 2.
* Defining the Mach number as the ratio of the propagation speed of the shear mode to that of the normal modes, it is observed that the entropic equilibrium can reach \(\text{Ma}^{\pm}\rightarrow\infty\) in the limit of \(u_{x}\rightarrow\pm\delta r/\delta t\), see Fig. 3.

The slightly modified equilibrium pressure also affects dissipation at the Navier-Stokes level. The altered dissipation behavior is discussed in the next section.
## IV Navier-Stokes-level dynamics: dissipation
### 1-D lattice: Bulk viscosity
Figure 2: (Left) Sound speed for entropic, Eqs. (51a) and (51b), and polynomial equilibria, Eqs. (41a) and (41b), as a function of velocity \(u_{x}\). (Right) Comparison of the speed of fastest propagating eigen-modes: (blue lines) polynomial and (red lines) entropic equilibria.
Figure 3: Mach number as a function of speed, \(u_{x}\) as defined in Eqs. (56a) and (56b) respectively in red and blue.
Figure 4: Normalized deviation of entropic equilibrium sound speed as a function of non-dimensional velocity.
To illustrate the impact of the entropic equilibrium on dissipation let us first start with a 1-D system with three discrete velocities. In 1-D there are only acoustic modes, contrary to the 2- and 3-D cases where shear modes are also present. Performing the classical Chapman-Enskog analysis, at order \(\varepsilon^{2}\) the following momentum balance equation is recovered:
\[\partial_{t}^{(2)}\rho u_{x}+\partial_{x}\left(1-\beta\right)\Pi_{xx}^{(1)}=0, \tag{57}\]
where after some algebra the non-equilibrium stress tensor \(\Pi_{xx}^{(1)}\) for any equilibrium pressure of the form:
\[P_{xx}=\rho\varsigma^{2}+P_{xx}^{*}, \tag{58}\]
results in:
\[\Pi_{xx}^{(1)}=-\frac{1}{2\beta}\left[2A\rho\varsigma^{2}\partial_{x}u_{x}+B \partial_{x}\rho\right], \tag{59}\]
with
\[A=\left(1-\frac{3}{2}\frac{u_{x}^{2}}{\varsigma^{2}}-\frac{3}{2\rho\varsigma^ {2}}u_{x}\partial_{u_{x}}P_{xx}^{*}-\frac{\left(\partial_{u_{x}}P_{xx}^{*} \right)^{2}}{2\rho^{2}\varsigma^{2}}-\frac{\partial_{\rho}P_{xx}^{*}}{2\varsigma ^{2}}\right), \tag{60}\]
and
\[B=-3u_{x}\partial_{\rho}P_{xx}^{*}-\frac{\varsigma^{2}}{\rho}\partial_{u_{x}} P_{xx}^{*}-\frac{\partial_{u_{x}}P_{xx}^{*}\partial_{\rho}P_{xx}^{*}}{\rho}-u_{x}^{3}. \tag{61}\]
For the sake of clarity, in the remainder of the article, we will refer to (60) as the effective viscosity and (61) as the compressibility error at the Navier-Stokes level. For the case of the polynomial equilibria, i.e. both second-order and product form, one gets:
\[A=\left(1-\frac{3}{2}\frac{u_{x}^{2}}{\varsigma^{2}}\right), \tag{62}\]
and:
\[B=-u_{x}^{3}, \tag{63}\]
which, neglecting the second term, indicates that the corresponding partial differential equation is only dissipative for:
\[|u_{x}|\leq\varsigma\sqrt{\frac{2}{3}}. \tag{64}\]
Note that this maximum velocity is larger than the one imposed by the CFL condition, i.e. Eq. (53). In the case of the entropic equilibrium:
\[A=1-\frac{3}{2}\frac{u_{x}^{2}}{\varsigma^{2}}+\frac{\left(u_{x}/\varsigma \right)^{2}+3\left(u_{x}/\varsigma\right)^{4}-2\sqrt{\left(u_{x}/\varsigma \right)^{2}+1}+2}{2\left(u_{x}/\varsigma\right)^{2}+2} \tag{65}\]
and:
\[B=0. \tag{66}\]
Clearly, for the entropic equilibrium the effective viscosity \(A\) goes to zero in the limit of \(u_{x}\rightarrow\delta r/\delta t\). The effective viscosities of Eqs. (62) and (65) are compared in Fig. 5. Furthermore, it is interesting to note that while the entropic equilibrium guarantees a positive effective viscosity, it maintains the second-order convergence to the nominal viscosity, i.e. \(A=1\), in the limit of \(u_{x}\delta t/\delta r\to 0\), as illustrated in Fig. 6. Going even further and comparing the compressibility errors, i.e. \(B\), it is observed that the entropic equilibrium, contrary to the polynomial equilibria, does not have such errors. For the polynomial equilibria this error scales out at order three. While two types of error were shown to exist in the Navier-Stokes-level dissipation term, it is still unclear how each term affects the dissipation of the eigen-modes derived in section III. This issue will be discussed in the next section.
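As a quick numerical cross-check of Eqs. (62)-(65), the following sketch (ours, lattice units) evaluates the two effective bulk-viscosity factors and confirms that the entropic one stays non-negative over the whole velocity range while the polynomial one changes sign beyond the bound of Eq. (64).

```python
import numpy as np

cs2 = 1.0 / 3.0  # varsigma^2 in lattice units

def A_poly(u):
    """Effective viscosity factor of Eq. (62) (polynomial equilibria)."""
    return 1.0 - 1.5 * u**2 / cs2

def A_entropic(u):
    """Effective viscosity factor of Eq. (65) (entropic equilibrium)."""
    x2 = u**2 / cs2
    return 1.0 - 1.5 * x2 + (x2 + 3.0 * x2**2 - 2.0 * np.sqrt(x2 + 1.0) + 2.0) / (2.0 * x2 + 2.0)

u = np.linspace(-1.0, 1.0, 2001)
assert np.all(A_entropic(u) >= -1e-12)       # positive over the full velocity range
assert np.isclose(A_entropic(1.0), 0.0)      # vanishes at u -> delta_r/delta_t

u_lim = np.sqrt(2.0 * cs2 / 3.0)             # bound of Eq. (64), ~0.471
assert A_poly(u_lim * 1.01) < 0.0 < A_poly(u_lim * 0.99)
```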
### Analysis of linearized Navier-Stokes equations in the limit of vanishing wave-number
Figure 5: Comparison of the effective bulk viscosity: (blue) polynomial and (black) entropic equilibria.
Figure 6: Scaling of the error in effective 1-D viscosity, i.e. \(1-A\): (blue) polynomial and (black) entropic equilibria.
In the previous section we observed that two types of errors affecting the Navier-Stokes-level dissipation could be identified: one we called the effective viscosity and another one we introduced as the compressibility error. While the role of the former is quite clear, even from its name, the _real_ effect of the latter on viscosity is rather abstract. To clarify the role of each error we will analyze the corresponding linearized equations in the limit of vanishing wave-number, i.e. \(k_{x}\to 0\). For the sake of clarity we will first start with the simple case of polynomial equilibria. To linearize the system of partial differential equations at the Euler+Navier-Stokes level we separate velocity and density into a constant contribution and a perturbation,
\[\rho =\bar{\rho}+\rho^{\prime}, \tag{67a}\] \[u_{x} =\bar{u}_{x}+u_{x}^{\prime}. \tag{67b}\]
Introducing these into the system of partial differential equations and only keeping first-order perturbations we obtain the linearized system:
\[\partial_{t}\rho^{\prime} =-\bar{\rho}\partial_{x}u_{x}^{\prime}-\bar{u}_{x}\partial_{x} \rho^{\prime}, \tag{68a}\] \[\partial_{t}u_{x}^{\prime} =-\bar{u}_{x}\partial_{x}u_{x}^{\prime}-\frac{\varsigma^{2}}{ \bar{\rho}}\partial_{x}\rho^{\prime}+2\nu A\partial_{x}^{2}u_{x}^{\prime}+ \frac{\nu B}{\bar{\rho}^{2}\varsigma^{2}}\partial_{x}^{2}\rho^{\prime}, \tag{68b}\]
where \(B=B(\bar{\rho},\bar{u}_{x})\) and \(A=A(\bar{\rho},\bar{u}_{x})\). Note the last term in Eq. (68b) stemming from the compressibility error. Next we consider the perturbations to be monochromatic plane waves, i.e.
\[\rho^{\prime} =\hat{\rho}\exp\left(\mathrm{i}(\omega t-k_{x}x)\right), \tag{69a}\] \[u_{x}^{\prime} =\hat{u}_{x}\exp\left(\mathrm{i}(\omega t-k_{x}x)\right), \tag{69b}\]
where \(\mathrm{i}=\sqrt{-1}\). Introducing the perturbations into Eqs. (68a) and (68b) we end up with the following system:
\[\omega\hat{\rho} =k_{x}\bar{u}_{x}\hat{\rho}+\bar{\rho}k_{x}\hat{u}_{x}, \tag{70a}\] \[\omega\hat{u}_{x} =k_{x}\bar{u}_{x}\hat{u}_{x}+\frac{\varsigma^{2}}{\bar{\rho}}k_{x}\hat{\rho}+2\mathrm{i}\nu Ak_{x}^{2}\hat{u}_{x}+\mathrm{i}\frac{\nu B}{\bar{\rho}^{2}\varsigma^{2}}k_{x}^{2}\hat{\rho}. \tag{70b}\]
Finally, we solve the corresponding eigen-value problem and Taylor-expand the solutions around \(k_{x}=0\); keeping terms of order \(k_{x}\) and \(k_{x}^{2}\) we get the following eigen-values:
\[\omega_{c_{s}^{+}} =(\bar{u}_{x}+\varsigma)k_{x}+\mathrm{i}\nu\left(A+\frac{B}{2 \bar{\rho}\varsigma^{3}}\right)k_{x}^{2}, \tag{71a}\] \[\omega_{c_{s}^{-}} =(\bar{u}_{x}-\varsigma)k_{x}+\mathrm{i}\nu\left(A-\frac{B}{2 \bar{\rho}\varsigma^{3}}\right)k_{x}^{2}. \tag{71b}\]
Looking at the dissipation terms, i.e. \(\propto k_{x}^{2}\), interesting observations on the role of each type of error can be made: (a) the existence of the compressibility error results in anisotropy in the dissipation of the acoustic modes, causing \(c_{s}^{+}\) and \(c_{s}^{-}\) to dissipate at different rates even though the dispersion is isotropic for polynomial equilibria. (b) The average of the dissipation rates of the two acoustic modes is modulated by the effective viscosity coefficient, \(A\).
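These two effects are easy to confirm numerically. The sketch below (ours) assembles the \(2\times 2\) system obtained from Eqs. (68a)-(68b) with the plane-wave ansatz of Eqs. (69), using the polynomial coefficients \(A\) and \(B\) of Eqs. (62)-(63), and compares the small-\(k_{x}\) dissipation of the two acoustic branches with the closed forms of Eqs. (71a)-(71b).

```python
import numpy as np

cs2 = 1.0 / 3.0
cs = np.sqrt(cs2)
rho0, u0, nu = 1.0, 0.2, 1e-2            # mean state and viscosity (lattice units)

A = 1.0 - 1.5 * u0**2 / cs2              # Eq. (62)
B = -u0**3                               # Eq. (63)

def omega(k):
    """Eigen-frequencies of the linearized system (68) for wave-number k."""
    M = np.array([
        [k * u0, k * rho0],
        [k * cs2 / rho0 + 1j * nu * B * k**2 / (rho0**2 * cs2),
         k * u0 + 2j * nu * A * k**2],
    ])
    return np.sort_complex(np.linalg.eigvals(M))

k = 1e-3
w_minus, w_plus = omega(k)               # sorted by real part: (u0-cs)k, (u0+cs)k
print(w_plus.imag / (nu * k**2), A + B / (2 * rho0 * cs**3))    # ~0.799
print(w_minus.imag / (nu * k**2), A - B / (2 * rho0 * cs**3))   # ~0.841
```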
Extending this analysis to the more general form of the equilibrium as in previous sections the following eigenvalues are recovered:
\[\omega_{c_{s}^{+}} =c_{s}^{+}k_{x}\] \[+\mathrm{i}\nu A\left(1+\frac{\varsigma^{2}\partial_{\bar{u}_{x}} P_{xx}^{*}+B/A}{2\bar{\rho}\varsigma^{2}\sqrt{\zeta^{2}+\partial_{\bar{\rho}}P_{xx}^{*} +(\partial_{\bar{u}_{x}}P_{xx}^{*}/2\bar{\rho})^{2}}}\right)k_{x}^{2}, \tag{72a}\] \[\omega_{c_{s}^{-}} =c_{s}^{-}k_{x}\] \[+\mathrm{i}\nu A\left(1-\frac{\varsigma^{2}\partial_{\bar{u}_{x}} P_{xx}^{*}+B/A}{2\bar{\rho}\sqrt{\zeta^{2}+\partial_{\bar{\rho}}P_{xx}^{*}+( \partial_{\bar{u}_{x}}P_{xx}^{*}/2\bar{\rho})^{2}}}\right)k_{x}^{2}, \tag{72b}\]
where \(c_{s}^{+}\) and \(c_{s}^{-}\) are those derived in Eqs. (48a) and (48b) and \(P_{xx}^{*}=P_{xx}^{*}(\bar{\rho},\bar{u}_{x})\). Plugging in the entropic equilibrium:
\[\omega_{c_{s}^{+}} =c_{s}^{e+}k_{x}\] \[+\mathrm{i}\nu A\left(1+\frac{1-\varsigma\bar{u}_{x}\left(\sqrt{ \left(\frac{\bar{u}_{x}}{\varsigma}\right)^{2}+1}-1\right)}{\sqrt{2\sqrt{ \left(\frac{\bar{u}_{x}}{\varsigma}\right)^{2}+1}-1}}\right)k_{x}^{2}, \tag{73a}\] \[\omega_{c_{s}^{-}} =c_{s}^{e-}k_{x}\] \[+\mathrm{i}\nu A\left(1-\frac{1-\varsigma\bar{u}_{x}\left(\sqrt{ \left(\frac{\bar{u}_{x}}{\varsigma}\right)^{2}+1}-1\right)}{\sqrt{2\sqrt{ \left(\frac{\bar{u}_{x}}{\varsigma}\right)^{2}+1}-1}}\right)k_{x}^{2}, \tag{73b}\]
where \(c_{s}^{e+}\) and \(c_{s}^{e-}\) were derived in Eqs. (51a) and (51b). The interesting point worth noting here is that even though for the entropic equilibrium it was shown that \(B=0\), writing the equations in terms of the eigen-modes of the system one recovers a compressibility-like error term.
Now that we have a closed-form solution for the total dissipation rate of the linearized hydrodynamic equations, denoted \(A^{\prime}\) and defined in Eqs. (73a) and (73b) as:
\[\omega_{c_{s}}=c_{s}^{e}k_{x}+\mathrm{i}\nu A^{\prime}k_{x}^{2}, \tag{74}\]
the well-posedness of the corresponding wave system can be assessed by looking at the sign of the dissipation rates. The results are shown in Fig. 7. It is observed that the entropic equilibrium always remains dissipative while the polynomial equilibrium keeps that property only within a certain range of velocity:
\[\frac{1}{\sqrt{3}}-1\leq\frac{u_{x}\delta t}{\delta r}\leq 1-\frac{1}{\sqrt{3}}, \tag{75}\]
which interestingly enough matches exactly the CFL condition on eigen-modes as shown in Fig. 2.
The next section will discuss the dissipation rates of shear modes.
### Multi-dimensional lattices: Shear viscosity
The multiscale analysis of the previous section can be extended to multiple dimensions, as detailed in Appendix B. Decomposing the non-equilibrium stress tensor into two contributions as:
\[\Pi_{2}^{(1)}=-\frac{1}{2\beta}\left(\mathbf{S}^{*}+\mathbf{D}^{*}\right), \tag{76}\]
where:
\[\mathbf{S}^{*}=\left[\left(\rho\varsigma^{2}\mathbf{I}+\mathbf{P}^{*}\right)\cdot\mathbf{\nabla u}\right]+\left[\left(\rho\varsigma^{2}\mathbf{I}+\mathbf{P}^{*}\right)\cdot\mathbf{\nabla u}\right]^{\dagger}+\mathbf{E}^{s}. \tag{77}\]
Here \(\mathbf{E}^{s}\) gathers the errors specific to the second-order polynomial equilibrium, detailed in Eq. (133). Focusing on the off-diagonal components and following the presentation of the previous section, let us introduce an effective viscosity \(\mathbf{A}^{s}\) such that:
\[S^{*}_{\alpha\beta}+E^{s}_{\alpha\beta}=\rho\varsigma^{2}\left[A^{s}_{\alpha \beta}\partial_{\alpha}u_{\beta}+A^{s}_{\beta\alpha}\partial_{\beta}u_{\alpha} \right]+B^{s}_{\alpha}\partial_{\alpha}\rho+B^{s}_{\beta}\partial_{\beta}\rho. \tag{78}\]
In the case of the product form equilibrium:
\[A^{s}_{\alpha\beta}=A^{s}_{\beta\alpha}=1, \tag{79}\]
while for the second order polynomial:
\[A^{s}_{\alpha\beta}=1-\frac{u_{\alpha}}{\varsigma^{2}}(u_{\alpha}+2u_{\beta}), \tag{80}\]
and the entropic equilibrium, Eq. (18):
\[A^{s}_{\alpha\beta}=1+\frac{P^{*}_{\alpha\alpha}}{\rho\varsigma^{2}}=2\sqrt{ \left(\frac{u_{\alpha}}{\varsigma}\right)^{2}+1}-\left(\frac{u_{\alpha}}{ \varsigma}\right)^{2}-1. \tag{81}\]
The dependence of \(A^{s}_{xy}\) on \(u_{x}\delta t/\delta r\) with \(u_{y}=0\) for all equilibria is illustrated in Fig. 8. The deviations in the entropic equilibrium are shown to scale out at order four, while for the second-order polynomial equilibrium the order is only two. Furthermore, the effective viscosity \(A^{s}_{xy}\) for the entropic equilibrium is positive definite for \(|u_{x}|\in[0,\delta r/\delta t]\), while for the second-order polynomial equilibrium it is only positive for:
\[|u_{x}|\leq\varsigma. \tag{82}\]
This point is illustrated in Fig. 9.
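A short numerical check (ours, lattice units) of Eqs. (79)-(82): evaluating the shear effective-viscosity factors for \(u_{y}=0\) shows that the entropic factor stays positive up to the lattice-link speed, whereas the second-order polynomial one changes sign at \(|u_{x}|=\varsigma\).

```python
import numpy as np

cs2 = 1.0 / 3.0
cs = np.sqrt(cs2)

def As_product(ux, uy=0.0):
    return 1.0                                    # Eq. (79)

def As_poly2(ux, uy=0.0):
    return 1.0 - ux * (ux + 2.0 * uy) / cs2       # Eq. (80)

def As_entropic(ux, uy=0.0):
    x2 = ux**2 / cs2
    return 2.0 * np.sqrt(x2 + 1.0) - x2 - 1.0     # Eq. (81)

ux = np.linspace(-1.0, 1.0, 2001)
assert np.all(As_entropic(ux) >= -1e-12)                  # positive up to |u_x| = delta_r/delta_t
assert As_poly2(cs * 1.01) < 0.0 < As_poly2(cs * 0.99)    # sign change at |u_x| = varsigma, Eq. (82)
```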
Figure 7: Overall dissipation rate of linearized hydrodynamic equations for (left) polynomial and (right) entropic equilibria. Red lines show dissipation of \(c^{+}_{s}\) mode and blue lines that of \(c^{-}_{s}\) mode.
Figure 8: Comparison of scaling of the effective viscosity in shear components of the viscous stress tensor for the (black) entropic and (blue) second-order polynomial equilibria.
Figure 9: Comparison of positivity domain of the effective viscosity in shear components of the viscous stress tensor for the (black) entropic and (blue) second-order polynomial equilibria.
Based on the analysis of the Navier-Stokes-level equations the following observations can be made:
* Contrary to polynomial equilibria, the Navier-Stokes-level viscosity of the entropic discrete equilibrium is positive definite for \(-\delta r/\delta t\leq u_{x}\leq\delta r/\delta t\), see Fig. 5. Polynomial equilibria lose positivity following Eq. (64).
* The deviation in the viscosity tied to compressibility, i.e. \(\propto\partial_{x}\rho\), for the polynomial equilibria scales with \(\propto\left(\frac{u_{x}\delta t}{\delta r}\right)^{3}\), see Eq. (63). The entropic equilibrium on the other hand does not admit such errors, see Eq. (66).
* An analysis of the dissipation of eigen modes of different models reveals that entropic normal eigen modes are indeed subject to a compressibility-like error, see Eqs. (73a) and (73b).
* Compressibility errors cause the two acoustic modes \(c_{s}^{+}\) and \(c_{s}^{-}\) to dissipate at different rates, even in the limit of \(k_{x}\to 0\).
* Following the trend set in previous sections, the polynomial equilibria result in an overall dissipation rate that is only positive for both modes in \(|u_{x}|\in[0\ \frac{\delta r}{\delta t}-\varsigma]\). For the entropic equilibrium on the other hand the overall dissipation rate of the linearized hydrodynamic equations is positive definite for all modes in \(|u_{x}|\in[0\ \delta r/\delta t]\).
* Looking at Fig. 7, it is seen that the anisotropy in dissipation rate of the two acoustic modes is much less pronounced for the entropic equilibrium.
* As for normal modes, the Navier-Stokes level dissipation rate of shear modes in the entropic equilibrium is positive definite. The second-order polynomial equilibrium on the other hand is only positive under the condition of Eq. (82). Note that for a velocity vector with non-zero components in the \(y-\) and \(z-\)directions this condition becomes more restrictive.
* The deviations in the shear dissipation rate scale out with \(\propto\left(\frac{u_{x}\delta t}{\delta r}\right)^{4}\) while for the second order polynomial equilibrium they scale as \(\propto\left(\frac{u_{x}\delta t}{\delta r}\right)^{2}\).
## V Numerical applications
### Linear stability analysis
#### v.1.1 von Neumann spectral analysis: introduction
In the context of the von Neumann linear analysis, the discrete time-evolution system of equations is expanded around a reference state \(\bar{f}_{i}\left(\bar{\rho},\bar{u}\right)\) via a first-order Taylor expansion:
\[f_{i}\approx\bar{f}_{i}+f_{i}^{{}^{\prime}}, \tag{83}\]
where \(f_{i}^{{}^{\prime}}\) is the linear perturbation. Defining the discrete collision operator as:
\[\Omega_{i}=2\beta\left(f_{i}^{\text{eq}}-f_{i}\right), \tag{84}\]
the linearized operator is:
\[\Omega_{i}(f_{i})\approx\Omega_{i}|_{\bar{f}_{i}}+\mathcal{J}_{ij}f_{j}^{{}^{ \prime}}, \tag{85}\]
where \(\mathcal{J}_{ij}\) is the Jacobian of the collision operator evaluated about \(\bar{f}_{j}\), i.e
\[\mathcal{J}_{ij}=\partial_{f_{j}}\Omega_{i}|_{\bar{f}_{j}}. \tag{86}\]
Placing back these expressions into the discrete time-evolution equations one obtains [28]:
\[f_{i}^{{}^{\prime}}\left(\mathbf{r}+\mathbf{c}_{i}\delta t,t+\delta t\right)=\left( \delta_{ij}+\mathcal{J}_{ij}\right)f_{j}^{{}^{\prime}}\left(\mathbf{r},t\right). \tag{87}\]
Detailed expression for the Jacobian of the entropic equilibrium is given in Appendix C. The last step of the von Neumann analysis is to assume that perturbations \(f_{i}^{\prime}\) are monochromatic plane waves of the form:
\[f_{i}^{\prime}=F_{i}\exp{[\mathrm{i}(\mathbf{k}\cdot\mathbf{r}-\omega_{i}t)]}, \tag{88}\]
where \(F_{i}\) is the wave amplitude, \(\mathrm{i}\) is the imaginary unit, \(||\mathbf{k}||=k\) is the wave-number, and \(\omega\) is the complex time frequency of the wave. \(k\) is related to the wave-length of \(f_{i}^{\prime}\), whereas \(\Im(\omega)\) and \(\Re(\omega)\) are related to its attenuation and propagation speed. Introducing these perturbations into Eq. (87) one obtains the following eigenvalue problem of size \(Q\) :
\[\mathbf{MF}=\exp{(-\mathrm{i}\omega_{i})\mathbf{F}}, \tag{89}\]
where \(\mathbf{F}\) is the eigenvector composed of all amplitudes. \(\mathbf{M}\) is the matrix associated to Eq. (87). This matrix can be expressed as :
\[\mathbf{M}=\mathbf{E}\left[\mathbf{\delta}+\mathcal{J}\right], \tag{90}\]
with
\[E_{ij}=\exp[-\mathrm{i}(\delta t\mathbf{c}_{i}\cdot\mathbf{k})]\delta_{ij}. \tag{91}\]
Note that the matrix \(\mathbf{M}\) and the eigenvalue problem (89) depend on the mean flow \((\bar{\rho},||\bar{\mathbf{u}}||)\), the wave-number (\(k_{x}\) and \(k_{y}\)) and the relaxation frequency \(\beta\). This means that the eigenvalue problem needs to be solved for each set of these parameters to obtain the corresponding values of \(\Re(\omega)\) and \(\Im(\omega)\), from which the spectral properties follow.
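As an illustration of the procedure, the sketch below (ours) carries out the analysis for the 1-D D1Q3 LBGK scheme with the second-order polynomial equilibrium: the Jacobian of Eq. (86) is obtained by finite differences, the matrix \(\mathbf{M}\) of Eqs. (90)-(91) is assembled for a set of wave-numbers, and the scheme is declared linearly stable when no eigenvalue leaves the unit circle. The same loop with a different equilibrium function reproduces, in 1-D, the kind of scan used for Fig. 10.

```python
import numpy as np

c = np.array([-1.0, 0.0, 1.0])
w = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])
cs2 = 1.0 / 3.0

def f_eq_poly2(rho, u):
    cu = c * u
    return w * rho * (1.0 + cu / cs2 + cu**2 / (2.0 * cs2**2) - u**2 / (2.0 * cs2))

def collision(f, beta, f_eq):
    """Single-relaxation-time collision operator, Eq. (84)."""
    rho = f.sum()
    u = (c * f).sum() / rho
    return 2.0 * beta * (f_eq(rho, u) - f)

def jacobian(f_bar, beta, f_eq, eps=1e-7):
    """Finite-difference Jacobian of the collision operator, Eq. (86)."""
    J = np.zeros((3, 3))
    base = collision(f_bar, beta, f_eq)
    for j in range(3):
        fp = f_bar.copy()
        fp[j] += eps
        J[:, j] = (collision(fp, beta, f_eq) - base) / eps
    return J

def is_stable(u_bar, nu, f_eq, n_k=128, tol=1e-8):
    beta = 1.0 / (2.0 * nu / cs2 + 1.0)            # Eq. (2) with delta_t = 1
    J = jacobian(f_eq(1.0, u_bar), beta, f_eq)
    for k in np.linspace(1e-3, np.pi, n_k):
        E = np.diag(np.exp(-1j * c * k))           # Eq. (91)
        M = E @ (np.eye(3) + J)                    # Eq. (90)
        if np.abs(np.linalg.eigvals(M)).max() > 1.0 + tol:
            return False
    return True

nu = 1e-4
u_max = 0.0
for u in np.linspace(0.0, 0.99, 100):
    if not is_stable(u, nu, f_eq_poly2):
        break
    u_max = u
print(f"maximum stable velocity (2nd-order polynomial, nu = {nu}): ~{u_max:.2f}")
```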
#### v.1.2 Linear stability domain
To assess the linear stability domain via the von Neumann approach, for a given set of properties \((\bar{\rho},||\bar{\mathbf{u}}||,\nu\delta t/\delta r^{2})\) the eigen-value problem of Eq. (89) is solved for \(k_{x}\in[0\;\pi]\), \(k_{y}\in[0\;\pi]\) and \(\theta=\cos^{-1}\left(\frac{u_{x}}{||\mathbf{u}||}\right)\in[0\;\pi/2]\). The solver is considered to be linearly stable for the given set of parameters if it guarantees a negative attenuation rate for all modes and all values of \(k_{x}\), \(k_{y}\) and \(\theta\). We first quantify the stability domain of the discrete solvers by finding the maximum non-dimensional velocity for which linear stability is guaranteed over a range of non-dimensional viscosities. Here we consider \(\nu\delta t/\delta r^{2}\in\{10^{-6},5\times 10^{-6},10^{-5},5\times 10^{-5},10^{-4},5\times 10^{-4},10^{-3},5\times 10^{-3},10^{-2},5\times 10^{-2},0.1,0.5,1,1.1,1.2\}\). The stability domains of the three different equilibria, namely entropic, second-order polynomial and product form, are shown in Fig. 10.
It can be noted that the entropic equilibrium has, by far, the widest domain of stability, guaranteeing unconditional stability unaffected by the value of the non-dimensional viscosity [29]. For the second-order polynomial and product form equilibria, on the other hand, in the limit of vanishing non-dimensional viscosities the solvers become unconditionally unstable. On the other end of the spectrum, i.e. for large non-dimensional viscosities, the maximum speed encounters a threshold which is exactly the one shown in Eq. (53).
To better illustrate the stability behavior of the different equilibria, and especially to better show the unconditional linear stability of the entropic equilibrium, directional (orientation of velocity vector) stability domains are shown in Figs. 11, 12 and 13.
For the entropic equilibrium, the linear stability domain covers the entire lattice and confirms the observations from the characteristics analysis, i.e. unconditional stability. For the second-order polynomial, at large non-dimensional viscosities the linear stability domain is close to isotropic and limited by the condition of Eq. (53). The product form equilibrium exhibits a directional behavior similar to the entropic one for large non-dimensional viscosities, however with a much smaller tolerated maximum velocity. The largest non-dimensional velocity is attained when the velocity vector is oriented in the direction of one of the diagonal links of the lattice,
\[||\mathbf{u}||=\sqrt{2}\left(\frac{\delta r}{\delta t}-\varsigma\right)=0.598\frac {\delta r}{\delta t}. \tag{92}\]
#### v.1.3 Spectral dispersion/dissipation
In this section we mainly aim at confirming two observations made via theoretical analyses of the hydrodynamic equations: (a) local-velocity awareness of the entropic sound speed and (b) the effect of different types of error on the effective normal dissipation rate. To that end we will compare results from von Neumann analysis of the full discrete system to previously obtained analytical expressions.
Figure 11: Directional stability domain of the entropic equilibrium for two different non-dimensional viscosities: (dashed black) \(10^{-5}\)and (dotted red) \(0.1\).
Figure 12: Directional stability domain of the polynomial equilibrium for two different non-dimensional viscosities: (dashed black) \(10^{-5}\)and (dotted red) \(0.1\).
Figure 10: Comparison of linear stability domain of different equilibrium distribution functions: (red with diamond markers) second order polynomial, (blue with square markers) product form and (black with circular markers) entropic.
First we look into the dispersion of normal modes. The spectral propagation speed is computed as \(\Im(\omega)/k_{x}\). The calculations have been carried out for velocities \(u_{x}\delta t/\delta r\in[-1\;1]\) and are compared to hydrodynamic-limit predictions from the characteristics analysis. The results are shown in Fig. 14.
The comparison shows that the analytical expressions derived in Eqs. (51a) and (51b) exactly match the numerical results from spectral analysis at \(k_{x}\to 0\).
The second parameter to be validated in this section is the effective dissipation rate of the normal modes. The dissipation rate is extracted as \(-\Re(\omega)\delta t/k_{x}^{2}\delta r^{2}\) from the full spectral analysis and compared to the analytical expressions derived in Eqs. (73a) and (73b). The calculations have been carried out for velocities \(u_{x}\delta t/\delta r\in[-1\;1]\) and are compared to hydrodynamic-limit predictions in Fig. 15.
The comparisons show exact agreement and confirm the analytical results derived in previous sections.
### Correction of leading-order bulk viscosity error
#### v.2.1 Rescaling bulk viscosity
In the previous section, it was shown that at the Navier-Stokes level the bulk viscosity had two Galilean-variant errors, see Eqs. (60) and (61). The first term, i.e. Eq. (60), can be corrected by redefining the relaxation frequency as:
\[\beta=\frac{\delta t}{\frac{2\nu}{A\varsigma^{2}}+\delta t}, \tag{93}\]
which, for instance, for polynomial equilibria reduces to:
\[\beta=\frac{\delta t}{\frac{2\nu}{(\varsigma^{2}-3u_{x}^{2}/2)}+\delta t}. \tag{94}\]
In doing so, and before checking the effect of this correction two points have to be considered: (a) The effect of this redefinition of the relaxation rate on the second error term, i.e. Eq. (61), and (b) changes in the now-velocity-dependent and variable relaxation rate. The changes induced in the relaxation rate by this correction are illustrated in Fig. 16.
The key observation here is that for the polynomial equilibria the redefined relaxation rate grows much faster than for the entropic equilibrium, and that \(\beta\in[0\;1]\) is not always guaranteed. For this condition to be satisfied one must have:
\[\frac{2\nu}{\varsigma^{2}A}\geq 0, \tag{95}\]
Figure 14: Speed of normal modes for the entropic equilibrium as a function of velocity, (black markers) as obtained from the spectral analysis of the full discrete system and (red line) from the characteristics analysis, i.e. Eqs. (51a) and (51b). The insert on the right hand side shows (black) the typical discrete spectrum. Red lines show dissipation of \(c_{s}^{+}\) mode and blue lines that of \(c_{s}^{-}\) mode.
Figure 13: Directional stability domain of the product form equilibrium for two different non-dimensional viscosities: (dashed black) \(10^{-5}\)and (dotted red) \(0.1\).
Figure 15: Effective overall viscosity, \(A^{\prime}\), for the entropic equilibrium as a function of velocity, (black markers) as obtained from the spectral analysis of the full discrete system and (red line) from the corresponding Navier-Stokes linearized equations, i.e. Eqs. (73a) and (73b). The insert on the right hand side shows (black) the typical discrete spectrum. Red lines show dissipation of \(c_{s}^{+}\) mode and blue lines that of \(c_{s}^{-}\) mode.
which boils down to a condition of positivity of \(A\) since both \(\nu\) and \(\varsigma\) are positive. As shown previously in Fig. 5 this condition is not always satisfied for the polynomial and product form equilibria. For the entropic equilibrium on the other hand, because \(A\) is positive definite, the redefined relaxation rate satisfies \(\beta\in[0\;1]\) for \(|u_{x}|<\delta r/\delta t\). The effect of this correction will be checked via numerical simulations in the next section.
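The following small sketch (ours, lattice units with \(\delta t=1\)) evaluates the rescaled relaxation rate of Eq. (93) for both effective-viscosity factors and flags where it leaves the admissible interval \([0\;1]\), illustrating the remark above.

```python
import numpy as np

cs2 = 1.0 / 3.0
nu = 0.1  # non-dimensional viscosity, as used for Fig. 16

def A_poly(u):
    return 1.0 - 1.5 * u**2 / cs2                      # Eq. (62)

def A_entropic(u):
    x2 = u**2 / cs2
    return 1.0 - 1.5 * x2 + (x2 + 3.0 * x2**2 - 2.0 * np.sqrt(x2 + 1.0) + 2.0) / (2.0 * x2 + 2.0)

def beta_corrected(u, A):
    return 1.0 / (2.0 * nu / (A(u) * cs2) + 1.0)       # Eq. (93) with delta_t = 1

u = np.linspace(-0.99, 0.99, 1981)
for name, A in (("polynomial", A_poly), ("entropic", A_entropic)):
    b = beta_corrected(u, A)
    ok = (b >= 0.0) & (b <= 1.0)
    print(f"{name}: beta in [0, 1] for {ok.mean() * 100:.1f}% of the velocity range")
```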
#### v.2.2 Validation via dissipation of isothermal pressure waves
In the limit of small velocities and density variations, the contributions from non-linear terms in the Navier-Stokes equation can be ignored and acoustic waves can be modeled through the linear theory including losses, which reads [30]:
\[\partial_{t}^{2}u_{x}=c_{s}^{2}\partial_{x}^{2}u_{x}+\left(\frac{D+1}{D}\nu+ \frac{\eta}{\rho}\right)\partial_{t}\partial_{x}^{2}u_{x}, \tag{96}\]
where the bulk viscosity \(\eta\) in the context of the classical LB model is fixed at \(\frac{D-1}{D}\rho\nu\). The exact solution of this equation can be shown to be of the form \(u_{x}\propto\exp\left(\mathrm{i}k_{x}x+\sigma t\right)\) where:
\[\sigma=-\left(\frac{2}{3}\nu+\frac{1}{2}\eta\right)k_{x}^{2}+\mathrm{i}k_{x}c_{s}\sqrt{1-\left(\frac{D+1}{D}\nu+\frac{\eta}{\rho}\right)^{2}\frac{k_{x}^{2}}{c_{s}^{2}}}. \tag{97}\]
The total energy, here defined as \(E=\int_{x}\frac{1}{2}u^{2}+c_{s}^{2}\left(\rho-\rho_{0}\right)dx\) will therefore decay in time as:
\[E(t)=E(0)\exp\left[-\left(\frac{D+1}{D}\nu+\frac{\eta}{\rho}\right)k_{x}^{2}t\right], \tag{98}\]
with \(k_{x}=\frac{2\pi}{L}\), where \(L\) is the wavelength. To evaluate the effective bulk viscosity of the considered numerical schemes, we initialize a wave-function in a domain of size \(N_{x}\) with periodic boundary conditions as:
\[\rho=\rho_{0}+\delta\rho\sin\left(\frac{2\pi x}{N_{x}}\right), \tag{99}\]
for different values of the initial velocity \(U_{0}\in[-0.3\;0.3]\). The waves are then left to evolve, and the energy is stored at each time-step. The resulting time-evolution of the energy is then fitted with a function of the form of Eq. (98) to extract the effective viscosity. This process is illustrated in Fig. 17, with results obtained from a sample simulation.
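A sketch of the fitting step (ours): assuming the exponential decay form used here (cf. Eq. (98)), the decay rate is obtained from a log-linear least-squares fit of the recorded energy, and the effective viscous coefficient follows after dividing by \(k_{x}^{2}\).

```python
import numpy as np

def effective_dissipation(energy, dt, k_x):
    """Fit E(t) = E(0) exp(-Gamma t) and return Gamma / k_x^2, cf. Eq. (98)."""
    t = np.arange(len(energy)) * dt
    gamma = -np.polyfit(t, np.log(energy), 1)[0]   # slope of log E(t)
    return gamma / k_x**2

# Synthetic example: a decay corresponding to an effective coefficient of 0.05
k_x = 2.0 * np.pi / 128.0
t = np.arange(20000)
energy = 1e-3 * np.exp(-0.05 * k_x**2 * t)
print(effective_dissipation(energy, dt=1.0, k_x=k_x))   # ~0.05
```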
Simulations were run at different convection speeds for both the entropic and product form equilibria, with and without correction of the relaxation frequencies. For the entropic equilibrium the corrected relaxation frequencies were computed according to Eq. (93), while for the product form Eq. (94) was used. The obtained results are illustrated in Fig. 18. Based on these results a number of observations can be made. The first one is that in their non-corrected forms, the product form equilibrium induces larger deviations in the effective viscosity than the entropic equilibrium. Second, the rescaling of the relaxation frequency does reduce the error in the effective viscosity; however, as expected, it does not fully eliminate it.
Figure 16: Variation in the redefined relaxation rate, Eq. (93), as a function of local velocity: (blue) polynomial equilibrium and (black) entropic equilibrium. Here the non-dimensional viscosity is taken to be \(\nu\delta t/\delta r^{2}=0.1\).
Figure 17: Evolution of total energy as a function of time as obtained from two simulations using the entropic equilibrium, one with the regular definition of \(\beta\) from Eq. (2) and one with the redefined \(\beta\) from Eq. (93). \(\nu\delta t/\delta r^{2}\) is set to \(0.05\).
Finally, the outcomes of this last section can be summarized as follows:
* Linear spectral analysis confirmed that the entropic equilibrium guarantees unconditional stability for any wave-number \(\mathbf{k}\) for \(|u_{\alpha}|\leq\delta r/\delta t\), see Fig. 11. The family of polynomial equilibria, both second order and product form, have a more limited linear stability domain and are affected by the choice of the non-dimensional viscosity, see Figs. 12, 13 and 10.
* The errors in effective viscosity, in 1-D, from Eq. (60) can be, to a great extent, reduced via a simple rescaling of the relaxation frequency as given by Eq. (93). This is confirmed by Fig. 18.
* The rescaling strategy applied to the entropic equilibrium has a number of advantages over its product form counterpart: for the product form it can result in \(\beta\notin[0\;1]\), see Eq. (95), while the corrected entropic model exhibits much less pronounced deviations as compared to the product form.
## VI Conclusions and discussion
Construction of discrete distribution functions in reduced kinetic models, whether it be moments-based methods such as the Grad system or discrete velocity methods such as the lattice Boltzmann method, is crucial to both the accuracy and the stability of the system. The classical approach, used for instance for the polynomial equilibria in the lattice Boltzmann method, consists in matching the moments of interest of the equilibrium to their continuous counterparts. The entropic construction provides an alternative that satisfies the target moment constraints within the operating range and features an asymptotically vanishing sound speed relative to the flow which, as demonstrated in detail here, leads to unconditional linear stability. Our detailed analysis of the entropic construction further showed that this asymptotic regularization also leads to positive-definite Navier-Stokes-level dissipation rates. A simple local approach was also introduced to further reduce the deviations in the effective viscosity of the model. The correction was also shown to maintain positive-definiteness only for the entropic equilibrium.
###### Acknowledgements.
This work was supported by the European Research Council (ERC) Advanced Grant 834763-PonD. Computational resources at the Swiss National Supercomputing Centre (CSCS) were provided under grant s1066.
|
2310.06076
|
CFPB Consumer Complaints Analysis Using Hadoop
|
Consumer complaints are a crucial source of information for companies,
policymakers, and consumers alike. They provide insight into the problems faced
by consumers and help identify areas for improvement in products, services, and
regulatory frameworks. This paper aims to analyze Consumer Complaints Dataset
provided by Consumer Financial Protection Bureau (CFPB) and provide insights
into the nature and patterns of consumer complaints in the USA. We begin by
describing the dataset and its features, including the types of complaints,
companies involved, and geographic distribution. We then conduct exploratory
data analysis to identify trends and patterns in the data, such as the most
common types of complaints, the companies with the highest number of
complaints, and the states with the most complaints. We have also performed
descriptive and inferential statistics to test hypotheses and draw conclusions
about the data. We have investigated whether there are significant differences
in the types of complaints or companies involved based on geographic location.
Overall, our analysis provides valuable insights into the nature of consumer
complaints in the USA and helps stakeholders make informed decisions to improve
the consumer experience.
|
Dhwani Vaishnav, Manimozhi Neethinayagam, Akanksha S Khaire, Mansi Vivekanand Dhoke, Jongwook Woo
|
2023-10-09T18:33:06Z
|
http://arxiv.org/abs/2310.06076v1
|
# CFPB Consumer Complaints Analysis Using Hadoop
###### Abstract
Consumer complaints are a crucial source of information for companies, policymakers, and consumers alike. They provide insight into the problems faced by consumers and help identify areas for improvement in products, services, and regulatory frameworks.
This paper aims to analyze Consumer Complaints Dataset provided by Consumer Financial Protection Bureau (CFPB) and provide insights into the nature and patterns of consumer complaints in the USA. We begin by describing the dataset and its features, including the types of complaints, companies involved, and geographic distribution. We then conduct exploratory data analysis to identify trends and patterns in the data, such as the most common types of complaints, the companies with the highest number of complaints, and the states with the most complaints. We have also performed descriptive and inferential statistics to test hypotheses and draw conclusions about the data. We have investigated whether there are significant differences in the types of complaints or companies involved based on geographic location. Overall, our analysis provides valuable insights into the nature of consumer complaints in the USA and helps stakeholders make informed decisions to improve the consumer experience.
Dhwani Vaishnav, Manimozhi Neethinayagam, Akanksha S Khaire, Mansi Vivekanand Dhoke, Jongwook Woo Department of Information Systems, California State University Los Angeles
{dvaishn2, mmeethi, akhaire3, mdhoke, jwoo5}@calstatela.edu
## 1 Introduction
The aim of this paper is to analyze consumer complaints in the USA using the Big Data platforms Hadoop and Hive, specifically focusing on finance-related issues faced by consumers of financial institutions. We utilized the Consumer Financial Protection Bureau dataset, provided by the U.S. government agency dedicated to consumer protection. The dataset was chosen because it provides data on finance-related issues faced by consumers in the USA and is made available by a U.S. government agency dedicated to consumer protection in banks and financial institutions. By analyzing this dataset, we can gain valuable insights into consumer complaints and identify trends, patterns, and relationships that can help improve consumer protection and financial services. Additionally, the dataset is quite large, at 3.6 GB, making it suitable for a big data platform. Our analysis covers the financial organizations and services receiving the most complaints, a year-on-year complaint growth rate analysis, and an overall analysis of consumer sentiment. As California residents, we have also focused on complaint statistics for the state of California. The results of this analysis provide valuable information on consumer complaint trends and issues, which can be used to improve the customer experience in the financial industry. The Consumer Financial Protection Bureau (CFPB) dataset used in this analysis is publicly accessible and intended for public use.
## 2 Related Work
In recent years, there has been a growing interest in analyzing the Consumer Financial Protection Bureau (CFPB) dataset to better understand consumer complaints in the financial industry. One such related work is a study by Chen and Zhang [1] that focuses on the impact of the Dodd-Frank Wall Street Reform and Consumer Protection Act on consumer financial outcomes. They utilized the CFPB database to examine consumer complaints before and after the implementation of the Act, finding that the Act had a positive impact on consumer financial outcomes by reducing complaints related to predatory lending, mortgage servicing, and debt collection. Another example is a study by Rupesh et al. [2] that analyzes the CFPB data to identify patterns and trends in mortgage, debt collection, and credit reporting complaints. Their analysis, performed using Tableau Software and IBM Watson Analytics, found that complaints were constantly increasing between 2011 and June 2016, with the largest number of complaints in California involving credit card disputes. Another example of a dataset analysis using the Consumer Financial Protection Bureau (CFPB) dataset was conducted by P. Van Leuven, K. Miller, and S. Spitzer [3]. In their research, they aimed to identify patterns and relationships in consumer complaints regarding mortgage loans, and to evaluate the effectiveness of the CFPB's complaint handling process. Their insights provide an analysis of complaints received in 2021, categorized by product and service, geographic region, and special population such as servicemembers and older consumers. They also evaluated the timeline and accuracy of the CFPB's complaint handling process by comparing the results of their analysis to the CFPB's official reports. The study found that the most common issues reported by consumers were related to loan modification and servicing. This study provides valuable insights into consumer complaints related to mortgage loans and the effectiveness of the CFPB's complaint handling process.
In contrast to these related works, our analysis focused on all U.S. business complaint data from 2021 to 2023, plotting year-on-year growth in complaints, and filtering CFPB complaint data based on the highest-receiving financial organizations, as well as only including complaints from California businesses. We also analyzed overall sentiment in
complaints and used Ngram Text Processing in the consumer narrative section to obtain a set of frequently used words for further analysis. We visualized the data using Tableau Software and, for sentiment analysis, Excel Power Map. Overall, our analysis provides a unique perspective on the CFPB dataset and offers valuable insights into consumer complaints and trends in the financial industry.
## 3 Specifications
The Consumer Complaint Database is a collection of complaints about consumer financial products and services that the Consumer Financial Protection Bureau (CFPB) receives from consumers. The dataset contains detailed information about each complaint, including the date of submission, the consumer's zip code, the type of financial product or service being complained about, and the nature of the complaint. The dataset is continuously updated; as of the date we downloaded it, it was 3.6 GB in size and contained complaint data from 2011 to March 2023.
Below Table 1 shows files and size of the files from dataset.
\begin{tabular}{|l|l|} \multicolumn{2}{l}{_Table 1 Data Specification_} \\ \hline Data Set Size & 3.6 GB \\ \hline Number of files & 1 \\ \hline Content Format & JSON \\ \hline \end{tabular}
Table 2 below shows the specification of the Oracle cluster and the Hadoop setup used for our project.
\begin{tabular}{|l|l|} \multicolumn{2}{l}{_Table 2 H/W Specification_} \\ \hline Number of nodes & 5 (2 master nodes, 3 worker nodes) \\ \hline CPU speed & 1995.312 MHz \\ \hline Storage & 390 GB \\ \hline \end{tabular}
## 4 Implementation Flowchart
Initially, the raw dataset, which comprises the details of consumer complaints from the CFPB platform, was downloaded from data.gov.
The whole process of data manipulation is shown in the _Figure 1 Architecture Flow Chart_.
The dataset is available in two different formats - CSV & JSON. We downloaded the dataset in JSON format and uploaded it to the Hadoop File System. After that, HiveQL was used as the querying language to create the table schemas, clean the data, create a summary table, and export the results. Once the output files had been downloaded in CSV/TSV format, we used Excel's 3D map and Tableau to obtain the visualizations.
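A minimal HiveQL sketch of this step is given below. The table name `complaints_raw`, the column names, and the HDFS path are illustrative assumptions rather than the exact schema of our pipeline, and the JSON SerDe class may vary by Hive distribution.

```
-- Hypothetical sketch: external table over the raw CFPB JSON files in HDFS.
-- Column names follow the public CFPB export but are illustrative here.
CREATE EXTERNAL TABLE complaints_raw (
  date_received            STRING,
  product                  STRING,
  issue                    STRING,
  company                  STRING,
  state                    STRING,
  submitted_via            STRING,
  company_response         STRING,
  complaint_what_happened  STRING
)
ROW FORMAT SERDE 'org.apache.hive.hcatalog.data.JsonSerDe'
LOCATION '/user/project/cfpb/raw';

-- Summary table later exported to CSV/TSV for Tableau and Excel.
CREATE TABLE complaints_by_company STORED AS ORC AS
SELECT company, product, COUNT(*) AS complaint_count
FROM complaints_raw
GROUP BY company, product;
```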
## 5 Data Cleaning
Raw files were uploaded and stored in HDFS and then loaded into tables using the Beeline client. The dataset does not contain many NULL or missing values, so it did not require extensive cleaning. However, it contains the complaint narratives written by consumers and therefore includes many special characters. Additionally, since this dataset concerns financial services, confidential and secure customer details are not made public and are masked with 'XXXX' wherever required. To handle the special characters and remove the 'XXXX' masks from the 'customer narrative' field, data cleaning was conducted using regular expressions.
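For illustration, this cleaning step can be expressed with Hive's `regexp_replace` function. The patterns below and the `complaint_what_happened` column are assumptions for the sketch, not the exact expressions used in our pipeline.

```
-- Hypothetical sketch: strip 'XXXX'-style redaction masks and special characters
-- from the consumer narrative before further text processing.
CREATE TABLE complaints_clean AS
SELECT
  date_received,
  company,
  product,
  state,
  regexp_replace(
      regexp_replace(complaint_what_happened, 'X{2,}', ' '),  -- drop redaction masks
      '[^a-zA-Z0-9 ]', ' ')                                   -- drop special characters
    AS narrative_clean
FROM complaints_raw
WHERE complaint_what_happened IS NOT NULL;
```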
## 6 Analysis and Visualization
After data cleaning and preparation, the files were extracted into the BI tools Tableau and Excel. We used different interactive maps to show statistics based on total complaints, the companies receiving those complaints, and the services having problems according to the complaints. We carried out two different types of sentiment analysis for the period 2021 to 2023 to determine the overall polarity of the complaints for each state of the United States.
### 6.1 3D Map in Excel
The first visualization, Figure 2 Sentiment Analysis, is a 3D map made in Excel; it is an animated map with a one-year timeline, April 2022 to April 2023. This visualization uses a bubble chart to represent each state on a map, with the size of the bubble indicating the sentiment count (i.e., number of complaints) in that state. The layers with different colors of the bubbles indicate the sentiment (positive, negative, or neutral). The map is arranged by state and the time element is represented by the month of the received date, allowing us to analyze sentiment trends over time. This visualization helps identify the states with the most complaints for a particular sentiment, and the 3D map feature allows for interactive viewing and analysis from different angles. Most states have more negative sentiment values than positive, indicating that consumers in these states have experienced more negative outcomes than positive ones. However, for states like Texas and California, the distribution leans more toward the positive, which is an interesting finding. This visualization is in video format; by playing the
Figure 1: Architecture Flow Chart
video, it is clear that the bars grow faster after September 2022.
### 6.2 Tableau
The dashboard in the Figure 3 Overall Complaints Statistics displays the overall statistics of complaints registered in the USA. Using HIVE queries, we found out which company had the most complaints registered against it, and our analysis revealed that EQUIFAX, INC. has the highest number of complaints filed against it. We used a bubble chart to show the issues faced by consumers. The product Credit Reporting received the most complaints from consumers, accounting for 80.19%. There are several mediums available for registering complaints; however, approximately 86% of complaints were received through the CFPB website.
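The HiveQL aggregations behind these dashboard figures look roughly like the following sketch, reusing the illustrative table and column names from the earlier examples.

```
-- Companies receiving the most complaints nationwide.
SELECT company, COUNT(*) AS complaint_count
FROM complaints_raw
GROUP BY company
ORDER BY complaint_count DESC
LIMIT 10;

-- Share of complaints by submission medium.
SELECT submitted_via,
       complaint_count,
       ROUND(100.0 * complaint_count / SUM(complaint_count) OVER (), 2) AS pct_of_total
FROM (
  SELECT submitted_via, COUNT(*) AS complaint_count
  FROM complaints_raw
  GROUP BY submitted_via
) medium_counts;
```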
The dashboard in the next Figure 4 Year on year Complaints Statistics was created in Tableau. It contains a tree map displaying the number of complaints received per quarter from Q1 2021 to Q1 2023. The % difference in complaint count compared to the previous quarter allows us to see how the number of complaints is changing over time: a positive % difference indicates an increase in complaints compared to the previous quarter, while a negative % difference indicates a decrease. This provides a more nuanced understanding of the data, showing the rate of change over time. The dashboard also contains an area chart representing the gradual increase in complaints over the quarters, from 117K to 289K.
After conducting a spatio-temporal analysis using HIVE queries, we were able to determine which state had the highest volume of customer complaints. According to our findings, shown below in Figure 5 State wise distribution of complaints, FLORIDA had the largest number of complaints throughout the United States, with TEXAS coming in second and CALIFORNIA taking third place. In contrast, WYOMING had the lowest number of complaints recorded nationwide.1
Footnote 1: The minor outlying islands are not considered in the list of the states
Figure 4: Year on year Complaints Statistics
Figure 5: State wise distribution of complaints
Figure 3: Overall Complaints Statistics
Figure 2: Sentiment Analysis
Despite Florida having the highest number of consumer complaints, our curiosity as California residents led us to analyze our own state. As shown in Figure 6 California Complaints Statistics, our analysis revealed that consumers in California registered 166,725 complaints. We ran HIVE queries to determine which company received the maximum number of complaints and found that EQUIFAX, INC. received the most complaints in California as well. We used a heat map to show this finding. We then analyzed the data further to understand which product drew so many complaints and found the interesting result that 'Credit Reporting' received 80.59% of complaints, shown in a pie chart, which is almost 8 times higher than other products. Additionally, regarding the actions taken by the company on the registered complaints, we identified that EQUIFAX closed 91.68% of its complaints with an explanation2 and 0.2% of complaints with non-monetary relief.
Footnote 2: For the complaints ‘Closed with explanation’ under the ‘Public Response’ by EQUIFAX, we do not have any data on whether the complaints were closed with monetary or non-monetary benefits.
Through our analysis we found that EQUIFAX, INC. received the maximum number of complaints throughout the USA, so we performed NGRAM TEXT PROCESSING on the Customer Narrative section to understand why there are so many complaints. We used a 4-word NGRAM because bigrams and trigrams did not yield any meaningful insights, as a lot of confidential detail is hidden with "XXXX". Our analysis revealed that "_Victim of Identity Theft_" appears frequently in the list. EQUIFAX, INC. needs to address this issue to keep its customers satisfied. The result is shown below in _Figure 7 NGram Snippet_.
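Hive provides the `sentences()` and `ngrams()` functions for exactly this kind of frequency analysis. A sketch of the 4-gram query, again with the illustrative table and column names used above, might look as follows.

```
-- Hypothetical sketch: top 25 most frequent 4-grams in EQUIFAX complaint narratives.
SELECT explode(
         ngrams(
           sentences(lower(narrative_clean)),  -- tokenize the cleaned narrative
           4,                                  -- n-gram length
           25                                  -- number of top n-grams to estimate
         )
       ) AS top_4grams
FROM complaints_clean
WHERE upper(company) LIKE 'EQUIFAX%';
```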
## 7 Conclusion
In conclusion, the above analysis highlights the significant impact that EQUIFAX, INC. has had on its customers. The large number of complaints registered by customers, particularly in Florida, Texas, and California, indicates a widespread problem with the company's credit reporting product. The fact that identity theft was the primary reason for customer complaints underscores the importance of safeguarding personal information in today's digital age. Moreover, the negative sentiment expressed in the general sentiment analysis suggests that customers have had overwhelmingly negative experiences with the company. Finally, the high proportion of complaints received through the website highlights the need for effective online customer support channels. Overall, this analysis emphasizes the need for EQUIFAX, INC. and other companies to prioritize their customers' needs and concerns to ensure better customer satisfaction.
|
2310.09345
|
A Unified Bayesian Framework for Modeling Measurement Error in
Multinomial Data
|
Measurement error in multinomial data is a well-known and well-studied
inferential problem that is encountered in many fields, including engineering,
biomedical and omics research, ecology, finance, and social sciences.
Surprisingly, methods developed to accommodate measurement error in multinomial
data are typically equipped to handle false negatives or false positives, but
not both. We provide a unified framework for accommodating both forms of
measurement error using a Bayesian hierarchical approach. We demonstrate the
proposed method's performance on simulated data and apply it to acoustic bat
monitoring data.
|
Matthew D. Koslovsky, Andee Kaplan, Mevin B. Hooten
|
2023-10-13T18:18:53Z
|
http://arxiv.org/abs/2310.09345v1
|
# A Unified Bayesian Framework for Modeling Measurement Error in Multinomial Data
###### Abstract
Measurement error in multinomial data is a well-known and well-studied inferential problem that is encountered in many fields, including engineering, biomedical and omics research, ecology, finance, and social sciences. Surprisingly, methods developed to accommodate measurement error in multinomial data are typically equipped to handle false negatives or false positives, but not both. We provide a unified framework for accommodating both forms of measurement error using a Bayesian hierarchical approach. We demonstrate the proposed method's performance on simulated data and apply it to acoustic bat monitoring data.
## 1 Introduction
Measurement error in multinomial data is a well-known and well-studied inferential problem that is encountered in many fields, including engineering, biomedical and omics research, ecology, finance, and social sciences (Swartz et al., 2004; Perez et al., 2007; Molinari, 2008; Datta et al., 2021; Mulick et al., 2022). In this work, we consider two types of measurement error defined as (1) false negatives that occur when a particular category or class is present in the population but it is not observed in the sample and (2) false positives that occur when a sampled observation is misclassified into the wrong category. While both types of measurement error may be present in the data, existing methods are not designed to accommodate them simultaneously which may bias inference. Motivated by multispecies wildlife monitoring data collected in ecological research, we propose a unified framework for accommodating both forms of measurement error when modeling multinomial data. Our approach differs from existing methods in that it explicitly models the probability of misclassification for each observation, accommodates individual-level covariates associated with the probability of being a true/false negative, the relative abundances of the multinomial count data, as well as the probability of misclassification, and is scalable to high-dimensional classification problems.
Modeling false negatives in multinomial data is closely related to the notion of handling zero-inflation in univariate and multivariate count data. See Blasco-Moreno et al. (2019) for an in-depth discussion of zero counts in the context of ecological research studies and Scharf et al. (2022) for an overview of hierarchical models for occupancy data. Count data are considered zero-inflated when the number of zeroes observed in the data set is larger than expected under the assumptions of the sampling distribution. Zero-inflated count models are typically constructed as a two-component mixture of a point mass at zero and a sampling distribution for the count data (e.g., Poisson or negative binomial distributions in the univariate setting) (Xu et al., 2015; Zhang and Yi, 2020; Jiang et al., 2021; Shuler et al., 2021). To achieve this, an at-risk indicator is introduced into the model to differentiate between at-risk zeros (i.e., a zero count is observed, but there is a positive probability of occurrence) and structural zeros (i.e., a zero count is observed because there is zero probability of occurrence) (Neelon, 2019). Mapping these to the measurement error definitions above, at-risk (structural) zeros are synonymous with false (true) negatives. In multivariate settings, researchers link zero-inflated univariate count models via latent parameters that control the dependence structure between counts (Aitchison and Ho, 1989; Chiquet et al., 2021). This approach models the multivariate counts unconditionally on the total count for the sample and is therefore not designed for multinomial classification problems or multivariate compositional count data when the total number of counts is fixed. Koslovsky (2023) introduced a zero-inflated Dirichlet-multinomial (DM) distribution for handling excess zeros in multivariate compositional count data, which differs from traditional approaches for modeling zero-inflation (or false negatives) in count data by assuming a mixture distribution on the count probabilities as opposed to the sampling distribution. Using a combination of data augmentation strategies, their approach is scalable to large compositional spaces, can accommodate covariates associated with zero-inflation and relative abundances, and has shown promising estimation performance in simulation.
Existing methods designed to model false positives in multinomial data typically assume the observed classifications follow a multinomial distribution given the true (latent) classification (Swartz et al., 2004; Wang et al., 2020; Perez et al., 2007; Frenay and Verleysen, 2013). With this approach, the number of rows for the resulting matrix of misclassification probabilities equals the number of true categories, and the number of columns equals the number of observed classes with each row summing to one. The task of modeling false positives in multinomial data draws parallels to a popular approach for entity resolution. Entity resolution is the process of resolving duplicates in many overlapping data sets without the benefit of a unique identifying attribute. In the hit-miss approach to entity resolution, observed records are assumed to either represent the true records associated with an entity (hit) or a distorted version of this truth (miss) (Tancredi and Liseo, 2011; Copas and Hilton, 1990). These potentially noisy records are then directly modeled using a mixture model in which two records that are associated with the same latent truth refer to the same entity and can be duplicated (Steorts et al., 2016). This direct approach of modeling measurement error in the likelihood is an analog to the false positive misclassification problem we are interested in addressing in that the observed classifications can either be the true value (hit) or a distorted version of that truth (miss). Potential benefits of directly modeling the distortion process in a
hit-miss framework include the ability to choose any appropriate distribution for the misclassification, incorporation of expert knowledge into the model via the priors on misclassification probabilities, and inference on the probability of misclassification after obtaining data.
As mentioned previously, this work was motivated by multispecies occupancy modeling in ecological research. The goal of occupancy modeling in ecological research studies is to draw inference on true occurrence given a set of observations that are subject to measurement error (i.e., imperfect detection). Imperfect detection typically occurs in two different ways; (1) a species may go undetected (i.e., false negative) and (2) an observed individual may be misclassified (i.e., false positive). Even with increased sampling effort, imperfect detection may still occur, resulting in biased inference if ignored when modeling (Kellner and Swihart, 2014).
Historically, statistical methods developed to handle imperfect detection have focused on false negatives (Bayley and Peterson, 2001; MacKenzie et al., 2002; Royle and Nichols, 2003; MacKenzie et al., 2003; Tyre et al., 2003; Broms et al., 2015; Dorazio et al., 2006, 2011; Devarajan et al., 2020). However, more recently researchers have proposed methods that account for both false positives and negatives, in part due to the emergence of automated species detection methods (e.g., unmanned aerial systems and automated recording units) and volunteer-based surveys (e.g., citizen science) for monitoring wildlife populations. When developing single- or multispecies (community) occupancy models, researchers typically take a hierarchical approach, often referred to as occupancy-detection models, which jointly model the ecological and observation (or detection) process. This technique allows researchers to differentiate between latent species occupancy and observed species detection and effectively account for potential false negatives. Typically, this is achieved by introducing a site- or location-specific latent species indicator that models whether or not a species is present. If the species is present (absent) at that site, there is a positive (zero) probability of detecting it.
Methods that handle potential species misclassification in occupancy modeling were initially developed for single species studies (Royle and Link, 2006; Miller et al., 2011; Chambert et al., 2015; Ruiz-Gutierrez et al., 2016; Chambert et al., 2018). Chambert et al. (2018) introduced a two-species occupancy model that accounts for both species misidentification and non-detection. Their approach is based on the premise that false detections for a given species occur due to the misidentification with a closely related species. Recently, Wright et al. (2020) developed a multispecies occupancy model that handles both forms of measurement error for two or more species aggregately at each site visit. By assuming (1) the true count of each species follows a Poisson distribution given it is present at the site visit, (2) the number of detections for each species follows a multinomial distribution given the true species counts, and (3) the detection counts are independent across species, the authors demonstrate how the observed/detected counts can be directly modeled without conditioning on the true count for each species. Spiers et al. (2022) developed a multispecies occupancy model similar to Wright et al. (2020) that accommodates individual-level validation data (as opposed to site-level) which allows for more flexibility when modeling heterogeneity with covariates and morphospecies. This class of models has shown promising estimation performance on simulated
data that are generated consistently with the model assumptions. However, it is not clear how these methods would perform under alternative data generation processes (e.g., overdispersed species abundances) commonly encountered in ecological studies (Linden and Mantyniemi, 2011).
A fundamental issue shared by any model designed to accommodate misclassification is that the model is not identifiable without additional information about the zero-inflation or (mis)classification process beyond the raw data (Swartz et al., 2004). Occupancy-detection methods that accommodate false positives typically deal with identifiability using informative priors in Bayesian settings, auxiliary/calibration data to estimate the matrix of misclassification probabilities (typically referred to as a confusion matrix in ecological research settings) separately from the abundance model, and validating the true species of a subset of the data (often referred to as unambiguous/ambiguous detections in coupled classification models from site confirmation designs) (Wright et al., 2020; Chambert et al., 2015; Guillera-Arroita et al., 2017). Stratton et al. (2022) explore these strategies rigorously in simulation. Swartz et al. (2004) provide an extensive discussion of identifiability issues in multinomial classification models and propose using constraints to break the symmetry of the model, similar to what is done to accommodate label switching in Bayesian mixture models (Jasra et al., 2005).
In this work, we propose a novel method for simultaneously modeling false positives and false negatives in multinomial data. Using a zero-inflated DM distribution, we accommodate potential false negatives in the underlying true classification as well as potential overdispersion. We then introduce a latent hit-miss indicator to model misclassification which allows our approach to differentiate between true detections and detection by chance. Applied to wildlife monitoring data, our method can be thought of as a flexible alternative to existing multispecies occupancy-detection models that forgoes attempting to correctly model the ecological process in favor of simply modeling the true classifications. The model is developed at the individual level which allows for more granular covariate information to model heterogeneity and accommodate potential morphospecies. We assume a hierarchical structure for the concentration parameters of the true classification which allows borrowing of information across sites and/or visits, an important feature for efficient multispecies occupancy modeling (Iknayan et al., 2014).
## 2 Methods
In this section, we present a general formulation for modeling measurement error in multinomial data, making connections to relevant occupancy-detection modeling aspects as necessary. Let the \(C\)-dimensional vector \(\mathbf{y}_{ijl}\) represent the observed classification for the \(i^{th}\) (\(i=1,\ldots,N\)) observation (or site/location) at the \(j^{th}\) (\(j=1,\ldots,n_{i}\)) measurement (or visit) for the \(l^{th}\) (\(l=1,\ldots,L_{ij}\)) individual (or organism), where \(y_{ijlc}=1\) indicates the observed individual was classified into the \(c^{th}\) category (species) and 0 otherwise. In general, the model does not require \(n_{i}>1\), however in various fields, including ecological monitoring, each site is typically visited multiple times to improve
inference (MacKenzie et al., 2002; Lele et al., 2012). We let the \(T\)-dimensional vector \(\mathbf{z}_{ijl}\) represent the individual's true classification (or species), where \(z_{ijlt}=1\) indicates the individual truly belongs to the \(t^{th}\) category and 0 otherwise. For ease of presentation, we assume \(T=C\) and that the ordering of the elements is the same in \(\mathbf{y}_{ijl}\) and \(\mathbf{z}_{ijl}\) (i.e., \(t=c\) corresponds to the same category).
For each observed individual, we introduce a latent hit-miss or misclassification indicator \(\tau_{ijl}\in\{0,1\}\), where 0 indicates that \(\mathbf{y}_{ijl}=\mathbf{z}_{ijl}\). To model the observed classifications, we assume
\[\mathbf{y}_{ijl}|\mathbf{\theta}_{t},\tau_{ijl},\mathbf{z}_{ijl}\sim\tau_{ijl}\text{ Multinomial}(1,\mathbf{\theta}_{t})+(1-\tau_{ijl})\delta_{\mathbf{z}_{ijl}}(\mathbf{y}_{ijl}), \tag{1}\]
where \(\mathbf{\theta}_{t}\) is a \(C\)-dimensional vector of observed classification probabilities for the \(t^{th}\) true classification and \(\delta_{w}(\cdot)\) is a Dirac delta function at \(w\). With this formulation, we assume that if there is no misclassification (i.e., \(\tau_{ijl}=0\)), then \(\mathbf{y}_{ijl}=\mathbf{z}_{ijl}\), otherwise, the observed individual is considered misclassified with \(\tau_{ijl}=1\). Note that this approach allows for \(\mathbf{y}_{ijl}\) to be misclassified into the correct category (i.e., \(\mathbf{y}_{ijl}=\mathbf{z}_{ijl}\), but \(\tau_{ijl}=1\)). As such, our modeling approach places a positive probability of a "lucky guess" to occur. In section 3, we discuss how to restrict the model to prevent this from occurring if desired.
We assume the classification probabilities depend on the true classification of each individual with \(\mathbf{\theta}_{t}\sim\text{Dirichlet}(\mathbf{\nu}_{t})\), where \(\mathbf{\nu}_{t}\) is a \(C\)-dimensional vector of concentration hyperparameters. To allow the classification probabilities to depend on an observed set of covariates, \(\nu_{tc}\) can be replaced with a log-linear regression model similar to Wadsworth et al. (2017). Next, we let the latent misclassification indicators,
\[\tau_{ijl}|\mathbf{z}_{ijl},\mathbf{\beta}_{\psi_{t}},\mathbf{x}_{ijl}\sim\text{Bernoulli} (\psi_{tijl}), \tag{2}\]
where \(\text{logit}(\psi_{tijl})=\mathbf{x}^{\prime}_{ijl}\mathbf{\beta}_{\psi_{t}}\), \(\mathbf{x}_{ijl}\) is a \(P_{\psi}\)-dimensional vector of observed covariates that are observation-, measurement-, and/or individual-specific (including an intercept term), and \(\mathbf{\beta}_{\psi_{t}}\) are the corresponding regression coefficients. We assume \(\beta_{\psi_{t_{p}}}\sim\text{Normal}(\mu_{\psi},\sigma_{\psi}^{2})\). Note that the covariate effects on misclassification are allowed to vary based on the true classification of the individual.
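Combining (1) and (2), the probability that an individual whose true category is \(t\) is assigned the correct label is
\[
\Pr(\mathbf{y}_{ijl}=\mathbf{z}_{ijl}\mid z_{ijlt}=1)=(1-\psi_{tijl})+\psi_{tijl}\,\theta_{tt},
\]
where the second term is the probability of a correct classification by chance; fixing \(a_{tt}=0\) (and hence \(\theta_{tt}=0\)), as described in Section 3, removes this term.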
Next we model the true classification of each observation
\[\mathbf{z}_{ijl}|\mathbf{\Theta}_{ij}\sim\text{Multinomial}(1,\mathbf{\Theta}_{ij}), \tag{3}\]
where \(\mathbf{\Theta}_{ij}\) is a \(T\)-dimensional vector of true classification probabilities, which we assume follows a \(\text{Dirichlet}(\mathbf{\gamma}_{ij})\) with \(\mathbf{\gamma}_{ij}\) a \(T\)-dimensional vector of concentration hyperparameters. With the availability of validation data, some of the \(\mathbf{z}_{ijl}\) will be known and fixed in the model to inform the estimation of the classification matrix \(\mathbf{\theta}=(\mathbf{\theta}^{\prime}_{1},\dots,\mathbf{\theta}^{\prime}_{T})^{\prime}\), similar to Wright et al. (2020) and Spiers et al. (2022). We refer to \(\mathbf{\theta}\) as a classification matrix to differentiate it from methods that use a "confusion matrix" to model false positives in multinomial data which, unlike our model, do not explicitly model misclassification.
We can equivalently model \(\mathbf{\Theta}_{ij}\) as a set of independent gamma random variables normalized by their sum (i.e., \(\mathbf{z}_{ijl}|\mathbf{\alpha}_{ij}\sim\text{Multinomial}(1,\frac{\alpha_{ijt}}{ \bar{\alpha}_{ij}})\), where \(\alpha_{ijt}\sim\text{Gamma}(\gamma_{ijt},1)\)
and \(\bar{\alpha}_{ij}=\sum_{t=1}^{T}\alpha_{ijt}\)). This reparameterization enables us to account for potential zero-inflation (i.e., false negatives or non-detection) by introducing a latent at-risk (or occupancy) indicator \(\zeta_{ijt}\) for the \(t^{th}\) category at the \(i^{th}\) observation and \(j^{th}\) measurement. Specifically, we instead let
\[\alpha_{ijt}|\zeta_{ijt},\gamma_{ijt}\sim\zeta_{ijt}\text{Gamma}(\gamma_{ijt}, 1)+(1-\zeta_{ijt})\delta_{0}(\alpha_{ijt}), \tag{4}\]
similar to Koslovsky (2023). To model the latent at-risk indicators, we assume \(\zeta_{ijt}|\boldsymbol{\beta}_{\eta_{t}},\boldsymbol{x}_{i}\sim\text{Bernoulli }(\eta_{it})\), where \(\text{logit}(\eta_{it})=\boldsymbol{x}_{i}^{\prime}\boldsymbol{\beta}_{\eta_{t}}\), \(\boldsymbol{x}_{i}\) is a \(P_{\eta}\)-dimensional set of observation-specific covariates (including an intercept term), and \(\boldsymbol{\beta}_{\eta_{t}}\) represent the corresponding true classification-specific regression coefficients. We then let \(\beta_{\eta_{tp}}\sim\text{Normal}(\mu_{\eta},\sigma_{\eta}^{2})\). To allow the relative abundances to depend on a set of covariates, we set \(\log(\gamma_{ijt})=\boldsymbol{x}_{ij}^{\prime}\boldsymbol{\beta}_{\gamma_{t}}\) with \(\beta_{\gamma_{tp}}\sim\text{Normal}(\mu_{\gamma},\sigma_{\gamma}^{2})\) and \(\boldsymbol{x}_{ij}\) an observation- and/or measurement-specific set of covariates.
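Equivalently, under (4) the true classification probabilities can be written as
\[
\Theta_{ijt}=\frac{\zeta_{ijt}\,\alpha_{ijt}}{\sum_{t^{\prime}=1}^{T}\zeta_{ijt^{\prime}}\,\alpha_{ijt^{\prime}}},
\]
so a structural zero (\(\zeta_{ijt}=0\)) forces \(\Theta_{ijt}=0\), and the remaining probability mass is renormalized over the at-risk categories.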
## 3 Posterior Sampling and Inference
For posterior inference, we construct a Metropolis-Hastings within Gibbs sampler. The full joint distribution is defined as
\[\prod_{i=1}^{N}\prod_{j=1}^{n_{i}}\prod_{l=1}^{L_{ij}}p(\boldsymbol {y}_{ijl}|\boldsymbol{\theta}_{t},\tau_{ijl},\boldsymbol{z}_{ijl})p(\boldsymbol {z}_{ijl}|\boldsymbol{\Theta}_{ij})p(\tau_{ijl}|\boldsymbol{z}_{ijl}, \boldsymbol{\beta}_{\psi_{t}},\boldsymbol{x}_{ijl})p(\omega_{\tau_{ijl}})\] \[\times\prod_{i=1}^{N}\prod_{j=1}^{n_{i}}\prod_{t=1}^{T}p(\alpha_{ ijt}|\zeta_{ijt},\boldsymbol{\beta}_{\gamma_{t}},\boldsymbol{x}_{ij})p( \zeta_{ijt}|\boldsymbol{\beta}_{\eta_{t}},\boldsymbol{x}_{i})p(\omega_{\zeta_{ tij}}) \tag{5}\] \[\times\prod_{i=1}^{N}\prod_{j=1}^{n_{i}}p(\mu_{ij}|\bar{\alpha}_{ ij})\prod_{t=1}^{T}\left[p(\boldsymbol{\beta}_{\eta_{t}})p(\boldsymbol{\beta}_{ \psi_{t}})p(\boldsymbol{\beta}_{\gamma_{t}})p(u_{t}|\bar{a}_{t})\prod_{c=1}^{C }p(a_{tc})\right],\]
where we introduce an auxiliary parameter \(\mu_{ij}|\bar{\alpha}_{ij}\sim\text{Gamma}(1,\bar{\alpha}_{ij})\) for efficient sampling of \(\alpha_{ijt}\). Similarly, we reparameterize \(\theta_{tc}=a_{tc}/\bar{a}_{t}\) and assume \(a_{tc}\sim\text{Gamma}(\nu_{tc},1)\) with auxiliary parameter \(u_{t}\sim\text{Gamma}(1,\bar{a}_{t})\) and \(\bar{a}_{t}=\sum_{c=1}^{C}a_{tc}\). In addition to enabling efficient sampling of \(\boldsymbol{\theta}_{t}\), this step provides the opportunity to easily incorporate covariates and restrict the model to disallow correct classifications by chance by fixing \(a_{tt}=0\). Additionally, we introduce a latent set of auxiliary parameters \(\omega_{\zeta_{tij}}\sim\text{PG}(1,0)\) and \(\omega_{\tau_{ijl}}\sim\text{PG}(1,0)\) following Polson et al. (2013). A graphical representation of the proposed approach for modeling misclassification in zero-inflated Dirichlet-multinomial models, missZIDM, is presented in Figure 1. The Markov chain Monte Carlo (MCMC) sampler used to implement our model is outlined below in Algorithm 1 with more details provided in the Supplementary Material.
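The Pólya-Gamma variables linearize the logistic terms: for a single Bernoulli contribution with linear predictor \(x^{\prime}\beta\) and outcome \(y\in\{0,1\}\), the identity of Polson et al. (2013) gives
\[
\frac{(e^{x^{\prime}\beta})^{y}}{1+e^{x^{\prime}\beta}}\propto e^{\kappa\,x^{\prime}\beta}\int_{0}^{\infty}e^{-\omega(x^{\prime}\beta)^{2}/2}\,p(\omega)\,d\omega,\qquad\kappa=y-\tfrac{1}{2},
\]
with \(\omega\sim\text{PG}(1,0)\). Conditional on \(\omega\), each Bernoulli term is proportional to a Gaussian kernel in \(x^{\prime}\beta\), which is what yields the multivariate normal full conditionals for \(\mathbf{\beta}_{\psi_{t}}\) and \(\mathbf{\beta}_{\eta_{t}}\) in Algorithm 1.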
## 4 Empirical Studies
In this section, we first compare the proposed model in simulation to alternative methods for handling false positives or false negatives in multinomial data. In a second
```
Input data \(\mathbf{y}_{ijl}\), \(\mathbf{x}_{ijl}\), \(\mathbf{x}_{ij}\), \(\mathbf{x}_{i}\).
Initialize parameters: \(\mathbf{z}_{ijl}\), \(\tau_{ijl}\), \(\mathbf{\beta}_{\psi_{t}}\), \(\mathbf{\beta}_{\eta_{t}}\), \(\mathbf{\beta}_{\gamma_{t}}\), \(\omega_{\tau_{ijl}}\), \(\omega_{\zeta_{tij}}\), \(\mathbf{\alpha}_{ij}\), \(\mathbf{\zeta}_{ij}\), \(\mu_{ij}\), \(\mathbf{a}_{t}\), \(u_{t}\).
Specify hyperparameters: \(\mu_{\psi},\mu_{\eta},\mu_{\gamma},\sigma_{\psi}^{2},\sigma_{\eta}^{2},\sigma_{\gamma}^{2}\).
for iteration \(m=1,\ldots,M\) do
  for \(i=1,\ldots,N\) do
    for \(j=1,\ldots,n_{i}\) do
      Update \(\mu_{ij}\sim\text{Gamma}(L_{ij},\bar{\mathbf{\alpha}}_{ij})\).
      for \(l=1,\ldots,L_{ij}\) do
        if \(\mathbf{y}_{ijl}=\mathbf{z}_{ijl}\) then
          Update \(\tau_{ijl}\sim\text{Bernoulli}(\psi_{ijl}^{*})\).
        else
          Update \(\tau_{ijl}=1\).
        end if
        if \(\tau_{ijl}=0\) then
          Update \(\mathbf{z}_{ijl}\sim\delta_{\mathbf{y}_{ijl}}(\mathbf{z}_{ijl})\).
        else
          Update \(\mathbf{z}_{ijl}\sim\text{Multinomial}(1,\mathbf{\Theta}_{ij}\otimes\mathbf{\theta}_{c})\).
        end if
        Update \(\omega_{\tau_{ijl}}\sim\text{PG}(1,\mathbf{x}_{\psi_{ijl}}^{\prime}\mathbf{\beta}_{\psi_{t}})\).
      end for
      for \(t=1,\ldots,T\) do
        Jointly update \(\alpha_{ijt}\) and \(\zeta_{ijt}\) with an Expand/Contract step via Koslovsky (2023).
        Update \(\alpha_{ijt}|\zeta_{ijt}=1\sim\text{Gamma}\left(\sum_{l=1}^{L_{ij}}I(z_{ijl}=t)+\gamma_{ijt},1+\mu_{ij}\right)\).
        Update \(\omega_{\zeta_{tij}}\sim\text{PG}(1,\mathbf{x}_{\eta}^{\prime}\mathbf{\beta}_{\eta_{t}})\).
      end for
    end for
  end for
  for \(t=1,\ldots,T\) do
    Update \(u_{t}\sim\text{Gamma}(\sum_{i=1}^{N}\sum_{j=1}^{n_{i}}\sum_{l=1}^{L_{ij}}I(z_{ijl}=t),\bar{a}_{t})\).
    Update \(\mathbf{\beta}_{\psi_{t}}\sim N_{P_{\psi}}(\bar{\mathbf{\beta}}_{\psi_{t}},\bar{\Sigma}_{\psi_{t}})\).
    Update \(\mathbf{\beta}_{\eta_{t}}\sim N_{P_{\eta}}(\bar{\mathbf{\beta}}_{\eta_{t}},\bar{\Sigma}_{\eta_{t}})\).
    Update \(\mathbf{\beta}_{\gamma_{t}}\) via a Metropolis-Hastings step.
    for \(c=1,\ldots,C\) do
      Update \(a_{tc}\sim\text{Gamma}(\sum_{i=1}^{N}\sum_{j=1}^{n_{i}}\sum_{l=1}^{L_{ij}}I(y_{ijl}=c,z_{ijl}=t)+\nu_{tc},u_{t}+1)\).
    end for
  end for
end for
```
**Algorithm 1** MCMC Sampler
scenario, we compare the proposed model to a multispecies occupancy model in settings designed to mimic the motivating application.
The first scenario examines the estimation performance of missZIDM with respect to the at-risk probability, \(\mathbf{\eta}\), the probability of misclassification, \(\mathbf{\psi}\), the true relative abundance of each category, \(\mathbf{\Theta}\), and the confusion matrix, \(\mathbf{\theta}^{*}\), at varying percentages of at-risk observations and misclassification. Estimates for the confusion matrix, \(\mathbf{\theta}^{*}\), using the proposed model were obtained by combining the estimated misclassification probabilities, \(\mathbf{\psi}\), and classification matrix, \(\mathbf{\theta}\), accordingly. In the first scenario, we compare missZIDM to a similar approach that does not accommodate false negatives (missDM), an approach that assumes the true and observed classifications follow Dirichlet-multinomial models which does not explicitly model misclassification or handle false negatives (DMDM, similar to Swartz et al. (2004)), and the recently developed zero-inflated Dirichlet-multinomial model (ZIDM) (Koslovsky, 2023), which is designed to accommodate false negatives but not false positives. In this scenario, the models ignore any potential covariates and therefore only estimate intercept terms for \(\mathbf{\eta}\), \(\mathbf{\psi}\), and \(\mathbf{\Theta}\). For the DMDM model, we set \(\nu_{tt}=(\text{logit}^{-1}(\mu_{\psi})/T+(1-\text{logit}^{-1}(\mu_{\psi}))) \times T/\text{logit}^{-1}(\mu_{\psi})\), which places a similar prior probability for correct classification as the methods with misclassification indicators. All methods were implemented in R using Rcpp (Eddelbuettel and Francois,
Figure 1: Graphical representation of missZIDM with covariate dependence. Note that auxiliary parameters and hyperparameters have been suppressed for clarity. \(\mathbf{N}\) - total observations; \(\mathbf{n}_{i}\) - total measurements per observation; \(\mathbf{L}_{ij}\) - number of individuals at each measurement; \(\mathbf{T}\) - true number of categories; \(\mathbf{C}\) - observed number of categories.
2011).
In this scenario, we generated \(N=50\) observations of \(L_{ij}=100\) individuals to cluster into \(C=10\) categories. We assumed that the true number of categories, \(T\), matched the potentially observed number of categories, \(C\). We evaluated the model in four settings with varying percentages of at-risk observations (1 - % true negatives) and misclassification (false positives). In these settings, we set \(n_{i}=1\) (i.e., no repeated measurements). Observation-specific at-risk indicators were sampled from a Bernoulli distribution with the probability of an at-risk observation set to either 0.25 or 0.75. The true classification of each individual was generated from a Dirichlet-multinomial distribution with concentration parameters set to one and overdispersion parameter set to 0.01, so that the model assumptions did not match the true data generation process. Misclassification occurred with 0.25 or 0.75 probability. The observed classifications were generated from a Dirichlet-multinomial model with a similar overdispersion parameter as above. We set concentration parameters \(\mathbf{\nu}_{t}\) equal to their index (e.g., \(\nu_{tc}=c\) ) with the \(t^{th}\) element also equal to one, placing the least probability on a correct classification by chance. No validation data were used in these settings.
In the second scenario, we investigate how the proposed method performs when used for inference in multispecies occupancy-detection settings with data generated to mimic the application data set. We evaluate the performance of the method with varying percentages of overdispersion in the true counts. We compare the proposed model to a similar version of the multispecies occupancy-detection model presented in Wright et al. (2020) that assumes the ecological processes follow a zero-inflated Poisson distribution and the observation process follows a Dirichlet-multinomial model, which we refer to as DMZIP. Code to implement DMZIP was adapted from Stratton (2022).
Specifically, we generated \(N=50\) observations with \(n_{i}=5\) measurements per observation and \(C=10\) possible species to observe. No covariates were used in the baseline simulation setting. The occurrence probability was set using the observed proportion of zeros in the data, ranging from 23% to 90%. Instead of assuming the total number of counts was fixed, as in scenario 1, the relative activity or encounter rates for each species at each site visit was generated from a negative binomial distribution with mean \(\zeta_{it}*\lambda_{t}\) and variance \((\zeta_{it}*\lambda_{t})^{2}/\sigma\), where \(\zeta_{it}\) is the site-level occupancy indicator for a given species, \(\lambda_{t}\) is the expected number of detections or encounter rate of species \(t\) obtained from the application data which ranged from 2.0 to 28.2, and \(\sigma\) is an overdispersion parameter. Note that as \(\sigma\) increases, the variance of the sampling distribution approaches the mean, and the data are more Poisson-like. The models were evaluated with \(\sigma\in\{0.1,1,100\}\) and 25% validation data.
We then evaluated model performance on data generated similar to scenario 2 but with covariates informing the occupancy probability and the true encounter rates. We simulated 5 continuous covariates from a standard normal distribution in both levels of the model. In this setting, we set the overdispersion parameter for the negative binomial distribution \(\sigma=1\). The intercept terms \(\beta_{\eta_{t0}}\) (\(\beta_{\gamma_{t0}}\)) were randomly sampled uniformly from logit(0.25) to logit(0.95) (0 to log(10)) with covariate effects set to \(\pm 1\) (\(\pm 0.2\)) with equal probability. The off-diagonal elements of the classification matrix \(\mathbf{\theta}\) were sampled uniformly from 0.01 to 0.2 with diagonal elements uniformly sampled from
0.5 to 0.95. Thereafter, the rows of \(\mathbf{\theta}\) were scaled to sum to one, and the individual classifications were sampled from a Multinomial\((1,\mathbf{\theta}_{t})\). In this setting, we assumed 25% of the data were validated. Additionally, we evaluated the models in various other data generation settings including those with different sample sizes, sampling efforts, and percent validated data.
In both scenarios, each of the MCMC algorithms were run for 5,000 iterations treating the first 2,500 as burn-in and thinning to every other iteration, providing 1,250 iterations for inference. We assumed non- or weakly-informative priors \(\gamma_{tc}=\nu_{tc}=\sigma_{\eta}^{2}=\sigma_{\psi}^{2}=\sigma_{\gamma}^{2}=1\). In settings with no covariates in the model, we set \(\mu_{\eta}\) and \(\mu_{\psi}\) following the data generation process for all models. In settings with covariates in the model, the prior mean for the regression coefficients was set to 0. In the sensitivity analysis presented in the Supplementary Material, we explore the impact of prior misspecification of these hyperparameters on inference. To initialize each model, we set the true classifications \(\mathbf{Z}_{i}\) to the observed classifications \(\mathbf{Y}_{i}\), with \(\mathbf{\tau}_{i}\) set accordingly. Auxiliary parameters, \(\omega_{\tau_{tijl}}\) and \(\omega_{\zeta_{tij}}\), and at-risk indicators, \(\zeta_{tij}\) and \(\alpha_{tc}\), were initialized at one. The auxiliary parameters \(u_{t}\) and \(\mu_{ij}\) were randomly initialized from a Gamma(1,1).
We evaluated the models in terms of the average absolute value of the difference between the estimated and true probabilities (ABS), Frobenius norm (FROB), which is the square root of the sum of the squared difference of the estimated and true probabilities, and 95% coverage probabilities (COV) for \(\mathbf{\eta}\), \(\mathbf{\psi}\), \(\mathbf{\theta}^{*}\), and \(\mathbf{\Theta}\). Note that the proposed method is the only method that provides estimates for all parameters simultaneously. In settings where covariates were incorporated into the data generation process (results presented in the Supplementary Material), the models were compared with respect to the estimation of the occupancy probability regression coefficients, \(\mathbf{\beta}_{\eta}\), and the confusion matrix, \(\mathbf{\theta}^{*}\), because these maintained similar interpretation among all models. Results we report below were obtained by averaging over 50 replicated data sets for each setting.
### Results
In the first scenario, the estimation performance for the probability of an at-risk observation, \(\mathbf{\eta}\), improved as the true percentage of at-risk observations increased for missZIDM and ZIDM, with missZIDM demonstrating better performance in settings with more structural zeros. We observed that the proposed missZIDM always outperformed missDM, which ignores potential zero-inflation, when estimating the probability of misclassification, \(\mathbf{\psi}\). All methods obtained relatively similar estimation performance for the true relative abundances, \(\mathbf{\Theta}\), with the proposed method demonstrating a slight advantage in the setting with 25% at-risk observations and misclassification. Estimation accuracy for the confusion matrix, \(\mathbf{\theta}^{*}\), reduced as the percentage of misclassification increased for all methods. The proposed method and DMDM both outperformed missDM with respect to estimating the confusion matrix, \(\mathbf{\theta}^{*}\), however, DMDM obtained marginally better estimates when data were generated with higher misclassification percentages. Recall that missZIDM, unlike DMDM, does not directly provide estimates for \(\mathbf{\theta}^{*}\), but they can be obtained using the estimated \(\mathbf{\psi}\) and \(\mathbf{\theta}\) values. Both missZIDM and ZIDM overestimated coverage probabilities for \(\mathbf{\eta}\), while all methods underestimated
coverage probabilities for \(\mathbf{\Theta}\). Coverage for \(\mathbf{\psi}\) and \(\mathbf{\theta}^{*}\) varied as a function of misclassification and at-risk observation probabilities, with the proposed model obtaining near nominal coverage with lower classification probabilities. A similar trend was observed for missDM when estimating \(\mathbf{\psi}\).
In scenario 2, we found the DMZIP model obtained the best estimation performance for \(\mathbf{\beta}_{\eta}\) and \(\mathbf{\theta}^{*}\) when \(\sigma=100\) (Table 2). However in settings with more overdispersion (\(\sigma=1\) and \(0.1\)), the proposed method outperformed DMZIP with respect to both parameters. Notably, estimation performance was worse for both methods when \(\sigma=0.1\). Similar trends were observed with \(75\%\) validated data (Supplementary Table S1). These results demonstrate how the proposed method is preferred in the presence of overdispersion. Additionally, we found that missZIDM obtained the best estimation performance for \(\mathbf{\beta}_{\eta}\) and \(\mathbf{\theta}^{*}\) with covariates in the model, more species types, visits, and sites (Supplementary Table S2).
One of the major challenges of modeling measurement error in multinomial data is non-identifiability of the parameters, because there is no information contained in the raw data to inform zero-inflation or misclassification probabilities. Our approach is designed to account for non-identifiability through informative prior specifications and/or incorporating a subset of validation data to inform parameter estimates. As such, inferential results obtained by our method, and any method designed to model
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c}
\hline \hline
 & \multicolumn{12}{c}{25\% at-risk observations and 25\% misclassification} \\
\cline{2-13}
 & \multicolumn{3}{c}{\(\mathbf{\eta}\)} & \multicolumn{3}{c}{\(\mathbf{\psi}\)} & \multicolumn{3}{c}{\(\mathbf{\Theta}\)} & \multicolumn{3}{c}{\(\mathbf{\theta}^{*}\)} \\
\cline{2-13}
 & ABS & FROB & COV & ABS & FROB & COV & ABS & FROB & COV & ABS & FROB & COV \\
\cline{2-13}
missZIDM & 0.05 & 0.25 & 1.00 & 0.02 & 0.07 & 0.98 & 0.02 & 0.71 & 0.23 & 0.01 & 0.14 & 0.95 \\
missDM & - & - & - & 0.18 & 0.56 & 0.00 & 0.05 & 1.49 & 0.18 & 0.04 & 0.62 & 0.34 \\
DMDM & - & - & - & - & - & - & 0.05 & 1.66 & 0.16 & 0.01 & 0.15 & 1.00 \\
ZIDM & 0.13 & 0.43 & 1.00 & - & - & - & 0.05 & 1.63 & 0.17 & - & - & - \\
\hline
 & \multicolumn{12}{c}{25\% at-risk observations and 75\% misclassification} \\
\cline{2-13}
 & \multicolumn{3}{c}{\(\mathbf{\eta}\)} & \multicolumn{3}{c}{\(\mathbf{\psi}\)} & \multicolumn{3}{c}{\(\mathbf{\Theta}\)} & \multicolumn{3}{c}{\(\mathbf{\theta}^{*}\)} \\
\cline{2-13}
 & ABS & FROB & COV & ABS & FROB & COV & ABS & FROB & COV & ABS & FROB & COV \\
\cline{2-13}
missZIDM & 0.09 & 0.33 & 1.00 & 0.36 & 1.15 & 0.00 & 0.11 & 3.76 & 0.09 & 0.08 & 1.29 & 0.66 \\
missDM & - & - & - & 0.47 & 1.49 & 0.00 & 0.12 & 3.75 & 0.06 & 0.09 & 1.61 & 0.60 \\
DMDM & - & - & - & - & - & - & 0.12 & 3.93 & 0.12 & 0.04 & 0.45 & 1.00 \\
ZIDM & 0.14 & 0.45 & 1.00 & - & - & - & 0.12 & 3.93 & 0.03 & - & - & - \\
\hline
 & \multicolumn{12}{c}{75\% at-risk observations and 25\% misclassification} \\
\cline{2-13}
 & \multicolumn{3}{c}{\(\mathbf{\eta}\)} & \multicolumn{3}{c}{\(\mathbf{\psi}\)} & \multicolumn{3}{c}{\(\mathbf{\Theta}\)} & \multicolumn{3}{c}{\(\mathbf{\theta}^{*}\)} \\
\cline{2-13}
 & ABS & FROB & COV & ABS & FROB & COV & ABS & FROB & COV & ABS & FROB & COV \\
\cline{2-13}
missZIDM & 0.03 & 0.12 & 1.00 & 0.03 & 0.10 & 0.97 & 0.03 & 0.84 & 0.71 & 0.01 & 0.17 & 0.99 \\
missDM & - & - & - & 0.09 & 0.28 & 0.10 & 0.03 & 0.82 & 0.71 & 0.02 & 0.35 & 0.89 \\
DMDM & - & - & - & - & - & - & 0.03 & 0.81 & 0.70 & 0.01 & 0.15 & 1.00 \\
ZIDM & 0.02 & 0.07 & 1.00 & - & - & - & 0.03 & 0.81 & 0.70 & - & - & - \\
\hline
 & \multicolumn{12}{c}{75\% at-risk observations and 75\% misclassification} \\
\cline{2-13}
 & \multicolumn{3}{c}{\(\mathbf{\eta}\)} & \multicolumn{3}{c}{\(\mathbf{\psi}\)} & \multicolumn{3}{c}{\(\mathbf{\Theta}\)} & \multicolumn{3}{c}{\(\mathbf{\theta}^{*}\)} \\
\cline{2-13}
 & ABS & FROB & COV & ABS & FROB & COV & ABS & FROB & COV & ABS & FROB & COV \\
\cline{2-13}
missZIDM & 0.03 & 0.11 & 1.00 & 0.40 & 1.26 & 0.00 & 0.07 & 1.99 & 0.51 & 0.08 & 1.40 & 0.64 \\
missDM & - & - & - & 0.44 & 1.40 & 0.00 & 0.06 & 1.70 & 0.56 & 0.09 & 1.53 & 0.64 \\
DMDM & - & - & - & - & - & - & 0.06 & 1.53 & 0.53 & 0.04 & 0.45 & 1.00 \\
ZIDM & 0.01 & 0.04 & 1.00 & - & - & - & 0.06 & 1.53 & 0.53 & - & - & - \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Simulation Results for Scenario 1: Estimation performance for \(N=50\) observations, \(n_{i}=1\) measurements, \(L_{ij}=100\) individuals, and \(T=10\) categories at varying percentages of at-risk observations and misclassification with 0% validation data. ABS - absolute value of the difference between the estimated and true probabilities; FROB - Frobenius norm; COV - 95% coverage probabilities.
measurement error in multinomial data, will be sensitive to the amount of validation data used to inform the model as well as the specification of the hyperparameters. In the Supplementary Material, we present an extensive sensitivity analysis of the proposed model with varying percentages of validated data (Supplementary Tables S3 and S4) and hyperparameter specification (Supplementary Tables S5 and S6). Based on these results, we recommend taking advantage of validation data when available. With only 10% validation data, the proposed method was able to obtain less than 0.05% bias for all probability estimates on average. In the absence of validation data, the model performed better with lower concentration parameters for the true relative abundances and the classification probabilities. Additionally, we found that the model was relatively robust to misspecification of the misclassification prior for all other parameters. Similar results were observed for changes in the prior for the at-risk probability.
## 5 Application Study
We demonstrate the proposed method on data collected in a multispecies bat acoustic monitoring study conducted in British Columbia, Canada between 2016 and 2020. Details of the study design and data are found in Stratton et al. (2022) and Stratton (2022). Briefly, one to six stationary acoustic recording devices were placed in 55 sites following the North American Bat Monitoring Program guidelines and were typically activated for seven nights (Loeb et al., 2015). Similar to Stratton et al. (2022), we analyze detections from the first and last nights to minimize potential overlap and dependencies. There were 10 total bat species categories available for analysis, including an _other_ category for species that were difficult to detect acoustically or that were not widespread. Each acoustic recording was classified using Kaleidoscope Pro acoustic classification software for bats ([https://www.wildlifeacoustics.com](https://www.wildlifeacoustics.com)). A unique attribute of these data is that each acoustic recording was additionally validated by a bat expert. For this analysis, we let half of all revisits from every site include validation data, similar to Stratton et al. (2022). Additionally, we included site-specific covariates measuring year
\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
 & \multicolumn{6}{c}{\(\sigma=0.1\)} \\
\cline{2-7}
 & \multicolumn{3}{c}{\(\boldsymbol{\eta}\)} & \multicolumn{3}{c}{\(\boldsymbol{\theta}^{*}\)} \\
\cline{2-7}
 & ABS & FROB & COV & ABS & FROB & COV \\
\cline{2-7}
missZIDM & 0.27 & 0.94 & 0.07 & 0.01 & 0.25 & 0.81 \\
DMZIP & 0.46 & 1.54 & 0.00 & 0.04 & 1.24 & 0.66 \\
\hline
 & \multicolumn{6}{c}{\(\sigma=1\)} \\
\cline{2-7}
 & \multicolumn{3}{c}{\(\boldsymbol{\eta}\)} & \multicolumn{3}{c}{\(\boldsymbol{\theta}^{*}\)} \\
\cline{2-7}
 & ABS & FROB & COV & ABS & FROB & COV \\
\cline{2-7}
missZIDM & 0.15 & 0.60 & 0.24 & 0.01 & 0.22 & 0.72 \\
DMZIP & 0.22 & 0.76 & 0.08 & 0.03 & 1.04 & 0.75 \\
\hline
 & \multicolumn{6}{c}{\(\sigma=100\)} \\
\cline{2-7}
 & \multicolumn{3}{c}{\(\boldsymbol{\eta}\)} & \multicolumn{3}{c}{\(\boldsymbol{\theta}^{*}\)} \\
\cline{2-7}
 & ABS & FROB & COV & ABS & FROB & COV \\
\cline{2-7}
missZIDM & 0.17 & 0.66 & 0.10 & 0.02 & 0.28 & 0.73 \\
DMZIP & 0.03 & 0.12 & 0.93 & 0.01 & 0.15 & 0.90 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Simulation Results for Scenario 2: Estimation performance for data generated similar to the application study with 25% validation. ABS - absolute value of the difference between the estimated and true probabilities; FROB - Frobenius norm; COV - 95% coverage probabilities.
(categorical with 2016 as the reference), annual mean elevation (kilometers), precipitation (millimeters), and temperature (degrees Celsius) for the occupancy (or at-risk) portion of the model. We included nightly minimum air temperature (degrees Celsius), total precipitation (millimeters), and percentage of the moon illuminated by the sun (percent) measured from the centroid of the site at each visit to model the encounter rates or true classifications depending on the model used. All continuous covariates were standardized prior to analysis.
In this analysis, the average number of observed individuals per site visit was 62.6, ranging from 1 to 992. The means, variances, and percent zero counts for each species across site visits are presented in Table 3. We observed mean counts ranging from 1.7 to 25.9 with variances ranging from 25.7 to 4339.6 and percent zeros ranging from 0.26 to 0.83, indicating that the Poisson assumption was not justified for the number of encounters.
We compared the results of the proposed model and a Bayesian model similar in construction to Wright et al. (2020) that assumes the true count process follows a Poisson distribution, which we refer to as DMZIP. Both methods were run for 10,000 iterations, treating the first 5,000 as burn-in and thinning to every other iteration. The models were initialized similar to scenario 2 of the simulation study. We assumed relatively weak or non-informative priors with \(\nu_{tc}=1\), \(\gamma_{t}=1\), \(\mu_{\beta_{\eta}}=\mu_{\beta_{\psi}}=0\), and \(\sigma_{\eta}^{2}=\sigma_{\psi}^{2}=\sigma_{\gamma}^{2}=1\). Similar assumptions were made for DMZIP. Convergence and mixing of the models was visually inspected using traceplots. Additionally, we ran another chain from a different seed and compared the two chains with the Gelman-Rubin statistic, which was less than 1.1 for each parameter (Brooks and Gelman, 1998).
Figure 2 presents the estimated classification probabilities for the proposed missZIDM model, as well as the differences with the DMZIP model. The models found relatively similar results. The largest differences in misclassification estimates were for EPFU, MYCO, and the _other_ category, with DMZIP estimating more misclassification. Additionally, the proposed method estimated more misclassification for the LACI and MYYU species compared to DMZIP.
\begin{table}
\begin{tabular}{c c c c} \hline Common Name & Scientific Name & Mean (Variance) & \% Zero \\ \hline Big brown bat & _Eptesicus fuscus_ (EPFU) & 3.5 (256.0) & 66 \\ Hoary bat & _Lasiurus cinereus_ (LACI) & 3.1 (396.8) & 65 \\ Silver-haired bat & _Lasionycteris noctivagans_ (LANO) & 12.4 (2703.4) & 34 \\ California myotis & _Myotis californicus_ (MYCA) & 4.5 (264.3) & 54 \\ Western small-footed myotis & _Myotis ciliolabrum_ (MYCI) & 2.9 (265.4) & 83 \\ Western long-eared myotis & _Myotis evotis_ (MYEV) & 1.7 (28.2) & 63 \\ Little brown myotis & _Myotis lucifugus_ (MYLU) & 25.9 (4339.6) & 26 \\ Long-legged myotis & _Myotis volans_ (MYVO) & 2.7 (114.3) & 59 \\ Yuma myotis & _Myotis yumanensis_ (MYYU) & 5.0 (625.5) & 69 \\ Other & - & 0.98 (25.7) & 77 \\ \hline \end{tabular}
\end{table}
Table 3: Observed mean, variance, and % zero observations for each species in the case study.
We investigated the estimated covariate associations for occupancy. Since occupancy is a binary outcome, the estimates plotted in Figure 3 are interpreted as log odds ratios for occupancy at each site visit. Overall, the results were quite similar between the models. Neither method found much of an effect for time on occupancy. However, both models did find a decrease in the odds of occupancy for EPFU in 2019 compared to 2016, and DMZIP estimated an increase in the odds of occupancy for LANO in 2020 compared to 2016. We found that an increase in elevation was associated with an increase in the odds of occupancy for most species, with the exception of MYVO. We observed mostly negative associations between precipitation and occupancy, though most of the 95% credible intervals contained 0 for both models. The strongest relation was for MYCI, where a millimeter increase in precipitation was associated with a 95% decrease in the odds of occupancy. EPFU, LACI, LANO, MYCA, MYCI, and MYYU were all found to have positive associations between temperature and occupancy. However, temperature was negatively associated with occupancy for MYVO. Typically, the proposed method was more conservative than DMZIP with respect to parameter uncertainty for all covariate effects.
Figure 4 plots the estimated relations between temperature, precipitation, and illumination and relative abundance using the proposed method. Exponentiation of a regression coefficient is interpreted as the multiplicative factor of change in the proportion of a compositional element with a one unit change in the corresponding standardized covariate while holding all else constant (Chen and Li, 2013). We found that temperature typically had a positive association with the true relative abundances for all species, with the exception of MYEV. Again, we found similar negative relations between precipitation and the true relative abundance. The association between illumination and relative abundance varied across species. Supplementary Figure S1 presents the associations between the same covariates and relative activity estimated with the DMZIP model. It is important to note that the regression coefficients for missZIDM and DMZIP at this level of the model do not have the same meaning, as the former assumes the true (latent) relative abundances follow a ZIDM distribution while the latter assumes the latent encounter rate of each species follows a Poisson distribution. Due to these differences in interpretation, the results are not directly comparable. However, overall we found very similar patterns in terms of the directionality of the relations, with the proposed method typically providing more conservative estimates.
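As a purely illustrative numerical example of this interpretation (the coefficient value below is hypothetical and not an estimate from our analysis), a coefficient of \(\hat{\beta}=-0.5\) for a standardized covariate gives

\[\exp(\hat{\beta})=e^{-0.5}\approx 0.61,\]

so a one standard deviation increase in that covariate multiplies the expected proportion of the corresponding species by roughly 0.61 (about a 39% decrease), holding all other covariates constant.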
## 6 Conclusions
In this work, we propose the first method for simultaneously accommodating false positives and false negatives in multinomial data. Our model can naturally incorporate existing knowledge of zero-inflation or misclassification through prior specification and/or easily accommodate validation data to inform the model. In simulation, we demonstrated that our approach obtains similar or improved estimation performance for at-risk, true count, and misclassification probabilities compared to alternative methods that ignore one or both forms of measurement error. Motivated by data collected in multispecies occupancy-detection studies, we further showed how the proposed approach can provide more accurate estimation for occupancy and
(mis)classification probabilities than existing approaches when the observed count data are overdispersed. Existing multispecies occupancy-detection models could be easily adjusted to accommodate potential overdispersion by replacing the Poisson distribution for the ecological process with negative binomial distributions. The implementation of this model would require modeling the latent counts explicitly, similar to our approach for modeling misclassification. This approach serves as a potentially viable alternative for modeling misclassification in overdispersed multivariate count data, but it would not be appropriate for multinomial or compositional data settings where the total number of counts is fixed.
Motivated by the task of modeling imperfect classification in acoustic bat data, the proposed method is applicable to other settings in which multinomial data with potential misclassification are collected, within and outside of ecological research. Naturally, the method can be applied to other occupancy monitoring studies in which potential misclassification occurs (e.g., citizen science, aerial surveys). Another very promising application of the proposed method in ecological research is to environmental DNA data (Ficetola et al., 2016; Schmidt et al., 2013; Willoughby et al., 2016; Lahoz-Monfort et al., 2016). In the context of conservation research and wildlife monitoring studies, the proposed method can be extended in various directions to answer pressing research questions. For example, to accommodate occupancy dynamics, one could assume the probability a site is in a given occupancy state is governed by a Markov process, similar to Miller et al. (2013). Additionally, in the current formulation, we assume that the total
Figure 2: Posterior estimates of classification probabilities for each species using the proposed missZIDM and DMZIP models.
number of species, \(T\), matches the number of species observed. As such, our model does not provide inference on species richness. To estimate species richness, we could take a parameter-expanded data augmentation approach, similar to Royle and Dorazio (2012), under the assumption that there were other species in the surveyed regions that went unobserved (i.e., \(T\gg C\)).
## Supplementary Material
Supplementary Material. The Supplementary Material contains detailed derivations of the MCMC algorithm, sensitivity analysis, and additional figures.
missZIDM R package. This file contains code to generate data similar to the simulation study, apply the method, and perform inference.
Figure 3: Posterior estimates of regression coefficients for occupancy for the proposed missZIDM and DMZIP models. Dot represents the posterior mean with error bars capturing the 95% credible intervals.
Figure 4: Posterior estimates of regression coefficients for relative abundance for the proposed missZIDM model. Dot represents the posterior mean with error bars capturing the 95% credible intervals.
#### Acknowledgments
MDK gratefully acknowledges the support of NSF grant DMS-2245492. The opinions, findings, and conclusions expressed are those of the authors and do not necessarily reflect the views of the NSF. The authors would like to thank Dr. Kathi Irvine for providing useful discussions about modeling bat acoustic data.
|
2308.13157
|
Federated Learning in IoT: a Survey from a Resource-Constrained
Perspective
|
The IoT ecosystem is able to leverage vast amounts of data for intelligent
decision-making. Federated Learning (FL), a decentralized machine learning
technique, is widely used to collect and train machine learning models from a
variety of distributed data sources. Both IoT and FL systems can be
complementary and used together. However, the resource-constrained nature of
IoT devices prevents the widescale deployment of FL in the real world. This
research paper presents a comprehensive survey of the challenges and solutions
associated with implementing Federated Learning (FL) in resource-constrained
Internet of Things (IoT) environments, viewed from 2 levels, client and server.
We focus on solutions regarding limited client resources, presence of
heterogeneous client data, server capacity, and high communication costs, and
assess their effectiveness in various scenarios. Furthermore, we categorize the
solutions based on the location of their application, i.e., the IoT client, and
the FL server. In addition to a comprehensive review of existing research and
potential future directions, this paper also presents new evaluation metrics
that would allow researchers to evaluate their solutions on
resource-constrained IoT devices.
|
Ishmeet Kaur andAdwaita Janardhan Jadhav
|
2023-08-25T03:31:22Z
|
http://arxiv.org/abs/2308.13157v1
|
# Federated Learning in IoT: a Survey from a Resource-Constrained Perspective
###### Abstract
The IoT ecosystem is able to leverage vast amounts of data for intelligent decision-making. Federated Learning (FL), a decentralized machine learning technique, is widely used to collect and train machine learning models from a variety of distributed data sources. Both IoT and FL systems can be complementary and used together. However, the resource-constrained nature of IoT devices prevents the widespread deployment of FL in the real world. This research paper presents a comprehensive survey of the challenges and solutions associated with implementing Federated Learning (FL) in resource-constrained Internet of Things (IoT) environments, viewed from two levels: the client and the server. We focus on solutions regarding limited client resources, presence of heterogeneous client data, server capacity, and high communication costs, and assess their effectiveness in various scenarios. Furthermore, we categorize the solutions based on the location of their application, i.e., the IoT client and the FL server. In addition to a comprehensive review of existing research and potential future directions, this paper also presents new evaluation metrics that would allow researchers to evaluate their solutions on resource-constrained IoT devices.
_keywords -_ federated learning, internet-of-things, survey
## I Introduction
Internet of Things (IoT) refers to the vast network of interconnected devices embedded with sensors, aimed at exchanging data with each other and the cloud over the internet [1]. In the era of technological advancements, the rapid growth of IoT stands as a key influence across diverse sectors such as healthcare, automation, and transportation [2].
These IoT devices continuously generate a vast amount of data, and Machine Learning (ML) techniques have recently been employed to efficiently utilize this rich data source [2]. In many of these applications, ML models are primarily trained at a centralized location, such as a cloud server, where data originating from various IoT devices is aggregated [3]. These trained models are then deployed to the IoT devices, where they perform tasks like object classification [1] on new data, thereby making decisions.
However, the traditional approach of deploying centralized ML training on IoT data faces significant obstacles. The training server often grapples with resource limitations, in terms of computational power and storage capacity [4]. Furthermore, the process of transferring vast quantities of data from the edge devices to the central server can introduce significant latency [5]. Thus, there is an interest in "edge" learning, where the ML models are trained directly on the devices, which also enhances data privacy [1].
Federated learning (FL), a decentralized machine learning approach, has emerged as a compelling method to perform "edge" learning. According to McMahan et al. [3], FL is a decentralized learning paradigm, enabling devices to collaboratively learn a shared model while keeping all the training data on the original device [3]. By shifting the computation closer to the data sources, FL not only reduces latency and bandwidth usage but also enhances data privacy and security.
Most existing FL techniques are designed with the assumption of relatively powerful devices and stable network connections, which is often not the case in IoT environments. Energy efficiency is paramount in this context, as IoT devices often operate on limited power supplies, and excessive energy usage for FL can significantly shorten device lifetimes. Some challenges of IoT-centric FL systems include managing the heterogeneous and resource-constrained nature of IoT devices [1], unreliable or unstable networks [3], scaling to hundreds or thousands of IoT devices [3], potential delays in
Fig. 1: Overview of the Federated Learning process and the types of optimizations possible on IoT clients (Level 1) and the FL server (Level 2). Yellow bubbles are IoT client optimizations. Green bubbles are FL server optimizations.
model convergence and learning efficiency, and efficient model training and updating in decentralized environments [6]. These challenges have opened up a new field of research that focuses on balancing model accuracy and energy efficiency to improve the deployability of FL on low-power IoT devices.
This paper provides a comprehensive review of federated learning in the context of IoT, specifically focusing on resource-constrained devices. We survey existing research and state-of-the-art techniques, addressing the challenges, potential solutions, and future directions in the field. The aim of this survey is to provide a comprehensive review and critical analysis of existing FL techniques within the IoT framework, explicitly considering the unique challenges posed by resource limitations. Recognizing the complex interplay between different components of an FL system, we propose a novel categorization that investigates FL techniques from two perspectives: the client level and the server level. This approach allows us to delve deeper into each component's constraints and requirements, enabling a better understanding of the intricacies involved in deploying FL on IoT devices. Fig. 1 provides an overview of the findings in our survey.
In the following sections, we first highlight prominent IoT client optimizations, and then the FL server optimizations. In the final section, we present our research recommendations and conclude this paper.
## II Level 1: Internet-of-Things Client
In FL systems, clients locally train model updates on their own data and send these updates to a central server, where they are aggregated to create a global model reflecting knowledge from all clients [3]. This cycle of local training and global aggregation continues until the model reaches convergence [3]. This decentralized approach, which keeps raw data on client devices rather than on a central server and thereby enhances privacy, is particularly appealing for IoT devices and encourages wider adoption of FL [1].
Training on clients, however, introduces complexity into the system due to device and behavioral heterogeneity, such as differences in computational capabilities, network speeds, and availability for training. Furthermore, clients can be exposed to differing data distributions, leading to a phenomenon known as Non-IID data [7], which can pose challenges to the learning process. Therefore, effective client management and optimization are key to maximizing the benefits of federated learning systems. In the next subsections, we provide an overview of some solutions to the aforementioned challenges. We divide the solutions into four broad categories: (1) Dynamic Client Participation and Selection, (2) Adaptive Learning, (3) Model Compression, and (4) Heterogeneous Models.
### _Dynamic Client Participation and Selection_
Most widely used FL paradigms assume that all the clients in the FL system have sufficient resources. However, in IoT networks, clients differ significantly [1], which can lead to resource wastage, where clients perform training work that does not contribute to enhancing the model, whether due to updates that are ultimately discarded or to poor data distributions. This resource wastage deters users from participating in FL, making scaling to larger deployments problematic.
To address these challenges, researchers have developed client selection techniques that use new resource-to-accuracy metrics in heterogeneous FL systems. OORT [2] is one such novel FL scheme, which accounts for poorly performing clients by assigning each client a trust score. OORT is particularly useful in dealing with resource-constrained FL-based IoT clients because it evaluates each IoT client's resource availability before assigning a task. The FL server constantly monitors the clients' activities and updates their trust scores based on their continued performance. This method allows OORT to improve time-to-accuracy by 1.2\(\times-\)14.1\(\times\) and final model accuracy by 1.3%\(-\)9.8% when compared with the unoptimized FL training scheme.
Despite its advantages, OORT's main downside is the overhead from managing IoT clients' trust scores. Subsequent works, such as PyramidFL [8] and REFL [9], improve the speed of FL training by designing novel methods to assign representative trust scores. In particular, PyramidFL [8] uses the FL server to first determine the initial trust scores and then allows the IoT clients to continually optimize their own trust scores based on the available resources. In other lines of work, Bonawitz et al. [10] take into account the time of day to improve the quality of trust scores with temporal data, FedPARL [11] accounts for previous training activities to find trustworthy clients, and [12] showcases methods to assign trust scores to mobile IoT clients like autonomous vehicles and robots. FedMCCS [13] assigns priorities to the IoT clients by solving a bi-level optimization problem that considers the availability of resources, communication overhead, and the distribution of training data. Another work [14] proposes a dynamic algorithm called ELASTIC based on the trade-off between maximizing client selection and minimizing the energy consumption of the participating clients. [13, 15] take into account communication/network costs and resource capabilities of clients for selection.
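To make score-driven selection concrete, the sketch below shows one simple way a server could rank and sample clients by a blended data-utility and speed score; the scoring formula, field names, and thresholds are illustrative assumptions and do not reproduce OORT, PyramidFL, or any other specific system.

```python
import random

def select_clients(clients, k, alpha=0.7):
    """Pick k clients using a blended utility/speed score (illustrative only).

    Each client dict carries:
      - 'stat_utility':   proxy for how informative its recent updates were
      - 'est_round_time': estimated seconds to finish one local round
    """
    def score(c):
        speed = 1.0 / max(c["est_round_time"], 1e-6)
        return alpha * c["stat_utility"] + (1.0 - alpha) * speed

    ranked = sorted(clients, key=score, reverse=True)
    shortlist = ranked[: int(1.5 * k)]  # keep a cushion of promising clients
    # Randomize within the shortlist to avoid always starving slower devices.
    return random.sample(shortlist, min(k, len(shortlist)))

clients = [
    {"id": i, "stat_utility": random.random(), "est_round_time": random.uniform(5, 60)}
    for i in range(100)
]
print([c["id"] for c in select_clients(clients, k=10)])
```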
Despite progress in Client Selection optimization, the existing literature lacks solutions for dynamic, on-the-fly IoT client selection, as most techniques target static environments without client turnover. Potential research areas include enabling clients to share trust scores among each other for optimized participation, integrating selection and participation scoring into overall training, and exploring incentive models in FL for resource-constrained IoT settings.
### _Adaptive Learning_
Adaptive learning involves tailoring the learning process to the specific characteristics, capabilities, or constraints of each client device by adjusting the training parameters. This group of techniques allows for flexible client participation, efficient resource allocation, variable frequency of model updates, and personalized learning approaches for the client IoT devices.
A common method to perform Adaptive Learning modifies the IoT client's loss function to scale gradient updates based
on client characteristics, accommodating heterogeneous client updates. However, deployability challenges arise due to the difficulty of choosing an appropriate loss function modification.
In order to make better use of system resources and improve performance in a more straightforward manner, FedAwo and FedAwo* [16] use server resources to solve the problems of statistical and system heterogeneity without increasing the load on clients. The authors propose an algorithm for automatic weight optimization (FedAwo), where the server calculates the optimal weight for the local model through a machine-learning algorithm. FedAwo* reduces the training cost by dynamically adjusting the number of epochs used for local model training. Another approach to prioritize clients is to modify the number of epochs run on each client. A dynamic epoch parameter in the model training is proposed in BePOCH [17]. Here, an algorithm is used to identify the best number of epochs per training round in an FL model such that it reduces resource consumption and training time. A combination of different Adaptive Learning schemes is yet to be explored and will be important future work.
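As a minimal sketch of the dynamic-epoch idea, the snippet below scales the number of local epochs with a crude device-resource budget; the budget model, weights, and bounds are assumptions made for illustration and are not the BePOCH algorithm.

```python
def local_epochs(battery_level, cpu_share, base_epochs=5, min_epochs=1, max_epochs=10):
    """Scale the number of local training epochs by a crude resource budget.

    battery_level and cpu_share lie in [0, 1]; well-resourced devices train longer,
    while constrained devices fall back to fewer epochs to save energy and time.
    """
    budget = 0.5 * battery_level + 0.5 * cpu_share
    epochs = round(base_epochs * (0.5 + budget))
    return max(min_epochs, min(max_epochs, epochs))

print(local_epochs(battery_level=0.9, cpu_share=0.8))  # well-resourced device -> more epochs
print(local_epochs(battery_level=0.2, cpu_share=0.1))  # constrained device -> fewer epochs
```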
Personalized Federated Learning (PFL) is a group of techniques that create dedicated, resource-efficient local models for individual IoT clients. Techniques such as pFedGate [18] use a trainable gating layer to speed up training for certain data distributions, and FedSpa [19] employs personalized sparse masks for custom local models. Despite its success, PFL has drawbacks such as limited adaptability to diverse model architectures and high deployment costs due to substantial server computing needs.
### _Model Compression_
Since most IoT devices have limited memory, model compression is a commonly used technique for reducing memory requirements during the local training phase in FL systems. Compression techniques reduce the number of learnable parameters in the model by (a) factorizing the weight matrices or (b) removing redundant parameters.
Fig. 2 shows an example of weight matrix factorization. In this example, a \(3\times 3\) convolution (Fig. 2(a)) kernel is broken up into \(3\times 1\) and \(1\times 3\) kernels (Fig. 2(b)). By doing so, the number of parameters decreases from 9 to 6 (33%).
However, when scaling to larger models, finding such factorizations may not be easy or even possible [11, 20]. To overcome this problem, researchers [21, 22] perform the matrix factorization in a pre-training step. Although matrix factorization reduces the memory requirements, finding an optimal factorization may be a time-consuming process. Future research should aim to create a theoretical framework for selecting decomposition strategies based on model hyperparameters. This is crucial for scaling FL, as each IoT client might need a distinct decomposition approach.
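The factorization illustrated in Fig. 2 can be written down directly; the PyTorch sketch below replaces a single 3×3 convolution with a 3×1 followed by a 1×3 convolution and compares parameter counts (the layer sizes are arbitrary and chosen only for illustration).

```python
import torch
import torch.nn as nn

in_ch, out_ch = 16, 16

# Baseline: a single 3x3 convolution.
full = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)

# Factored variant: a 3x1 convolution followed by a 1x3 convolution.
factored = nn.Sequential(
    nn.Conv2d(in_ch, out_ch, kernel_size=(3, 1), padding=(1, 0), bias=False),
    nn.Conv2d(out_ch, out_ch, kernel_size=(1, 3), padding=(0, 1), bias=False),
)

def n_params(module):
    return sum(p.numel() for p in module.parameters())

print(n_params(full), n_params(factored))  # 2304 vs 1536: roughly a 33% reduction
x = torch.randn(1, in_ch, 32, 32)
print(full(x).shape, factored(x).shape)    # both produce (1, 16, 32, 32)
```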
Other model compression techniques reduce the model memory requirements by identifying and removing redundant parameters [23]. Research has shown that not all model parameters are needed for the model to converge [23]. Model pruning techniques have been developed to identify and remove such redundant model parameters [11]. These techniques quantify the importance of the parameters by measuring the DNN accuracy losses when the weights are removed. If the accuracy losses are significant, the weight is considered to be important. Similar to factorization, the pruning process is done in a pre-training phase. Pruning techniques are still not used widely because they require special hardware that can perform sparse computation. More research is needed to deploy pruning techniques on general-purpose hardware used in IoT settings.
### _Heterogeneous Models_
One promising way to enhance FL performance on IoT involves using heterogeneous on-device models. HeteroFL [24] was proposed to adaptively assign a subset of global model parameters to an on-device model, assuming the local IoT client model could be a subset of a larger server model. However, it has been found that not all model architectures can be easily subdivided.
Rather than exchanging gradients or parameters, FedH2L [25] shares predictions on a pre-distributed seed set and performs decentralized optimization. Participants focus on finding the best non-conflicting gradient for simultaneously fitting local data and incorporating feedback from peers.
Some recent research has enabled devices to design their IoT client models independently based on federated knowledge distillation techniques [25, 26, 27]. In federated knowledge distillation, only the logit information of the FL server is shared with the IoT client models. This enables the IoT clients to combine their local logits with the FL server's logits to update their weights. FedZKT [28] proposes a zero-shot federated distillation approach, contrasting previous research as it requires no on-device data. It allows devices to create models from heterogeneous local resources for knowledge transfer across these models without needing private data.
Transfer Learning methods enhance FL deployability on IoT systems because they help train efficient yet accurate models. For example, Group Knowledge Transfer [29], uses an alternating minimization algorithm to train small CNNs for IoT clients, reducing computational requirements and communication while maintaining accuracy.
Another work called hierarchical split federated learning framework [30] efficiently trains FL models through a hierarchical organization of the IoT. This solution reduces the burden
Fig. 2: Model compression reduces the #parameters.
on the individual IoT devices by reducing the communication with the distant FL server.
These techniques are successful in scaling FL to large heterogeneous IoT networks but mostly require significant knowledge about every IoT client's hardware resources and/or their private data. Relaxing these constraints through future work would make these solutions more practical.
## III Level 2: Federated Learning Server
Although the actual learning takes place on individual IoT clients, the server plays a pivotal role: it coordinates the learning process, aggregates the model updates, distributes the updated model to the different clients, and handles device heterogeneity and security. Therefore, the system's overall performance hugely depends on how the server utilizes its resources.
In FL for IoT systems, the model updates are frequently transmitted from multiple devices to the central server. If updates are not promptly processed and merged, a bottleneck forms, slowing the learning process. Hence, it is vital to optimize the server's time for aggregating and transmitting updates to IoT devices. Such optimizations involve using parallel computing, smart task scheduling, and enhanced network infrastructure for faster data transmission. Additionally, the server must securely store and manage data from numerous devices without exhausting memory.
In the following subsections, we describe some popular solutions to the aforementioned challenges and highlight some open research areas. For clarity, we divide the solutions into three main categories: (1) Asynchronous Updates, (2) Aggregation Algorithms, and (3) Model Quantization.
### _Asynchronous Updates_
Typically, federated learning systems aggregate updates only after all devices have sent their data. This has been shown to slow down model convergence and increase resource demand [5]. It can result in increased communication rounds and idle time for both the client and server. Sometimes, numerous client updates can overload the server, leading to delays. Many Federated Learning (FL) techniques address this by using asynchronous update mechanisms to optimize server resources and cater to device heterogeneity. Essentially, clients send updates when ready, and servers aggregate them immediately, eliminating wait times for slower devices. Asynchronous stochastic gradient descent (ASGD) [31] is the initial framework that led the way for early AsyncFL works [32, 33, 34, 35]. These techniques decouple model training on the IoT client side from the global model update through straggler-aware aggregation at the FL server, making the model converge faster. However, these techniques suffer from a common problem, i.e., data races [36] on the global model, where clients try to update the global model concurrently. For example, if multiple clients simultaneously send local models to the server for aggregation, it may result in merging conflicts, leading to inconsistent or incorrect global model updates. Data races on the FL server can lower device utilization and slow training.
To combat the data race problem, some works, like FedCrowd [36], focus on the concurrency of the FL server by using a shadow model at the server along with multiple threads. At any given time, one thread aggregates client updates into the shadow model, another copies these updates to the global model, and a final thread dispatches the global model update to clients. This approach is well suited to tackle the data race problem; however, it has been shown that it does not scale well to production FL systems. This is because the shadow models significantly increase the memory requirements at the FL server. Another approach, using a buffer, is discussed in FedBuff [5]. Here, the buffer at the FL server is used to store the client updates, and the FL server uses this buffer to aggregate and update the global model. FedBuff is reported to be 2.5x more efficient than the basic FedAsync. This work extends to PAPAYA [37], which is a production-ready asynchronous protocol with secure aggregation.
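The buffered strategy can be sketched as follows: the server accumulates incoming client updates in a buffer and folds them into the global model only when the buffer fills, optionally down-weighting stale updates. This is an illustrative, single-threaded simplification in the spirit of FedBuff, not its actual implementation.

```python
import numpy as np

class BufferedAggregator:
    """Accumulate client updates and apply them in one step when the buffer fills."""

    def __init__(self, global_weights, buffer_size=8, lr=1.0):
        self.global_weights = np.asarray(global_weights, dtype=float)
        self.buffer_size = buffer_size
        self.lr = lr
        self._buffer = []

    def receive(self, client_delta, staleness=0):
        # Down-weight updates computed against an older copy of the global model.
        scale = 1.0 / np.sqrt(1.0 + staleness)
        self._buffer.append(scale * np.asarray(client_delta, dtype=float))
        if len(self._buffer) >= self.buffer_size:
            self._flush()

    def _flush(self):
        mean_delta = np.mean(self._buffer, axis=0)
        self.global_weights += self.lr * mean_delta
        self._buffer.clear()

agg = BufferedAggregator(global_weights=np.zeros(4), buffer_size=3)
for staleness, delta in [(0, [0.1] * 4), (2, [0.2] * 4), (5, [0.4] * 4)]:
    agg.receive(delta, staleness)
print(agg.global_weights)  # buffer flushed once after three updates
```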
Another important consideration for Asynchronous Updates is the variance in the processing speeds among the IoT clients. Faster devices participate in global updates many more times than slow devices, and some slow devices cannot join in the global aggregation even once due to staleness control. The stale model problem can come up in the aforementioned approaches, where the newly arrived update is calculated based on a set of stale weights that are older than the current global model, thus hurting the convergence of the model. AsyncFedED [38] and TimelyFL [39] present asynchronous federated learning frameworks with an adaptive weight aggregation algorithm to solve the staleness issue with heterogeneous clients. However, this problem is far from solved, and more research is required to ensure the resources in large heterogeneous IoT networks are being maximally utilized. Furthermore, new training algorithms that focus on optimizing time-to-accuracy metrics are required.
### _Aggregation Algorithms_
In FL, clients send their updates, which can be in the form of gradients, parameter differences, or even entire model weights, and the server uses an aggregation method to combine these. Usually, the server uses averaging [3], where it calculates the average of the updates received from the devices. The choice of the aggregation algorithm usually does not change the computational complexity of the FL system, but it can significantly impact the number of iterations required to reach convergence [40]. Reducing the number of training iterations can save energy on the server as well as on the participating IoT clients. These techniques also dramatically reduce communication overheads.
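For reference, a minimal sample-size-weighted averaging step in the style of FedAvg [3] might look like the sketch below; the dictionary-of-arrays model format is an assumption made only for illustration.

```python
import numpy as np

def weighted_average(client_models, client_sizes):
    """Average each parameter across clients, weighted by local dataset size.

    client_models: list of dicts mapping parameter name -> numpy array
    client_sizes:  list of local dataset sizes, one per client
    """
    total = float(sum(client_sizes))
    return {
        name: sum((n / total) * model[name] for model, n in zip(client_models, client_sizes))
        for name in client_models[0]
    }

models = [
    {"w": np.array([1.0, 2.0]), "b": np.array([0.5])},
    {"w": np.array([3.0, 0.0]), "b": np.array([1.5])},
]
print(weighted_average(models, client_sizes=[100, 300]))
# {'w': array([2.5, 0.5]), 'b': array([1.25])}
```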
The concept of hierarchical aggregation [41, 42, 43] has been introduced, where model updates from clients are aggregated at multiple levels of a hierarchy. It helps to relieve communication overhead and reduce bandwidth requirements while maintaining privacy and improving the scalability of the system. Similarly motivated research [43, 41, 42] describes methods that use a three-layer hierarchy of Mobile Edge Computing. This design approach is suitable
in FL systems where plenty of resources are available in the IoT clients (because of their large numbers), thus allowing work to be offloaded from the server. Aggregation algorithms that dynamically adapt the frequency of global aggregation under a fixed resource budget have also been developed [44]. In such techniques, a controller learns the data distribution, system dynamics, and model characteristics to ensure that the most important model updates are processed first. Reducing the number of model updates also decreases the memory and computation requirements [45].
Researchers have found that standard aggregation algorithms such as FedAvg [3] take several times longer to converge on non-IID data. There has been a recent focus on adaptive aggregation methods based on data heterogeneity across IoT clients. Some works consider adding momentum-based optimizers such as SLOWMO [46] and FedAdam [47] to improve the convergence time and, consequently, reduce the computational burden on the FL server. [48] propose an attention-based aggregation technique that combines each IoT client's data distribution with the data distribution of the entire group. Although these techniques show promising results, they are not able to completely avoid the deterioration of model convergence due to the client drift caused by non-IID local updates. This is an important area of future research.
### _Model Quantization_
By default, most machine learning libraries represent numbers as 32-bit floating-point values [49]. These 32-bit values are not always needed during the training phase [23]. There are numerous benefits to using formats with lower precision than 32-bit floating point: (1) less memory is required, enabling the FL server to hold larger models and coordinate more client updates; (2) less memory bandwidth is required, which speeds up data transfer operations [50]; and (3) math operations run faster in reduced precision [51].
However, there exists a tradeoff when using extremely low-precision representations. Fig. 3 highlights this tradeoff. When using 4-bit or 8-bit formats, it is important to consider training convergence. The model may not converge if all the operations are performed in the 4-bit format [6]. Thus, it is an open research problem to automatically identify which precision is suitable for the different model parameters. Solving this problem would enable more effective scaling of FL systems. Enhancing software support for non-standard quantization formats is another important area of open research.
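A small numerical sketch of this tradeoff: symmetric per-tensor quantization of 32-bit weights to 8 bits cuts storage by 4x at the cost of a bounded rounding error. The scheme below is a generic illustration, not the quantizer of any particular library.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor quantization: float32 -> int8 plus a single float scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(w.nbytes, q.nbytes)       # 4000 vs 1000 bytes: 4x smaller
print(np.abs(w - w_hat).max())  # worst-case rounding error, about scale / 2
```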
## IV Discussion and Conclusion
In this section, we propose metrics that will help researchers evaluate their solutions' deployability on IoT FL systems. We summarize our findings (in TABLE I) and conclude this paper.
### _Proposed Evaluation Metrics_
1. Non-IID Datasets: The FL papers mostly use synthetic non-IID [7, 52] split techniques on datasets like CIFAR-10 and ImageNet that are not representative of real-world IoT data [4, 5]. The data from different devices may have different temporal and data distributions, lighting conditions, and angles. Testing on datasets such as Market-1501, or having a benchmark technique for generating synthetic non-IID splits, may be more valuable.
2. Baselines for time-to-accuracy: FL techniques are often measured on the amount of time taken to converge for certain accuracy on a dataset. However, most techniques are measured on different baselines, e.g., FedBuff [5] presents results for the time taken to reach 60% accuracy on CIFAR-10, while FedBalancer [4] measures the time taken to reach 80% on FEMNIST. Standardizing the baselines is important for future research.
3. Energy Delay Product (EDP): EDP combines energy consumption and latency by multiplying the energy consumed by the FL model with the corresponding latency. This is important for both the IoT clients and the FL servers to ensure that neither is becoming a bottleneck.
4. Round efficiency: The number of communication rounds taken by the system to converge helps measure the performance of the aggregation algorithms.
### _Conclusion_
FL systems have become more ubiquitous in the last few years because of the widespread deployment of IoT devices. FL is well suited to train large-scale machine-learning models because of its ability to guarantee convergence, preserve privacy, and save network bandwidth. However, deploying FL in IoT environments has many challenges: (1) large numbers of FL clients, (2) unreliable or unstable networks, (3) heterogeneous clients, and (4) limited computing resources and memory. In this survey, we highlight and summarize the key research topics that help improve the deployability of FL in IoT environments. We find that the existing research can be divided into optimizations for the FL server and the IoT clients. Within each type of optimization, we find
Fig. 3: Decreasing the precision of the model reduces the memory requirement, but also impacts the accuracy.
\begin{table}
\begin{tabular}{|c|c|} \hline Application Area & Open Research Areas \\ \hline IoT Client & High accuracy with small models; consistent model convergence with non-IID data. \\ \hline FL Server & Reduce accuracy losses with int4 weights; improve convergence rates with asynchronous updates. \\ \hline \end{tabular}
\end{table} TABLE I: Open research areas to improve the deployability of FL on IoT devices.
that similar solutions can be categorized together. We use this categorization to highlight the advantages, disadvantages, and potential future work. We also provide insights about evaluation metrics and how they can be improved.
|
2307.02807
|
Dynamics of a droplet in shear flow by smoothed particle hydrodynamics
|
We employ a multi-phase smoothed particle hydrodynamics (SPH) method to study
droplet dynamics in shear flow. With an extensive range of Reynolds number,
capillary number, wall confinement, and density/viscosity ratio between the
droplet and the matrix fluid, we are able to investigate systematically the
droplet dynamics such as deformation and breakup. We conduct the majority of
the simulations in two dimensions due to economical computations, while perform
a few representative simulations in three dimensions to corroborate the former.
Comparison between current results and those in literature indicates that the
SPH method adopted has an excellent accuracy and is capable of simulating
scenarios with large density or/and viscosity ratios. We generate slices of
phase diagram in five dimensions, scopes of which are unprecedented. Based on
the phase diagram, critical capillary numbers can be identified on the boundary
of different states. As a realistic application, we perform simulations with
actual parameters of water droplet in air flow to predict the critical
conditions of breakup, which is crucial in the context of atomization.
|
Kuiliang Wang, Hong Liang, Chong Zhao, Xin Bian
|
2023-07-06T06:53:20Z
|
http://arxiv.org/abs/2307.02807v1
|
# Dynamics of a droplet in shear flow by smoothed particle hydrodynamics
###### Abstract
We employ a multi-phase smoothed particle hydrodynamics (SPH) method to study droplet dynamics in shear flow. With an extensive range of Reynolds number, capillary number, wall confinement, and density/viscosity ratio between the droplet and the matrix fluid, we are able to investigate systematically the droplet dynamics such as deformation and breakup. We conduct the majority of the simulations in two dimensions due to economical computations, while performing a few representative simulations in three dimensions to corroborate the former. Comparison between current results and those in the literature indicates that the SPH method adopted has an excellent accuracy and is capable of simulating scenarios with large density or/and viscosity ratios. We generate slices of phase diagram in five dimensions, scopes of which are unprecedented. Based on the phase diagram, critical capillary numbers can be identified on the boundary of different states. As a realistic application, we perform simulations with actual parameters of a water droplet in air flow to predict the critical conditions of breakup, which is crucial in the context of atomization.
keywords: droplet; multiphase flow; SPH;
## 1 Introduction
The deformation and breakup of droplets in shear flow are ubiquitous in engineering applications. On microfluidic chips, droplets are utilized for microbial cultivation and material transport [1; 2], and a thorough understanding of their dynamics in confined flows may improve the efficiency of production and transportation. In other environmental and industrial applications such as protection against harmful aerosols, ink-jet printing and atomization in nozzles [3; 4; 5; 6; 7], liquid droplets are typically in gas flows. Accordingly, a decent knowledge on their dynamics with a high density/viscosity ratio against the matrix fluid is significant. To this end, a comprehensive investigation on the dynamics of a droplet in shear flow, which involves a wide range of Reynolds number, capillary number, confinements of the wall, viscosity/density ratio between the two phases, is called for.
Since pioneering works by Taylor on droplet deformation in shear and extensional flows [8; 9], enormous theoretical and experimental studies have been conducted. A series of works by the group of Mason [10; 11; 12] further studied the deformation and burst of droplets, and even depicted the streamlines inside and around the droplets. Chaffey and Brenner [13] extended a previous analytical approximation to a second order form, which is crucial for the non-elliptic deformation of a highly viscous droplet under large shear rate. Barthes-Biesel and Acrivos [14] expressed the solution of creeping-flow equations in powers of deformation parameters and applied a linear stability theory to determine the critical values for the droplet breakup. Hinch and Acrivos [15] investigated theoretically the stability of a long slender droplet, which is largely deformed in shear flow. However, early analytical works rarely considered effects of finite Reynolds number or wall confinements. In addition, numerous experimental studies have been conducted on the droplet deformation and breakup [16; 17; 18; 19], where not only the effects of viscosity ratio between the droplet and the matrix fluid [20; 21], but also wall confinements [22; 23] have been taken into account.
With advance in computational science, numerical simulation has become a popular approach to study droplet dynamics in the past decades. Boundary integral method was among the first to be applied to study deformation of droplets in stationary and transient states [24], non-Newtonian droplets [25], and migration of a droplet in shear flow [26]. Moreover, Li et al. [27] employed a volume-of-fluid (VOF) method and Galerkin projection technique to simulate the process of droplet breakup. In the work of Amani et al. [28],
a conservative level-set (CLS) method built on a conservative finite-volume approximation is applied to study the effect of viscosity ratio and wall confinement on the critical capillary number. In addition, lattice Boltzmann method (LBM) has been widely employed to study deformation, breakup and coalescence of droplets [29; 30; 31; 32; 33]; to model viscoelastic droplet [34] and surfactant-laden droplet [35]. We note that an interface tracing technique such as VOF, CLS, a phase-field formulation, or immersed boundary method is often necessary by a flow solver based on Eulerian meshes.
As a Lagrangian method, the smoothed particle hydrodynamics (SPH) method has some advantages in simulating multiphase flows. Since different phases are identified by different types of particles, the interface automatically emerges without an auxiliary tracing technique, even for a very large deformation. Moreover, inertia and wall effects can be taken into account straightforwardly, in contrast to theoretical analysis or the boundary integral method. Since its inception in astrophysics, the SPH method has been largely developed and widely applied in various flow problems [36; 37]. Morris [38] considered the surface tension based on a continuous surface force model and simulated an oscillating two-dimensional rod in SPH. Hu et al. [39] proposed a multi-phase model that handles both macroscopic and mesoscopic flows in SPH, where a droplet in shear flow was selected as a benchmark to validate the method. Other improvements and modifications have also been proposed for SPH in the context of multiphase problems [40; 41; 42; 43]. Furthermore, a droplet or matrix flow with special properties can also be considered. For example, Moinfar et al. [44] studied the drop deformation under simple shear flow of Giesekus fluids and Vahabi [45] investigated the effect of thixotropy on deformation of a droplet under shear flow. Saghatchi et al. [46] studied the dynamics of a 2D double emulsion in shear flow with electric field based on an incompressible SPH method. There are also studies on colliding and coalescence processes of droplets by SPH [47; 48]. Simulation of bubbles in liquid is similar, but can encounter special challenges [49], due to the density/viscosity ratio being the reverse of that of a droplet in gas.
Previously, simulations of multiphase flows by SPH method often investigated specific circumstances. Therefore, the objective of this paper is two fold: firstly, to simulate an extensive range of parameters to examine the SPH method for multiphase flows; secondly, to fill gaps of unexplored range of parameters and systematically investigate their influence on the droplet dynamics. The rest of the paper is arranged as follows: in Sec. 2, we introduce the multiphase SPH method and a specific surface tension model. We
present validations and extensive numerical results in Sec. 3. We summarize this work after discussions in Sec. 4.
## 2 Method
### Governing equations and surface tension model
We consider isothermal Navier-Stokes equations with a surface tension for multiphase flow in Lagrangian frame
\[\begin{split}\frac{d\rho}{dt}&=-\rho\nabla\cdot \mathbf{v},\\ \frac{d\mathbf{v}}{dt}&=\frac{1}{\rho}\left(-\nabla p +\mathbf{F}_{b}+\mathbf{F}_{v}+\mathbf{F}_{s}\right),\end{split} \tag{1}\]
where \(\rho\), \(\mathbf{v}\) and \(p\) are density, velocity and pressure respectively. \(\mathbf{F}_{b}\) is the body force, which is not considered in this study. \(\mathbf{F}_{v}\), \(\mathbf{F}_{s}\) denote viscous force and surface tension at the interface between two phases, respectively.
Following previous studies of quasi-incompressible flow modeling [38], an artificial equation of state relating pressure to density can be written as
\[p=c_{s}^{2}\left(\rho-\rho_{\mathrm{ref}}\right), \tag{2}\]
where \(c_{s}\) is an artificial sound speed and \(\rho_{\mathrm{ref}}\) is a reference density. Theoretically, subtracting the reference density has no influence on the gradient of pressure, but it can reduce the numerical error of SPH discretizations for the gradient operator.
For a Newtonian flow, the viscous force \(\mathbf{F}_{v}\) simplifies to
\[\mathbf{F}_{v}=\mu\nabla^{2}\mathbf{v}, \tag{3}\]
where \(\mu\) is the dynamic viscosity. We assume surface tension to be uniform along the interface and do not consider Marangoni force. Therefore, the surface tension acts on the normal direction of the interface. Moreover, its magnitude depends on the local curvature as
\[\mathbf{F}_{s}=\sigma\kappa\mathbf{\hat{n}}\delta_{s}, \tag{4}\]
where \(\sigma\), \(\kappa\), \(\mathbf{\hat{n}}\) are surface tension coefficient, curvature and unit normal vector to the concave side, respectively; \(\delta_{s}\) is a surface delta function and its discrete form shall be described later.
To describe the surface tension at the interface between two fluids, a continuous surface tension model is adopted. As a matter of fact, surface tension may be written as the divergence of a tensor \(\mathbf{T}\)[50; 51]
\[\sigma\kappa\mathbf{\hat{n}}\delta_{s}=\nabla\cdot\mathbf{T}, \tag{5}\]
where
\[\mathbf{T}=\sigma\left(\mathbf{I}-\mathbf{\hat{n}}\otimes\mathbf{\hat{n}} \right)\delta_{s}. \tag{6}\]
To represent a multiphase flow, we define a color function \(c\) and set a unique value for each phase, that is, \(c^{\mathrm{I}}=0\) and \(c^{\mathrm{II}}=1\) for the two phases, respectively. Apparently, the color function has a jump from 0 to 1 at the interface between phase I and II. Therefore, the unit normal vector can be represented by the normalized gradient of the color function as
\[\mathbf{\hat{n}}=\frac{\nabla c}{|\nabla c|}, \tag{7}\]
and the surface delta function is replaced by the scaled gradient as
\[\delta_{s}=|\mathbf{n}|=\frac{|\nabla c|}{|c^{\mathrm{I}}-c^{\mathrm{II}}|}. \tag{8}\]
### SPH method
In SPH, fluid is represented by moving particles carrying flow properties such as density, velocity and pressure. We largely follow the work of Hu and Adams [39] and provide a brief derivation here. Density of a particle is calculated by interpolating the mass of neighboring particles as
\[\rho_{i}=m_{i}\sum_{j}W_{ij}, \tag{9}\]
where mass \(m_{i}\) is constant for every particle. \(W_{ij}\) denotes a weight function for interpolation
\[W_{ij}=W\left(\mathbf{r}_{ij},h\right), \tag{10}\]
where \(\mathbf{r}_{ij}=\mathbf{r}_{i}-\mathbf{r}_{j}\) is a relative position vector from particle \(j\) to \(i\) and \(h\) is the smoothing length. We further define
\[V_{i}=\frac{1}{\sum_{j}W_{ij}}, \tag{11}\]
to be an equivalent volume of particle \(i\) so that \(V_{i}=m_{i}/\rho_{i}\).
The pressure gradient can be computed as
\[-\left(\frac{1}{\rho}\nabla p\right)_{i}=-\sum_{j}\left(V_{i}^{2}p_{i}+V_{j}^{2} p_{j}\right)\frac{\partial W}{\partial r_{ij}}\mathbf{e}_{ij}, \tag{12}\]
where \(p_{i}\) and \(p_{j}\) are obtained by Eq. (2). The viscous force can be calculated as
\[\left(\mu\nabla^{2}\mathbf{v}\right)_{i}=\sum_{j}\frac{2\mu_{i}\mu_{j}}{\mu_{i }+\mu_{j}}\left(V_{i}^{2}+V_{j}^{2}\right)\frac{\mathbf{v}_{ij}}{r_{ij}}\frac{ \partial W}{\partial r_{ij}}, \tag{13}\]
where \(\mathbf{v}_{ij}=\mathbf{v}_{i}-\mathbf{v}_{j}\) is the relative velocity of particle \(i\) and \(j\) and \(r_{ij}=\left|\mathbf{r}_{ij}\right|\) is the distance between them.
As suggested by Morris [38] and Hu et al. [39], a part of pressure contribution \(\sigma\frac{d-1}{d}\delta_{s}\) is removed to avoid attractive force and improve the stability of the interactions between SPH particles. Therefore, we employ
\[\mathbf{T}^{\prime}=\sigma\left(\frac{1}{d}\mathbf{I}-\mathbf{\hat{n}}\otimes \mathbf{\hat{n}}\right)\delta_{s} \tag{14}\]
to replace Eq. (6), where \(d\) is the spatial dimension. Combining Eq. (8), Eq. (7) and Eq. (14), we obtain
\[\mathbf{T}^{\prime}=\frac{\sigma}{\left|c^{\mathrm{I}}-c^{\mathrm{II}}\right| \left|\nabla c\right|}\left(\frac{\left|\nabla c\right|^{2}}{d}\mathbf{I}- \nabla c\otimes\nabla c\right). \tag{15}\]
The gradient of color function between phase I and phase II can be calculated in SPH as
\[\nabla c_{i}=\frac{1}{V_{i}}\sum_{j}V_{j}^{2}\left(c_{j}-c_{i}\right)\frac{ \partial W}{\partial r_{ij}}\mathbf{e}_{ij}, \tag{16}\]
where \(c_{i}\) (or \(c_{j}\)) is initially assigned to be \(c^{\mathrm{I}}\) or \(c^{\mathrm{II}}\) according to which phase particle \(i\) (or \(j\)) constitutes. Substituting Eq. (16) into Eq. (15), we obtain the stress tensor
\[\mathbf{T}^{\prime}_{i}=\frac{\sigma}{\left|\nabla c_{i}\right|\left|c^{ \mathrm{I}}-c^{\mathrm{II}}\right|}\left(\frac{\left|\nabla c_{i}\right|^{2}}{ d}\mathbf{I}-\nabla c_{i}\otimes\nabla c_{i}\right). \tag{17}\]
Finally, the surface force term is calculated by the stress tensor using the SPH expression for divergence
\[\left(\sigma\kappa\mathbf{\hat{n}}\delta_{s}\right)_{i}=\sum_{j}\frac{\partial W }{\partial r_{ij}}\mathbf{e}_{ij}\cdot\left(V_{i}^{2}\mathbf{T}^{\prime}_{i}+ V_{j}^{2}\mathbf{T}^{\prime}_{j}\right). \tag{18}\]
It is simple to see that the discrete version of \(\delta_{s}\) in SPH is
\[\left(\delta_{s}\right)_{i}=\frac{1}{V_{i}\left|c^{\mathrm{I}}-c^{\mathrm{II}} \right|}\left|\sum_{j}V_{j}^{2}\left(c_{j}-c_{i}\right)\frac{\partial W}{ \partial r_{ij}}\mathbf{e}_{ij}\right|, \tag{19}\]
which has a finite support to remove the singularity and distributes the surface tension onto a thin layer of two fluids across the interface.
### Computational settings
The quintic kernel is adopted as weight function
\[W=\phi\begin{cases}(3-R)^{5}-6(2-R)^{5}+15(1-R)^{5}&0\leq R<1;\\ (3-R)^{5}-6(2-R)^{5}&1\leq R<2;\\ (3-R)^{5}&2\leq R<3;\\ 0&R\geq 3,\end{cases} \tag{20}\]
where \(R=r/h\) and \(h\) is the smoothing length. \(\phi\) is a normalization coefficient which equals \(1/120\), \(7/(478\pi)\) and \(1/(120\pi)\) in one, two and three dimensions, respectively. We set \(h=1.2\Delta x\) with \(\Delta x\) as the initial spacing distance between particles. This means that the support domain of the kernel function is truncated at \(3.6\Delta x\), namely the cutoff \(r_{c}=3.6\Delta x\). According to our tests, a smoothing length of \(1.2\Delta x\) is almost optimal for an excellent accuracy while avoiding the pairing instability. For a detailed discussion on this issue, we refer to Price [52].
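For reference, Eq. (20) can be translated directly into code; the sketch below evaluates \(W(r,h)\) in one, two, or three dimensions and is a minimal illustration rather than the implementation used for the simulations in this work. The \(1/h^{d}\) factor is included so that the kernel integrates to unity.

```python
import numpy as np

def quintic_kernel(r, h, dim=2):
    """Quintic spline kernel W(r, h) with support radius 3h, cf. Eq. (20)."""
    sigma = {1: 1.0 / 120.0, 2: 7.0 / (478.0 * np.pi), 3: 1.0 / (120.0 * np.pi)}[dim]
    R = np.asarray(r, dtype=float) / h

    def hat(a):
        # (a - R)^5 where R < a, zero elsewhere
        return np.where(R < a, (a - R) ** 5, 0.0)

    return sigma / h**dim * (hat(3.0) - 6.0 * hat(2.0) + 15.0 * hat(1.0))

# Example: kernel values for h = 1.2 * dx, up to the cutoff r_c = 3h = 3.6 * dx.
dx = 0.02
h = 1.2 * dx
print(quintic_kernel(np.array([0.0, dx, 2 * dx, 3.6 * dx]), h, dim=2))
```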
Since we adopt a weakly compressible formulation, the sound speed \(c_{s}\) should be large enough to restrict the density fluctuations. Based on a scale analysis, Morris et al. [38; 53] suggested that \(c_{s}^{2}\) should be comparable to the largest of
\[\frac{U^{2}}{\Delta},\ \frac{\mu U}{\rho_{0}L\Delta},\ \frac{FL}{\Delta},\ \frac{ \sigma\kappa}{\rho_{0}\Delta}, \tag{21}\]
where \(\Delta\) is the density variation and \(U\), \(L\), \(F\), \(\kappa\) and \(\sigma\) are typical velocity, length, body force, curvature and surface tension coefficient, respectively. Accordingly, for multiphase flows the sound speed may be different for each phase. In all simulations, we set identical \(\Delta\leq 0.5\%\) for each phase and calculate \(c_{s}\) accordingly.
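As an illustration of this criterion, the artificial sound speed for a phase may be estimated by taking the largest of the four bounds in Eq. (21); the parameter values in the example below are placeholders, not those of a particular simulation in this work.

```python
import numpy as np

def sound_speed(U, L, mu, rho0, F=0.0, sigma=0.0, kappa=0.0, delta=0.005):
    """Artificial sound speed from the scale analysis of Eq. (21).

    delta is the allowed relative density variation (here 0.5%).
    """
    cs2 = max(U**2 / delta,
              mu * U / (rho0 * L * delta),
              F * L / delta,
              sigma * kappa / (rho0 * delta))
    return np.sqrt(cs2)

# Placeholder parameters for a droplet of radius R0 in a shear flow.
R0, U = 0.25, 0.5
print(sound_speed(U=U, L=R0, mu=0.5, rho0=1.0, sigma=0.2, kappa=1.0 / R0))
```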
At every time step, the minimal relative density is recorded among all particles, that is,
\[\rho_{min}=min\left\{min\left\{\frac{\rho_{i}}{\rho_{0}^{\mathrm{I}}}\right\},min\left\{\frac{\rho_{j}}{\rho_{0}^{\mathrm{II}}}\right\}\right\}, \tag{22}\]
where particle \(i\) belongs to phase I and particle \(j\) belongs to phase II; \(\rho_{0}^{\rm I}\), \(\rho_{0}^{\rm II}\) are initial densities for the two phases, respectively. Thereafter, \(\rho_{\rm ref}^{\rm I}=0.99\rho_{min}\rho_{0}^{\rm I}\), \(\rho_{\rm ref}^{\rm II}=0.99\rho_{min}\rho_{0}^{\rm II}\) are subtracted as reference density for each phase in Eq. (2) to compute the particle pressure. This operation is performed to reduce numerical errors in calculating the pressure gradient while still keeping repulsive forces between particles.
The explicit velocity-Verlet method is adopted for time integration and a time step is chosen appropriately for stability [38].
## 3 Numerical Results
We consider a shear flow generated by two parallel walls with opposite velocity of magnitude \(U\). Periodic boundaries apply in the \(x\) direction. The computational domain is with length \(L\) and height \(H\). A circular droplet with radius \(R_{0}\) is initially located at the center of the computational domain, as shown in Fig. 1. No-slip boundary condition is applied at the wall-fluid interfaces using the method proposed by Morris [53].
Five dimensionless parameters that determine the deformation of the
Figure 1: Schematic representation of a droplet with initial radius \(R_{0}\) in a matrix fluid between two parallel walls with distance \(H\). The blue dashed lines represent periodic boundaries with a distance \(L\). The continuous phase has viscosity \(\mu_{c}\) while the dispersed phase has viscosity \(\mu_{d}=\lambda\mu_{c}\).
droplet are the Reynolds number \(Re=\rho_{c}\dot{\gamma}R_{0}^{2}/\mu_{c}\), capillary number \(Ca=\dot{\gamma}R_{0}\mu_{c}/\sigma\), confinement ratio \(R_{0}/H\), viscosity ratio \(\lambda=\mu_{d}/\mu_{c}\) and density ratio \(\alpha=\rho_{d}/\rho_{c}\), where \(\dot{\gamma}=2U/H\) is the shear rate, \(\sigma\) is the surface tension coefficient, \(\rho_{d}\) and \(\mu_{d}\) are the density and viscosity of the dispersed fluid phase inside the droplet, while \(\rho_{c}\) and \(\mu_{c}\) are those of the continuous fluid phase, respectively.
In Sec. 3.1, we study the deformation for an intact droplet while considering the effects due to the five dimensionless numbers. In Sec. 3.2, we examine the breakup of the droplet. In Sec. 3.3, we summarize the droplet dynamics for both intact shape and breakup in phase diagrams. In Sec. 3.4, we demonstrate the deformation and breakup with physical parameters of a water droplet in air flow as an industrial application.
### Droplet deformation
When the shear is mild, the droplet remains intact and deforms to arrive at a stable shape eventually. The degree of droplet deformation can be quantified by the Taylor deformation parameter \(D=(A-B)/(A+B)\), where \(A\) is the greatest length and \(B\) is the breadth of the droplet, as shown in Fig. 2. To validate our method, we first compare our results of transient deformations with those of Sheth and Pozrikidis using the immersed boundary method within the finite difference method [54]. We follow their work to set \(L=H=4R_{0}=1\), \(\rho_{d}=\rho_{c}=1\), \(\mu_{d}=\mu_{c}=0.5\) and adjust shear rate and surface tension. The two walls slide with velocities \(\pm\frac{1}{2}\dot{\gamma}H\) to generate a clockwise rotation of the droplet. Two resolutions are considered for particles initially placed on a square lattice: \(\Delta x=2R_{0}/25\) and \(R_{0}/25\), corresponding to the droplet containing \(N=484\) and 1976 particles, respectively.
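In post-processing, the Taylor deformation parameter can be estimated directly from the positions of the droplet particles. One convenient approximation, assuming the deformed droplet stays close to elliptical, uses the eigenvalues of the two-dimensional gyration tensor; this is an illustrative sketch, not necessarily the measurement procedure used for the results below.

```python
import numpy as np

def taylor_deformation(xy):
    """Estimate D = (A - B) / (A + B) and the orientation from 2D particle positions.

    For a uniform ellipse with semi-axes a >= b, the gyration tensor has eigenvalues
    a^2/4 and b^2/4, so the semi-axes are recovered as 2 * sqrt(eigenvalue).
    """
    xy = np.asarray(xy, dtype=float)
    centered = xy - xy.mean(axis=0)
    gyration = centered.T @ centered / len(xy)
    evals, evecs = np.linalg.eigh(gyration)                            # ascending eigenvalues
    b, a = 2.0 * np.sqrt(evals)                                        # short and long semi-axes
    theta = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1])) % 180.0   # long-axis angle
    return (a - b) / (a + b), theta

# Example: particles filling an ellipse with a = 1.5, b = 1.0, tilted by 30 degrees.
rng = np.random.default_rng(1)
phi = rng.uniform(0.0, 2.0 * np.pi, 5000)
rad = np.sqrt(rng.uniform(0.0, 1.0, 5000))
pts = np.column_stack([1.5 * rad * np.cos(phi), 1.0 * rad * np.sin(phi)])
rot = np.deg2rad(30.0)
Rm = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
D, theta = taylor_deformation(pts @ Rm.T)
print(D, theta)  # approximately 0.2 and 30 degrees
```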
Figure 2: Trace of interface and parameters for the measurement of droplet deformation.
Figure 3: Particle distribution for \(H=4R_{0}\), \(\alpha=\lambda=1\), \(Re=0.125\), \(Ca=0.45\), \(L=4R_{0}\) at (a) initial configuration. (b) maximum elongation. (c) steady state. (d) Taylor deformation parameter as a function of time.
We present particle distributions and \(D\) as functions of time in Fig. 3 for a typical simulation with \(Re=0.125\) and \(Ca=0.45\). We note that the deformation of the droplet may oscillate in time, and its maximum elongation does not necessarily take place at the steady state reached after a very long time. We further focus on the transient deformations at short times in Fig. 4 so that we can compare our results with those of Sheth and Pozrikidis [54]. It can be readily seen that our results with the low resolution \(\Delta x=0.02\), or \(N=484\), already reproduce the reference very well for different Reynolds numbers and/or capillary numbers. As the reference covers a rather short time period, some interesting phenomena such as the oscillation of the Taylor deformation parameter \(D\) are not captured there, as indicated for \(Re=12.5\) and \(Ca=0.025\) in Fig. 4(d).
To validate our method for vanishing Reynolds numbers, we calculate the stationary deformation and orientation of the droplet with respect to \(Ca\). We follow Zhou and Pozrikidis [55] to set \(L=H=8R_{0}=2\), \(\rho_{d}=\rho_{c}=1\), \(\mu_{d}=\mu_{c}=0.5\) and adjust the shear rate and surface tension accordingly. The deformation parameter \(D\) and orientation \(\theta\) (defined in Fig. 2) as functions of \(Ca\) (up to \(Ca=1\)) for \(Re=0.01\) are shown in Fig. 5. Results for \(Re=0.1\) and \(1\) are also given for comparison, where droplet breakup already takes place at \(Ca\gtrsim 0.4\) for \(Re=1\). The difference between the results for \(Re=0.1\) and \(Re=0.01\) is insignificant, and they both resemble the results of the boundary integral method for Stokes flow [55]. We can readily conclude that \(Re=0.1\) is small enough to approximate Stokes flow and present the steady shapes accordingly in Fig. 5(c). We further present the contours and streamlines for a typical evolution of droplet deformation at vanishing Reynolds number in Fig. 6.
We now investigate the effects of confinement and set \(L=16R_{0}\) to minimize periodic artifacts. We first restrict our attention to \(Re=0.1\), \(\alpha=1\) and \(\lambda=1\). Four confinement ratios are considered: \(H=2.4R_{0}\), \(4R_{0}\), \(8R_{0}\) and \(16R_{0}\). The deformation parameter as a function of \(Ca\) is shown in Fig. 7. As we can see, a smaller distance between the two walls enhances the elongation of the droplet and makes its long axis align more horizontally. As we relax the confinement, the relation between \(D\) and \(Ca\) becomes linear, and the difference between \(H=16R_{0}\) and \(H=8R_{0}\) is already negligible.
Furthermore, we simulate cases where the droplet and the matrix flow are two fluids with different physical properties. We first consider two fluids of the same density but with different viscosities. We choose a computational domain of \(16R_{0}\times 16R_{0}\), set \(Re=0.1\), \(\alpha=1\), and let \(\lambda\) range from \(0.1\) to \(10\).
Figure 4: Transient deformations of a droplet compared with results of Sheth and Pozrikidis [54] with \(\alpha=\lambda=1\), \(H=4R_{0}\), \(L=4R_{0}\) and (a) \(Re=0.25\), \(Ca=0.1\), \(0.45\). (b) \(Re=2.5\), \(Ca=0.1\), \(0.4\). (c) \(Re=12.5\), \(Ca=0.1\), \(0.2\). (d) \(Re=25\), \(Ca=0.025\), \(0.2\)
Figure 5: Effects of \(Re\) and \(Ca\) on 2D droplet deformation when \(\alpha=\lambda=1\), \(H=8R_{0}\), \(L=8R_{0}\). (a) Taylor deformation parameters. (b) Droplet orientation. (c) Steady shapes under different \(Ca\) when \(Re=0.1\). The droplet already breaks up at \(Ca\gtrsim 0.4\) for \(Re=1\), results of which are omitted here and presented in a later section.
Figure 6: A typical evolution of deformation for an initially circular 2D droplet in shear flow: \(\alpha=\lambda=1\), \(Re=0.1\), \(Ca=0.4\), \(H=16R_{0}\), \(L=16R_{0}\). (a) Droplet deformation over time. (b) Streamlines: the color represents the magnitude of velocity and a red line indicates the droplet interface.
Figure 7: Effects of confinement ratio and \(Ca\) on 2D droplet deformation when \(\alpha=\lambda=1\), \(Re=0.1\), \(L=16R_{0}\). (a) Taylor deformation parameters. (b) Droplet orientation.
The initial spacing \(\Delta x\) between nearest particles is \(2R_{0}/25\), so a droplet contains 484 particles. The deformation parameter as a function of \(Ca\) is shown in Fig. 8. As we can see, the deformation increases as \(\lambda\) increases from 0.1 to 10. In this range of \(\lambda\), a droplet with lower viscosity has a smoother internal circulation and a faster response, which can reduce the elongation [16; 20].
In the other case, the fluids inside and outside the droplet have the same viscosity but different densities. The sound speeds are chosen according to the ratio of the initial densities to balance the pressure
\[\frac{c_{s}^{c}}{c_{s}^{d}}=\frac{\rho_{ref}^{d}}{\rho_{ref}^{c}}=\frac{\rho_{ d}}{\rho_{c}}=\alpha, \tag{23}\]
where \(c_{s}^{c}\), \(c_{s}^{d}\) and \(\rho_{ref}^{c}\), \(\rho_{ref}^{d}\) are the sound speeds and reference densities of the fluids outside and inside the droplet, respectively. As shown in Fig. 9, the difference between the deformations of the droplet for density ratios of \(0.1-10\) is very small, except for a noticeably lower inclination at small \(Ca\) when \(\alpha=0.1\). In this small Reynolds number regime (\(Re=0.1\)), the density ratio has a negligible influence and only the capillary number determines the droplet deformation.
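A minimal sketch of how the two artificial sound speeds can be assigned so that their ratio satisfies Eq. (23); the base sound speed of the continuous phase is left as a user choice (in weakly compressible SPH it is typically set well above the largest flow velocity), which is an assumption of this illustration.

```python
def assign_sound_speeds(c_s_continuous, rho_c, rho_d):
    """Choose the dispersed-phase sound speed such that
    c_s^c / c_s^d = rho_d / rho_c = alpha, as in Eq. (23)."""
    alpha = rho_d / rho_c
    c_s_dispersed = c_s_continuous / alpha
    return c_s_continuous, c_s_dispersed
```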
In 3D simulations, the width of the simulation box \(W\) is an additional computational parameter compared to the 2D simulations.
Figure 8: Effects of viscosity ratio \(\lambda\) and \(Ca\) on 2D droplet deformation when \(\alpha=1\), \(Re=0.1\), \(H=16R_{0}\), \(L=16R_{0}\). (a) Taylor deformation parameters. (b) Droplet inclination.
Figure 10: Effects of box length \(L\) and width \(W\) on the Taylor deformation parameter \(D\) of 3D droplet deformation when \(\alpha=\lambda=1\), \(Re=0.1\), \(Ca=0.2\), \(H=4R_{0}\), compared to the analytical prediction of Shapira and Haber (1990).
Figure 9: Effects of density ratio \(\alpha\) and \(Ca\) on 2D droplet deformation when \(\lambda=1\), \(Re=0.1\), \(H=16R_{0}\), \(L=16R_{0}\). (a) Taylor deformation parameters. (b) Droplet inclination.
To compare with analytical predictions or experimental data, the length and width of the simulation box are numerical parameters and should be large enough. One set of parameters, \(Re=0.1\), \(Ca=0.2\), \(\alpha=\lambda=1\), \(H=4R_{0}\), is selected, and different lengths \(L\) and widths \(W\) of the simulation box are examined. According to our simulations, the deformation generally decreases as \(L\) and/or \(W\) increases. We compare the Taylor deformation parameter \(D\) in steady states of our simulations with the analytical prediction of Shapira and Haber [56]. The differences between our results and the analytical prediction under different \(L\) and \(W\) are plotted in Fig. 10. It can be seen that when \(L\) is larger than \(24R_{0}\) and \(W\) is larger than \(8R_{0}\), the results show little change with a further increase of \(L\) and/or \(W\). Fig. 11 shows the steady deformation of 3D droplets in shear flow when \(L=24R_{0}\), \(W=8R_{0}\), \(Re=0.1\) and \(\alpha=\lambda=1\) with different \(Ca\) and confinement in the \(H\) direction, compared with the theoretical predictions of Shapira and Haber [56] and the experimental data of Sibillo et al. [57]. Our results agree well with both analytical and experimental references at \(Ca=0.1\) and \(0.2\), whereas they are closer to the experimental data at \(Ca=0.3\). The deformation increases with the confinement ratio \(R_{0}/H\), which follows the same trend as in the 2D cases.
Figure 11: Effects of confinement ratio and \(Ca\) on 3D droplet deformation in shear flow when \(\alpha=1\) and \(Re=0.1\), \(L=24R_{0}\), \(W=8R_{0}\), compared with predictions of Shapira and Haber [56] and experiment data of Sibillo et al. [57]
### Droplet breakup
When the shear is strong, the droplet is over-stretched and breaks up. We find two patterns of the breakup process under different viscosity ratios in our simulations. As shown in Fig. 12, when \(\alpha=1\), \(\lambda=0.2\), \(Re=0.1\), \(Ca=10\), and \(L=H=16R_{0}\), a droplet rotates, material is stripped from its main body near the surface, and it gradually breaks apart. We call this breakup type A. This type is also found in the experimental study of Grace, who calls it "tip streaming breakup" [20]. The conditions under which type A breakup occurs are presented in the next section. Fig. 13 shows another set of typical snapshots of the droplet shape and flow field when a droplet in shear flow breaks with \(\alpha=\lambda=1\), \(Re=0.1\), \(Ca=0.9\) and \(L=H=16R_{0}\). In this simulation, a droplet is stretched, its waist becomes more and more slender, and it finally breaks up. We call this breakup type B.
To accommodate the breakup of a 3D droplet with a large elongation, we employ a rather long computational domain with \(L=32R_{0}\).
Figure 12: Breakup type A: evolution of an initially circular 2D droplet breakup in shear flow with \(\alpha=1\), \(\lambda=0.2\), \(Re=0.1\), \(Ca=10\), \(H=16R_{0}\), \(L=16R_{0}\). (a) Droplet shape. (b) Streamlines: The color represents the magnitude of velocity and red outlines in the background represent the droplet interface.
Figure 14: Evolution of an initially spherical 3D droplet breakup in shear flow when \(\alpha=\lambda=1\), \(Re=0.1\), \(Ca=0.46\), \(H=2.857R_{0}\), \(L=16R_{0}\): particle distribution (left) and interface (right).
Figure 13: Breakup type B: evolution of an initially circular 2D droplet breakup in shear flow with \(\alpha=\lambda=1\), \(Re=0.1\), \(Ca=0.9\), \(H=16R_{0}\), \(L=16R_{0}\). (a) Droplet shape. (b) Streamlines: The color represents the magnitude of velocity and red outlines in the background represent the droplet interface.
Fig. 14 shows the dynamics of the breakup with \(Re=0.1\), \(H=2.857R_{0}\), \(Ca=0.46\) and \(\alpha=\lambda=1\). The left side shows the SPH particle distributions and the right side shows the corresponding interface contours obtained by SPH kernel interpolation onto mesh cells. The color represents the magnitude of velocity. We adopt the same \(Ca\) and \(R_{0}/H\) as the experiment in creeping flow by Sibillo et al. [57]. The shape of the droplet in the breakup process of our simulation is very close to their experimental observation. Only a slight difference appears in the final stage: in the experiment, the droplet is divided into three main parts, while in our simulation the middle part continues to split into two smaller droplets. In contrast to the 2D case, a 3D droplet has a more slender shape before breaking up.
### Phase diagram
To clearly visualize the states of a droplet under different conditions, we consider a range of Reynolds numbers, capillary numbers, confinements, and density and viscosity ratios and summarize our simulation results in phase diagrams. Thereafter, we may estimate the critical capillary number \(Ca_{c}\) that separates the intact and breakup states and further investigate how it is influenced by the other dimensionless parameters.
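Conceptually, the critical capillary number can be bracketed by a bisection over \(Ca\) at fixed values of the other parameters. The sketch below assumes a hypothetical routine `droplet_breaks(Ca)` that runs one simulation and reports whether breakup occurs; it is only an illustration of the idea, not the procedure used to construct the phase diagrams.

```python
def critical_capillary_number(droplet_breaks, ca_lo=0.1, ca_hi=1.1, tol=0.01):
    """Bisection search for the critical capillary number Ca_c separating
    intact and breakup states. droplet_breaks(Ca) -> bool is assumed to
    run one simulation at fixed Re, confinement, lambda, and alpha."""
    assert not droplet_breaks(ca_lo) and droplet_breaks(ca_hi), "endpoints must bracket Ca_c"
    while ca_hi - ca_lo > tol:
        ca_mid = 0.5 * (ca_lo + ca_hi)
        if droplet_breaks(ca_mid):
            ca_hi = ca_mid   # breakup: the critical value lies below
        else:
            ca_lo = ca_mid   # intact: the critical value lies above
    return 0.5 * (ca_lo + ca_hi)
```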
Figure 15: Phase diagram of 2D droplets states under different confinement, \(Re\) and \(Ca\) when \(\alpha=\lambda=1\), \(L=16R_{0}\).
Figure 16: Phase diagram of 2D droplets states under different confinement, \(Re\) and \(Ca\) when \(\alpha=\lambda=1\), \(L=16R_{0}\). (a) \(H=2.4R_{0}\). (b) \(H=4R_{0}\). (c) \(H=8R_{0}\). (d) \(H=16R_{0}\).
For \(\lambda=\alpha=1\), we perform a group of 2D simulations with different Reynolds number \(Re=0.01,0.1,1,10\) and confinement \(H=2.4R_{0},4R_{0},8R_{0},16R_{0}\) with \(Ca\in[0.1,1.1]\) and \(L=16R_{0}\). For a general overview, the states of the droplet are summarized in Fig. 15. To get a clear view, we slice the phase diagram by two perspectives. Firstly, we divide results into groups of the same confinement to reveal the influence of \(Re\) on \(Ca_{c}\) as shown in Fig. 16. Overall it is apparent that a higher \(Re\) reduces \(Ca_{c}\). Three scenarios are special: under confinement \(H=2.4R_{0}\), \(4R_{0}\) and \(8R_{0}\), we can not differentiate \(Ca_{c}\) between \(Re=0.01\) and \(0.1\).
From another perspective, \(Ca\) versus the confinement ratio for each \(Re\) in Fig. 17, we are not able to find a universal pattern. Under \(Re=0.01\), \(Ca_{c}\) decreases with \(R_{0}/H\), while under \(Re=10\), \(Ca_{c}\) increases with \(R_{0}/H\). In contrast, under \(Re=0.1\) and \(1\), \(Ca_{c}\) has no monotonic relation with \(R_{0}/H\).
Furthermore, we investigate the effects of the viscosity ratio \(\lambda=\mu_{d}/\mu_{c}\in[0.1,10]\) on the droplet dynamics for \(Re=0.1\) and three confinement ratios \(H=4R_{0}\), \(8R_{0}\) and \(16R_{0}\). The results are shown in Fig. 18. For breakup type A, the droplet rotates and is stripped off as described in Sec. 3.2; breakup type B means that a droplet is stretched and breaks up in the middle. Under \(Re=0.1\), type A is observed only if the droplet has a much smaller viscosity compared to the matrix fluid. Overall, \(Ca_{c}\) decreases with the increase of \(\lambda\). However, we notice a flattened or even reversed trend with small differences in \(Ca_{c}\) from \(\lambda=5\) to \(\lambda=10\), as shown in the insets of Fig. 18. According to the studies of Karam et al. and Grace [16; 20], a maximum transfer of energy takes place across an interface, which is consistent with this trend.
Due to the high computational cost in 3D, we only consider a moderate confinement \(H/R_{0}=4\) and perform a group of simulations to draw a phase diagram in the plane of \(Ca\) and \(Re\), as shown in Fig. 19. The size of the simulation box is \(L=32R_{0}\), \(W=8R_{0}\), \(H=4R_{0}\). As in the 2D case shown in Fig. 16(b), the critical \(Ca_{c}\) decreases with increasing \(Re\) in 3D. However, the critical capillary number \(Ca_{c}\) in the 3D case is significantly smaller than that in the 2D case.
### Water droplet in air flow
As one specific application, we employ our method to predict the breakup of a water droplet in a shear flow of air. The critical capillary number or shear rate thus determined is helpful for designing an effective atomization device. Actual physical properties of water and air at around \(20^{\circ}\)C are adopted: \(\rho_{d}=998.2\ kg\cdot m^{-3}\), \(\mu_{d}=1.0087\times 10^{-3}\ Pa\cdot s\) and \(\rho_{c}=1.205\ kg\cdot m^{-3}\),
Figure 17: Phase diagram of 2D droplets states under different confinement, \(Re\) and \(Ca\) when \(\alpha=\lambda=1\), \(L=16R_{0}\). (a) \(Re=0.01\). (b) \(Re=0.1\). (c) \(Re=1\). (d) \(Re=10\).
Figure 18: Phase diagram of 2D droplets states under different confinement, viscosity ratios \(\lambda\) and \(Ca\) when \(\alpha=1\), \(Re=0.1\), \(L=16R_{0}\). (a) \(H=4R_{0}\). (b) \(H=8R_{0}\). (c) \(H=16R_{0}\). (d) two patterns of breakup
\(\mu_{c}=1.81\times 10^{-5}\ Pa\cdot s\) are set for the water (dispersed) phase and the air (continuous) phase, respectively; the surface tension coefficient \(\sigma=72.75\times 10^{-3}\ N\cdot m^{-1}\) is set for the water-air interface.
We consider a relatively large range of Reynolds numbers and depict a phase diagram in the plane of \(Re\) and \(Ca\) on logarithmic-logarithmic scales in Fig. 20. This allows us to connect the results with the same droplet size and observe its behavior while changing \(Re\) and \(Ca\). Points on each dotted line represent droplets of the same radius, as marked in the figure. For example, we have a line of dynamics for the droplet with \(R_{0}=10\mu m\) under shear rates of \(1\times 10^{6}s^{-1}\), \(2\times 10^{6}s^{-1}\), \(5\times 10^{6}s^{-1}\), \(1\times 10^{7}s^{-1}\), \(2\times 10^{7}s^{-1}\); another line of dynamics for the droplet with \(R_{0}=100\mu m\) under shear rates of \(5\times 10^{4}s^{-1}\), \(1\times 10^{5}s^{-1}\), \(2\times 10^{5}s^{-1}\), \(5\times 10^{5}s^{-1}\), \(1\times 10^{6}s^{-1}\). Furthermore, we observe that if \(Re\) is on the order of 100, the critical \(Ca\) for breakup is very sensitive to \(Re\). We also perform a group of 3D simulations for a droplet with \(R_{0}=50\mu m\) under shear rates of \(1\times 10^{5}s^{-1}\), \(2\times 10^{5}s^{-1}\), \(5\times 10^{5}s^{-1}\), \(1\times 10^{6}s^{-1}\), \(2\times 10^{6}s^{-1}\). The 3D result for the critical point of breakup is close to that of the 2D simulations.
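For illustration, the sketch below evaluates \(Re\) and \(Ca\) for a water droplet of a given radius sheared by air at a given shear rate, using the material properties quoted above; it simply restates the definitions of Sec. 3 and can be used to locate a point in the diagram of Fig. 20.

```python
# Material properties of water (dispersed) and air (continuous) around 20 C,
# as quoted in the text.
RHO_D, MU_D = 998.2, 1.0087e-3     # water: density [kg/m^3], viscosity [Pa s]
RHO_C, MU_C = 1.205, 1.81e-5       # air:   density [kg/m^3], viscosity [Pa s]
SIGMA = 72.75e-3                   # water-air surface tension [N/m]
LAM, ALPHA = MU_D / MU_C, RHO_D / RHO_C  # viscosity and density ratios

def re_ca_water_in_air(radius, shear_rate):
    """Reynolds and capillary numbers for a water droplet of the given
    radius [m] in an air shear flow with the given shear rate [1/s]."""
    Re = RHO_C * shear_rate * radius**2 / MU_C
    Ca = shear_rate * radius * MU_C / SIGMA
    return Re, Ca

# Example: a droplet with R0 = 10 um at a shear rate of 1e6 1/s.
print(re_ca_water_in_air(10e-6, 1.0e6))
```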
Figure 19: Phase diagram of 3D droplets states under different \(Re\) and \(Ca\) when \(\alpha=\lambda=1\), \(H=4R_{0}\), \(L=32R_{0}\), \(W=8R_{0}\).
## 4 Conclusions and discussions
In this study, we employed a multi-phase SPH method to simulate droplet deformation and breakup in a simple shear flow over an extensive range of physical parameters. We performed both 2D and 3D simulations and validated them against benchmarks: transient deformations and steady shapes of droplets are compared with previous simulations, analytical derivations, and experimental data. These results indicate that the method is reliable for simulating droplet dynamics in general. We wish to emphasize the convenience of the SPH method for multi-phase problems, as we can leverage its Lagrangian nature and distinguish the phases by particle species. In addition, the algorithm and data structure for 2D and 3D simulations differ only slightly, and therefore it is a simple task to extend the code from 2D to 3D. Economical 2D simulations allow us to investigate a wide range of physical parameters in five dimensions, which serves as a guide for realistic 3D situations. From the results, we come to the following conclusions.
(1) A larger Reynolds number \(Re\) or capillary number \(Ca\) leads to a larger deformation of the droplet. The transient and steady-state deformations of the droplet in our study are in good agreement with previous studies and extend beyond their time limits [54; 55].
Figure 20: Diagram for states of water droplets in air shear flow under different \(Re\) and \(Ca\) when \(H=16R_{0}\), \(L=16R_{0}\).
(2) Under low Reynolds number (\(Re=0.1\)), a stronger confinement by the walls enhances the steady-state deformation in both 2D and 3D simulations. When the walls are separated further apart, the Taylor deformation parameter is almost linear with respect to \(Ca\). The influence of confinement on the deformation of a droplet has been studied by Shapira and Haber using a first-order analytical solution based on Lorentz's reflection method. They proved that the walls do not influence the shape of the deformed droplet but increase the deformation magnitude by a term of order \((R_{0}/H)^{3}\) [56]. The experimental data of Sibillo et al. show satisfactory agreement with the predictions of Shapira and Haber except when the droplet is within a small gap, where the reflection analysis is expected to fail [57]. Our 3D simulation results resemble the whole set of experimental data even when the droplet is within the small gap, which suggests that the method is an applicable tool for more realistic situations in microfluidics.
(3) The effects of wall confinement on the critical capillary number \(Ca_{c}\) are not universal under different \(Re\). When \(Re=0.1\), a narrower gap between the walls reduces \(Ca_{c}\). This is because a narrower gap increases the deformation, as described above. But when \(Re\) is larger, the relation between \(Ca_{c}\) and the confinement ratio is unclear. From our observations, this non-monotonic relation results from an interplay between the shear strength and the stability of the whole flow field. On the one hand, the shear stress transferred to the droplet from the walls is more pronounced in stronger confinement [56], thus closer walls reduce \(Ca_{c}\). On the other hand, a narrower channel reduces the instability of the flow and restricts droplet movements, thus increasing \(Ca_{c}\).
(4) Under \(Re=0.1\) and in the range of viscosity ratio \(\lambda\in[0.1,1]\), a higher \(\lambda\) causes a larger deformation. The effect of \(\lambda\) on \(Ca_{c}\) is not monotonic when \(\lambda>1\), and there is a minimum value of \(Ca_{c}\) between \(\lambda=1\) and \(\lambda=10\). The existence of a minimal \(Ca_{c}\) among different \(\lambda\) has also been found in previous experimental studies [16, 20], where \(\lambda\) is about 1. The discrepancy between our results and the previous ones is attributed to the difference between the 2D and 3D cases. At the same \(Re\), the influence of the density ratio on droplet deformation is much smaller than that of the viscosity ratio.
(5) As an application, a phase diagram obtained with actual physical parameters of water and air is depicted to predict the magnitude of the shear rate required to break a droplet of a certain size, which is helpful in designing atomization nozzles.
## Acknowledgements
K. Wang and X. Bian acknowledge the National Natural Science Foundation of China under grant number 12172330. This work is partially supported by Hangzhou Shiguangji Intelligent Electronics Technology Co., Ltd, Hangzhou, China.
|
2307.08494
|
Visual Explanations with Attributions and Counterfactuals on Time Series
Classification
|
With the rising necessity of explainable artificial intelligence (XAI), we
see an increase in task-dependent XAI methods on varying abstraction levels.
XAI techniques on a global level explain model behavior and on a local level
explain sample predictions. We propose a visual analytics workflow to support
seamless transitions between global and local explanations, focusing on
attributions and counterfactuals on time series classification. In particular,
we adapt local XAI techniques (attributions) that are developed for traditional
datasets (images, text) to analyze time series classification, a data type that
is typically less intelligible to humans. To generate a global overview, we
apply local attribution methods to the data, creating explanations for the
whole dataset. These explanations are projected onto two dimensions, depicting
model behavior trends, strategies, and decision boundaries. To further inspect
the model decision-making as well as potential data errors, a what-if analysis
facilitates hypothesis generation and verification on both the global and local
levels. We constantly collected and incorporated expert user feedback, as well
as insights based on their domain knowledge, resulting in a tailored analysis
workflow and system that tightly integrates time series transformations into
explanations. Lastly, we present three use cases, verifying that our technique
enables users to (1)~explore data transformations and feature relevance,
(2)~identify model behavior and decision boundaries, as well as, (3)~the reason
for misclassifications.
|
Udo Schlegel, Daniela Oelke, Daniel A. Keim, Mennatallah El-Assady
|
2023-07-14T10:01:30Z
|
http://arxiv.org/abs/2307.08494v1
|
# Visual Explanations with Attributions and Counterfactuals on Time Series Classification
###### Abstract
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods on varying abstraction levels. XAI techniques on a global level explain model behavior and on a local level explain sample predictions. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals on time series classification. In particular, we adapt local XAI techniques (attributions) that are developed for traditional datasets (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels. We constantly collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases, verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, as well as, (3) the reason for misclassifications.
Explainable AI, Time Series Classification, Visual Analytics, Deep Learning
## 1 Introduction
Deep learning (DL) achieves state-of-the-art performance in various tasks and domains such as computer vision (e.g., autonomous driving [1]) and natural language processing (e.g., machine translation [2]). Due to such cutting-edge applications, more and more fields incorporate deep learning models, e.g., time series forecasting (predictive maintenance [3]). However, time series are often _not intuitively intelligible_, and thus manually debugging such models is a tedious task. In many cases, transformations such as Fourier [4] or SAX [5] are applied to the data to convert them into a more human-understandable abstraction. These abstractions help to identify certain characteristics (e.g., frequencies), yet transformations cannot guarantee insights. Mainly, real-world sensor data can consist of various overlapping properties like periodicity. Class characteristics based on these single time series features are typically challenging to distinguish and are not guaranteed to be used by a classifier. Thus, domain knowledge is regularly very focused on specific properties that are useful for the task at hand. However, such domain knowledge is only sometimes readily available and is not guaranteed to, e.g., make classes distinguishable. Thus, complex algorithms, like neural networks, need an in-depth inspection of their decision rankings to provide possible insights into the data.
DARPA introduced the explainable artificial intelligence (XAI) initiative [6] to foster and accelerate research around the topic of explainable machine learning (ML) as well as increase trust in AI models. Thus, such initiatives foster gaining an understandable explanation of the model on the one hand and more insight into abstract data on the other hand. Such explanations can be generated on a global and local level [7]. The global level introduces methods to describe the overall decision-making of the model [7]. The local level, conversely, explains only single decisions, e.g., the prediction of a single data sample [7]. An approach for global explanations using visual analytics is RuleMatrix [8]. The decision-making can be inspected by extracting basic decision rules from a classifier. However, the current approach is time-consuming and works best on tabular data with limited features. In most cases, either a global or local explanation is provided [9, 10], and a switch between both is often tedious. Thus, enabling interaction between global and local explanations supports explaining model decisions while still allowing for an in-depth analysis of the data.
We propose a system based on a workflow to seamlessly integrate the transition between global and local explanations for time series, incorporating a technique to generate global explanations based on local attributions. As a baseline for our explanations, we include state-of-the-art attribution techniques for images, text, and tabular data, such as LRP [11] and SHAP [12], as there is only a limited number of time-series-specific techniques. However, our technique can provide a global explanation based on any attribution method. In particular, we propose an approach designed to generate global explanations for time series classifiers. Our technique uses projection methods, e.g., PCA, to generate two-dimensional visualizations based on local explanations. The workflow further presents a way to explain a time series model interactively, on the global level as an overview of the applied data and, per sample, with local-level explanations. Global explanation projections facilitate getting
an overview and finding decision boundaries for models. For instance, through the interactive exploration of regions in these projections on the local level, decision borders can be found by generating counterfactuals [13] that change the time series sample. Our approach combines local XAI techniques and projections to generate global explanations in order to represent and test the model's representation of the data. Further, by selecting interesting samples (e.g., overlapping classes) at the global level, their local explanations can be inspected and compared. At the local level, the sample can be modified by the user based, e.g., on the attribution scores. Such a modified sample can then be projected again into the global explanation to verify or reject a hypothesis about the decision boundaries of the model. Through such interactions, a hypothesis can be generated and then verified or rejected (what-if analysis at the local level and back-projection at the global level). Our presented use cases show the technique's applicability to time series, indicating more comprehensive explanations for, e.g., misclassifications.
We contribute: (1) A visual analytics workflow to bridge global and local explanations for time series. (2) An overview technique based on global explanations produced with local _attributions_. (3) A local _what-if_ analysis to automatically and interactively generate _counterfactuals_ for time series. A time series application of our proposed workflow is presented with use cases that can be explored online1.
Footnote 1: Demo: [https://visual-explanations.time-series-xai.dovis.de](https://visual-explanations.time-series-xai.dovis.de)
## 2 Related Work
Our related work generally introduces important XAI concepts and presents the combination of XAI and VA. Most XAI approaches and VA techniques target intelligible data, e.g., images or text, neglecting time series.
**Explainable Artificial Intelligence -** In machine learning, the definition of explainability is often ambiguous and, in most cases, includes drastic definition differences [14]. However, most definitions try to incorporate specific motivations such as fairness, privacy, reliability, and trust building [15] and thus follow the same larger goals to ensure these aspects [16]. XAI methods consist of various techniques applied ad-hoc or post-hoc onto black-box models [17]. In recent years, the number of post-hoc XAI methods increased with _explainers_ aiming to achieve explainability for simple tasks [18]. Such methods have become increasingly popular due to the upsurge of deep neural networks and the ability of XAI methods to be applied to trained models [18]. Guidotti et al. [7] categorize the methods into eight groups (e.g., feature importance, saliency masks) with various properties. One of the proposed properties describes the XAI method's level, either on the global model or on local samples, to explain the decision-making [7]. For example, many XAI methods highlight feature attribution for single samples, such as LIME [19], which offers a limited view of the model as it only shows a local explanation based on one sample. Other methods on images present saliency masks [20], which display the feature importance of the attention on an image as a heatmap [11]. However, such local explanations are generally not robust and can be fooled [21] or exploited [22].
Furthermore, only limited work exists on the automatic evaluation of explanations [23]. In most cases, such evaluations are conducted by perturbing test data according to the corresponding explanations to inspect the performance change, e.g., in computer vision [24], text data [25], and time series [26]. Ribeiro et al. [27], however, discuss why such explanations and their perturbation evaluation are insufficient to gain insights into the model's internals. Thus, an automated evaluation approach can be considered a first step to highlight techniques that work better than others on the model and data. However, further inspection by humans is needed for the explanations to be useful [23].
**Visual Analytics for XAI -** Visual analytics includes the human at various steps of the data analysis and machine learning workflow [28]. Through various interactions, users can explore data, generate and test a hypothesis, and extract knowledge [29]. Furthermore, humans can work not only with the data but also with models trained on the data for understanding, diagnosis, and refinement [30]. Addressing these tasks, Spinner et al. [18] introduce a framework and pipeline to steer the explanation process of XAI and support the tasks' understanding, diagnosis, and refinement. However, the framework lacks an extension towards incorporating the interaction between global and local explanations to integrate the human into the analysis loop for hypothesis generation and testing like Sacha et al. [29].
VA offers various approaches that explain the decision-making of models or steer them in a user-preferred direction, such as ProtoSteer [31]. In particular, Liu et al. [30] introduce the three tasks of understanding, debugging, and refinement to guide the process of an ML model in VA. Further, Hohman et al. [32] survey VA works that focus on an interactive analysis of deep learning models. Their survey is based on an interrogative technique to categorize works, e.g., whether these support understanding or decision explanation. Further, the related approaches surveyed in the previous work often operate only on local explanations. A related approach for sequence data, mostly text, is LSTMVis [33], as it provides activations of LSTM cells throughout sequence inputs. Another approach for sequence data is Seq2Seq-Vis [10], which focuses on machine translation by visualizing other translation possibilities of the model as well as paths through the explanation space. An approach for images is Summit [9], which combines feature visualizations of convolutional neural networks for global explanations with sample activations for local explanations. RuleMatrix [8] presents a global explanation solution for tabular data by extracting decision rules from a trained model to visualize feature importance, attributions, and distributions. In contrast, scaling and explaining decision rules for time series is rather complicated. For time series, ProtoSteer [31] proposes an interactive analysis to enrich models with user-steered prototypes as partially explainable components in exchange for some performance. Collaris et al. [34] introduce StrategyAtlas, which uses feature importance values projected through UMAP [35] to highlight model strategies. Such strategies demonstrate the model's internal clustering of the data. We use a similar approach. However, because UMAP can produce artifacts with unsuitable parameters, we cover various techniques to mitigate such issues. Further, our work focuses primarily
on the transition between local and global explanations to facilitate the analysis of models and data on both levels.
**Visual Analytics for Time Series -** Basic line plots are generally the go-to solution for time series. However, such visualizations are challenging to scale to long time series or many time series samples. An approach to overcome these challenges is SAX Navigator [36], which applies SAX [5] to transform time series into an abstract symbolic representation and clusters the results to explore the data on various levels. Other proposals use dimension reduction techniques to visualize time series as paths, such as Time Curves [37], to cope with long time series and with line plots that are hard to read and compare. Such time paths can be used for various tasks such as time series clustering [38]. Also, such a technique can be applied to visualize decisions of algorithms to comprehend choices and outcomes [39]. One major drawback of time paths is the clutter produced by many overlapping paths. Our approach focuses on many time series samples and, thus, on reducing each time series to a single point.
**XAI for Time Series -** Theissler et al. [40] provide a comprehensive state-of-the-art survey and review of possible XAI methods for time series, including a categorization by the level of explanation. We consider this categorization as the baseline for choosing possible techniques and focus on the ones for attributions and counterfactuals. Thus, we extract and collect the possible methods for time-point-based and instance-based explanations from Theissler et al. [40]. We omit subsequence-based explanations as these often need specific architectures. For further information on the topic, we recommend the survey [40].
**Combined XAI and VA for Time Series -** The combination of visualizations, visual analytics, and explainable AI for time series has received a boost in the last few years. Schlegel et al. [41] discuss various approaches of XAI for time series and explore the recent visualization techniques for attributions on time series. In particular, Assaf et al. [42] and Schlegel et al. [26] propose early options for visualizing attributions for time series data. However, most of these approaches are rather tricky to interpret for most users, experts and non-experts alike. Attribution visualizations without further interaction options are limited in their communication of model behavior [41], as the attribution techniques often heavily influence the attributions [43]. Siddiqui et al. [44] propose with TSViz a technique to investigate CNNs applied to time series. Through attributions on the input data based on either the output or inner filters, they visualize the most relevant parts of the time series on the line chart. However, their approach focuses heavily on CNNs and needs some tweaking for other methods. We seamlessly support all kinds of neural network architectures.
## 3 Background And Characterization
The classification of _non-intelligible_ data (e.g., time series) using complex models encompasses problems regarding the underlying data, the applied model, and the user tasks. For instance, time series data often contains noise, sensor errors, and arbitrary segmentations producing even more errors [3]. Thus, raw time series are often complicated to interpret due to influencing error factors. Data processing and transformations applied by experts often lead to the first insights into the data. However, transformations such as Fourier-Transformation [4] often need constraints, e.g., segmentation, which is not readily applicable for critical real-world applications. Therefore, fast models need to be trained on the raw data in many use cases. For instance, failure detection on top of machines is often trained on raw sensor data, resulting in complex, tailored models. Especially, neural network models such as LSTM FCN [45] achieve state-of-the-art performance with an acceptable prediction speed. Such models, though, are generally not interpretable by design. Not only is the data hard to interpret, but also models lack comprehensible decisions.
Based on these findings, we discuss the needs of users who apply time series classification models daily and aim to facilitate their work as much as possible.
### _Users, Data, and Model_
**Users -** We focus on the two types of users, domain experts (DE) and data scientists (DS), working with time series. Domain experts are users applying pre-trained models on their daily work for, e.g., predictive maintenance [3]. Data scientists cover a range of users, including developers building models on raw time series data. Other users, such as end-users, shift the focus from debugging and refining models toward understanding. This understanding is often provided by domain- and task-specific visualizations, e.g., Schlegel et al. [43], to target the specific user group. Our approach primarily focuses on the general applicability of debugging and refinement of data and models. Domain knowledge supports the understanding as we mainly look at model and data connections. End-users are, therefore, only addressed to a limited extent. During the development of the system, we were in contact with DE and DS to steer the research direction.
**Data -** Sensor data is the basis for many time series applications and is available as open-access datasets. In many real-world applications, interviewed users reported a need for labels for such time series data and labeled only a limited number of samples themselves. However, even with labels, many models run into unknown failures due to the previously mentioned errors. As time series are _non-intelligible_, we target such data to support the analysis of models applied to raw time series with limited domain knowledge and to enhance the workability with such data. We further focus on time series classification to support the supervised learning of segmented time series data with labels to enable understanding and debugging. Our approach focuses mainly on uni-variate time series to present the overall methodology. However, the proposed approach can also be extended to multi-variate data.
**Model -** Our approach focuses on models applied to raw time series data to avoid as many transformations as possible. Additionally, we target deep neural networks as these provide state-of-the-art performance on many time series classification and forecasting tasks, e.g., LSTM-FCN [45]. However, we do not further constrain the architecture, as transformer networks [46], convolutional neural networks, and recurrent neural networks present nearly state-of-the-art results on time series in many cases [47].
### _User Needs_
Based on previous domain expert interviews and experience, we emphasize three needs (N) users have while working on their analysis.
The _needs_ we came across: **(N1) _Explore_ data transformations and the internal model representations of data.** Time series are inherently _non-intelligible_ and require extensive domain knowledge or transformations that change the data domain. Fourier transformations are the most commonly used way to process the data into a more accessible, readable domain. However, even such transformations can take time to interpret. Thus, the first need of users we identified is to comprehensively explore the data, transformations, and the corresponding internal model representations of the data.
**(N2) _Identify_ the model behavior on known and unknown data.** As state-of-the-art models have become more complex over the last few years and their inner workings are often not easily understandable, another need is the fast identification of the model behavior on the applied data. Thus, users require an overview of the decisions and internals of a model during its application on data to gain insights into the model's behavior and to be able to understand, e.g., misclassifications.
**(N3) _Understand_ sample misclassifications to investigate into the model.** After identification of model behavior, users need to be able to investigate the decisions of models in-depth on samples to understand, e.g., misclassifications. Thus, like in the Shneiderman mantra [48], the overview helps to guide to the details a user requires to understand the models' predictions overall. Afterward, information about the single sample prediction is essential to support users in understanding the models' local decisions.
## 4 Visual Explanations with Attributions and Counterfactuals
We propose a workflow to conceptualize global exploration and local inspection interactions to enable analysts to examine their time series models. As the global explanation level is often difficult to obtain for complex models, we apply projections to create global explanations based on local attributions similar to StrategyAtlas [34].
**Workflow -** Our workflow is grounded in the KDD process [49] and the explAIner pipeline by Spinner et al. [18] and extends these with techniques used by experts, including considerations from our previous work [41]. We extend the XAI component with global and local interactions on top of the data, the model, and the output to explore counterfactuals, attributions, and predictions. Our proposed workflow consists of three stages, as seen in Figure 1, in which the first two stages form the automatic phase, while the last stage is purely driven by the user. The automatic phase consists of the perturbation analysis and transformation projection stages. These two stages process the data and extract information for the last stage. At first, the perturbation analysis (automatic evaluation) of local attribution methods generates attributions for the dataset and pre-ranks suitable techniques based on their evaluation scores for further analysis. Next, the transformation projection stage applies various projection techniques to the raw time series, the Fourier-transformed data, and the attributions to reduce the dimensionality to two. After the automatic phase, the explanation phase incorporates the user into the explanation process [50]. In the global exploration, the previously calculated results are visualized as an overview of the data, the transformations, and the attributions. Through the global exploration or guided by domain knowledge about the data, the user moves to the local inspection, in which an in-detail analysis can be done using, e.g., a what-if analysis similar to the What-If tool [51]. The following sections describe the components of the workflow in more detail.
### _Automatic Phase_
The automatic phase is pipelined into two stages. In the first stage, various attribution techniques extract attributions as explanations for the model, and the perturbation analysis calculates comparable scores of these attributions. Then, in the second stage, the generated attributions get projected into two-dimensional space for the global stage using various dimension reduction techniques.
**Perturbation Analysis -** At first, all selected attribution methods generate attributions for the model based on the selected data. The extracted attributions can be generated in various ways but always attribute a single relevance score to every input value based on the output of the model [40]. E.g. for an input time series sample of length 500, an attribution contains 500 relevance scores. For more information, Theissler et al. [40] survey and explain various techniques which are applicable to time series.
A perturbation analysis is a fundamental approach to evaluate how faithfully such attributions reflect a model's behavior with a score [25]. In general, a perturbation modifies (removes, increases, or decreases) the most relevant features of a sample, based on the attributions, to change the prediction of a model [24]. The change in performance (quality metric score) highlights whether the feature was relevant for the prediction [25]. Such a perturbation analysis can be done fully automatically with just a few modifiable parameters [26]. In particular, fully automatic procedures enable comparing the performance of various attribution methods [25]. For instance, significant changes in the quality metric score (e.g., accuracy) highlight techniques that accurately describe the model's decision-making [24]. Random importance perturbation can be used as a baseline to compare the methods against [26]. These changes help to sort the attribution techniques and inspect which performs best and whether good-performing methods are better than random measures. However, as these attribution methods only work on the data in collaboration with the model, the model's inner workings are not explicitly explained. Further, attributions are sometimes hard to interpret visually, for instance, for time series, where time points are often shown as important without context [41]. Nevertheless, through such a perturbation analysis, attribution methods can be evaluated and shown to be trustworthy explanations [40].
Such perturbation analyses have been done in computer vision [24], text data [25], and time series [26]. Schlegel
et al. [26] propose different perturbation approaches for time series, categorized into two groups (point and time), to evaluate attribution methods on temporal properties. As stated before, the general idea of a perturbation is to change the data values of a sample to something else. For point perturbation, relevant time points (single data values) are perturbed to, e.g., zero. In the literature, e.g., in computer vision [52], the value needs to be changed to a value that holds no information. However, in time series, such a value is hard to find, as the data does not always have hard value boundaries. Schlegel et al. [43] also propose to modify the values to their inverse, max, or min to have a larger spectrum of possible changes. To also incorporate the time component of time series, Schlegel et al. [26] propose the time perturbation, another approach to perturb the data values. For more information on such perturbation analyses, we recommend Schlegel et al. [43] and Theissler et al. [40].
Such an evaluation is vital to ensure the trustworthiness of the attribution methods at hand [25]. For instance, a faulty explanation misleads users into wrong insights into the model and worsens the understanding [16]. Robust explanations are a preferable entry point for further analysis and essential for being able to understand the model's behavior [16].
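The following is a minimal sketch of the point-perturbation idea for a batch of uni-variate time series. It assumes a trained classifier with a scikit-learn-style `predict` method and precomputed attributions of the same shape as the inputs; thresholding by a per-sample quantile and perturbing to zero are only one of the variants discussed above.

```python
import numpy as np

def point_perturbation_score(model, X, y, attributions, quantile=0.9, perturb_value=0.0):
    """Perturb the most relevant time points (per-sample attribution above
    the given quantile) and return the resulting drop in accuracy; a
    larger drop indicates more faithful attributions."""
    X = np.asarray(X, dtype=float)
    attributions = np.asarray(attributions, dtype=float)
    thresholds = np.quantile(attributions, quantile, axis=1, keepdims=True)
    X_perturbed = np.where(attributions >= thresholds, perturb_value, X)
    acc_original = np.mean(model.predict(X) == y)
    acc_perturbed = np.mean(model.predict(X_perturbed) == y)
    return acc_original - acc_perturbed
```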
**Transformation Projections -** In the next stage, the data is used to extract more transformations together with the attributions, e.g., the Fourier transformation. Also, the models' activations of, e.g., the last layer before the softmax can be extracted and used as transformation. These transformations, attributions, and the raw data are projected using various dimension reduction techniques to two dimensions.
Figure 2 shows the general concept of the technique. The attribution technique takes all samples from the data and calculates the attributions for every sample. This new attribution data is then projected onto two dimensions to visualize it in a global overview of the data. Depending on the dimension reduction technique, different properties such as neighborhood or distance preservation hold; this can help to mitigate artifacts in the projection, as clusters shared over different techniques indicate genuine cluster properties.
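A minimal sketch of this projection step: an attribution is computed for every sample with some attribution function (a placeholder here, e.g., a wrapper around LIME or SHAP), the attributions are stacked into a matrix, and the matrix is reduced to two dimensions. PCA from scikit-learn is used for brevity; UMAP, t-SNE, or any of the other techniques mentioned below could be substituted.

```python
import numpy as np
from sklearn.decomposition import PCA

def project_attributions(X, attribution_fn, n_components=2):
    """Compute one attribution vector per sample and project the stacked
    attribution matrix to two dimensions for a global scatter-plot
    overview. attribution_fn(sample) returns relevance scores with the
    same length as the sample."""
    A = np.stack([attribution_fn(x) for x in X])   # (n_samples, series_length)
    return PCA(n_components=n_components).fit_transform(A)

# Usage sketch: points = project_attributions(X_test, my_attribution_fn),
# followed by a scatter plot of points colored by the model prediction.
```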
### _Global and Local Explanations_
After a successful automatic phase, the explanation loop starts with the global exploration of the projections. We first introduce the global exploration as an overview method based on projections of local attributions. Afterward, we present the local inspection to show the possibilities and limits of local attribution methods.
**Global Exploration -** Global explanations enable to understand models in total, e.g., by visualizing decision boundaries or by comprehensible decision rules [7]. Decision rules are, in most cases, favorable [8]; however, the extraction process has a very high computation time and is often also hard to interpret for time series. For most data, decision boundaries are simpler to comprehend and show shortcomings of models (critical samples near borders). However, decision boundaries for high-dimensional or time series data are often hard to visualize. Such boundary visualizations have to solve the same challenges as the high-dimensional data and need further abstractions to display the decision borders in the same visualization.
Only a few techniques for complex models propose global explanations and solutions to overcome such visualization challenges. One of these examples is ANCHORS [27], which enhances LIME [19] by searching for decision boundaries for certain classes, so-called anchors. However, the extracted explanations consist of decision rules and are thus of rather limited use for, e.g., time series. For a
Fig. 1: Workflow for Visual Explanations with Attributions and Counterfactuals: starting with an ML model, the workflow automatically ranks attributions methods using a perturbation analysis to get comparable scores, then projects the attributions and transformations into a two-dimensional space. In the manual analysis, the explanation loop starts with two subloops on the global exploration and local inspection level to ensure the generation of explanations.
Fig. 2: Generating global explanations for time series; first, an attribution method like LIME [19] generates feature attributions for each sample in the whole dataset; next, all calculated attributions are projected using a dimension reduction technique like UMAP [35]; a scatter plot then demonstrates an inner representation of the data for the model.
promising explanation, these explanation rules need further transformations, which introduce other problems such as limited interpretability or loss of information.
We incorporate attributions of a model on data to generate a global overview of the model's strategies, similar to StrategyAtlas [34]. We take the results of the automatic phase, the pre-ranking, and the projections to visualize the data, transformations, and attributions. Because of, for example, artifacts or incorrect parameters, we propose to use not only one dimension reduction technique but a wider variety of approaches. By not relying on a single manifold technique, we also get more robust projections and options for users to dig into different projections and their results. Thus, outliers and clusters can be more easily investigated in the two-dimensional space. Further, comparing the different results of the techniques enables a deeper investigation. For example, an outlier in the attribution projection can lie in a cluster in the Fourier transformation or vice versa. Such a sample then holds interesting information on which to base the further analysis in the local inspection. So, the overview of the global exploration consists of different projections of the data, various transformations, and attribution techniques to ensure an interactive exploration of many analysis spaces.
**Local Inspection -** Local explanations consist of model decision explanations for single samples, e.g., the feature importance scores of such a sample [7]. For instance, attribution methods that generate local explanations (e.g., LIME [19]) are often easily applicable to many models after training. Such attributions assign a score to every input value based on the model's output prediction. Thus, such techniques are a solution to extend models with explainable components to reveal their decision-making [25]. Explanations for images and text often show the pixels and words with heatmap techniques to highlight their importance or relevance for the prediction [7]. Such a technique is also possible for time series, but it provides insights only into certain time points and not regions [26]. Thus, providing only such a heatmap requires domain knowledge to get insights into the model's decision-making.
So, to support users without much domain knowledge, we enhance the local inspection with a what-if toolkit similar to the What-If tool [51]. The toolkit consists of various methods to modify the time series of a selected sample. Further, counterfactuals of the time series can be generated to extend the methods available to users. Counterfactual explanations modify a selected sample as little as possible to flip the class predicted by the classifier. Thus, even without domain knowledge about the data, users can investigate the model's predictions using such counterfactuals. If counterfactuals do not help to dig into the model decisions, another option in the toolkit is the search for nearest neighbors based on, e.g., the Euclidean distance or the model's last-layer activations. Through this tool, users can explore similar samples and their features.
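As a simplified illustration of these building blocks, the sketch below retrieves Euclidean nearest neighbors and derives a naive counterfactual by interpolating a sample toward its closest neighbor of a different predicted class until the prediction flips. The counterfactual generation in the actual system is more elaborate; this is only a conceptual example assuming a classifier with a `predict` method.

```python
import numpy as np

def nearest_neighbors(sample, X, k=5):
    """Indices of the k nearest neighbors of sample in X (Euclidean distance)."""
    return np.argsort(np.linalg.norm(X - sample, axis=1))[:k]

def naive_counterfactual(model, sample, X, steps=20):
    """Interpolate sample toward its closest neighbor of another predicted
    class until the prediction flips; returns the first flipping series,
    or None if no candidate or no flip is found."""
    original_class = model.predict(sample[None, :])[0]
    candidates = X[model.predict(X) != original_class]
    if len(candidates) == 0:
        return None
    target = candidates[nearest_neighbors(sample, candidates, k=1)[0]]
    for t in np.linspace(0.0, 1.0, steps + 1)[1:]:
        blend = (1.0 - t) * sample + t * target
        if model.predict(blend[None, :])[0] != original_class:
            return blend
    return None
```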
## 5 Visual Analytics Workspace
Our application2, an instantiation of our proposed system and workflow, consists of four views (settings, sessions, global projections, local what-if). The different views provide, besides their functionality, a descriptive entry point into the proposed workflow. The settings view enables starting a new automatic phase of the workflow for a selected model and data with user-set parameters. The sessions view is the entry point after loading the application and presents the results of the automatic phase for various models to facilitate the selection of a session. The global projections view enables the analysis of the session's global explanations to, e.g., select a misclassification. Together with the global view, the local what-if view enables an in-depth analysis of explanations using attributions and counterfactuals.
Footnote 2: Source code available online:
[https://github.com/visual-info-line-series/visual-explations-on-time-series-classification](https://github.com/visual-info-line-series/visual-explations-on-time-series-classification)
Figure 3: Projections of the data (A), transformations (B), model activations (C), and attributions (D) as a global explanation of a model for the FordA dataset. The color of the points corresponds to the confusion matrix result of the classifier: correct normal state (true positive), correct abnormal state (true negative), incorrect normal state (false positive), and incorrect abnormal state (false negative).
### _Settings and Sessions_
Both views are supporting views of limited importance for the workflow itself. However, they form the backbone of the application for analyzing models and data.
**Settings -** The settings view facilitates parameter selection for the preliminary automatic phase of the workflow. It first enables analysts to select or upload data and models. Possible attribution methods that can be applied to the model and the data to generate explanations are then presented. We include DeepLIFT [53], grad*input [54], Integrated Gradients [55], LIME [19], LRP [11], Occlusion [56], Saliency Maps [20], SHAP [12], and Shapley Sampling [57]. Since time series are non-intelligible data, the settings view also enables transformations to be applied to the data. Our selected transformations include the Fourier transformation [4], Symbolic Aggregate Approximation (SAX) [5], the Discrete Cosine Transform (DCT) [58], as well as first- and second-order derivatives of the time series.
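To illustrate how such attribution methods assign a relevance score to every time point, the following is a minimal sketch of one of the listed techniques, Integrated Gradients, for a trained Keras time series classifier. The model handle `model`, the all-zero baseline, and the number of interpolation steps are illustrative assumptions, not settings prescribed by our workflow.

```python
import numpy as np
import tensorflow as tf

def integrated_gradients(model, x, target_class, steps=50):
    """Attribution scores for one time series x of shape (length, channels).

    Interpolates from an all-zero baseline to the input, accumulates the
    gradients of the target-class score, and scales by (input - baseline).
    """
    baseline = np.zeros_like(x, dtype=np.float32)
    # Interpolation coefficients between baseline and input.
    alphas = np.linspace(0.0, 1.0, steps + 1, dtype=np.float32)
    interpolated = baseline[None] + alphas[:, None, None] * (x[None] - baseline[None])
    interpolated = tf.convert_to_tensor(interpolated)

    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        predictions = model(interpolated)            # shape: (steps + 1, n_classes)
        class_scores = predictions[:, target_class]  # score of the explained class
    grads = tape.gradient(class_scores, interpolated)

    # Riemann approximation of the path integral (trapezoidal rule).
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)
    return (x - baseline) * avg_grads.numpy()        # shape: (length, channels)
```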
The settings view further introduces parameters for the automatic evaluation methods presented by Schlegel et al. [26]. Notably, the evaluation techniques (point and time perturbation) can be selected and applied to the model and the data. The thresholds for these perturbations can be adjusted to the data, e.g., the relevance threshold can be set as a static percentage or as a value relative to the data. The subsequence length can also be set for the time-based evaluation techniques. Lastly, a check against randomized relevance time points can be enabled to compare the attributions against random explanations [16].
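As a rough sketch of the point-perturbation idea behind this evaluation, the snippet below sets the most relevant time points of each sample to zero and measures how much more the model's accuracy drops than when perturbing randomly chosen points. The top-10% threshold and the zero replacement value are assumed defaults, not the only possible settings.

```python
import numpy as np

def perturbation_drop(model, X, y, attributions, top_fraction=0.10, rng=None):
    """Accuracy drop after zeroing the most relevant vs. random time points.

    X: (n_samples, length, 1) time series, y: integer labels,
    attributions: relevance scores with the same shape as X.
    """
    rng = rng or np.random.default_rng(0)
    n_perturb = max(1, int(top_fraction * X.shape[1]))

    def accuracy(data):
        return np.mean(np.argmax(model.predict(data, verbose=0), axis=1) == y)

    X_rel, X_rand = X.copy(), X.copy()
    for s in range(X.shape[0]):
        scores = attributions[s, :, 0]
        relevant = np.argsort(scores)[-n_perturb:]            # highest attributions
        random_pts = rng.choice(X.shape[1], n_perturb, replace=False)
        X_rel[s, relevant, 0] = 0.0
        X_rand[s, random_pts, 0] = 0.0

    base = accuracy(X)
    # A useful attribution method should hurt accuracy much more than random noise.
    return base - accuracy(X_rel), base - accuracy(X_rand)
```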
**Sessions -** The sessions view is the main entry point, in which pre-computed sessions can be selected for further work. The sessions retain the different parameters, settings, and options for analysis in a tabular format. The columns of the table are the data and model names, the selected XAI methods, the evaluation settings, and the results. The results of the automatic pre-ranking evaluation are shown as a heatmap, sorted by performance from best to worst. The comparison between the randomized and the actual explanation evaluation highlights in the heatmap how useful the chosen parameters and attribution methods are for the model and data. Through the sorting, the best-performing method is highlighted as a starting point for the global projections.
### _Global Projection View_
The global projection view visualizes the data and results of the automatic phase of the previously selected session in matrix form to present as much information as possible. The first column, Figure 3 (A), displays the raw time series projected by our selected techniques to show an initial distribution of the data. The following columns, Figure 3 (B), consist of the selected transformations applied to the time series and enable a comparison of the transformations to the raw data. The next column, Figure 3 (C), presents the activations of the last layer before a potential softmax of the model to support users in identifying potential decision borders. The remaining columns, Figure 3 (D), visualize the projected attribution methods, sorted by their automatically evaluated performance ranking based on the perturbation analysis. The rows of the matrix consist of common projection techniques, in our case PCA [59], KernelPCA [60], ISOMAP [61], LLE [62], IPCA [63], t-SNE [64], LSA [65], and UMAP [35]. Because the various inputs to the projection techniques follow different data distributions, estimating perfectly fitting parameters is not trivial. We therefore incorporate different techniques with default parameters to cover various projection properties and to ensure at least some valuable projections. In this way, we can mitigate projection artifacts, as clusters shared over multiple projection techniques (e.g., manifold and linear) visualize common group properties of the samples.
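A minimal sketch of how such a set of projections can be computed with scikit-learn is shown below. The listed techniques mirror the rows above with default parameters, while the input `features` stands for any of the analysis spaces (raw series, a transformation, activations, or attributions) flattened to one vector per sample; the exact parameterization in our implementation may differ.

```python
from sklearn.decomposition import PCA, KernelPCA, IncrementalPCA, TruncatedSVD
from sklearn.manifold import Isomap, LocallyLinearEmbedding, TSNE

def project_all(features):
    """Project one analysis space (n_samples, n_features) into 2D with several techniques."""
    techniques = {
        "PCA": PCA(n_components=2),
        "KernelPCA": KernelPCA(n_components=2),
        "ISOMAP": Isomap(n_components=2),
        "LLE": LocallyLinearEmbedding(n_components=2),
        "IPCA": IncrementalPCA(n_components=2),
        "t-SNE": TSNE(n_components=2),
        "LSA": TruncatedSVD(n_components=2),   # latent semantic analysis via truncated SVD
    }
    # UMAP lives in the separate umap-learn package and is added only if available.
    try:
        import umap
        techniques["UMAP"] = umap.UMAP(n_components=2)
    except ImportError:
        pass
    return {name: proj.fit_transform(features) for name, proj in techniques.items()}
```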
However, such an overview can become quite large with many scatter plot cells; for example, with eight projection techniques, nine attribution techniques, and five transformations, we get more than one hundred cells. To support users and limit the number of shown projections, we combine several metrics into a score that decides whether a projection is hidden or shown. Preferable projections, in our case, have well-separated clusters with a high sample density. We therefore incorporate the Davies-Bouldin score [66], which favors clusters that are far apart and compact.
Fig. 4: Global-to-local-to-global transitions: interesting samples are selected in the global projections for further analysis in the local what-if view, counterfactual samples are generated, and the counterfactual changes are compared to the original time series. A cross and a line from the starting time series to the counterfactual show the change of position in the global projection, revealing a possible decision border.
We further want to penalize overlapping clusters and wrongly clustered points, i.e., to give high scores to compact clusters that are far apart. We therefore introduce the Euclidean distance between the cluster centroids as another score, which needs to be large to fulfill this requirement. We apply these two measures to the projections using both the ground-truth labels and the predictions, resulting in four scores. Based on our experiments, we weight the prediction-based scores higher than the ground-truth-based ones to highlight projections that separate the predictions well. The calculated _cluster score_ is shown color-coded around each scatter plot and as a description at the top, see Figure 3. Projection techniques with bad scores are hidden, and cells with other bad scores are shrunk, e.g., in Figure 3.
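The following is a minimal sketch of how such a cluster score could be combined from the two measures. The inversion of the Davies-Bouldin score and the weight of 2 for the prediction-based scores are illustrative assumptions rather than the exact constants used in our implementation, and the sketch assumes that at least two groups are present in both labelings.

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

def centroid_distance(points, groups):
    """Mean pairwise Euclidean distance between the group centroids of a 2D projection."""
    centroids = np.array([points[groups == g].mean(axis=0) for g in np.unique(groups)])
    dists = [np.linalg.norm(a - b)
             for i, a in enumerate(centroids) for b in centroids[i + 1:]]
    return float(np.mean(dists)) if dists else 0.0

def cluster_score(points, labels, predictions, prediction_weight=2.0):
    """Score a 2D projection: compact, well-separated groups get high values."""
    scores = []
    for groups, weight in ((labels, 1.0), (predictions, prediction_weight)):
        db = davies_bouldin_score(points, groups)       # lower is better
        scores.append(weight * (1.0 / (1.0 + db)))      # invert so higher is better
        scores.append(weight * centroid_distance(points, groups))
    return float(np.sum(scores))
```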
Every exploration can be facilitated by coloring samples according to different properties depending on the task, e.g., to identify misclassifications. The global projections view therefore offers three color schemes to support further analysis. The first color scheme uses the ground-truth labels of the data to color the plot. For up to twelve classes, a qualitative color scheme is applied; beyond that, the scale is exchanged for an interpolated diverging color scheme to maintain color separability. This scheme visualizes the distribution of the ground truth with respect to the transformed data. The second color scheme applies the predictions as colors to enable users to analyze model boundaries; with this scale, the model predictions can be compared across the different transformations, helping users understand correlations between the model and the transformations. The third color scheme corresponds to the confusion matrix of the data and the model. Because the confusion matrix is a cross-product of the labels, a qualitative color scale can only be used for up to three classes; for more classes, the confusion matrix is overlaid with an interpolating diverging color scheme from the top-left to the bottom-right. With the confusion matrix color scheme, samples with distinctive properties, e.g., false positives, can be identified more easily.
The automatic phase reduces the projections to a two-dimensional space that condenses a lot of information about the different transformations and therefore needs a tailored visualization. A scatter plot visualizes each projection using one dot per sample. A contour plot improves the visibility of the clusters and of how developed they are. A dense pixel visualization on the left and on the bottom forms a distribution plot along the x- and y-axes. The confusion matrix color scheme dyes these distribution plots to enable users to, e.g., identify false positives. Filtering, selection, and comparison interactions are vital for exploring the global projections, as the visualization can be crowded. Hovering or brushing over samples highlights them in the other visualizations to better compare clusters and outliers. Hovering over a single sample additionally shows a tooltip with the properties of the sample, e.g., its predictions, see Figure 3. Brushing over samples adds them to the local what-if view for an in-depth analysis.
### _Local What-If Analysis_
After brushing samples in the global projections, time series visualizations for these samples are added to the local what-if view. However, if no interesting samples are selected in the global view, we need other options for choosing compelling samples. We therefore include a filtering mechanism based on the confusion matrix, which enables users to focus the inspection on, e.g., correctly predicted samples to gain insights into the classification, or on wrong predictions to improve the model or data. First, the analyst can select a cell of the confusion matrix to filter for the samples in that cell. Next, the selected samples are shown as index listings to focus on specific indices. These listings show the standard deviation of the sample's attributions for a selected XAI method, which defaults to the best-performing one. After selecting an index, the corresponding local explanations are added to the local what-if view.
One proposed visualization for the local what-if view uses a line plot with a heatmap-style background for the local explanation of a time series, following common practice in the literature [41]. The view visualizes the time series data as a line and the explanations as a background color indicating the attributions. The color scale is L2-normalized on the attribution scores of the sample. In addition to the heatmap plot of the sample, the activation maximization for a selected class label can be shown. Our activation maximization is calculated with respect to the classifier layer before the _softmax_ to generate a time series that activates the selected class prediction. A comparison between the activation maximization and the time series often already reveals overlaps or differences.
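As an illustration of this idea, the following sketch performs gradient ascent on an input time series to maximize the pre-softmax score of a chosen class. The handle `logit_model` (the classifier truncated before its softmax layer), the random initialization, the number of steps, and the learning rate are assumptions for the example.

```python
import tensorflow as tf

def activation_maximization(logit_model, target_class, length, channels=1,
                            steps=200, learning_rate=0.1):
    """Generate a time series that maximally activates one class logit.

    logit_model maps (batch, length, channels) to pre-softmax logits.
    """
    series = tf.Variable(tf.random.normal((1, length, channels), stddev=0.1))
    optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)

    for _ in range(steps):
        with tf.GradientTape() as tape:
            logits = logit_model(series)
            # Negative logit as loss, so minimizing it maximizes the class activation.
            loss = -logits[0, target_class]
        grads = tape.gradient(loss, series)
        optimizer.apply_gradients([(grads, series)])

    return series.numpy()[0]   # shape: (length, channels)
```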
After exploring the local explanations, users want to modify the sample and predict the changed version to test the model in a what-if scenario. The view therefore enables a what-if analysis by dragging and dropping time points to facilitate the investigation, compare Figure 5. For instance, the prediction can change after altering time points with high relevance according to the attribution. If the model contains dropout layers, we further accompany the new prediction with the uncertainty estimation proposed by Gal and Ghahramani [67]. We algorithmically smooth the time points around modified points within a user-defined range to support users in the tedious task of adapting time series without destroying their flow. Users can also brush the time series and set the brushed time points to other values. Based on interviews, we support setting the selected segment to: a user-selected class activation maximization, the global mean, the local mean, the inverse, the moving average, and the exponential smoothing of the brushed time points, as seen in Figure 5. With these tools, users can change time series in a WYSIWYG-editor-like style to enable easier what-if changes.
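A minimal sketch of this uncertainty estimate is shown below: the modified sample is predicted repeatedly with dropout kept active, and the spread of the resulting class probabilities serves as the uncertainty. The choice of 50 stochastic forward passes is illustrative.

```python
import numpy as np

def mc_dropout_prediction(model, x, passes=50):
    """Monte-Carlo dropout prediction for one time series x of shape (length, channels).

    Keeping dropout active at inference time (training=True) yields a distribution
    of predictions whose mean and standard deviation approximate model uncertainty.
    """
    batch = x[None, ...]                      # add batch dimension
    probs = np.stack([model(batch, training=True).numpy()[0] for _ in range(passes)])
    return probs.mean(axis=0), probs.std(axis=0)   # per-class mean and uncertainty
```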
Even with smoothing, manual modification can become tedious. We therefore support additional tools to change or compare time series samples. Users can search for the nearest neighbors of a sample based on the Euclidean distance, the model's activations, or the attributions to find similar time series and obtain an explanation by example, which helps in some cases [41, 68]. As attributions alone are, in most cases, rather tedious and potentially misleading explanations [41], we further include the counterfactual explanation algorithm by Delaney et al. [69], which uses
a native guide (the nearest neighbor in the dataset) to steer the generation of a counterfactual and avoid the high-dimensional optimization needed in other methods, e.g., Wachter et al. [70]. Further, two line plots can be selected and combined into a single visualization with two time series lines to directly compare their changes, differences, and similarities. Finally, newly generated time series can be projected back into the global explanations to explore the neighborhoods around them and how they compare to their origins. Through these interactions, users can fulfill their needs to, e.g., find decision borders.
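The snippet below gives a strongly simplified sketch of the native-guide idea: the nearest training sample of a different predicted class serves as a guide, and an increasingly large window around the most relevant time point is copied from the guide into the query until the prediction flips. The window-growing strategy and the attribution-based window placement are our own simplifications for illustration, not a faithful re-implementation of Delaney et al. [69].

```python
import numpy as np

def native_guide_counterfactual(model, x, X_train, attributions):
    """Return a counterfactual for x of shape (length, 1), or None if no flip is found."""
    predict = lambda s: int(np.argmax(model.predict(s[None], verbose=0)[0]))
    original_class = predict(x)

    # Native guide: nearest training sample that the model assigns to another class.
    train_classes = np.argmax(model.predict(X_train, verbose=0), axis=1)
    candidates = X_train[train_classes != original_class]
    guide = candidates[np.argmin(np.linalg.norm(
        candidates.reshape(len(candidates), -1) - x.reshape(1, -1), axis=1))]

    center = int(np.argmax(attributions[:, 0]))        # most relevant time point
    for half_width in range(1, len(x) // 2):
        counterfactual = x.copy()
        lo, hi = max(0, center - half_width), min(len(x), center + half_width)
        counterfactual[lo:hi] = guide[lo:hi]            # splice in the guide segment
        if predict(counterfactual) != original_class:
            return counterfactual
    return None
```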
## 6 Evaluation
To verify that our approach enables analysts to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, and (3) reason about misclassifications, we present three use cases. These cases show usage scenarios of the workflow with a particular focus on the interactions between global and local explanations. First, we discuss our experience of working closely with experts to incorporate domain knowledge into the XAI process. Afterward, we present our selected use cases to tackle the tasks above.
### _Expert Study_
We collected expert feedback from two signal processing engineers during development, first to extract essential needs and later to address important analysis features. As a baseline dataset, we used the FordA dataset [71], as it resembles data our experts work with. FordA is a benchmark dataset for time series classification and poses a binary anomaly detection problem. It contains 3601 training and 1320 test samples with a length of 500 time points each; the samples are measurements of engine noise, and the task is to classify anomalies.
Most models perform quite well on the dataset, with state-of-the-art accuracy at \(96.54\%\) [71]. Our relatively simple model achieves a test accuracy of \(89.31\%\). We use TensorFlow [72] as the deep learning library to create and train our model. The architecture starts with three Conv1D layers with (3, 6, 9) filters, a kernel size of 3, and a stride of 1. We apply MaxPooling with a pool size of 5 after each Conv1D layer to reduce the size of the outputs. We then flatten the output and apply a Dense layer with 50 neurons, a dropout of 0.5, and ReLU activation. A final softmax layer produces our prediction. We train the model for 100 epochs with the Adam optimizer's default parameters. We chose this architecture because training is quite short (\(<3\) min) and performance on the test data is useful. The architecture could be fine-tuned further, but it generally solves the task.
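The described architecture corresponds roughly to the following Keras definition; details not stated above, such as the activation of the convolutional layers, the loss function, and the padding mode, are assumptions left at common defaults.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_forda_model(length=500, n_classes=2):
    """1D CNN for FordA as described above: three Conv1D blocks, Dense(50), softmax."""
    model = tf.keras.Sequential([
        layers.Input(shape=(length, 1)),
        layers.Conv1D(3, kernel_size=3, strides=1, activation="relu"),
        layers.MaxPooling1D(pool_size=5),
        layers.Conv1D(6, kernel_size=3, strides=1, activation="relu"),
        layers.MaxPooling1D(pool_size=5),
        layers.Conv1D(9, kernel_size=3, strides=1, activation="relu"),
        layers.MaxPooling1D(pool_size=5),
        layers.Flatten(),
        layers.Dense(50, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Example usage (X_train, y_train assumed to hold the FordA training split):
# model = build_forda_model()
# model.fit(X_train, y_train, epochs=100)
```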
**Exploration of data, model, and explanations -** Figure 4 visualizes a global representation of the FordA dataset on the left, showing the projections of the raw data, the model's activations, and the Occlusion [56] attributions. The global explanation presents a division of the model's predictions into two clusters: correct (true positive) and wrong (false positive) predictions of the anomaly class build one cluster, and correct (true negative) and wrong (false negative) results of the non-anomaly class the other. At first, our analysts want to gain more insights into the data to find and correct possible errors. By inspecting the global explanation and the projections of the raw time series and the transformations, they are able to analyze the predictions and data distributions in more detail, Figure 3 (A), (B), and (C). After examining the raw data and other transformations (Fourier and DCT), the experts suspect that the model did not learn a transformation function but some other interesting features. By further analyzing the feature attributions, e.g., in the PCA projections, Figure 3 (D) third row, our analysts see a diverging pattern between the predictions. The feature attributions demonstrate a mental representation of the model applied to the data. To their surprise, the projections show clear clusters aligned with the predicted classes. After hovering over a point in the critical region, our experts want to investigate the anomaly class of the time series sample. By selecting the point, the corresponding time series sample is added to the local explanation, and our analysts can analyze it in detail, Figure 6. At first, they note that the attributions differ between the XAI techniques, Figure 6 (E1) LRP, grad*input, DeepLIFT. After these initial observations, our analysts want to understand how and why the sample got its prediction. Initially, they modify single time points with high relevance according to the attribution method, Figure 6 (E2). However, our experts slowly realize that changing single points, even highly relevant ones, is not enough and instead modify a region around a time point with high relevance, Figure 6 (E3). After adapting this selected region to be lower (more similar to the activation maximization), the prediction changes, Figure 6 (U3) left, and our analysts suspect that time series with lower values result in the normal class.
Fig. 5: The local what-if view supports the modification of individual points with smoothing around the dragged point, as well as brushing a selected area and setting it to the activation maximization, the global mean, the local mean, the inverse, the moving average, or the exponential smoothing of the brushed time points.
### _Use Cases_
Based on the previous feedback and tasks, we present three usage scenarios tackling (1) the exploration of data transformations and feature importance, (2) the identification of decision boundaries, and (3) the reasoning about misclassifications. For these use cases, we again use the FordA dataset. For the third task (3), however, we extend our previous network and fit it slightly better (92.11%) to the data.
**Explore Transformations and Attributions -** Figure 3 establishes the overview with the projections of the FordA dataset and our model. Exploring the different cells can be tedious, but gradually starting with the projections of the raw data supports understanding the data. As we see in Figure 3 (A)(U1), the raw data does not help us with further analysis, as the cluster score is relatively low. Next, transformations can generally help to dig into the data, Figure 3 (B)(U1). In our case, the data can be somewhat grouped with the Fourier transformation, which yields a better cluster score, but there is still a lot of overlap between the clusters. Our next exploration step is the model activations, which come with a higher cluster score and more visually pleasing clusters. In our case, we can already see in Figure 3 (C)(U1) that there is a split between the predicted classes. In particular, if we focus on the confusion matrix coloring, we see that the clusters correspond to the model's predictions, with considerable overlap where the model is not very sure about its predictions. The attributions, Figure 3 (D), split the predictions even further with a cluster pattern similar to the model activations. Depending on the attribution and projection technique, we get well-separating clusters supported by high cluster scores. Through such an analysis, users can explore their data and look into the representation of the model's behavior on the data. Remarkably, the attributions enable us to gain insights and help to identify interesting samples, e.g., misclassifications, Figure 3 (D)(U1), and decision borders, Figure 4 (U2).
**Finding Decision Boundaries -** After exploring the data and how the model handles it, we want to look further into the model and, e.g., find decision borders. For this case, we incorporate parts of the analysis of our domain experts. In Figure 4 on the left, we first look at the raw time series; as we have seen above, this projection does not help much. Next, we look into the model activations, Figure 4, and see that we already get two clusters with a rather large overlap area. We form the hypothesis that a decision border divides the clusters. To find the decision border interactively, we select a misclassification and a correct prediction to look further into the model's predictions. In the local what-if view, Figure 4 middle, we inspect the prediction probabilities and see that both predictions are quite confident in their class. We therefore use the algorithm by Delaney et al. [69] and generate counterfactuals for these samples.
We then compare the generated counterfactual explanations to our initial time series. In the misclassification case, the first three line plot heatmaps at the top, we see that a relatively large part of the time series needs to be changed to flip the prediction, and the model is still not very confident about the new prediction. However, if we look further into the changes, we see that the modified time points are still rather similar to the initial time series. In the correctly predicted case, the next three line plot heatmaps, we see a different outcome: the model has high confidence in the new misclassification with low uncertainty, and the direct comparison shows that only a slight change in the time series is necessary to flip the prediction. However, we need further domain knowledge about the data to analyze the changed segments of the counterfactual explanations. So, in Figure 4 on the right (U2), we project the newly generated time series back into the global projections to explore the representation of the model further and to verify our hypothesis about the decision border. We see that the counterfactuals indeed cross the previously hypothesized decision border between the clusters. Based on our previous exploration of the attributions and their better splits, we see that the crossing in the attributions is even larger; the samples change their cluster membership quite drastically, Figure 4 (U2). Thus, even in this simple example, we found a decision border in the model activations and the attributions, while no major changes happen in the raw time series projections.
Fig. 6: **Left side:** The first three line charts depict a diverse set of attribution techniques with different attributions for the same time series sample. The fourth line chart presents modifications of a few time points without a major change in the prediction. The fifth line chart demonstrates a focused modification of the time series using smoothing to achieve a prediction change. The fourth and fifth charts include the activation maximization of the model. **Right side:** The first line chart presents the misclassification and the changes needed to flip the class. The second shows the same sample predicted with a better model and the same time point modifications as before, but the prediction does not change and the sample is still misclassified. The third line chart visualizes another modification of the time points based on the activation maximization for this model, which does change the prediction. The fourth and fifth show the counterfactual generated by Delaney et al. [69], which needs more changes for the flip.
The workflow and application support such a scenario for any time series classification model and dataset.
**Reasoning about Misclassifications -** After discovering a decision border at the global level, we want to dig further into the local time points to investigate misclassifications. Especially interesting are regions where a slight change of the time series flips the prediction. An optimization-based counterfactual approach such as Wachter et al. [70] can potentially find such minor modifications, but it can also degenerate the time series in an implausible way to achieve the class change. We therefore enable the user to explore the local time series to reason about misclassifications. For our FordA model, we can often flip the prediction by changing single time points, as our model arguably learned some flow patterns by heart as essential for a class. Such changes are not convincing counterfactual explanations, so a what-if analysis enables users to inspect the classification in more depth. Our experts showed similar, more interesting results in Figure 6 (E3) / (U3): with just a few changes that still preserve a smooth time series, the prediction can be flipped.
But what happens if we try something similar with a more advanced model? We add another Conv1D layer and increase the filters of the layers to [10, 50, 100, 150], improving the accuracy to 92.82%. On the right in Figure 6, we can see the change from the first to the second line chart. The attribution scores of the two models differ heavily, and the activation maximizations are also distinguishable. If we change the same time points as in our previous, weaker model, we do not see the same flip in the prediction. This observation suggests that the improved model learned other patterns for the classification. In the third line chart, however, we modified other parts of the time series to resemble the activation maximization and can observe the desired prediction change. Our initial assumption thus seems correct: the improved model learns something different and more robust for the prediction. Further, the fourth line chart presents the counterfactual generated with Delaney et al. [69], and the fifth the comparison between the initial time series and the counterfactual. We see that even the counterfactual algorithm needs more time point changes to flip the prediction.
## 7 Discussion
Our workflow for visual explanations with attributions and counterfactuals introduces seamless interactions between global and local explanations by incorporating local attributions into global projections. To generate such global explanations, we lift local attributions to the global level by projecting the attributions computed over the data on which the model was applied. By supporting interactions between these two levels of explanation, users can get a general overview of the model's representation and then analyze single samples and single model decisions in depth. Through diverse algorithms, visualization, and interaction techniques, we present an application incorporating the workflow and demonstrate its applicability to time series classification, giving users a low barrier to exploring their models. Our use cases demonstrate examples of tasks that can be tackled with the workflow to facilitate the analysis of models and datasets.
**Lessons Learned -** Although we demonstrated the applicability of our approach, there are some lessons we draw from developing the proposed workflow and application. While we knew that attribution methods work on time series models and that these models learn the temporal component of time series, the attributions show, in most cases, only single time points as most important for the classification [41]. We had expected the attributions to highlight essential regions for the classification, since such regions facilitate the analysis of the samples by identifying possible data errors.
**Limitations -** Our proposed workflow enables a seamless transition between global and local explanations but has some limitations regarding the incorporated methods. One of the most prominent limitations is the dependence on high-quality local attribution methods and their applicability to the model and the data. We introduce a way to mitigate this limitation through our automatic evaluation of the methods; however, the evaluation metrics are imperfect and depend on parameters. Our current approach focuses heavily on time series inspection and analysis, as we only evaluate this data type in detail. Nevertheless, the workflow can be applied to any data type and application with working attribution methods.
**Research Opportunities -** Based on the identified limitations, we present research opportunities to enhance our workflow and improve its generalizability. To handle the most critical point, the evaluation of attribution methods, we argue for improved techniques that incorporate data-specific challenges, such as better zero reference points, similar to Schlegel et al. [43]. Further, the identification of thresholds can be automated by implementing heuristics that compare the results over the selected XAI methods and adjust the thresholds until a certain lower bound is found. Such an automated analysis facilitates the inspection of the global and local explanations by providing more descriptive scores.
Another option for the global projections is path projections [38]. These projections can be applied to the time series and attribution data, as both also convey temporal importance. Such an extension of the global explanations supports analysts in improving their understanding of the model and especially the data.
Further, as the workflow applies to every data type with working attribution methods, support for these data types in the local inspection has to be explored; for example, even the implemented line chart heatmaps are not perfect [41]. In particular, improved heatmap concepts [41] and what-if interactions facilitate an analysis at the local level that can be projected back to the global explanation. For instance, decision borders in image classification could then be found with such an improved method, enabling the debugging of computer vision models. Such an analysis could potentially identify borders between real data and adversarial images in the global explanation and steer model developers toward improving their datasets and models.
## 8 Conclusion
In this paper, we presented visual explanations with attributions and counterfactuals: a visual analytics workflow and an application to explore and explain complex time series classifiers, e.g., deep learning models. The workflow starts with an automatic phase to apply and evaluate possible XAI techniques and then lets users inspect global and local explanations in a loop to mutually enrich both explanation levels. We further present an instantiation that incorporates the workflow with different implementation variants. We show the applicability of the workflow and the application through an expert study and use cases on time series classification datasets and models. The underlying workflow is not limited to time series classification and can be extended to work with any data type and nearly any model.
## Acknowledgments
This work has been partially supported by the Federal Ministry of Education and Research (BMBF) in the VIKING (13N16242) project.
|
2308.11998
|
Economic Recommender Systems -- A Systematic Review
|
Many of today's online services provide personalized recommendations to their
users. Such recommendations are typically designed to serve certain user needs,
e.g., to quickly find relevant content in situations of information overload.
Correspondingly, the academic literature in the field largely focuses on the
value of recommender systems for the end user. In this context, one underlying
assumption is that the improved service that is achieved through the
recommendations will in turn positively impact the organization's goals, e.g.,
in the form of higher customer retention or loyalty. However, in reality,
recommender systems can be used to target organizational economic goals more
directly by incorporating monetary considerations such as price awareness and
profitability aspects into the underlying recommendation models. In this work,
we survey the existing literature on what we call Economic Recommender Systems
based on a systematic review approach that helped us identify 133 relevant
papers. We first categorize existing works along different dimensions and then
review the most important technical approaches from the literature.
Furthermore, we discuss common methodologies to evaluate such systems and
finally outline the limitations of today's research and future directions.
|
Alvise De Biasio, Nicolò Navarin, Dietmar Jannach
|
2023-08-23T08:35:59Z
|
http://arxiv.org/abs/2308.11998v2
|
# Economic Recommender Systems - A Systematic Review
###### Abstract
Many of today's online services provide personalized recommendations to their users. Such recommendations are typically designed to serve certain user needs, e.g., to quickly find relevant content in situations of information overload. Correspondingly, the academic literature in the field largely focuses on the value of recommender systems for the end user. In this context, one underlying assumption is that the improved service that is achieved through the recommendations will in turn positively impact the organization's goals, e.g., in the form of higher customer retention or loyalty. However, in reality, recommender systems can be used to target organizational economic goals _more directly_ by incorporating monetary considerations such as price awareness and profitability aspects into the underlying recommendation models. In this work, we survey the existing literature on what we call _Economic Recommender Systems_ based on a systematic review approach that helped us identify 133 relevant papers. We first categorize existing works along different dimensions and then review the most important technical approaches from the literature. Furthermore, we discuss common methodologies to evaluate such systems and finally outline the limitations of today's research and future directions.
keywords: Recommendations, Business Value, Price and Profit, Multistakeholder, Survey
## 1 Introduction
_Recommender Systems_ (_RSs_) [147] have become an integral part of many modern online services, for example, on Amazon, Netflix, YouTube, or Spotify. Typically, the recommendations provided by the system are designed to serve certain user needs. On the mentioned e-commerce and media streaming sites, for example, these systems support users in navigating large information spaces, thereby helping them discover relevant content that they were previously not aware of.
The academic literature on recommender systems has traditionally focused on the different types of value that such systems create for _users_, in particular by proposing increasingly sophisticated machine learning models to predict which items are relevant for them in a given situation. An underlying assumption of this user-centric perspective is that by creating value for consumers through personalized recommendations, providers expect certain benefits for the organization as well, for example, through increased customer engagement, loyalty, and retention [76; 92; 127; 106].
Only during the last few years, researchers increasingly emphasize the fact that in practical applications of recommender systems, the interests of multiple stakeholders have to be _explicitly_ taken into account. Correspondingly, the underlying systems have to be designed to create value both for consumers, recommendation providers, and maybe even further stakeholders [1; 146; 140].
In practice, the business value a recommender system creates for providers is measured through various _key performance indicators_ (_KPIs_), see [70; 144]. Besides the mentioned indirect effects of recommendations on customer engagement and retention, organizations rely on various forms of conversion rates to gauge the effectiveness of a system. In many cases, firms directly assess the impact of recommendations by analyzing the effects on sales numbers.
Therefore, it becomes desirable for companies to incorporate relevant knowledge into the underlying algorithms so that the resulting recommendations can drive these KPIs more directly in the desired direction. One important domain-independent approach in this context is to consider purchase-related information in the recommendation models, in particular regarding the profit that results from individual purchase transactions, see [141]. Moreover, such algorithms may implement several other theories and mechanisms from the economics and marketing literature, considering, for example, the role of promotions and discounts, price sensitivity, or consumer utility.
These approaches, which we call _Economic Recommender Systems_ (_ECRSs_), are highly relevant in practice. Unfortunately, the literature on this topic is largely scattered. With this paper, we provide a systematic review of the field, which should serve researchers and practitioners alike as a starting point to understand the state-of-the-art in
the area. Our systematic literature search surfaced more than one hundred relevant papers, which we categorize into five main dimensions of analysis, see Section 3. In the main part of the survey, Section 4, we then discuss existing ECRSs technical approaches. Afterward, we analyze existing methodologies to evaluate such systems in Section 5. The paper ends with a discussion of open challenges.
## 2 Background and Related Work
In this section, we first provide more background on the business value of recommendations. We then characterize the concept of economic recommender systems in more depth. Finally, we discuss the relationships of ECRSs to neighboring topics in recommender systems research.
### Business Value of Recommender Systems
Recommender systems, as mentioned above, are often designed to serve both user [40; 55; 124; 136; 234] and organizational purposes [44; 107; 140; 192]. Regarding the organizational purposes, there are various ways in which an RS can generate value for a business [9; 18; 19; 33; 106; 231], considering economics and marketing aspects [118; 168; 206; 224; 245].
In the literature, a number of general categories were identified to characterize how RSs may create business value and how the business value can be measured [70; 144]. The typical measures and corresponding KPIs include [144]:
* The number of _user clicks_ on the recommendations, often measured by the _click-through rate_ (_CTR_);
* The degree of _user adoption_ of the system, often measured by the _conversion rate_ (_CVR_);
* The overall _revenue_ generated from the _sales_ of the firm's products and services;
* The possible effects on the _sales distribution_ of the items sold, e.g., shifting toward more profitable items;
* The overall degree of _user engagement_ with the platform, as an indicator of _customer satisfaction_.
Depending on the business industry (e.g., retail, entertainment, manufacturing) or the revenue model (e.g., transaction-based, advertising, subscription) [54; 123; 126; 174; 197; 221; 227] the company may want to optimize certain business values rather than others. For example, in the case where the revenue model is primarily transaction-based (e.g., Walmart), since there is a direct link between purchases and revenue, the company might be interested in shifting the customer behavior towards the purchase of the more profitable items [210]. In contrast, in case the organization's revenue model is based on ads (e.g., YouTube), the company may be interested in increasing the number of clicks [69] as this is directly related to the consumption of ads that providers pay to see their brand advertised. Finally, a company might also be interested in optimizing user engagement [106] in the case of subscription-based models (e.g., Netflix) as this positively correlates with retention.
### Economic Recommender Systems
While there are various ways in which an RS can create value for users and providers, and while there are several KPIs that firms might seek to optimize, ultimately, the provision of a recommendation service almost always serves some economic goal of the organization such as profit and growth. However, we note that some forms of value creation are more directly targeting profitability aspects than others. An increase in revenue through recommendations or a shift in the sales distribution toward the most lucrative items is almost directly reflected in a profit improvement [52; 130; 216]. On the other hand, a growth in user engagement, as in the case of Netflix [106], with more customers joining and fewer leaving, is sometimes only indirectly reflected in higher long-term profits for the organization.
In this survey, we focus on the first type of the described recommendation approaches, i.e., those that target economic effects in a more direct way. Typical examples in this context are: RSs that consider company profit and customer relevance in a balanced way [43; 52; 62; 162; 201]; systems that leverage discounts and pricing algorithms to trigger purchases [5; 145; 149; 151; 289]; or methods that consider customers' price sensitivity to recommend items more in line with their price preferences [51; 97; 294; 295; 59]. We call such systems economic recommender systems, and we informally characterize them as follows:
_An Economic Recommender System (ECRS) is an RS that exploits price and profit information and related concepts from marketing and economics to directly optimize an organization's profitability._
Later in this work (Section 3), we identify five key approaches from the literature to build ECRSs, which we divide into customer and organization-oriented ones, depending on the focus of the underlying algorithms. Customer-oriented approaches in the literature, for instance, integrate purchasing behavior mechanisms (e.g., price sensitivity) into the models to generate more relevant recommendations that will automatically lead to more profit. Organization-oriented ones, on the other hand, apply particular organizational strategies (e.g., profit awareness, promotional pricing) to optimize profit.
Since most recommender systems may at least indirectly target some profit-related or growth-related goal, the boundaries between an economic RS and a "traditional" one may sometimes appear blurry. However, a clear distinction can often be made depending on the underlying revenue model [54; 126; 227]. For example, click-through rate maximization may be seen as an indirect method for profit optimization in case it is only about increasing site interactions [115; 267]. However, it may also be considered as an ECRS method in case there is some revenue associated with each click event (e.g., commissions suppliers pay
to marketplaces for each generated impression), as in the case of an advertising revenue model [189; 244; 288].
Concluding our characterization of ECRSs, it is important to note that considering certain types of economic information to an inappropriate extent may also lead to _unintended negative effects_ and _behavioral harms_ of recommendations [8; 79; 121]. Specifically, it is vital to ensure that an ECRS does not negatively impact the user's trust [178] in the organization [29; 101; 130; 210]. Indeed, trust is one of the most important factors driving adoption [34; 161] and purchase intention [205; 214]. Recommendations that are irrelevant [281; 202; 255; 48], manipulative [6; 8; 65; 111; 171; 247; 269], or poorly explainable [64; 262; 280] because they are too biased towards the profitable items [263] can harm trust, leading customers to reactance [86; 274] or churning.
Besides trust, there are also other possible harms that may emerge in case the recommendation strategy is oriented too strongly toward profit. While algorithms are often designed to improve sales diversity [10; 170] or to stimulate the sales or consumption of niche items [194; 275], they in practice might sometimes nudge users to buy the most popular ones [87; 88; 89; 129; 172; 173]. Such effects may in turn have profit implications considering that popular items sometimes have lower margins [93]. Finally, competition effects [102; 175; 299] may also be important to consider, since rewarding higher-margin items could push sellers to increase prices [298], thus impacting customers' willingness-to-pay [7], and market demand [20; 282].
### Related Areas in RSs Research
Economic recommender systems are related to other important research areas, including the following:
* _Multi-Stakeholder Recommender Systems_[1; 2]: where the system is designed to meet the interests of multiple stakeholders (e.g., consumers, providers, suppliers);
* _Multi-Objective Recommender Systems_[15; 297]: where the system is designed to optimize several objectives simultaneously (e.g., accuracy, diversity);
* _Fair Recommender Systems_[71; 215; 219; 276; 277]: where the system is designed to avoid possible discrimination against certain user or item groups.
The relationships between ECRSs and these other areas can be characterized as follows. Regarding multi-stakeholder RSs, we note that probably any ECRS in practice does not _exclusively_ focus on provider profitability but considers the interests of other stakeholders--in e-commerce, in particular, those of consumers or suppliers as well [43; 52]. Such multi-stakeholder considerations mean that ECRSs in practice are _multi-objective_ RSs that consider different competing objectives, e.g., profitability vs. consumer value [62; 101] or short-term vs. long-term profits. However, not every multi-stakeholder RS necessarily is an economic one, e.g., considering that an RS may also be designed to recommend users to other users (e.g., on dating platforms). Likewise, a multi-objective RS could also optimize non-economic goals, e.g., popularity, which may in turn have a direct inverse relationship with profitability under certain circumstances [93]. Finally, in terms of fairness, when building an ECRS there is always the possibility that by designing a system too biased [49] toward profitable items [52; 200], the organization might risk being perceived as unfair by consumers. However, there are various other application areas of fair recommender systems, which are not related to economic aspects or firm profitability, e.g., when the recommender system is designed to avoid discrimination of underrepresented groups in the recommendations.
Various surveys have been published in the mentioned areas of multi-stakeholder [1; 2] and multi-objective [15; 297] RSs, and on related topics such as fairness [215; 219; 276; 277], diversity [170], trust [77], and explainability [246; 285]. We refer the readers to these important works for in-depth coverage of the respective topics. The present survey has certain affinities with previous reviews on value-aware [70] and price- and profit-aware [141] RSs. It however differs from these previous works in various ways. First, our study is the first systematic review of economic recommender systems based on PRISMA guidelines [209]. Moreover, existing value-aware RSs research [70] investigated how to generically optimize business value through RSs, whereas our research on ECRSs is focused on direct optimization of profitability. In addition, the present research also embraces customer-oriented approaches (e.g., price-sensitive recommendations). Previous work on price- and profit-aware RSs [141] also focused on profitability optimization. However, this earlier research did not cover a number of important approaches that were identified in this survey (e.g., economic utility modeling methods). Finally, our present review also discusses methodological questions (e.g., performance evaluation methods) that were not addressed in previous works.
## 3 Methodology
The present study follows a systematic review process based on _Preferred Reporting Items for Systematic Reviews and Meta-Analyses_ (_PRISMA_) [209] guidelines. The PRISMA article selection process is recognized throughout the scientific community as a rigorous and reliable methodology. The process aims to identify, evaluate, and interpret all available research relevant to a particular research question, topic area, or phenomenon of interest while ensuring high reproducibility of results. In the following, we report: the dimensions of analysis considered in the study, the underlying research questions, the eligibility criteria for article inclusion, the search queries used, the overall article analysis and selection process, and the possible limitations of the survey.
### Decomposing Economic Recommender Systems into Different Dimensions of Analysis
Economic recommender systems can be characterized by several interrelated topics. To identify relevant articles, we therefore followed an inductive process starting from two related surveys [70; 141], decomposing ECRSs into different _dimensions of analysis_ (_DAs_). As Figure 1 shows, we identified five types of approaches that can be divided into customer and organization-oriented ones, depending on their main focus. Customer-oriented approaches aim to integrate RSs models with purchasing behavior mechanisms to generate more relevant recommendations that could in turn lead to more value for the firm. Instead, organization-oriented ones make use of specific organizational strategies to directly or indirectly optimize business KPIs. Below, we explain the rationale behind each of them.
* **DA1**: **Price Sensitivity** approaches aim to explicitly consider customers' price preferences in the recommendation process. In fact, price is one of the variables that most strongly influence customers' buying behavior [16; 179]. For example, customers are often willing to pay more for certain types of items based on presumed greater utility, better aesthetics, brand prestige, supplier reliability, or a combination of various factors [145]. By considering customers' price sensitivity in the algorithms [141], more accurate and relevant recommendations could directly increase the probability of purchase and thus lead to higher sales revenue for the organization.
* **DA2**: **Economic Utility Modeling** approaches aim to explicitly consider the utility of recommendations for the customer in accordance with an economic perspective. There are many utilitarian dynamics [233] related to the particular type of purchased products [99; 258]. For example, if a customer has just purchased a computer or a smartphone, it is very likely that he or she will not purchase the same or a similar product again within a short time. Conversely, there are other products, such as dog food or diapers, for which he or she is very likely to continue buying repeatedly for an extended period of time. Generating more relevant recommendations by considering the customer's utilitarian behavior could increase conversion rates and generate more profits for the firm.
* **DA3**: **Profit Awareness** approaches aim to directly incorporate profit information into the recommendation models. In fact, profit (i.e., sales revenue minus costs) is one of the most important business KPIs for a successful enterprise [243]. Depending on the particular level of this indicator, a company may or may not invest in research and development to grow the business, attract investors to finance its operations, obtain possible financing from banks, and many other issues of strategic interest to entrepreneurs and managers [100]. Overall, generating more profitable recommendations by explicitly considering profit information could directly optimize the organization's economic goals.
* **DA4**: **Promotional** approaches generate recommendations while strategically setting the prices of certain products or focusing the customer's attention on certain brands or promotions. For example, the company can offer certain products at a discounted price (individually or in bundles) to incentivize impulsive buying behaviors [104; 199]. Similarly, the firm can make customers aware of certain products that they would be unlikely to discover on their own and indirectly trigger a possible purchase in the future [165]. Both approaches can be integrated into the recommendation process to optimize profit.
* **DA5**: **Long-Term Value Sustainability** approaches aim to generate recommendations considering a long-term economic perspective. In fact, long-term sustainable business growth is one of the most important aspects for a company [186; 190; 222]. For example, a company may be interested in making customers progressively purchase more and more products and services over time to increase their customer lifetime value. Generating recommendations by considering such long-term economic goals of the company thus has the potential to stimulate business growth in a sustainable way over time.
### Research Questions
Having identified these dimensions of analysis, the goal of our work is to review the state-of-the-art of current ECRSs research. More specifically, the present survey aims to answer the following _research questions_ (_RQs_):
* **RQ1**: What technical approaches are used to build ECRSs?
* **RQ2**: What evaluation methods are used to assess the performance of an ECRS?
* **RQ3**: What are the main challenges and future research directions in the area of ECRSs?
Figure 1: Economic recommender systems dimensions of analysis.
### Search Queries
As mandated by the PRISMA guidelines, our survey aims to answer previous RQs by systematically querying online libraries. In particular, we queried Elsevier Scopus, IEEE Xplore, Springer Link, and ACM Digital Library to identify relevant articles. We created a _search query_ for each of the previous DAs by analyzing the most recurring key terms identified in a series of specialized articles extracted from the literature of two related surveys [70; 141]. In Table 1, we report the used search queries and the number of identified documents.
### Eligibility Criteria
To be included in the review, articles must pass a rigorous analysis process. Specifically, articles must meet the following _eligibility criteria_ (_EC_):
* **EC1**: Articles must focus on research questions related to one of the dimension of analysis of ECRSs.
* **EC2**: Articles must explicitly mention the business KPIs included in the search queries.
* **EC3**: Articles must be unique, written in English, and the full content must be accessible to the authors.
* **EC4**: Articles must be peer-reviewed by either scientific journals or conferences.
* **EC5**: Graduate theses and doctoral dissertations are not included.
### Article Selection Process
As shown in the PRISMA flow diagram in Figure 2, we followed a multi-stage process to identify all the relevant resources included in this review.
| ID | Dimension | Search Query | Scopus | IEEE | Springer | ACM | Total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **DA1** | Price Sensitivity | (("recommender system") AND ("price preference" OR "price sensitivity" OR "price elasticity" OR "willingness to pay" OR "price-aware")) | 670 | 2 | 469 | 57 | **1198** |
| **DA2** | Economic Utility Modeling | (("recommender system") AND ("economic") AND ("utility modeling")) | – | – | – | – | – |
| **DA3** | Profit Awareness | (("recommender system") AND ("multi-stakeholder" OR "profit awareness" OR "volume aware")) | 351 | 5 | 290 | 63 | **709** |
| **DA4** | Promotional | (("recommender system") AND ("dynamic pricing" OR "price personalization" OR "product bundling")) | – | – | – | – | – |
| **DA5** | Long-Term Value Sustainability | (("recommender system") AND ("customer lifetime value" OR "post sustainability" OR "cumulative profit" OR "long-term value")) | – | – | – | – | – |

Table 1: Search queries and results divided by online database for the different dimensions of analysis on which this article focuses. Queries were run on _May 12, 2023_, looking for all documents published since _January 1, 2000_.
Figure 2: PRISMA flow diagram.
In the first identification phase, 2227 articles from Elsevier Scopus, 11 articles from IEEE Xplore, 1820 articles from Springer Link, and 201 articles from ACM Digital Library were identified for subsequent analyses. In this phase, 329 duplicated records and 39 non-English articles were identified and removed. In the second screening phase, the titles and abstracts of the remaining 3891 articles were analyzed, and 3479 records were removed because the covered topics were not relevant to the present review. Subsequently, 414 articles were sought for retrieval and assessed for eligibility, excluding 350 articles after full-text reading. From this subset of 64 eligible articles, an additional 616 articles were identified by searching the references in their bibliographies. These articles were then assessed for eligibility, removing 547 records after reading the full text. At the end of this overall process, 133 studies were included in the review. In Figure 3, we show some statistics of the references obtained at the end of the analysis process by reporting the distribution per year of the surveyed papers, divided by subdimension of analysis. As can be seen from the figure, there is growing interest in the literature in all ECRS dimensions.
### Study Limitations
The possible study limitations (SL) are the following:
* **SL1**: Articles were primarily selected from Elsevier Scopus, IEEE Xplore, Springer Link, ACM Digital Library, and from reference searches in the bibliographies of articles that passed the screening stage. Additional online libraries may be considered in future research.
* **SL2**: The study does not cover preprints, non-English articles, non-accessible articles, graduate theses, doctoral dissertations, industry products, and demos.
* **SL3**: Other dimensions of analysis of ECRSs beyond those identified in Section 3.1 are left for possible future extensions of this work.
## 4 Technical Approaches
In this section, we discuss the underlying algorithmic approaches to each of the ECRSs dimensions of analysis introduced in Section 3.1, i.e., price sensitivity, profit awareness, promotional, long-term value sustainability, and economic utility modeling.
In Table 2 we report the studies identified by the present survey that propose technical approaches, categorized by dimension of analysis and algorithmic method. The approaches can be divided into in- and post-processing1 methods, depending on when the economic value optimization occurs. In-processing approaches aim to incorporate economic aspects directly into the models, either by extending the objective function of known algorithms (e.g., by introducing new variables or regularizers) or by developing entirely new algorithms. The underlying algorithms may be based, for example, on supervised or reinforcement learning paradigms. Post-processing approaches, on the other hand, can be mounted on top of any recommender and aim to transform the recommendations generated by the baselines by applying specific heuristic economic criteria. These may incorporate the economic value by simply re-ranking the output of the original algorithm or by also exploiting additional learning models.
Footnote 1: Pre-processing methods may also exist in industry, e.g., when a recommendation provider wants to rule out certain unprofitable items. Our literature search, however, did not surface such approaches.
Analyzing the distribution of the studies in Table 2, we can make some observations. In particular, it can be noted that there are several relevant works for all the dimensions of analysis. In addition, in-processing and post-processing methods are equally used across all dimensions. This implies that the research field is broad and that there are various important lines of active research. Overall, given that there is a substantial number of works in each dimension of analysis, we are confident that our categorization scheme properly reflects the various types of activities in this research area.
Figure 3: Distribution of surveyed papers per year divided by dimension of analysis.
NotationIn the following, we introduce the main notation used in the paper, see Table 3. Formally, the vast majority of the approaches we discuss in this survey refer to the _top-\(k\) recommendation problem_[225], i.e., the problem of determining the best \(k\) items to recommend to each user. All algorithms designed to address this particular problem consider a set \(\mathcal{U}=\{u_{1},\ldots,u_{m}\}\) of \(m\) users, a set \(\mathcal{I}=\{i_{1},\ldots,i_{n}\}\) of \(n\) items, and a user-item interaction matrix \(\mathbf{X}\), where each entry \(x_{u,i}\) represents the feedback from a user \(u\) towards an item \(i\). With very few exceptions, the feedback considered is almost always implicit (i.e., \(x_{u,i}\in\{0,1\}\)). This indicates a positive or missing interaction, depending if the user interacted with the item or not (e.g., purchased it). Generally, it is assumed that purchased items are those that are relevant (and maybe satisfactory) for consumers.
Algorithms are often designed [225] to learn a _scoring function_\(\hat{\mathbf{X}}(\mathbf{\Theta}):\mathbf{X}\rightarrow\{\hat{x}(\mathbf{ \Theta})\in\mathbb{R}:0\leq\hat{x}(\mathbf{\Theta})\leq 1\}^{m\times n}\) to predict the missing entries of \(\mathbf{X}\). The scoring function is parameterized by a set \(\mathbf{\Theta}\in\mathbb{R}^{o}\) of model parameters2 - where \(o\) is the number of parameters. Hence, \(\hat{x}_{u,i}(\mathbf{\Theta})\) represents the expected interest of the user toward an item he or she has never interacted with.
Footnote 2: For some algorithms, such as User-Based Collaborative Filtering based on Nearest-Neighbor techniques [204], we assume \(\mathbf{\Theta}=\emptyset\) since there are no model parameters.
In the general top-\(k\) recommendation problem [9], an ordered list \(\mathcal{Y}_{u,k}\) of \(k\) items to be recommended to each user \(u\) is determined optimizing a specific _utility function_\(\mathcal{T}(\mathcal{Y}_{u,k}):\mathcal{Y}_{u,k}\rightarrow\mathbb{R}\). More formally:
\[\operatorname*{argmax}_{\mathcal{Y}_{u,k}}\quad\mathcal{T}(\mathcal{Y}_{u,k}) \tag{1}\]
The utility function can be implemented in arbitrary ways (e.g., including relevance, profitability, and other aspects).
Given \(\rho_{u,i}\) as the utility of the user-item interaction, the vast majority of the studies in the RSs literature operationalize the utility function as:
\[\mathcal{T}(\mathcal{Y}_{u,k})=\sum_{i\in\mathcal{Y}_{u,k}}\rho_{u,i} \tag{2}\]
optimizing directly:
\[\operatorname*{argmax}_{\mathcal{Y}_{u,k}}\sum_{i\in\mathcal{Y}_{u,k}}\hat{x} _{u,i}(\mathbf{\Theta}) \tag{3}\]
and thus considering the utility of the potential interaction as the expected interest, i.e., \(\rho_{u,i}=\hat{x}_{u,i}(\mathbf{\Theta})\).
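To make the notation concrete, the following is a minimal sketch (our own illustration, not taken from any surveyed paper) of the standard top-\(k\) selection in Eq. 3: given a predicted score matrix, each user receives the \(k\) highest-scoring items among those he or she has never interacted with.

```python
import numpy as np

def top_k_recommendations(x_hat, interactions, k=10):
    """Select the k highest-scoring unseen items per user.

    x_hat:        (m, n) predicted interest scores x_hat_{u,i}
    interactions: (m, n) binary user-item interaction matrix X
    """
    masked = np.where(interactions == 1, -np.inf, x_hat)  # never re-recommend seen items
    return np.argsort(-masked, axis=1)[:, :k]              # (m, k) item indices per user

# toy usage: 3 users, 5 items
x_hat = np.random.default_rng(0).random((3, 5))
interactions = np.zeros((3, 5))
interactions[0, 2] = 1
print(top_k_recommendations(x_hat, interactions, k=2))
```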
However, although this user-focused utilitarian conception is currently the most widely used one in the literature, a recommendation provider may have different goals. In the context of ECRSs, instead, the utility functions may be operationalized considering the item's price \(p_{i}\), and profit \(v_{i}=p_{i}-c_{i}\), where \(c_{i}\) is the item's cost. For example, algorithms belonging to the profit-aware subdomain that we discuss in Section 4.3 are often developed to find the most profitable, yet relevant, items for the company, and these may clearly differ from the _most_ relevant ones.
### Price-Sensitivity Methods
Price is one of the variables that most influence customers' buying behavior [16; 179]. Accordingly, many studies in the literature [51; 97; 162; 294; 109] propose algorithms to explicitly consider customers' price sensitivity, as more accurate and relevant recommendations (i.e., in terms of being in the right price range) could directly increase the probability of purchase and thus lead to higher sales revenue for the organization. Below, we give some insights on how these methods work by discussing a set of selected articles.
\begin{table}
\begin{tabular}{p{56.9pt}|p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline Algorithmic Approach & Price Sensitivity & Economic Utility Modeling & Profit Awareness & Promotional & Long-Term Value Sustainability \\ \hline \multirow{3}{*}{In-Processing} & [50; 51; 57; 97; 109; 110; 254; 265; 295] & [13; 41; 43; 58; 26; 73; 99; 191; 254; 258; 294; 295] & [13; 41; 43; 58; 62; 73; 99; 191; 254; 270; 286; 287; 290] & [13; 41; 43; 58; 62; 78; 108; 134; 177; 187; 187; 201; 218; 220; 259; 261; 259; 272; 288] & [12; 23; 45; 68; 101; 133; 162; 184; 185; 188; 189; 200; 224; 235; 256; 257; 272; 288] & [12; 23; 27; 46; 78; 101; 133; 162; 184; 185; 188; 189; 200; 224; 235; 256; 257; 272; 288] & [12; 23; 27; 46; 77; 108; 134; 177; 187; 187; 201; 218; 220; 259; 267; 278; 279; 292; 239; 254; 268] & [15; 25; 26; 31; 72; 82; 116; 145; 149; 189; 200; 224; 235; 256; 257; 272; 288] & [116; 145; 149; 130; 210] \\ \hline \hline \end{tabular}
\end{table}
Table 2: ECRSs studied by dimension of analysis and algorithmic approach.
\begin{table}
\begin{tabular}{c l} \hline \hline Notation & Definition \\ \hline \(u\) & user \\ \(i\) & item \\ \(p_{i}\) & item’s price \\ \(c_{i}\) & item’s cost \\ \(v_{i}=p_{i}-c_{i}\) & item’s profit \\ \(m\) & number of overall users \\ \(n\) & number of overall items \\ \(k\) & number of items to recommend \\ \(\mathcal{U}=\{u_{1},\ldots,u_{m}\}\) & set of users \\ \(\mathcal{I}=\{i_{1},\ldots,i_{n}\}\) & set of items \\ \(\mathbf{X}\in\{0,1\}^{m\times n}\) & user-item interaction matrix \\ \(x_{u,i}\in\{0,1\}\) & user-item feedback \\ \(\mathbf{\Theta}\) & set of model parameters \\ \(\mathbf{\hat{X}}(\mathbf{\Theta})\) & scoring function \\ \(\hat{x}_{u,i}(\mathbf{\Theta})\in[0,1]\) & user-item predicted interest \\ \(\mathcal{Y}_{u,k}\) & recommendations list \\ \(\mathcal{T}(\mathcal{Y}_{u,k})\) & utility function \\ \(\rho_{u,i}\) & user-item interaction utility \\ \hline \hline \end{tabular}
\end{table}
Table 3: Main notation.
#### 4.1.1 In-Processing Price-Sensitivity Methods
Most of the approaches used to generate price-sensitive recommendations are based on in-processing algorithms. The main characteristic of these algorithms is that price sensitivity is incorporated directly into the model.
In particular, this methodology proved particularly flexible when applied to the well-known _Matrix Factorization_ (_MF_) [163; 164] model. The original model estimates the expected interest of the user toward a given item via the dot product of latent factor vectors. These are traditionally learned through a dimensionality reduction algorithm applied to the user-item interaction matrix. Considering price-sensitive methodologies based on MF, for example, one paper [97] proposes to incorporate cost factors3 into the model's objective function to generate more accurate travel tour recommendations. The experiments reported by the authors indicate that explicitly incorporating cost factors improves the overall accuracy of the recommendations when compared with a plain MF model. Also extending MF, other papers in the literature [50; 51] propose incorporating customers' price preferences explicitly into the objective function through the use of particular regularizers. However, whereas previously the purpose was to enhance the overall performance of the system, here the goal is to use price preferences to make recommendations in product categories that the user has never explored (_transfer learning_). In particular, according to the authors, generating recommendations for customers' unexplored product categories can cause significant performance drops (-40%) if traditional algorithms are used, since the learned user product preferences are difficult to transfer from one category to another. Instead, explicitly incorporating customers' price preferences into the objective function can significantly improve (+43%) performance on unexplored categories compared to state-of-the-art baselines.
Footnote 3: Note that here we respect the original paper’s terminology by referring to the cost, but actually the cost for the user is simply the item’s price.
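To illustrate how price preferences can enter an MF objective, one plausible formulation — an illustrative sketch, not the exact objective used in [50; 51] — augments the usual reconstruction loss with a penalty that discourages assigning high scores to items whose price \(p_{i}\) is far from the user's typical price level \(\bar{\mathbf{p}}^{u}\); here \(\lambda\) and \(\mu\) are hypothetical regularization weights:

\[\min_{\mathbf{\Theta}}\;\sum_{u,i}\left(x_{u,i}-\hat{x}_{u,i}(\mathbf{\Theta})\right)^{2}\;+\;\lambda\,\|\mathbf{\Theta}\|_{2}^{2}\;+\;\mu\sum_{u,i}\hat{x}_{u,i}(\mathbf{\Theta})\cdot\left(p_{i}-\bar{\mathbf{p}}^{u}\right)^{2}\]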
Other studies in the literature [109; 110] propose incorporating customers' price preferences within existing _context-aware_ recommendation algorithms [169]. According to an experimental study with real customers in the food & beverage field [110], explicitly incorporating discount sensitivity into the algorithms can help to significantly improve performance in a coupon recommendation task when compared to the CAMF method [28], i.e., a context-aware variant of matrix factorization. Specifically, in the domain of location-based deals, the analysis shows that the most important feature for predicting purchase probability is the discount-to-distance ratio: the higher the discount offered by the store, the more likely the customer is to travel longer distances to obtain it. However, as is well known in the literature, context variables often depend on the considered business domain. In particular, eBay.com has some unique characteristics [109]. In this multi-seller platform, the same products are offered at various prices simultaneously by various sellers with different reputation scores. According to a study [109], in this business domain, incorporating customers' _willingness-to-pay_ (_WTP_), discounting, and seller reputation features into a context-aware recommender can help to significantly improve the accuracy of predictions, with an 84% improvement over MF models.
In addition, recent studies [283; 294; 295] propose incorporating customers' price preferences into algorithms by exploiting Graph Neural Networks (GNNs) [91]. Specifically, in two related studies [294; 295], it is proposed to construct a GNN-based recommender by building a heterogeneous graph consisting of different types of nodes: customers, items, prices, and product categories. The key idea is to propagate price influence from prices to users by leveraging items as a bridge so that price preferences are implicitly encoded into the embeddings. The use of price-sensitive GNNs is also exploited in the field of session-based recommendations [283]. For all studies based on GNNs [283; 294; 295], the models are able to generate slightly more relevant recommendations than the baselines. However, as various authors pointed out, it is difficult to handle heterogeneous information and model complex relationships underlying customer buying behavior, and research still offers many opportunities to develop better-performing models that can fully exploit the potential of GNN-based algorithms.
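The core intuition of propagating price influence to users via items can be illustrated with a toy, non-learned sketch (our own simplification; the actual models in [283; 294; 295] use learned embeddings and multiple GNN message-passing layers over a heterogeneous graph):

```python
import numpy as np

def propagate_price_signal(price_emb, item_price_level, user_items, n_items):
    """One toy propagation step on a user-item-price heterogeneous graph.

    price_emb:        dict price_level -> embedding vector
    item_price_level: dict item_id -> price_level
    user_items:       dict user_id -> list of interacted item_ids
    Items first absorb the embedding of their price level; users then average the
    embeddings of the items they interacted with, so price information reaches
    users with items acting as the bridge.
    """
    dim = len(next(iter(price_emb.values())))
    item_emb = np.zeros((n_items, dim))
    for item, level in item_price_level.items():
        item_emb[item] = price_emb[level]
    user_emb = {u: np.mean([item_emb[i] for i in items], axis=0)
                for u, items in user_items.items()}
    return item_emb, user_emb

# toy usage: two price levels, three items, two users
item_emb, user_emb = propagate_price_signal(
    {"low": np.array([1.0, 0.0]), "high": np.array([0.0, 1.0])},
    {0: "low", 1: "high", 2: "high"},
    {"u1": [0, 1], "u2": [2]},
    n_items=3)
print(user_emb)
```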
#### 4.1.2 Post-Processing Price-Sensitivity Methods
A number of price-sensitive recommendation algorithms also make use of post-processing methods. The latter are primarily re-ranking algorithms, which can be applied on top of any price-agnostic recommender baseline.
In this domain, it is proposed, for example, to generate recommendations by weighting the expected interest \(\hat{x}_{u,i}(\mathbf{\Theta})\) by the price-sensitivity \(s_{u,i}(\mathbf{\Phi})\). The latter is a particular variable, learned through a different model parameterized by \(\mathbf{\Phi}\), indicating how price-sensitive a given user is to a given item (Eq. 4) [238]. A similar approach is also proposed in another study [252]. However, in this case,
\begin{table}
\begin{tabular}{l l} \hline \hline Ref & Re-Ranking Method \\ \hline
[238] & \[\operatorname*{argmax}_{\mathcal{Y}_{u,k}}\sum_{i\in\mathcal{Y}_{u,k}}\hat{x}_{u,i}(\mathbf{\Theta})\cdot s_{u,i}(\mathbf{\Phi})\tag{4}\] \\ \hline
[252]* & \[\operatorname*{argmax}_{\mathcal{Y}_{u,k}}\sum_{i\in\mathcal{Y}_{u,k}}w_{1}(\mathbf{\Psi})\cdot\hat{x}_{u,i}(\mathbf{\Theta})+w_{2}(\mathbf{\Psi})\cdot s_{u}(\mathbf{\Phi})\tag{5}\] \\ \hline
[162] & \[\operatorname*{argmax}_{\mathcal{Y}_{u,k}}\sum_{i\in\mathcal{Y}_{u,k}}\hat{x}_{u,i}(\mathbf{\Theta})\cdot\left(\left(1+\log_{10}\!\left(0.1+\frac{0.9\cdot p_{i}}{c_{i}}\right)\right)^{\beta}+\left(1+\log_{10}\!\left(0.1+\frac{0.9\cdot p_{i}}{\bar{\mathbf{p}}^{u}}\right)\right)^{\gamma}\right)\tag{6}\] \\ \hline \hline \end{tabular}
\end{table}
Table 4: Price-sensitive re-ranking methods. *The formula captures the main essence of the described approaches.
the price-sensitivity variable \(s_{u}(\mathbf{\Phi})\) depends only on the customer and not on the item (Eq. 5). In addition, it is necessary to use another regression model (parameterized by \(\mathbf{\Psi}\)) to learn how to properly weigh (through \(w_{1}(\mathbf{\Psi})\), \(w_{2}(\mathbf{\Psi})\) coefficients) the price-sensitivity with the user's expected interest. Both studies show that through the use of price-sensitivity algorithms, more relevant recommendations can be obtained.
Recently, another study [162] proposes a hybrid approach (Eq. 6) combining the price-sensitive and the profit-aware4 subdomains. This approach weighs the expected interest \(\hat{x}_{u,i}(\mathbf{\Theta})\) by balancing a user price preference factor \(\frac{p_{i}}{\bar{\mathbf{p}}^{u}}\) with a profitability factor \(\frac{p_{i}}{c_{i}}\), where \(\beta,\gamma\in[-1,1]\) in Eq. 6 are regularization parameters. In particular, considering \(\bar{\mathbf{p}}^{u}\) as the average user price, the first factor captures the difference between the customer's typical price level and the actual item's price. The second factor, \(\frac{p_{i}}{c_{i}}=1+\frac{v_{i}}{c_{i}}\), captures how much an item's sale repays the underlying cost and brings profit to the organization. In this way, it becomes possible to effectively balance the interests of customers with those of the organization: the loss of relevance that traditionally accompanies an increase in profitability is more than offset by the gain in relevance due to the influence of price preferences.
Footnote 4: We discuss profit-aware methods in Section 4.3.
In Table 4, we formally characterize the three discussed price-sensitive re-ranking methods.
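As a concrete illustration of how such a re-ranking step can be mounted on top of any baseline, the following sketch implements the weighting rule of Eq. 6 for a single user. Variable names are ours, and the baseline scorer producing `scores` is assumed to exist.

```python
import numpy as np

def hybrid_rerank(scores, prices, costs, avg_user_price, beta=0.5, gamma=0.5, k=10):
    """Re-rank one user's predicted scores with the price/profit weighting of Eq. 6.

    scores:  (n,) predicted interest x_hat_{u,i} for each item
    prices:  (n,) item prices p_i
    costs:   (n,) item costs c_i
    avg_user_price: scalar average purchase price of the user
    beta, gamma: regularization exponents in [-1, 1]
    """
    profit_factor = (1 + np.log10(0.1 + 0.9 * prices / costs)) ** beta
    price_pref_factor = (1 + np.log10(0.1 + 0.9 * prices / avg_user_price)) ** gamma
    weighted = scores * (profit_factor + price_pref_factor)
    return np.argsort(-weighted)[:k]  # indices of the top-k items

# toy usage
scores = np.array([0.9, 0.7, 0.4])
prices = np.array([20.0, 35.0, 10.0])
costs = np.array([12.0, 30.0, 9.0])
print(hybrid_rerank(scores, prices, costs, avg_user_price=25.0, k=2))
```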
### Economic Utility Modeling Methods
In the economic literature [195], user behavior is often modeled using utilitarian theories to construct systems that can describe and/or optimize certain dynamics. According to the _Rational Choice Theory_ (_RCT_), at each time instant, a rational user, when faced with a set of alternatives, will choose those with the highest utility for him or her [233]. Accordingly, many studies in the literature [99; 258; 290] propose algorithms that explicitly consider the customer's utilitarian behavior to generate more useful recommendations that can in turn increase conversion rates and profitability. Below we give some insights on how these methods work by discussing a few selected articles focused, respectively, on multi-attribute, repurchase, and complementary recommendations.
In the field of RSs, many studies in the literature assume that the utility \(\rho_{u,i}\) of a product to a customer depends on his or her purchase history [258]. Most existing RSs recommend for each user \(u\) a list \(\mathcal{Y}_{u,k}\) consisting of the top-\(k\) items (Eq. 3) with the highest predicted scores \(\hat{x}_{u,i}(\mathbf{\Theta})\). The list \(\mathcal{Y}_{u,k}\) is traditionally selected from a set of items with which the user has never interacted before. Interpreting this assumption from the perspective of economic utility theory (Eq. 7) [258], then, the utility \(\mathcal{T}(\mathcal{Y}_{u,k})\) of a recommendation \(\mathcal{Y}_{u,k}\) is nothing but the sum of the predicted scores, i.e., \(\rho_{u,i}=\hat{x}_{u,i}(\mathbf{\Theta})\). In this case, a recommendation \(\mathcal{Y}_{u,k}\) generated by optimizing the total utility of a set of \(k\) recommended items optimizes the expected user interest estimated by any recommendation algorithm.
However, in addition to the previous utility definition, alternative definitions are recently emerging in the literature. For example, in the field of _Multi-Criteria Recommendation Systems_[14], in the presence of a set \(\mathcal{G}\) of attributes associated with items, various studies in the literature [73; 80; 135; 232; 293] propose to generate recommendations by exploiting the _Multi-Attribute Utility Theory_ (_MAUT_) [157]. MAUT is one of the most widely used utilitarian theories in decision making, which aims to weigh a set of relevant variables to determine the overall utility of each alternative. In the context of recommendations, in particular, the overall optimized utility (Eq. 8) in this case depends on the utility \(\rho_{i,g}\) of the single attribute \(g\) of item \(i\), and a weight \(f_{u,g}\) that each user can provide to indicate the importance of that attribute.
Other studies focus on the problem of repeated purchase recommendations [99; 258; 287; 290]. Unlike traditional RSs, algorithms developed for this task generate recommendations by also considering items that the user already purchased in the past. In particular, it is observed that the repurchase cycle of some products may follow the _Law of Diminishing Marginal Utility_[195; 258]. According to this theory, many products have decreasing utility for the user as the quantity of purchased products increases (e.g., computers, cell phones), while others, instead, are likely to
\begin{table}
\begin{tabular}{l l l} \hline \hline Ref & Name & Utility Function \\ \hline
[258] & Standard & \[\mathcal{T}(\mathcal{Y}_{u,k})=\sum_{i\in\mathcal{Y}_{u,k}}\rho_{u,i}\tag{7}\] \\
[135] & Multi-Attribute & \[\mathcal{T}(\mathcal{Y}_{u,k})=\sum_{i\in\mathcal{Y}_{u,k}}\sum_{g\in\mathcal{G}}f_{u,g}\cdot\rho_{i,g}\tag{8}\] \\
[258]* & Constant Elasticity of Substitution & \[\mathcal{T}(\mathcal{Y}_{u,k})=\sum_{i\in\mathcal{Y}_{u,k}}\rho_{u,i}\cdot q_{u,i}^{\xi_{i}}\tag{9}\] \\
[287]* & King-Plosser-Rebelo & \[\mathcal{T}(\mathcal{Y}_{u,k})=\sum_{i\in\mathcal{Y}_{u,k}}\rho_{u,i}\cdot\ln(1+q_{u,i})\tag{10}\] \\
[290]* & Multi-Product & \[\mathcal{T}(\mathcal{Y}_{u,k})=\frac{1}{|\mathcal{Y}_{u,k}|}\sum_{i,j\in\mathcal{Y}_{u,k}:i\neq j}\left(a_{i,j}\cdot q_{u,i}^{1-b_{i,j}}+(1-a_{i,j})\cdot q_{u,j}^{1-b_{i,j}}\right)^{\frac{1}{1-b_{i,j}}}\tag{11}\] \\
[99]* & Marginal Utility per Dollar & \[\mathcal{T}(\mathcal{Y}_{u,k})=\sum_{i\in\mathcal{Y}_{u,k}}\frac{\tanh\left(\rho_{u,i}\right)\cdot r_{i,u}}{(1+q_{u,i})\cdot\sigma(p_{i})}\tag{12}\] \\ \hline \hline \end{tabular}
\end{table}
Table 5: Economic utility functions from rational choice theory. *The formulas capture the main essence of the described approaches.
be purchased frequently over time (e.g., baby diapers, pet food). Using the standard utilitarian criterion in Eq. 7 it is not possible to model this behavior. Indeed, in this case, the usefulness of recommendations for the user highly depends on the quantity \(q_{u,i}\) of item \(i\) purchased by him or her until a specific time. In this context, promising results can be obtained by modeling the repurchase cycle through the _Constant Elasticity of Substitution Utility Function_[248]. This allows the decreasing marginal utility of product \(i\) to be properly modeled through a parameter \(\xi_{i}\in[0,1]\) associated with item \(i\) (Eq. 9). This parameter can be learned by extending the MF objective function. In this way, the algorithm can explicitly consider the decreasing utility of certain products for the user and generate more relevant recommendations.
With similar methodologies, other utilitarian functions are also used in the literature to model customer purchasing behavior [99; 287; 290]. However, these studies focus on different objectives. For example, one study [287] proposes three different business cases (i.e., e-commerce, P2P lending, freelancing) that exploit the _King-Plosser-Rebelo Utility Function_ (Eq. 10) to optimize the _Total Surplus_, i.e., an indicator that considers both the usefulness of the recommendations for the customer and the profit for the producer. Another study [290], in contrast, proposes to use the _Multi-Product Utility Function_ (Eq. 11) in order to also consider any complementarity and substitutability relationships among the recommended products. In the equation, the variables \(a_{i,j}\) and \(b_{i,j}\) are additional parameters that the recommendation algorithm can jointly learn with the latent factors to model the indifference curves between pairs of products, i.e., how much the increase in one product affects the relative marginal utility of another product. Finally, one study [99] proposes using the _Marginal Utility per Dollar Function_ (Eq. 12). This function considers the price \(p_{i}\) of item \(i\) and a risk attitude coefficient \(r_{i,u}\) to model customers' risk aversion, i.e., the tendency of consumers to spend only a small portion of their total wealth on a single purchase.
In Table 5, we formally characterize the utility criteria discussed above.
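To see how the CES form in Eq. 9 captures the Law of Diminishing Marginal Utility, the following toy sketch (our own illustration, not code from [258]) computes the extra utility of one additional unit of an item as a function of the quantity already purchased; the \(\xi\) values are arbitrary.

```python
def marginal_ces_utility(rho, q_purchased, xi):
    """Extra utility of one more unit under the CES form of Eq. 9: rho * ((q+1)^xi - q^xi).

    With xi in (0, 1) the increment shrinks as q grows (diminishing marginal utility);
    as xi approaches 1 the item behaves like a frequently repurchased staple.
    """
    return rho * ((q_purchased + 1) ** xi - q_purchased ** xi)

# durable good (xi = 0.3) vs. frequently repurchased good (xi = 0.95)
for q in [0, 1, 3, 10]:
    print(q,
          round(marginal_ces_utility(1.0, q, xi=0.3), 3),
          round(marginal_ces_utility(1.0, q, xi=0.95), 3))
```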
### Profit Aware Methods
Profit is one of the most important business KPIs for a successful enterprise [243]. Accordingly, many studies in the literature [52; 108; 162; 177] propose profit-aware recommendation algorithms to directly optimize the firm's profitability. Below we give some insights on how these methods work by discussing a few selected articles.
#### 4.3.1 In-Processing Profit-Aware Methods
Profit-aware in-processing approaches in the literature are quite heterogeneous and scattered across several parallel lines of research. Below, we offer a brief overview of the major research directions in this area.
Some early studies [134; 272; 218] exploit _Association Rules_[125]. According to this particular methodology [211], recommendations are generated through a frequentist approach based on statistical support and confidence constructs [208]. One of Amazon's most prominent recommenders, i.e., "_customers who bought this item also bought_", is seemingly based on association rules. In particular, many studies in the literature [259; 260; 58; 261] propose to generate association rules while also optimizing profitability. The main methods incorporate profit considerations when weighting the rules [42]. However, unlike modern RSs based on collaborative and content-based filtering algorithms, association rules [261] are not personalized, i.e., different users do not get different recommendations. In addition, association rules may generally face challenges when the total number of recommendable items is very large.
Other earlier studies [13; 41; 220] propose graph-based approaches. In particular, one research [13] focuses on social networks. The proposed algorithm is designed to explicitly optimize the value of recommendations in customer-product graphs. However, in the study, profit is operationalized through non-monetary metrics. Another relatively recent approach based on graphs [220] is developed specifically for the taxi industry. In this particular application domain, if we assume an hourly rate, a taxi driver's profit depends solely on the hours billed to customers: simply put, it is critical for a taxi driver to minimize the distance to find a customer and maximize the distance traveled with a customer on board. The proposed algorithm recommends pick-up points for taxi drivers in order to maximize the profit of driving routes while balancing the potential congestion resulting from multiple requests from different customers at the same location.
More recently, a study [43] proposes a profit-aware RS based on collaborative filtering. The algorithm is based on an extension of the well-known neighbor selection criterion of the user-based nearest-neighbor collaborative filtering model [204]. The original algorithm calculates the predicted score based on a weighted sum of similarities between users belonging to a given neighborhood. The authors of [43] instead propose to calculate the predicted scores by selecting the neighbors that would allow the generation of the highest value-weighted expected interest. Although the focus of the paper is on shilling attacks, i.e., attacks by malicious users who generate biased ratings to influence recommendations for their interests, the subprocedure for selecting the most valuable users can be used to generate more profitable recommendations.
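A minimal sketch of this value-aware neighbor-selection idea is given below (our own simplification, not the exact criterion of [43]): candidate neighbors are ranked by their similarity to the target user weighted by a per-user value signal, e.g., the average profit of the items they purchased.

```python
import numpy as np

def value_weighted_neighbors(sim_row, value_potential, n_neighbors=20):
    """Pick neighbors by similarity weighted by a business-value signal.

    sim_row:         (m,) similarities of the target user to all users
    value_potential: (m,) e.g. average profit of the items each candidate purchased
    """
    return np.argsort(-(sim_row * value_potential))[:n_neighbors]

# toy usage: 5 candidate neighbors
print(value_weighted_neighbors(np.array([0.9, 0.1, 0.6, 0.8, 0.3]),
                               np.array([1.0, 5.0, 2.0, 0.5, 4.0]),
                               n_neighbors=2))
```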
Other recent approaches [177; 180; 266] are based on _Learning To Rank_ (_LTR_) [60]. This is a well-known technique in _Information Retrieval_ (_IR_) [32; 160]. IR algorithms aim to help users to find the most relevant items based on specific search queries. In particular, one study [266] uses this methodology in a product search application context. The proposed algorithm integrates the price into the objective function in order to optimize the overall sales revenue of an e-commerce. This technique is later applied [177; 180] also to generate recommendations without the
need to anchor them to an underlying search query. For example, one study [180] proposes a multi-objective algorithm that is able to optimize multiple objective functions simultaneously through LTR. The algorithm is Pareto-efficient, i.e., it optimizes each objective (e.g., CTR and GMV5) one at a time, with the constraint that no single objective can be further improved without affecting others.
Footnote 5: We provide the definition of the most frequently used online metrics in Table 9.
Finally, some studies [62; 201] propose using profit-aware multi-objective evolutionary algorithms [297; 128]. One of these [201] is based on _Non-dominated Sorting Genetic Algorithm II_ (_NSGA-II_). A more recent one [62] is based on _Multi-Objective Artificial Bee Colony_ (_MOABC_). In these cases, the optimization target is a combination of the item's profit and the user's expected interest. Both algorithms obtained very promising offline results on the overall profit improvement, although the comparison was performed exclusively with a traditional user-based collaborative filtering algorithm [204].
#### 4.3.2 Post-Processing Profit-Aware Methods
In the context of this survey, many profit-aware approaches rely on post-processing re-ranking methods. As mentioned earlier, these approaches consider the recommender baseline as a black box and generate recommendations by exploiting a combination of certain heuristics.
All examined profit-aware approaches are based on a simple but important assumption [70; 141]: the items most relevant to the user are often not those of the highest business value to the organization. Consequently, prioritizing the highest-profit items in recommendations would allow for increased business profitability as a result of actual user purchases of those items. In one of the earliest approaches [52; 200], it is proposed to weight the probability of purchase (i.e., the estimated expected interest) by profitability in order to maximize an average expected profit (Eq. 13). This approach should make it possible to provide more profitable recommendations than those generated by traditional RSs. Experiments on a synthetic dataset based on a subset of grocery transactions show encouraging results: the proposed algorithm was able to increase profitability without excessively impacting the relevance of recommendations. However, as also reported by the authors [52; 200], the interests of customers and the organization must be balanced appropriately. In fact, the organization could risk losing loyal customers should they feel dissatisfied with recommendations overly biased toward higher-value items and decide to leave the platform.
To mitigate this drawback, and thus to avoid providing completely irrelevant recommendations, various studies propose more or less straightforward extensions of Eq. 13 based on constrained optimization techniques. One of the earliest papers [68] proposes a constrained re-ranking method based on the _Dice_ coefficient (Eq. 14). This can help prevent the system from providing recommendations that are too dissimilar from the original ones based on a threshold \(\eta\). However, the study is based on various simplifying assumptions and does not provide an empirical evaluation of the approach. In two related studies [256; 257] instead, it is proposed to maximize profitability under customer satisfaction and budget constraints (Eq. 15), where \(\zeta\) and \(\lambda_{u}\) are two thresholds used to keep the probability of purchase and the price of items within certain ranges, respectively. In particular, an expert system is proposed where different optimization goals can be specified in order to optimize profitability or balance profitability and satisfaction in order to achieve a win-win situation for suppliers and customers. A similar variant of this approach (Eq. 16, 17) is also proposed in two related studies [101; 141] where the short- and long-term profit-relevance tradeoff is investigated through the use of simulations. In Eq. 17, \(\delta\in[0,1]\) is a regularization parameter.
In addition, other studies [188; 189; 288] propose algorithms to address the problem of sponsored recommendations. In this scenario, a supplier who decides to sponsor its products pays the platform for each user interaction. One study [189] in particular proposes a multi-objective post-processing re-ranking algorithm (Eq. 18). In the equation, \(y_{u,i}\) is a decision variable (\(y_{u,i}=1\) iff \(i\in\mathcal{Y}_{u,k}\)), \(d_{u,i}\) is the ad revenue that the organization gets from suppliers if the user interacts with the item, and \(l<k\) is the maximum number of sponsored items that can be included in the recommendation list. The algorithm is designed to balance
Table 6: Profit-aware re-ranking methods. *The formulas capture the main essence of the described approaches.
the recommendation of high ad revenue sponsored items with the user's interests.
In Table 6, we formally characterize the profit-aware re-ranking methods discussed above.
### Promotional Methods
Promotional methods [198; 199] aim to increase sales figures by promoting products and services to the most appropriate customer segments. We identify three main strategies in the RSs literature that can be used to optimize profit and related business KPIs. _Pricing methods_ can be used to offer products at a discounted price or to strategically adjust prices in order to increase market demand. _Bundling methods_ are special pricing methods applied to product bundles. _Brand-awareness methods_, finally, can be used to focus customers' attention on the organization's products in order to generate extra sales. Below we give some insights on how these methods work by discussing a few selected articles in each category.
#### 4.4.1 Pricing Methods
As discussed earlier in Section 4.1, price is one of the most influential variables of customer buying behavior, and considering this variable explicitly would allow for recommendations more in line with customers' interests. However, while the previous section focuses on customer-oriented methodologies that integrate price sensitivity as additional information in order to generate more relevant recommendations, in this section we instead discuss promotional techniques that an organization might want to apply to incentivize the purchase of certain products by strategically setting the prices [36; 103]. In the following, we describe two organizational strategies referring to: (a) _occasional discounting_; (b) _personalized dynamic pricing_.
One of the most commonly used promotional strategies to incentivize product purchases is to offer occasional discounts [56], for example at certain times of the year (e.g., winter sales) or special events (e.g., Black Friday). In the context of RSs many studies [116; 145; 149; 151; 230; 254; 268] aim to generate recommendations while considering discounts. Some studies propose, for example, to use re-ranking algorithms [145] to promote products on sale or in-processing methods [254] based on adaptations of MF-based models to explicitly consider customers' discount sensitivity [230]. Another method is proposed in two related studies [149; 151]. In particular, as noted by the authors, there may be inter- and cross-category effects when discount products are bought. Thus, especially in e-commerce, organizations can exploit RSs to incentivize customers to buy discount products but also those products that are related to them but not on sale (e.g., camera on sale and full-price lens). A similar analysis is also made [116] to determine the optimal shipping-fee discount to attract customers to the platform and encourage them to purchase products related to the discounted ones.
While discounts may be occasional and the same for all customers, some methodologies are proposed in the RSs literature to generate dynamic customer-specific prices in order to strategically promote certain products and generate higher profits. In this context, some initial studies [25; 26] propose to use survey-based techniques (_conjoint analysis_) to estimate customer WTP (recall Section 4.1.1) and filter items that are priced higher than WTP in the ranking. The authors also discuss some possible configurations of the algorithm to set the prices based on WTP in order to generate more profit for the organization. However, the proposed pricing model is only theoretical as it is not validated by empirical experiments. Another study [154] proposes a system that classifies customers based on whether they would buy products only if discounted or not. Based on the type of customer, the system can offer a discount in order to incentivize purchases. However, as discussed later [193], the study is based on assumptions that are not feasible in practice: all products have the same price; only two price values are available (i.e., standard and discounted price). Another work [289] proposes a different methodology. The study focuses on a lottery-based mechanism that aims to obtain the exact WTP for one subset of products and then to exploit this information to predict the WTP for another subset of products. In this way, the system can offer a personalized promotion to increase the conversion rate of the latter product subset. The authors report significant results on the potential ability of this system to increase profit over conventional systems. However, the experiments are based on a user study with a low number of users. Finally, another study [5] proposes a dynamic personalized pricing recommender system for information goods (e.g., digital movie rentals). These goods differ from physical goods in that their production and distribution costs are negligible and they can be copied, rented, and resold easily. In this context, traditional markup-based pricing methods (i.e., cost plus margin) are not effective because there is no true underlying unit cost. The proposed system first classifies customers according to their WTP and quality sensitivity (e.g., whether they prefer a premium version of the same product). Then it calculates a personalized price to incentivize purchase.
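As a simple illustration of the WTP-based filtering idea described above (a sketch with our own naming, not the conjoint-analysis procedure of [25; 26]), candidate items priced above the estimated willingness-to-pay are removed before ranking:

```python
def wtp_filter_and_rank(candidates, scores, prices, wtp, k=10):
    """Drop items priced above the customer's estimated willingness-to-pay, then rank the rest.

    candidates: list of item ids
    scores:     dict item id -> predicted interest from any baseline recommender
    prices:     dict item id -> price p_i
    wtp:        scalar willingness-to-pay estimate for this customer
    """
    affordable = [i for i in candidates if prices[i] <= wtp]
    return sorted(affordable, key=lambda i: scores[i], reverse=True)[:k]

# toy usage
print(wtp_filter_and_rank([1, 2, 3],
                          {1: 0.9, 2: 0.8, 3: 0.7},
                          {1: 120.0, 2: 40.0, 3: 25.0},
                          wtp=50.0, k=2))
```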
#### 4.4.2 Bundling Methods
One frequently used promotional strategy [250] to increase sales revenue of certain products is to offer them at a discount if purchased in bundles [119]. In the literature [271] it is proposed, for example, to include in the bundles: (a) products that are complementary to each other in order to incentivize cross-selling; (b) products that are uncorrelated, for example, to clear the stock in the warehouse; (c) the same product in multiple quantities (e.g., 2x1 promotion). Specifically, in RSs research [176], one branch of the literature focuses on recommending bundles to optimize profit by exploiting price modeling techniques. The other branch, in contrast, does not exploit such techniques and
focuses solely on optimizing relevance6. In this review, we focus only on bundling approaches that aim to explicitly optimize business KPIs.
Footnote 6: Relevance-based bundling algorithms [239] can be based, for example, on association rules [83; 152; 273], graph-based approaches [27; 74; 96; 183], GNNs [12; 46; 47], and transformers [23].
Concerning price-modeling bundle recommendation techniques, two related earlier studies [94; 95] focus on the development of a shopbot (i.e., a comparison shopping agent) capable of offering bundles at a discounted price based on an integer linear programming model. The proposed algorithm is validated using data from Amazon.com and Buy.com, reporting significant results in terms of the potential economic savings for price-sensitive customers purchasing bundles. However, the data sample used is very small, and optimization of business KPIs is not explicitly considered. In contrast, two other studies [150; 301] leverage similar integer programming-based approaches to recommend bundles with the goal of optimizing profitability [150] or any business objective [301]. In particular, considering the case where the bundle can be created directly by the customer by selecting the products of his or her preference, the first study [150] proposes a multistage approach that can dynamically determine the price of the added products in real time with the goal of maximizing profits for the organization. In contrast, the second study [301] investigates how to incorporate product compatibility and potential cost savings to generate bundles that, if recommended, could optimize certain business objectives (e.g., profitability, revenue, and others). Both studies report results regarding the potential ability of the proposed systems to increase profitability and conversion rates. In addition, two other approaches [31; 82] have recently been proposed. The first approach [31] is based on a collaborative filtering algorithm that integrates demand estimation and price modeling techniques to make recommendations with the goal of jointly maximizing purchase probability and sales revenue considering the customer WTP. The second approach [82] is based on an algorithm that can recommend bundles with customized discounts to customers, considering also inventory levels. However, in the former case, the bundle does not offer an additional discount over the full price of the individual products. Instead, the bundle is created exclusively so that the total price of the products inside it is aligned with the customer's WTP to meet his or her price preferences. In the latter case, on the other hand, the evaluation is based on a simulation focused on the aviation industry with a large number of assumptions.
#### 4.4.3 Brand-Awareness Methods
Some methods in the literature can be used to promote the organization's products and services, raise brand awareness, and increase profitability in the long run. These methods can be interpreted by referring to the sales funnel [249]. The sales funnel is a theoretical model that describes the customer journey in different stages according to the type of customer interaction with the organization [212]. Depending on the status of the customer in the sales funnel, it might be advisable to design an RS with different purposes.
If the customer has not yet made the first purchase (which is referred to as the prospect state), it might be promising to maximize the conversion rate by closing the first deal as quickly as possible [156]. At this early stage, recommending the most popular products may not be the best strategy. Since many popular products are commonly purchased together, customers would discover them on their own without the need of a recommendation. Instead, it could be more beneficial to present still popular but unrelated products, optimizing coverage. In this way, it may be possible to attract more customers to the platform and increase the probability they make their first purchase.
Once the customer has made the first purchase, the company can exploit mechanisms to optimize profits in the long-run [39; 108]. One option could be to mainly recommend items with high consumer ratings [147]. However, similarly to the previous case, this may not be the best choice either, as many customers might search for and buy such items anyway. Instead, it might be more valuable to stimulate the purchase of products of possible interest that are likely unknown to the customer [39], e.g., products that do not fall in the top-\(k\) but have medium-high ranking positions. This way, the company may get both the revenue from the purchases of products that the customer would discover on their own without the recommendations, and an additional revenue through the purchases that were triggered by the recommendations. With similar objectives, it might also be worthwhile for the company to leverage an RS [108] to launch a marketing campaign with the purpose of promoting new products in the market. Such a system could be designed to select a set of seed consumers for the marketing campaign such that if these seed consumers provide relatively high ratings, the number of other consumers to whom the new product is recommended is maximized.
### Long-Term Value Sustainability Methods
It is very important for organizations to grow sustainably over time [186; 222]. Accordingly, a number of studies in the literature [130; 139; 216] propose recommendation algorithms that consider temporal dynamics to optimize long-term business value. Many of them rely on the _Customer Lifetime Value_ (_CLV_) [37; 38] and other related conceptual models (e.g., _Recency Frequency Monetary - RFM_) from the business literature. CLV represents the expected business value of all future cash flows attributed to a specific customer, discounted to the present time.
Similarly to what is found for bundling methods (in Section 4.4.2), some RSs studies propose to exploit CLV to optimize long-term profit [139; 216] while others exploit it solely to optimize relevance7[236; 242]. In this review, we
focus only on algorithms that aim to optimize long-term business KPIs. Below we give some insights on how these methods work by discussing a few selected articles. In particular, we first discuss in- and post-processing methods based on supervised learning and then we delve into recent algorithms based on reinforcement learning.
#### 4.5.1 Post-Processing and Supervised Learning Methods for Long-Term Business Value Optimization
Some studies [29; 130; 131; 210] propose post-processing algorithms to maximize the long-term business value of recommendations by exploiting heuristic criteria. In particular, Hosanagar [130] proposes an algorithm following this simple but effective intuition: when a customer trusts an RS, the system should bias the recommendations to increase profitability; instead, when the customer trust is below a certain threshold, the system should recommend the most relevant products to restore trust at the expense of profitability. The original study [130] proposes only a theoretical assessment of the profit surplus that can be generated using this algorithm. However, the algorithm's performance is also evaluated in an online study [210] and in a recent post-hoc econometric analysis [29]. These recent studies demonstrate both the effectiveness of the proposed methods in generating higher sales revenue than a content-based filtering algorithm [210] and how trust is positively correlated with higher sales revenue [29].
Other approaches based on supervised machine learning algorithms are also studied to explicitly optimize the long-term business value of recommendations. In particular, in two related studies [138; 139], a recommendation system is proposed to explicitly maximize CLV. The algorithm is designed specifically for subscription-based [138] and transaction-based [139] revenue models. In particular, survival analysis techniques are used to identify frequent purchasing patterns among higher CLV users. Then, recommendations are generated to match those patterns as closely as possible. The algorithms are evaluated using real data from a mobile cartoon provider with a subscription-based revenue model [138] and an online music provider with a transaction-based revenue model [139], both from Japan. However, although results regarding the improvement of the subscription period and the number of items purchased over time are reported, the evaluation is only based on a simulation system of user purchasing behavior.
#### 4.5.2 Reinforcement Learning Recommendation Methods for Long-Term Business Value Optimization
Recent studies propose methodologies based on _Reinforcement Learning_ (_RL_) for optimizing the long-term business value of recommendations [241]. RL is a learning approach that aims to learn an optimal policy (i.e., recommendation strategy) based on the sequential interaction between an agent and the environment through trial and error to maximize a reward. This methodology is used many times in the literature [115; 122; 148; 153; 216; 244; 267; 292; 303] to optimize the customer lifetime value.
A few studies propose algorithms to directly optimize profit [153; 216]. These studies focus on the transaction-based revenue model, where each customer purchase brings a certain profit to the organization. Specifically, in this context, one study [216] considers that a certain profit share can be allocated to each user action (i.e., click, add-to-cart, pay). Hence, the overall profitability can be maximized by optimizing the sum of the profit allocated to each user action, considering the probability that such an action will occur given the recommendations. Other studies [115; 148; 244; 267; 292; 303], in contrast, propose algorithms to optimize user engagement, or more generally some strategic interrelated business indicators [122]. One study [244] is based on the advertising revenue model. In this particular context, advertisers pay the platform a certain monetary amount for each click or conversion generated. Hence, in this case, by optimizing user engagement, profit is directly optimized. Other works [115; 148; 267; 292; 303], although they similarly propose to optimize user engagement, are not based on advertising revenue models. Therefore, in these cases, the relationship with profitability is indirect, as user engagement positively correlates with retention.
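To make the profit-share idea concrete, the following toy sketch (our own; the profit-share weights are illustrative assumptions, not values from [216]) defines a per-action reward and the discounted session return an RL agent would maximize:

```python
# Each user action carries a share of the item's profit; the agent maximizes the
# discounted sum of these rewards over a recommendation session.
PROFIT_SHARE = {"click": 0.05, "add_to_cart": 0.30, "pay": 1.00}  # illustrative weights

def step_reward(action_type, item_profit):
    """Reward for one user action on a recommended item with profit v_i."""
    return PROFIT_SHARE.get(action_type, 0.0) * item_profit

def episode_return(events, gamma=0.99):
    """Discounted return over a session: events is a list of (action_type, item_profit)."""
    return sum((gamma ** t) * step_reward(a, v) for t, (a, v) in enumerate(events))

# toy usage: a session with a click, an add-to-cart, and a purchase of a 4.0-profit item
print(episode_return([("click", 4.0), ("add_to_cart", 4.0), ("pay", 4.0)]))
```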
## 5 Evaluation Methodologies
In this section, we review the evaluation methodologies used in the surveyed papers. First, we give some insights into the different methodologies that are used to evaluate algorithms. Next, we discuss the metrics used in offline evaluation. Then we discuss the results that have been obtained in the real world from ECRSs algorithms by analyzing in detail those studies that report online performance. Finally, we analyze related topics concerning public datasets and the current level of reproducibility.
### Evaluation Approaches
In the field of RSs, several methods are proposed to evaluate the performance of algorithms and systems. Depending on the objective of the study, the evaluation may vary in order to assess specific aspects of the recommendations and the system. We identify five methods that are used in the surveyed literature. Some of these are used for offline evaluation (e.g., static predictions, simulation studies, and econometric analyses) [291], while others are used for online evaluation (e.g., user studies, and A/B tests) [53]. While offline methods aim to give a plausible estimate of the performance the system could achieve under real circumstances if certain assumptions are verified, online ones are instead based on real user interactions. In
Figure 4 we report the distribution of evaluation methods in the literature according to the subdomain of analysis. As can be seen, offline methods are used more frequently than online ones. Moreover, among offline methods, static predictions is the most frequently used method.
Static PredictionsThe most commonly used evaluation method in the RSs literature is to hide some data (e.g., ratings, interactions) from a particular dataset, train a model on the remaining data, and then predict the hidden data [144; 291]. After constructing a dataset that contains all the necessary information, the adopted standard is to measure the performance of the system with respect to some underlying objectives [17; 140] with the help of certain metrics. In terms of metrics, given the underlying purposes of ECRSs, the surveyed literature often measures not only relevance prediction metrics (e.g., precision, MRR, NDCG) [112], but also business value metrics8 (e.g., profit, revenue) [144; 189; 14].
Footnote 8: We discuss the most frequently used offline metrics in Section 5.2.
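As a concrete illustration of combining relevance and business-value metrics in a static-prediction evaluation, a minimal sketch (with our own helper names) computes \(Prec@k\) and the per-user contribution to \(Profit@k\):

```python
def precision_at_k(recommended, relevant, k):
    """Prec@k: fraction of the top-k recommended items that are relevant."""
    hits = sum(1 for i in recommended[:k] if i in relevant)
    return hits / k

def profit_at_k(recommended, relevant, profit, k):
    """Per-user Profit@k contribution: profit v_i summed over relevant items in the top-k."""
    return sum(profit[i] for i in recommended[:k] if i in relevant)

# toy check: 4 recommendations, 2 of them relevant
recs, rel = [3, 7, 1, 9], {7, 9}
profit = {3: 2.0, 7: 5.0, 1: 1.0, 9: 4.0}
print(precision_at_k(recs, rel, k=4), profit_at_k(recs, rel, profit, k=4))
```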
Simulation StudiesWhile static predictions methods [291] are mainly used to obtain an estimate of RSs performance in the short term, other studies propose to use dynamic simulations to assess long-term performance [101]. The methodology first involves building a simulator to mimic user behavior [216; 244; 279]. Next, the simulator is used to train and test RSs algorithms on the simulated behavior [116; 267]. Simulators are often adopted to evaluate the performance of reinforcement learning-based recommendation algorithms [11] (e.g., RecoGym [229], RecSim [137]). Moreover, in the surveyed literature there are also simulators [139; 185; 101] created to evaluate supervised learning algorithms. One of these [101], based on agent-based modeling, is designed to realistically mimic customer behavior considering various factors known in the literature to have a high correlation with purchase probability (e.g., trust).
Econometric AnalysesFor some algorithms in the surveyed literature [68; 130; 151], performance is assessed with the help of econometric analyses [3; 207]. These are quantitative approaches based on statistical or mathematical methods used to estimate the impact of the system on certain variables of interest (e.g., profit [130]), considering some underlying assumptions. For example, one study [130] investigates the impact of recommendations on corporate profit and consumer welfare by modeling the behavior of a system that considers the simplified case in which the company can sell only two products.
User StudiesIn many cases, the impact of the system on certain factors (e.g., user satisfaction) is difficult to model through offline methods. This occurs because in some cases it is not possible to find a good proxy for the target variable, while in other cases it would be necessary to use a large number of assumptions. Especially when the factors are qualitative and the response is subjective (e.g., perceived fairness), the literature adopts user studies as a research methodology [53]. These methods typically involve recruiting a group of users (e.g., through emails or through crowdsourcing platforms like Amazon Mechanical Turk), randomly splitting them into distinct groups, requiring them to perform a particular task (e.g., interacting with an RS designed for the study), observing their (objective) behavior, and asking them about their subjective perceptions. In the surveyed literature these methods are used [24; 210] for example to determine the impact of algorithms on profitability and user trust.
A/B TestsWhen it is necessary to measure the performance of recommender systems in real-world circumstances, A/B tests are often performed [53]. In such tests, two (or more) versions of a system are deployed for a certain period of time and users either interact with one or the other version [144]. Although these tests are often complex to execute and require significant effort, the main advantage is that they are able to directly measure business KPIs (e.g., revenue, profit) [53] and to compare different algorithms in production. These tests are used many times in the surveyed literature9 since algorithms are often designed to optimize such KPIs. For example, A/B tests are used to measure the effects of a profit-aware algorithm deployed on Alibaba's AliOS appstore [288] and the CTR of a reinforcement learning-based algorithm deployed on a large e-commerce platform [216].
Footnote 9: We discuss results of A/B tests in Section 5.3.
### Metrics Used in Offline Evaluations
A variety of metrics are used in the literature in offline evaluations, including both accuracy metrics to assess the
Figure 4: Distribution of evaluation methods in the surveyed literature organized by dimension of analysis.
\begin{table}
\begin{tabular}{l l l l} \hline Refs & Metric & Type & Definition \\ \hline
[112] & \(Prec@k=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{\sum_{j=1}^{k}rel_{u,j}^{\mathcal{Y}}}{k}\) (19) & Relevance & _Precision_ at position \(k\) is the number of relevant items in the top-\(k\) recommendations over the number of recommended ones. \\ \hline
[112] & \(Rec@k=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{\sum_{j=1}^{k}rel_{u,j}^{\mathcal{Y}}}{\sum_{i\in\mathcal{I}}x_{u,i}}\) (20) & Relevance & _Recall_ at position \(k\) is the number of relevant items in the top-\(k\) recommendations over the total number of relevant ones. \\ \hline
[112] & \(HR@k=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\begin{cases}1&\text{if }\sum_{j=1}^{k}rel_{u,j}^{\mathcal{Y}}\geq 1\\ 0&\text{otherwise}\end{cases}\) (21) & Relevance & _Hit-Rate_ at position \(k\) is the fraction of users for which the recommendation list contains at least one relevant item. \\ \hline
[112] & \(MRR@k=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{1}{j_{u}^{\mathcal{Y}}}\) (22) & Relevance & _Mean Reciprocal Rank_ at position \(k\) is the mean of the reciprocal rank of the first relevant item in the recommendation list. In the equation, \(j_{u}^{\mathcal{Y}}\) is the rank (position) of the first item relevant to user \(u\). \\ \hline
[112] & \(NDCG@k=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{\sum_{j=1}^{k}rel_{u,j}^{\mathcal{Y}}/\log_{2}(j+1)}{IDCG_{u}@k}\) (23) & Relevance & _Normalized Discounted Cumulative Gain_ at position \(k\) applies an inverse-log reward to all positions with relevant items among the top-\(k\) recommended ones, normalized by the ideal DCG, \(IDCG_{u}@k\). \\ \hline
[24; 188; 189] & \(Revenue@k=\sum_{u\in\mathcal{U}}\sum_{j=1}^{k}rel_{u,j}^{\mathcal{Y}}\cdot p_{j}\)* (24) & Value & _Revenue_ at position \(k\) is the revenue generated by the relevant items in the recommendation lists. \\ \hline
[62; 101; 141; 201] & \(Profit@k=\sum_{u\in\mathcal{U}}\sum_{j=1}^{k}rel_{u,j}^{\mathcal{Y}}\cdot v_{j}\)* (25) & Value & _Profit_ at position \(k\) is the profit generated by the relevant items in the recommendation lists. \\ \hline
[43] & \(EP@k=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\sum_{j=1}^{k}\hat{x}_{u,j}\cdot v_{j}\)* (26) & Value & _Expected Profit_ at position \(k\) is the statistical profit the recommendations are expected to achieve, considering the predicted user interest \(\hat{x}_{u,j}\). \(EP@k\) is referred to as statistical (compared with \(Profit@k\) in Eq. 25) because the probability that the user accepts the recommendations is considered instead of the actual ground-truth relevance information. \\ \hline
[162] & \(PAH@k=\frac{1}{|\mathcal{U}|}\cdot\frac{Profit@k}{Volume@k}\)** (27) & Value & \(PAH@k\) is the average profit per user from relevant items in the recommendation list, i.e., the overall generated profit divided by the number of items sold (\(Volume@k\)). \\ \hline
[180; 184] & \(P\text{-}NDCG@k=\frac{1}{|\mathcal{U}|}\sum_{u\in\mathcal{U}}\frac{\sum_{j=1}^{k}rel_{u,j}^{\mathcal{Y}}\cdot p_{j}/\log_{2}(j+1)}{P\text{-}IDCG_{u}@k}\)* (28) & Value & _Price-Based Normalized Discounted Cumulative Gain_ at position \(k\) is defined as \(NDCG@k\), where the gain is given by the item's price. In the equation, \(P\text{-}IDCG_{u}@k\) is the _Price-Based Ideal Discounted Cumulative Gain_, obtained by sorting the prices of all items relevant to the user in descending order. \\ \hline \end{tabular}
\end{table}
Table 7: Most frequently used offline evaluation metrics in the surveyed literature. *Note that for the sake of notation we use \(p_{j}\) and \(v_{j}\) as variables to indicate the price and profit of the item recommended at position \(j\), but these variables depend only on the item and not on the position. **The formulas capture the main essence of the metrics.
Given \(rel_{u,j}^{Y}\) as a ground truth relevance10 variable that indicates whether the item recommended at position \(j\) in the ordered ranking \(\mathcal{Y}_{u,k}\) is relevant or not for user \(u\), we report in Table 7 the metrics used for offline evaluations in the literature. In the table we indicate for each metric the reference, the formula, the type, as well as its definition. In the following, we mainly focus on value metrics, since relevance prediction metrics (e.g., Precision, Hit-Rate, NDCG - see Eq. 19, 21, 23), are already widely known [112] and do not require further discussion here.
Footnote 10: The relevance of each item typically corresponds to the value of the ground truth \(x_{u,i}\in\{0,1\}\), i.e., assuming \(x_{u,i}=1\) if the item was actually purchased by the user, and \(x_{u,i}=0\) if not.
The general principle is the same for all value metrics. Similarly to relevance prediction metrics, a list of top-\(k\) recommendations is first generated for each user. The recommendations are then compared to the ground truth, and certain value-related aspects are collected. These value-related aspects are connected to the price and profit (and, more generally, to the utility) of each recommended item. In particular, differently from relevance prediction metrics, value metrics do not simply count the hits but weight each hit by the item's price or profit. We briefly introduce the most frequently used value metrics as follows (a minimal code sketch of some of them is given after the list):
* _Revenue@k_ (Eq. 24) [24; 188; 189] indicates the total revenue from the sale of recommended products actually purchased by users;
* _Profit@k_ (Eq. 25) [62; 101; 141; 201] indicates the total profit from the sale of recommended products actually purchased by users;
* _EP@k_ (Eq. 26) [43] indicates the _statistical_ expected profit from the recommendation. _EP@k_ compared to _Profit@k_ is referred to as statistical because the probability of the user accepting the recommendations is considered rather than the ground truth information;
* _PAH@k_ (Eq. 27) [162] indicates the overall profit generated by the recommendation per user divided by the number of items sold;
* _P-NDCG@k_ (Eq. 28) [180; 184] indicates the total revenue generated on average per user from the recommendation compared to the theoretically achievable maximum revenue. _P-NDCG@k_, like _NDCG@k_ (Eq. 23) [112], gives more importance to the higher-priced items positioned on the top of the ranking11.
Footnote 11: Note that, as in IR [32; 160], value metrics can be rank-agnostic (e.g., _Revenue@k_, _Profit@k_) or rank-aware (e.g. _P-NDCG@k_), depending on whether the position of the recommended items in the ranking is considered for evaluation or not.
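To make these definitions concrete, the following is a minimal Python sketch of a few value metrics from Table 7, computed for a single user on hypothetical data; the log-discount base and the construction of the ideal price ranking are standard choices assumed here, since the surveyed papers do not always specify them.

```python
from math import log2

def revenue_at_k(recs, bought, price, k):
    """Per-user Revenue@k (Eq. 24): price of recommended items actually purchased."""
    return sum(price[i] for i in recs[:k] if i in bought)

def profit_at_k(recs, bought, profit, k):
    """Per-user Profit@k (Eq. 25): profit of recommended items actually purchased."""
    return sum(profit[i] for i in recs[:k] if i in bought)

def p_ndcg_at_k(recs, bought, price, k):
    """Per-user P-NDCG@k (Eq. 28): price-based DCG normalized by the ideal price DCG."""
    dcg = sum(price[i] / log2(j + 2) for j, i in enumerate(recs[:k]) if i in bought)
    ideal = sorted((price[i] for i in bought), reverse=True)
    idcg = sum(p / log2(j + 2) for j, p in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical example: one user, a top-4 recommendation list, two purchases.
recs = ["a", "b", "c", "d"]
bought = {"b", "d"}
price = {"a": 10.0, "b": 25.0, "c": 5.0, "d": 40.0}
print(revenue_at_k(recs, bought, price, k=3), p_ndcg_at_k(recs, bought, price, k=3))
```

Revenue@k and Profit@k in Eqs. 24-25 sum these per-user quantities over all users, while P-NDCG@k averages them.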
However, analyzing the surveyed articles, some open issues can be identified. In particular, we observe that the literature is mostly scattered and application-specific, and there are no well-defined standards for the offline assessment of business value [70; 141]. Often the same metric is referred to by different names (e.g., _Price-Based NDCG_ [184] vs. _G-DCG_ [180]). At other times, researchers report results that are not comparable to each other because application-specific metrics are proposed to investigate certain types of value (e.g., perishability [235], marginal utility per dollar [99]). In fact, under certain circumstances, some metrics cannot be used at all. For example, when the underlying dataset carries only price information and not profit information (e.g., Amazon [203], Tmall [300]), profit-related metrics are not computable without assuming a synthetic profit distribution for the dataset12. Finally, in cases where simulations are used, the calculation of value metrics may rest on assumptions. A common assumption [101; 141] is that the user always buys at least one of the top-\(k\) recommended items. In these cases, since the user may not have actually purchased any of the recommended items, the underlying ground-truth information may be unrealistic.
Footnote 12: We discuss the synthetic profit issue in Section 5.4.
### Real-World A/B Tests and User Studies
Many authors evaluate the performance of ECRS algorithms using A/B tests or user studies. As is known in the literature, offline evaluation results are not necessarily a valid indicator of online performance [143; 144]. This is often due to the fact that different metrics are used for the two types of experimental evaluation [75; 234]. While offline metrics are often used to measure relevance prediction accuracy (e.g., Precision, NDCG), online metrics are used instead to measure business value (e.g., CTR, GMV, Revenue) [92; 159]. Companies are usually much more interested in assessing how algorithms impact real-world business KPIs, which is done using online metrics.
In Table 8 we list the studies in the surveyed literature that measure the performance of the proposed systems through A/B tests or user studies. In Table 9 we then briefly summarize the meaning of each online metric that is considered for the analysis13 (i.e., IPV, CTR, CVR, GMV, Revenue, Profit). We refer readers to a recent survey [144] on this topic for further insights into online metrics.
Footnote 13: Some niche metrics used to measure certain application-specific factors reported in the studies are not considered in this analysis.
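As a rough illustration, the online metrics in Table 9 are typically computed as simple ratios or sums over the interaction logs collected during the test; the numbers below are hypothetical, and the definitions follow common industry usage rather than any single surveyed system.

```python
# Hypothetical aggregates logged for one A/B-test bucket.
impressions, clicks, orders = 120_000, 3_600, 540
order_values = [29.9] * orders          # toy per-order basket values

ctr = clicks / impressions              # click-through rate
cvr = orders / clicks                   # conversion rate (orders per click)
gmv = sum(order_values)                 # gross merchandise volume
print(f"CTR={ctr:.2%}  CVR={cvr:.2%}  GMV={gmv:,.2f}")
```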
Analyzing Table 8 we can make some interesting observations. Some considerations depend on the nature of the particular evaluation methodology (i.e., A/B test vs. user study). For example, considering the recommendations channel and the number of subjects, we note that user studies typically involve few users recruited through e-mail campaigns [210; 301] or Amazon Mechanical Turk [24; 289]. Instead, A/B tests are typically performed on a large scale, exploiting existing systems with large customer
bases [216, 288], some belonging to well-known brands (e.g., Walmart, Taobao, Alibaba, NetEase) [74, 148, 191, 288]. Moreover, from a performance point of view, all the studies, whether based on user studies or A/B tests, show that ECRSs can potentially bring substantial business value to the firm. In fact, increases in online metrics are reported in all studies. In some cases, the authors report significant performance improvements14 (e.g., +48.92% CVR [301], +35% revenue [12], +32% profit [288]).
Footnote 14: To ensure evaluation reliability, many authors test the proposed algorithm in different configuration environments reporting different results for each of them [177]. In these cases, Table 8 shows a range instead of a single value in metrics improvement.
However, there may be some limitations regarding the insights we can get from the studies. For example, most of the A/B tests last a very short time, i.e., less than three weeks15[24, 45, 148, 177, 180, 216, 288, 289, 301]. In some cases, the baselines are proprietary algorithms and their internal mechanisms are unknown [24, 45, 74, 191] (e.g., Walmart Ranker). In other cases, results depend on assumptions. For example, a study [289] based on Amazon Mechanical Turk uses synthetic profit information, as the authors did not have product costs available. Another study [210] uses some proxies for offline purchases in addition to explicit purchase data from the firm's online site to measure revenue. In that specific context, offline purchases cannot be connected to the online identities of experiment participants. In particular, the authors treated items that received high ratings by users after they clicked on the "_see more details_" link as purchases to calculate profit.
Footnote 15: Performing long-term A/B tests on a real platform is complex [144], and significant effort is required both in the planning and analysis phases. The test could cause financial damage to the brand, as users could lose trust in the company due to ineffective recommendations. At other times, it is necessary to re-run the test because of bugs. Moreover, certain events (e.g., Easter, Super Bowl) or global macroeconomic circumstances (e.g., the 2020 COVID-19 crisis, the 2022 Ukrainian war) may impact performance.
### Available Datasets
Analyzing the ECRSs literature, our survey reveals that many studies report results based on proprietary datasets. This is mainly due to the fact that certain types of information (e.g., prices, profits, purchases, demographics) are
Table 8: A/B tests and user studies in the surveyed literature. For each study we report the year, evaluation type, recommendation channel, number of subjects, duration, baseline, and the relative improvements (\(\Delta\%\)) in IPV, CTR, CVR, GMV, revenue, and profit.
of strategic importance to companies, and uncontrolled sharing could create significant economic damage. For example, some information is sensitive to the user, and non-anonymized sharing could have major legal implications due to privacy laws, as well as significant impact on brand reputation. In addition, competitors could make use of economic data related to purchasing and profitability to study weaknesses in the business model and take away market share. However, especially recently, several studies also report results based on public datasets.
In Table 10 we report the most frequently used public datasets in the surveyed literature. Specifically, in addition to statistical information such as the number of users, items, interactions, and the density of the dataset, we also report the type of event/interaction (e.g., click, add-to-cart, purchase, rating), and the presence of relevant features for ECRSs algorithms, i.e., date, user demographics, product category, price, and profit.
Analyzing the reported information, we can make some observations. First, both the datasets' density and size, i.e., the number of interactions, vary greatly. Some of them are quite sparse (e.g., REC-RL [216]), whereas others are dense (e.g., Jester [105]). Some are quite small (e.g., Foodmart [63]), while others are large (e.g., Amazon [203]). In addition, as expected, most of the datasets contain economic information related to actual purchases, as well as prices and possibly the profit of products (e.g., the Cosmetrics [22], Diginetica [61], Ta-Feng [132], and Tmall [300] datasets). Indeed, as discussed earlier, economic information is typically used for both algorithmic and evaluation purposes. However, as can be noted, some datasets do not contain prices (e.g., MovieLens [120], Netflix Prize [35], Book-Crossing [302], Epinions [228], Last.fm [196]), and currently only Foodmart [63] contains profit. We observe that those datasets are the most frequently used in RSs research. In particular, although profit is very important, especially for training profit-aware models, we note various studies [13; 41; 43; 62; 101; 141; 185; 201; 218] assuming some synthetic profit distribution, e.g., normal [101] or random [201]. This assumption makes it possible to work around the profit availability issue. However, as reported in almost all the studies, it also constitutes an important limitation. In fact, under real circumstances, the profit distribution could be very different from the synthetic one used for the experiments, and the results could vary considerably.
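As an illustration of the synthetic-profit workaround mentioned above, the sketch below draws item margins from a normal or a uniform distribution and applies them to known prices; the distribution parameters are our own illustrative assumptions, not values taken from the surveyed papers.

```python
import numpy as np

rng = np.random.default_rng(42)
prices = np.array([9.9, 24.5, 129.0, 3.2])                   # known item prices

# Synthetic margin rates: normally distributed (clipped) or uniformly random.
margin_normal = np.clip(rng.normal(0.3, 0.1, prices.size), 0.05, 0.6)
margin_random = rng.uniform(0.05, 0.6, prices.size)

profit_normal = prices * margin_normal                        # synthetic per-item profit
profit_random = prices * margin_random
```

As noted above, results obtained with such synthetic distributions may differ considerably from those obtained with real margins.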
### Reproducibility Maturity
The impact of reproducibility on the progress of science is undeniable. However, although there has generally been an increase in reproducible papers in AI over the
Table 10: Most frequently used public datasets in the surveyed literature. For each dataset we report the number of users, items, and interactions, the density, the interaction/event type, and the availability of date, demographic, category, price, and profit information.
years [113], many of them are still not documented well enough to reproduce the results of the reported experiments [117]. This problem has been observed several times in the field of RSs [30; 66], with well-known cases involving articles that proposed neural algorithms [85; 226], which highlighted, for example, non-uniform and lax standards in adopting correct experimental evaluation methodologies [240] and questionable choices in the use and fine-tuning of baselines for comparative experiments [84].
In particular, by reviewing the ECRSs literature, we note several limitations concerning the reproducibility of the studies. As reported in Table 11, only a very small subset of 15 articles out of the 133 (11.27%) identified by the present systematic review shares the implementation code16. Notably, as can be seen from the table, we find no article that publicly shares its code prior to 2019. In addition, the level of reproducibility is quite uneven across the different subdomains of ECRSs. In particular, we note the following critical issues: there are many articles published in the _profit-awareness_ subdomain, but only two of them share the code; all the articles published in the field of _promotional_ strategies refer to relevance-based bundling methods (i.e., there is no code shared for brand-awareness and pricing methods); the code of articles concerning _price-sensitivity_ and _long-term value_ methods is published only for the most recent and advanced GNN- and RL-based algorithms. Consequently, it would be beneficial, and would significantly accelerate progress in this field, if researchers paid special attention to increasing the level of reproducibility.
Footnote 16: We did not dive into the code details because even if the code is shared, it was found earlier in the RSs literature [66; 84; 240] that in many cases important information is missing to ensure reproducibility (e.g., pre-processing code).
## 6 Current Challenges and Future Research
In this section we discuss current challenges of ECRSs and possible future research directions.
**Comparing Different Algorithmic Approaches.** A multitude of algorithmic approaches for optimizing business value is proposed in the literature. In this paper, we categorize them at a high level into in-processing and post-processing methods [70], considering five subdimensions of analysis. However, most of the approaches are never compared with each other and may have specificities that make them preferable over others in certain circumstances. For example, no study has yet compared in-processing with post-processing approaches. In addition, different types of in-processing algorithms are found in the literature. In particular, it is proposed, for example, to extend the objective function of MF [50; 51; 97; 230] or to use GNNs [283; 294; 295] to generate price-sensitive recommendations. Moreover, value neighbor selection [43], graph-based [13; 41; 220], or evolutionary [62; 201] profit-aware algorithms are proposed as well. However, some types of methods are applied only to certain dimensions of analysis. For example, although feasible in practice, no profit-aware MF objective function extensions or GNNs surfaced through our study. Similarly, no neighbor-selection or evolutionary price-sensitive algorithm has been found so far. Therefore, it might be useful in the future both to compare in-processing and post-processing approaches and to implement theoretically feasible algorithms not yet found in the literature, comparing them with existing ones.
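To illustrate the kind of in-processing extension discussed above, the sketch below shows a matrix factorization objective whose reconstruction loss is re-weighted by item profit. It is our own toy illustration of the general idea, not an algorithm from a specific surveyed paper; the weighting scheme and all hyperparameters are assumptions.

```python
import torch

n_users, n_items, dim = 100, 50, 16
P = torch.randn(n_users, dim, requires_grad=True)      # user latent factors
Q = torch.randn(n_items, dim, requires_grad=True)      # item latent factors
profit = torch.rand(n_items)                           # per-item profit (assumed known)
X = (torch.rand(n_users, n_items) > 0.95).float()      # toy purchase matrix

optimizer = torch.optim.Adam([P, Q], lr=0.01)
for _ in range(100):
    optimizer.zero_grad()
    scores = P @ Q.T
    weights = 1.0 + profit                             # give more weight to profitable items
    loss = ((X - scores) ** 2 * weights).mean() \
           + 1e-4 * (P.norm() ** 2 + Q.norm() ** 2)    # profit-weighted MSE + L2
    loss.backward()
    optimizer.step()
```

Comparable loss weighting could in principle be applied to GNN-based objectives as well, which, as noted above, has not yet surfaced in the surveyed literature.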
**Optimizing Business Value Trade-Offs.** Business value optimization is complex, and systems must consider multiple trade-offs [70; 141] in the optimization process. For example, for real-world businesses based on an advertising revenue model (e.g., YouTube, Alibaba's AliOS), it is very important to find the right balance between the ad revenue generated by sponsored items and the actual interests of the user [189; 288]. In particular, special care must be taken not to compromise user trust [130; 210]. In fact, it has been shown through simulations [101], user studies [205], and subsequent A/B tests [210] that trust is positively correlated with the propensity to purchase. A system that is too biased toward higher-value items and provides irrelevant recommendations to the user [70; 141]
\begin{table}
\begin{tabular}{p{42.7pt} p{34.1pt} p{34.1pt} p{130.1pt}} \hline \hline Ref & Year & Dimension & Link \\ \hline
[47] & 2023 & Promotional & [https://github.com/cjx0525/BCCN](https://github.com/cjx0525/BCCN) \\
[283] & 2022b & Price-Sensitivity & [https://github.com/Zhang-xiaokun/CoHNN](https://github.com/Zhang-xiaokun/CoHNN) \\
[265] & 2022 & Price-Sensitivity & [https://github.com/PCNet-Code](https://github.com/PCNet-Code) \\
[101] & 2022 & Profit-Awareness & [https://github.com/nadaa/rec-strategies-abn](https://github.com/nadaa/rec-strategies-abn) \\
[123] & 2022 & Promotional & [https://github.com/tzoof/BRUCE](https://github.com/tzoof/BRUCE) \\
[12] & 2022 & Promotional & [https://github.com/muhanzhang/SSAL](https://github.com/muhanzhang/SSAL) \\ \hline
[278] & 2021 & Long-Term Value Sustainability & [https://github.com/google-research/google-research/tree/master/recs_ecosystem_creator_r1](https://github.com/google-research/google-research/tree/master/recs_ecosystem_creator_r1) \\
[295] & 2020 & Price-Sensitivity & [https://github.com/DavyMorgan/ICDE20-PUP](https://github.com/DavyMorgan/ICDE20-PUP) \\
[270] & 2020 & Economic & [https://github.com/zhichaoxx-shwe/E-commerce-Rec-with-WEU](https://github.com/zhichaoxx-shwe/E-commerce-Rec-with-WEU) \\
[98] & 2020 & Utility & [https://github.com/TobyGE/Risk-Aware-Recommendation-Model](https://github.com/TobyGE/Risk-Aware-Recommendation-Model) \\
[67] & 2020 & Utility & [https://github.com/xydasiytu/rank](https://github.com/xydasiytu/rank) \\
[46] & 2020 & Promotional & [https://github.com/cjx0525/BCCN](https://github.com/cjx0525/BCCN) \\
[216] & 2019 & Long-Term Value Sustainability & [https://github.com/rec-agent/rec-rl](https://github.com/rec-agent/rec-rl) \\
[180] & 2019 & Profit-Awareness & [https://github.com/weberrr/PE-LTR](https://github.com/weberrr/PE-LTR) \\ \hline
[99] & 2019 & Economic & [https://github.com/TobyGE/LTR](https://github.com/TobyGE/LTR) \\ \hline \hline \end{tabular}
\end{table}
Table 11: Studies in the surveyed literature that provide the code.
could risk harming the organization's reputation and driving away customers. To address this issue, various studies [24; 162; 185; 189; 256; 288] propose algorithms that aim to balance the interests of multiple stakeholders [1; 2], particularly considering the profitability/relevance trade-off [141] and optimizing short- or long-term value [131]. Furthermore, as various studies point out, algorithms should also account for explainability [246; 285], fairness [215; 219; 276; 277], and diversity [170; 210], since these are directly related to trust [74]. However, the current literature has not thoroughly investigated the impact of many of these factors on business value. Hence, providing efficient algorithms that simultaneously optimize multiple business value trade-offs (e.g., profit, fairness, and trust) could be a valuable research direction for the future.
**Comprehensive Purpose-Oriented Offline and Online Evaluation.** Evaluating ECRSs often requires methods that are different from those used for traditional RSs [53; 291]. As a result, there are still many open challenges in evaluating ECRSs in a comprehensive, purpose-oriented way [17; 140] (i.e., one that considers the purposes for which the system is designed). Several of these challenges follow from the analysis presented in this paper. For example, in offline evaluation, it is necessary to use business value metrics besides the widely adopted relevance prediction metrics [112]. Studies often exploit a variety of metrics [43; 116; 141; 162; 184], albeit with similar objectives, and the reported results are not comparable with each other. In addition, offline evaluation methodologies are not standardized and are often designed ad hoc according to specific needs [130]. Moreover, with a few exceptions [99; 101; 180; 216; 270; 278; 283], most studies are difficult to reproduce and are often based on proprietary datasets or public datasets with synthetic data [62; 201]. In fact, most datasets [22; 35; 120; 203; 228; 302] do not contain information such as profitability [63], which is nevertheless needed for model training. Regarding A/B tests, on the other hand, many of them last for a short time [177; 180; 216] and involve a small set of users [24; 210; 289; 301] to avoid potential economic risks [144; 210] for the organization hosting the test. Hence, there could be several future research directions in the field of evaluation. For example, it is necessary to develop better offline value metrics that are indicative of online performance in a given (prototype) scenario. In addition, large-scale A/B tests (i.e., involving many users) and reproducibility studies are also required.
**Design of Holistic Algorithmic Methods.** In this work, by decomposing the literature on ECRSs into five different dimensions of analysis, we explore various algorithmic approaches for optimizing business value. However, most of the existing methods [51; 52; 99; 216; 289] focus exclusively on one of the five perspectives. There are a few exceptions [162; 191] involving more than one dimension of analysis that study, for example [162], how to combine price sensitivity with profit awareness to generate more profit while keeping relevance high. A very small subset of studies [72; 82; 235], on the other hand, provides broader reasoning by also discussing inventory management techniques that might be useful for analogous purposes. Currently, the literature lacks holistic methods capable of leveraging multiple approaches simultaneously [70; 141], complementing each other to optimize different nuances of business value [144] while also considering the interrelationships [122] between them. In addition, it is also necessary to consider the relationship of sales and marketing processes with operational [72; 82; 235] and financial processes so as to propose methods for improving the entire business ecosystem, e.g., reducing raw material costs, minimizing logistics delays, or optimizing cash flows.
## 7 Conclusion
In this paper, we review the existing literature on economic RSs. Unlike traditional RSs, economic ones aim to directly optimize profitability by exploiting purchase information (e.g., price and profit) and related concepts from economics and marketing. This topic is highly important because organizations aim to optimize (long-term) profit. Accordingly, economic RSs are well-suited for use in commercial applications such as e-commerce, media streaming sites, and advertising platforms, as they offer various benefits for organizations to increase their business KPIs. In this survey, we identify a number of relevant works addressing a multitude of related issues on economic RSs. In particular, although the literature is highly scattered, five different approaches that jointly consider the interests of customers and organizations are identified in this paper (e.g., price sensitivity, profit awareness). This review shall help academic scholars and industry partners to navigate the existing literature and understand the state-of-the-art. We hope this work will serve as a valuable starting point to foster future research and shift academic efforts towards more impactful RSs research that matters [140; 142].
## 8 Acknowledgments
This work was partially funded by estilos srl.
|
2303.02328
|
Decompose, Adjust, Compose: Effective Normalization by Playing with
Frequency for Domain Generalization
|
Domain generalization (DG) is a principal task to evaluate the robustness of
computer vision models. Many previous studies have used normalization for DG.
In normalization, statistics and normalized features are regarded as style and
content, respectively. However, it has a content variation problem when
removing style because the boundary between content and style is unclear. This
study addresses this problem from the frequency domain perspective, where
amplitude and phase are considered as style and content, respectively. First,
we verify the quantitative phase variation of normalization through the
mathematical derivation of the Fourier transform formula. Then, based on this,
we propose a novel normalization method, PCNorm, which eliminates only the style
while preserving the content through spectral decomposition. Furthermore, we propose
advanced PCNorm variants, CCNorm and SCNorm, which adjust the degrees of
variations in content and style, respectively. Thus, they can learn
domain-agnostic representations for DG. With the normalization methods, we
propose ResNet-variant models, DAC-P and DAC-SC, which are robust to the domain
gap. The proposed models outperform other recent DG methods. The DAC-SC
achieves an average state-of-the-art performance of 65.6% on five datasets:
PACS, VLCS, Office-Home, DomainNet, and TerraIncognita.
|
Sangrok Lee, Jongseong Bae, Ha Young Kim
|
2023-03-04T05:23:11Z
|
http://arxiv.org/abs/2303.02328v3
|
Decompose, Adjust, Compose: Effective Normalization by Playing with Frequency for Domain Generalization
###### Abstract
Domain generalization (DG) is a principal task to evaluate the robustness of computer vision models. Many previous studies have used normalization for DG. In normalization, statistics and normalized features are regarded as style and content, respectively. However, it has a content variation problem when removing style because the boundary between content and style is unclear. This study addresses this problem from the frequency domain perspective, where amplitude and phase are considered as style and content, respectively. First, we verify the quantitative phase variation of normalization through the mathematical derivation of the Fourier transform formula. Then, based on this, we propose a novel normalization method, \(PCNorm\), which eliminates only the style while preserving the content through spectral decomposition. Furthermore, we propose advanced \(PCNorm\) variants, \(CCNorm\) and \(SCNorm\), which adjust the degrees of variations in content and style, respectively. Thus, they can learn domain-agnostic representations for DG. With the normalization methods, we propose ResNet-variant models, DAC-P and DAC-SC, which are robust to the domain gap. The proposed models outperform other recent DG methods. The DAC-SC achieves an average state-of-the-art performance of 65.6% on five datasets: PACS, VLCS, Office-Home, DomainNet, and TerraIncognita.
## 1 Introduction
Deep learning has performed remarkably well in various computer vision tasks. However, the performance decreases when distribution-shifted test data are given [37]. Because training and testing datasets are assumed to be identically and independently distributed, common vision models are not as robust as the human vision system, which is not confused by moderate changes in image style [13]. To address this problem, domain generalization (DG) aims to learn models that are robust to the gap between the source domain and an unseen target domain [46]. Moreover, DG is challenging because models should learn domain-irrelevant representations in an unsupervised manner.
The style-based approach is widely studied for DG, which defines the domain gap as the difference in style [53, 54, 18, 29, 48]. Typically, normalization methods, such as batch normalization (BN) [17], layer normalization (LN) [17], and instance normalization (IN) [42], which are well
Figure 1: Concepts of (a) the existing normalization and (b) the proposed methods. Our methods prevent or adjust the content change caused by existing normalization using spectral decomposition. The solid line marks the feedforward process and the dashed line conceptually represents the content and style of the feature. Red-colored star and doughnut in (b) indicate the content and style adjusting terms, respectively.
known in style transfer, are used in this approach. Normalization statistics contain style information, and normalization can successfully extract the style from a feature. However, the content is also changed when the style is eliminated [14, 18, 32].
Another method for style-based DG [6, 45, 47, 48] is the frequency domain-based method. Input images are decomposed into amplitude and phase using the Fourier transform (FT) [4]. The amplitude and phase are regarded as the style and content of the input image, respectively [6, 31, 35, 48, 50]. Each component is manipulated independently to generate the style-transformed image. In this context, the method has the advantage of separating style from content [53]. Nevertheless, most previous studies have applied it only to input-level data augmentation [6, 48, 50] for DG.
Thus, normalization could be complemented by the frequency domain-based method if the latter is also applicable at the feature level. To identify the feasibility of this, we conduct a style transfer experiment. We replace IN in AdaIN [14], a milestone work that uses normalization in style transfer, with spectral decomposition. The qualitative results in Fig. 2 indicate that the frequency domain-based method can work as a feature-level style-content separator instead of normalization.
Motivated by this, we aim to overcome the content change problem in normalization by combining normalization with spectral decomposition. The overall concept of our proposed method is visualized in Fig. 1. To this end, we investigate the effect of the existing normalization in DG from the standpoint of the frequency domain. We verify how normalization transforms the content of a feature by mathematically deriving the FT formula. To the best of our knowledge, this is the first work to present such an analysis.
Then, based upon the analysis, we introduce a novel normalization method, phase-consistent normalization (\(PCNorm\)), which preserves the content of a pre-normalized feature. The \(PCNorm\) synthesizes a content-invariant normalized feature by composing the phase of pre-normalized feature and the amplitude of post-normalized feature. The experimental results reveal the effectiveness of \(PCNorm\) in DG compared to existing normalization.
Along with the success of \(PCNorm\), we take a step further and propose two advanced \(PCNorm\) variants: content-controlling normalization (\(CCNorm\)) and style-controlling normalization (\(SCNorm\)). The main idea of both methods is not to preserve the content or style but to adjust the change in it. \(CCNorm\) and \(SCNorm\) regulate the changes in content and style, respectively, so they can synthesize more robust representations of the domain gap.
With the proposed normalization methods, we propose ResNet [12] variant models, DAC-P and DAC-SC. DAC-P is the initial model with \(PCNorm\), and DAC-SC is the primary model using \(CCNorm\) and \(SCNorm\). In DAC-P, the existing BN in the downsample layer is replaced with \(PCNorm\). In contrast, DAC-SC applies \(CCNorm\) instead of \(PCNorm\), and \(SCNorm\) is inserted at the end of each stage. We evaluate DAC-P and DAC-SC on five DG benchmarks: PACS, VLCS, Office-Home, DomainNet and TerraIncognita, and DAC-P outperforms other recent DG methods with average performance of 65.1\(\%\). Furthermore, the primary model, DAC-SC, achieves state-of-the-art (SOTA) performance of 65.6\(\%\) on average, and displays the highest performance at 87.5\(\%\), 70.3\(\%\) and 44.9\(\%\) on the PACS, Office-Home, and DomainNet benchmarks.
The contributions of this paper are as follows:
* For the first time, we analyze the quantitative shift in the phase caused by normalization using mathematical derivation.
* We introduce a new normalization, \(PCNorm\), which can remove style only through spectral decomposition.
* We propose the advanced \(PCNorm\) variants, \(CCNorm\) and \(SCNorm\), which can learn domain-agnostic features for DG by adjusting the degrees of the changes in content and style, respectively.
* We propose ResNet-variant models, DAC-P and DAC-SC, which apply our proposed normalization methods. We experimentally show that our methods are effective for DG and achieve SOTA average performance on five benchmark datasets.
## 2 Related Work
**Domain-invariant learning for DG.** The main purpose of DG methods is to learn domain-invariant features.
Figure 2: Examples of style transfer with spectral decomposition. Only the amplitude of the target images are transferred instead of their normalization statistics in AdaIN.
There are numerous approaches for DG. Adversarial learning methods [8, 9, 24, 25, 26, 52] prevent models from fitting domain-specific representations, allowing the extraction of domain-invariant representations only. Regularization methods introduce various regularization strategies, such as the reformulation of loss functions [21, 40], gradient-based dropout [15], and contrastive learning [19] for DG. Optimization methods aim to reduce the distributional variance and domain-specific differences using kernel methods [3, 27] and penalty on variance [20]. Meta-learning methods [23, 51] simulate diverse domain changes for various splits of training and testing data from source datasets.
**Style-based learning for DG.** Style-based learning methods define the domain gap as the style difference between domains and try to extract style-invariant features. Two widely used families of methods enable handling the style and content of images: normalization and frequency domain-based methods. Normalization [29, 39, 54] has been widely used to filter the style of features [14, 28, 16]. Various works address DG by applying normalization methods such as BN [39], IN [29, 54], or both [8, 33]. Jin [18] points out that normalization has the downside of causing a loss of content.
In frequency-based methods, the style and content of the image are represented by the amplitude and phase in the frequency domain [30, 31, 36, 11]. Through the FT, many works manipulate them to make models robust to distribution shift. FDA [50] proposes spectral transfer, which is similar to style transfer, in which the low-frequency component of the source amplitude is replaced with that of the target amplitude. In [48], the amplitude swap (AS) and amplitude mix (AM) strategies are introduced for data augmentation: the former is exactly the same as spectral transfer [50], and the latter mixes the amplitudes of the source and target domains. Similarly, [47] implements data augmentation by applying multiplicative and additive Gaussian noise to both the amplitude and phase of the source domain. Most frequency-based works have focused on input-level data augmentation. Different from those, we propose novel frequency domain-based normalization methods that are applicable at the feature level.
## 3 Analysis
Prior to analysis, we clarify the terms of normalization. We consider that normalization includes just BN, LN, and IN because these are all the existing normalization methods for DG, as far as we know. As mentioned in Sec. 1, normalization has a content change problem. However, there is still no explicit verification of the problem. Motivated by this, we examine it through a mathematical analysis from the frequency domain perspective. Specifically, we derive the quantitative change in the phase caused by normalization by expanding the FT formula.
```
# decompose: frequency feature to amplitude and phase
# compose: amplitude and phase to frequency feature
# weight: learnable parameters, T_c, T_s: temperatures

def pcnorm(f):                      # f: spatial feature
    f_norm = batchnorm(f)           # batchnorm: BN
    F = FT(f)                       # FT: Fourier transform
    F_norm = FT(f_norm)
    a, p = decompose(F)
    a_norm, p_norm = decompose(F_norm)
    f = IFT(compose(a_norm, p))     # IFT: inverse FT
    return f

def ccnorm(f):
    weight_c = softmax(weight / T_c, dim=0)
    # mean: batch mean (train), cumulative mean (test)
    f_c = f - mean * weight_c[0]
    f_norm = batchnorm(f)
    F_c = FT(f_c)
    F_norm = FT(f_norm)
    a_c, p_c = decompose(F_c)
    a_norm, p_norm = decompose(F_norm)
    f = IFT(compose(a_norm, p_c))
    return f
```
**Algorithm 1** Pseudocode of the Proposed Normalization Methods in PyTorch-like Style.
### Spectral Decomposition
We first explain the spectral decomposition [35, 5]. Generally, the term spectral decomposition has various meanings. In this work, it refers to decomposing a feature into amplitude and phase using the discrete FT (DFT) [41].
The DFT transforms the input spatial feature \(f\in\mathbb{R}^{h\times w}\) into the corresponding frequency feature \(\mathcal{F}\in\mathbb{C}^{h\times w}\) where \(h\) and \(w\) are the height and the width of \(f\). \(f(x,y)\) is an element of \(f\) at image pixel \((x,y)\), and \(\mathcal{F}(u,v)\) represents the Fourier coefficient at frequency component \((u,v)\). We omit the round bracket term \((\cdot,\cdot)\) when we denote a feature composed of that element. The DFT of \(f\) is defined as follows:
\[\mathcal{F}(u,v)=\frac{1}{wh}\sum_{x=0}^{w-1}\sum_{y=0}^{h-1}f(x,y)\,e^{i2\pi(\frac{ux}{w}+\frac{vy}{h})} \tag{1}\] \[=\mathcal{F}_{real}(u,v)+i\,\mathcal{F}_{img}(u,v),\]
where \(i\) is the imaginary unit. In addition, \(\mathcal{F}_{real}(u,v)\) and \(\mathcal{F}_{img}(u,v)\) are the real and imaginary parts of \(\mathcal{F}(u,v)\), respectively, as follows:
\[\mathcal{F}_{real}(u,v)=\frac{1}{wh}\sum_{x=0}^{w-1}\sum_{y=0}^{ h-1}f(x,y)\cos 2\pi(\frac{ux}{w}+\frac{vy}{h}), \tag{2}\] \[\mathcal{F}_{img}(u,v)=\frac{1}{wh}\sum_{x=0}^{w-1}\sum_{y=0}^{ h-1}f(x,y)\sin 2\pi(\frac{ux}{w}+\frac{vy}{h}).\]
The process in which the spatial feature \(f\) is decomposed into the amplitude \(\alpha\) and phase \(\rho\) is called spectral decomposition, where \(\alpha\) and \(\rho\) are calculated as follows:
\[\begin{split}\alpha&=\sqrt{\mathcal{F}_{real}^{2}+ \mathcal{F}_{img}^{2}},\\ \rho&=arctan\frac{\mathcal{F}_{img}}{\mathcal{F}_{ real}}.\end{split} \tag{3}\]
In addition, \(\mathcal{F}\) can be reassembled from \(\alpha\) and \(\rho\):
\[\mathcal{F}=\alpha\,\cos(\rho)+i\,\alpha\,\sin(\rho). \tag{4}\]
We define a function of the disassembling frequency feature in the amplitude and phase as \(decompose(\cdot)\) and the opposite process as \(compose(\cdot)\):
\[\begin{split}\alpha,\rho=decompose(\mathcal{F}),\\ \mathcal{F}=compose(\alpha,\rho).\end{split} \tag{5}\]
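A minimal NumPy sketch of the decompose/compose operations in Eqs. 3-5 is given below for illustration (note that np.fft uses a different sign and scaling convention than Eq. 1, which does not affect the amplitude-phase round trip):

```python
import numpy as np

def decompose(F):
    """Split a complex frequency feature into amplitude and phase (Eq. 3)."""
    return np.abs(F), np.angle(F)

def compose(amplitude, phase):
    """Reassemble the frequency feature from amplitude and phase (Eq. 4)."""
    return amplitude * np.cos(phase) + 1j * amplitude * np.sin(phase)

f = np.random.default_rng(0).normal(size=(32, 32))     # spatial feature
F = np.fft.fft2(f)                                     # DFT of f
alpha, rho = decompose(F)
f_rec = np.fft.ifft2(compose(alpha, rho)).real         # round trip back to the spatial domain
print(np.allclose(f, f_rec))                           # True
```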
### Content Variation by Normalization
In this section, we mathematically verify how normalization changes the content of the feature in the frequency domain. We develop the FT formula of normalized feature \(f^{norm}\in\mathbb{R}^{h\times w}\) and identify the relationship with that of the original feature \(f\). Moreover, \(f^{norm}\) is as follows:
\[f^{norm}=\frac{f-\mu}{\sigma}, \tag{6}\]
where \(\mu\) and \(\sigma\) denote the statistical mean and standard deviation of \(f\), respectively. The statistics are calculated differently depending on the normalization method. In this study, however, \(\mu\) and \(\sigma\) are treated as constants because they are shared across all spatial positions within a channel.
Like Eq. 1, the DFT of \(f^{norm}\) is as follows:
\[\mathcal{F}^{norm}(u,v)=\mathcal{F}^{norm}_{real}(u,v)+i\,\mathcal{F}^{norm}_ {img}(u,v). \tag{7}\]
In Eq. 2, by the linearity property of FT, \(\mathcal{F}^{norm}_{real}(u,v)\) and \(\mathcal{F}^{norm}_{img}(u,v)\) are represented as follows:
\[\begin{split}\mathcal{F}^{norm}_{real}(u,v)&=\frac{1}{wh}\sum_{x=0}^{w-1}\sum_{y=0}^{h-1}\frac{f(x,y)-\mu}{\sigma}\cos 2\pi(\frac{ux}{w}+\frac{vy}{h}),\\ \mathcal{F}^{norm}_{img}(u,v)&=\frac{1}{wh}\sum_{x=0}^{w-1}\sum_{y=0}^{h-1}\frac{f(x,y)-\mu}{\sigma}\sin 2\pi(\frac{ux}{w}+\frac{vy}{h}).\end{split} \tag{8}\]
Then we derive a relationship between \(\mathcal{F}\) and \(\mathcal{F}^{norm}\) by presenting Eq. 8 in terms of Eq. 2:
\[\begin{split}\mathcal{F}^{norm}_{real}&=\frac{ \mathcal{F}_{real}-\mathcal{F}^{\mu}_{real}}{\sigma},\\ \mathcal{F}^{norm}_{img}&=\frac{\mathcal{F}_{img }-\mathcal{F}^{\mu}_{img}}{\sigma},\end{split} \tag{9}\]
where \(\mathcal{F}^{\mu}_{real}\) and \(\mathcal{F}^{\mu}_{img}\) are real and imaginary parts of \(\mathcal{F}^{\mu}\in\mathbb{C}^{h\times w}\). \(\mathcal{F}^{\mu}\) is the frequency feature of \(f^{\mu}\in\mathbb{R}^{h\times w}\), which is a feature whose elements are all \(\mu\).
In the same way as Eq. 3, the amplitude and phase of \(f^{norm}\), \(\alpha^{norm}\) and \(\rho^{norm}\), respectively, can be computed as follows:
\[\begin{split}\alpha^{norm}&=\frac{\sqrt{( \mathcal{F}_{real}-\mathcal{F}^{\mu}_{real})^{2}+(\mathcal{F}_{img}-\mathcal{F }^{\mu}_{img})^{2}}}{\sigma},\\ \rho^{norm}&=arctan\frac{\mathcal{F}_{img}-\mathcal{F }^{\mu}_{img}}{\mathcal{F}_{real}-\mathcal{F}^{\mu}_{real}}.\end{split} \tag{10}\]
By comparing \(\rho^{norm}\) with \(\rho\), we verify the numerical variation of the content information caused by normalization. We determine that \(\rho^{norm}\), the phase of the spatial feature \((f-\mu)/\sigma\), is the same as the phase of \((f-\mu)\) because it is not affected by \(\sigma\) (Eq. 10). That is, the difference between \(\rho^{norm}\) and \(\rho\) in the frequency domain is caused solely by the mean shift of normalization in the spatial domain. Hence, we consider \(\mu\) to be the content variation factor. As observed, the degree of content variation becomes greater when \(\mu\) is larger.
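This derivation can also be checked numerically; the sketch below (our own illustration, with arbitrary constants for \(\mu\) and \(\sigma\)) verifies Eq. 9 and the observation that \(\sigma\) leaves the phase unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(loc=2.0, scale=3.0, size=(16, 16))   # spatial feature
mu, sigma = 1.5, 2.0                                # channel statistics, treated as constants

F = np.fft.fft2(f)
F_mu = np.fft.fft2(np.full_like(f, mu))             # spectrum of the constant feature f^mu
F_norm = np.fft.fft2((f - mu) / sigma)

print(np.allclose(F_norm, (F - F_mu) / sigma))      # True: Eq. 9
print(np.allclose(np.angle(F_norm),
                  np.angle(np.fft.fft2(f - mu))))   # True: sigma does not change the phase
```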
## 4 Proposed Method
In this section, based upon the analysis in Sec. 3.2, we explain the novel normalization methods, \(PCNorm\), \(CCNorm\), and \(SCNorm\), whose PyTorch-like pseudocode is described in Algorithm 1. Next, we describe the ResNet [12]-variant models: DAC-P and DAC-SC.
### Phase Consistent Normalization (PCNorm)
In Sec. 3.2, we verify that the difference between \(\rho\) and \(\rho^{norm}\) is caused by the mean shift by \(\mu\) in normalization.
Figure 3: Illustrations of the proposed normalization methods. Included notations are the same as in Sec. 3 and 4. Red denotes adjusting operation in Sec. 4.2 and 4.3.
Thus, we infer that a simple way to prevent the content from changing is to avoid the mean shift in normalization. We can circumvent content variation by using the phase of the pre-normalized feature, \(\rho\), instead of \(\rho^{norm}\). In this context, we propose \(PCNorm\), depicted in Fig. 3 (a). \(PCNorm\) is a normalization method that maintains the original content. \(PCNorm\) decomposes the pre- and post-normalized features into their amplitude and phase. Then, it combines \(\rho\) with \(\alpha^{norm}\). Finally, the composed frequency feature is transformed back to a spatial feature through the inverse FT (IFT).
The \(PCNorm\) is defined as follows:
\[PCNorm(f)=IFT(compose(\alpha^{norm},\rho)), \tag{11}\]
where \(IFT(\cdot)\) denotes IFT.
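For concreteness, a minimal PyTorch sketch of \(PCNorm\) as a module is shown below. This is our own illustration based on Eq. 11, not the authors' released code; it assumes a 4-D feature map, BN without affine parameters as the base normalization, and torch.fft for the spectral decomposition.

```python
import torch
import torch.nn as nn

class PCNorm(nn.Module):
    """Phase-consistent normalization: amplitude from the normalized feature,
    phase from the pre-normalized feature (Eq. 11)."""
    def __init__(self, num_channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=False)   # base normalization

    def forward(self, f):                                      # f: (B, C, H, W)
        f_norm = self.bn(f)
        F = torch.fft.fft2(f)                                  # spectra of pre-/post-normalized features
        F_norm = torch.fft.fft2(f_norm)
        amp_norm = F_norm.abs()                                # alpha^norm
        phase = F.angle()                                      # rho (content is preserved)
        return torch.fft.ifft2(torch.polar(amp_norm, phase)).real
```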
### Content Controlling Normalization (CCNorm)
\(PCNorm\) prevents the content from changing. This raises a fundamental question: is content variation always harmful in DG? We believe the answer can be clarified by letting the model itself learn to reduce the content change. If the change is harmful to DG, the model will gradually decrease it during training.
As we explain in Sec. 3.2, the content change occurs due to \(\mu\), the mean shift of normalization in the spatial domain. Thus, we adjust the degree of content change by introducing a learnable parameter \(\lambda^{c}\in\mathbb{R}^{2}\). The content-adjusting terms, (\(\lambda^{c}_{norm}\), \(\lambda^{c}_{org})=softmax\) (\(\lambda^{c}/T_{c}\)), where \(T_{c}\) denotes the temperature value, determine the proportions of the normalized and original content, respectively. Then, the content-adjusted feature \(f^{c}\) is defined as follows:
\[f^{c}=f-\mu\,\lambda^{c}_{norm}. \tag{12}\]
In the equation, \(\lambda^{c}_{org}\) is omitted because it is a dummy variable for the performance, and we only consider the normalized content. If \(\lambda^{c}_{norm}\) is 0, the phase of \(f^{c}\) (\(\rho^{c}\)) is the same as \(\rho\), and if \(\lambda^{c}_{norm}\) is 1, \(\rho^{c}\) is the same as \(\rho^{norm}\). Then, we propose the first advanced \(PCNorm\), \(CCNorm\), which employs \(\rho^{c}\) instead of \(\rho\) in \(PCNorm\). The \(CCNorm\) is defined as follows:
\[CCNorm(f)=IFT(compose(\alpha^{norm},\rho^{c})), \tag{13}\]
which is illustrated in Fig. 3 (b). The main idea of \(CCNorm\) is to mitigate the content variation of normalization, not to entirely offset it. Interestingly, \(CCNorm\) performs better than \(PCNorm\) in many experiments. Sec. 5 discusses the effects of \(CCNorm\) and insights from it.
### Style Controlling Normalization (SCNorm)
It is common to remove style information through IN in DG [18, 32]. Similar to the motivation of \(CCNorm\), an essential question arises: is completely eliminating style ideal for DG? Thus, we examine the effect of adjusting the degree of style elimination.
Therefore, we propose the other advanced variant of \(PCNorm\), \(SCNorm\), which regulates the degree of style elimination; it is illustrated in Fig. 3 (c). \(SCNorm\) mixes \(\alpha\) with \(\alpha^{norm}\) by applying learnable parameters. In \(SCNorm\), the content is preserved in the same way as in \(PCNorm\). Analogous to \(CCNorm\), the style-adjusting terms are \(\lambda^{s}_{norm}\) and \(\lambda^{s}_{org}\), which are the outputs of the learnable parameter \(\lambda^{s}\in\mathbb{R}^{2}\), obtained using the softmax. That is, (\(\lambda^{s}_{norm}\), \(\lambda^{s}_{org})=softmax\) (\(\lambda^{s}/T_{s}\)), where \(T_{s}\) represents the temperature value. The \(SCNorm\) is formulated as follows:
\[SCNorm(f)=IFT(compose(\lambda^{s}_{norm}\,\alpha^{norm}+\lambda^{s}_{org}\, \alpha,\rho)). \tag{14}\]
Both adjusting terms, \(\lambda^{s}_{norm}\) and \(\lambda^{s}_{org}\), are learned to make the model independently determine the ratios of pre- and post-normalized style, respectively. The IN is used by default to obtain \(\alpha^{norm}\). Similar to \(\lambda^{c}\) in \(CCNorm\), \(SCNorm\) becomes an identity function when \(\lambda^{s}_{norm}\) is 0, and if \(\lambda^{s}_{norm}\) is 1, \(SCNorm\) completely removes the style. Sec. 5 discusses its effectiveness.
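Analogously, a minimal sketch of \(SCNorm\) (Eq. 14) with the learnable style-adjusting terms is given below; it is our own illustration rather than the official implementation, and the temperature default follows the value reported in the experimental details (\(T_{s}=0.1\)).

```python
import torch
import torch.nn as nn
import torch.nn.functional as functional

class SCNorm(nn.Module):
    """Style-controlling normalization: mixes pre-/post-IN amplitudes, keeps the original phase."""
    def __init__(self, num_channels, t_s=0.1):
        super().__init__()
        self.inorm = nn.InstanceNorm2d(num_channels, affine=False)  # IN by default
        self.lam = nn.Parameter(torch.zeros(2))                     # lambda^s, initialized to 0
        self.t_s = t_s                                              # temperature T_s

    def forward(self, f):
        w = functional.softmax(self.lam / self.t_s, dim=0)          # (lambda_norm, lambda_org)
        f_norm = self.inorm(f)
        F_org = torch.fft.fft2(f)
        F_norm = torch.fft.fft2(f_norm)
        amp_mix = w[0] * F_norm.abs() + w[1] * F_org.abs()          # adjusted style
        phase = F_org.angle()                                       # content preserved
        return torch.fft.ifft2(torch.polar(amp_mix, phase)).real
```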
### DAC-P and DAC-SC
In this section, we introduce the ResNet-variant models DAC-P and DAC-SC. DAC-P is the initial model used to verify feasibility, where \(PCNorm\) is applied. DAC-SC is the primary model that adopts both \(CCNorm\) and \(SCNorm\). The overall architecture of DAC-SC is described in Fig. 4.
In DAC-P, we replace BN in the downsample layer with \(PCNorm\) (Fig. 4 (b)). The downsample layer is included in the residual block, which is represented as \(\mathcal{H}(x)+x\), where \(x\) is the input feature, and \(\mathcal{H}(\cdot)\) is the residual function. Unlike other layers, the shape of \(x\) in downsample layer changes to match that of \(\mathcal{H}(x)\). That is, the phase of \(x\) inevitably changes, although for identity mapping it should be invariant. Thus, the residual \(\mathcal{H}(x)\) is approximated to the biased input whose content information is changed. As it contains BN with the content variation problem, the content change in this layer becomes larger. Consequently, the biased approximation of \(\mathcal{H}(x)\) degrades DG performance.
To relieve this, we substitute the BN in the downsample layer with \(PCNorm\), taking advantage of the fact that \(PCNorm\) preserves the content. First, the existing downsample layer of ResNet [12], \(downsample(\cdot)\), is represented as follows:
\[downsample(x)=BatchNorm(Conv(x)), \tag{15}\]
where \(BatchNorm(\cdot)\) indicates the BN, and \(Conv(\cdot)\) is a 1\(\times\)1 convolution layer with a stride 2, and \(x\) is the input feature. On the other hand, the downsample layer of our DAC-P, \(downsample_{p}(\cdot)\), is formulated as follows:
\[downsample_{p}(x)=PCNorm(Conv(x)). \tag{16}\]
Next, we introduce the primary model, DAC-SC, in which both \(CCNorm\) and \(SCNorm\) are applied. As explained above, the content change in the downsample layer especially needs to be relieved. In DAC-SC, the BN-based downsample layer is replaced with \(downsample_{c}\), which uses \(CCNorm\) instead of \(PCNorm\) (Fig. 4 (c)). Then, \(downsample_{c}\) is defined as follows:
\[downsample_{c}(x)=CCNorm(Conv(x)). \tag{17}\]
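As a minimal sketch of how equations 15-17 change the ResNet shortcut, the snippet below swaps the BN term in the standard 1\(\times\)1, stride-2 downsample branch for a frequency-domain normalization module; `norm_layer` is a placeholder for the \(PCNorm\) (DAC-P) or \(CCNorm\) (DAC-SC) modules defined earlier, whose internals are not reproduced here.

```python
import torch.nn as nn

def downsample_bn(in_ch, out_ch):
    """Original ResNet downsample branch (eq. 15): 1x1 conv (stride 2) + BN."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=2, bias=False),
        nn.BatchNorm2d(out_ch),
    )

def downsample_freq(in_ch, out_ch, norm_layer):
    """DAC-style downsample (eqs. 16-17): BN replaced by a frequency-domain norm.
    `norm_layer` maps a channel count to a module acting on (N, C, H, W) tensors,
    e.g. a PCNorm module for DAC-P or a CCNorm module for DAC-SC."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=2, bias=False),
        norm_layer(out_ch),
    )
```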
In addition, DAC-SC also exploits \(SCNorm\), which adjusts the degree of style elimination. Previous studies [18, 54] found that putting a style regularizer between residual blocks enhances performance. Inspired by this, we inserted \(SCNorm\) between residual blocks (Fig. 4 (d)). Hence, the primary model, DAC-SC, determines the proper intensities of the content and style changes for DG.
## 5 Experiment
### Dataset
We evaluated the proposed methods on five DG benchmarks: VLCS, PACS, Office-Home, DomainNet, and TerraIncognita. Table 2 summarizes the number of classes, images, and domains in each dataset.
### Experimental Details
We chose ResNet50 as the backbone network, the same as the baseline model (ERM) [43]. In DAC-P, the BN in all four downsample layers was replaced with \(PCNorm\) layers. For DAC-SC, four \(CCNorm\) were inserted at the same locations as \(PCNorm\) in DAC-P, and three \(SCNorm\) were added at the ends of the first to third stages, respectively. The affine transform layer of the base normalization is moved to the end of the proposed normalization layer. The model was initialized with ImageNet [7] pre-trained weights. The elements of \(\lambda^{s}\) and \(\lambda^{c}\) were initialized to 0, and the temperatures \(T_{s}\) and \(T_{c}\) were set to 1e-1 and 1e-6, respectively. For data augmentation, we randomly cropped images at a scale of 0.7 to 1.0 and resized them to 224x224 pixels, then applied random horizontal flips, color jittering, and random grayscaling. In training, we used a mini-batch size of 32 and the Nesterov SGD optimizer with a weight decay of 5e-4, a learning rate of 1e-4, and momentum of 0.9. For the DomainNet dataset only, a learning rate of 1e-2 was applied. We trained the proposed model for 20 epochs of 500 iterations each, except for DomainNet, which was trained with 7500 iterations and a cosine annealing scheduler with early stopping (tolerance of 4). All experiments were conducted four times, and each model was selected on the training-domain validation set, which reserves 20% of the source-domain data. Reported values are the accuracy averaged over all target domains, where each run is evaluated on a single held-out domain. We conducted an exhaustive hyperparameter search for model selection and evaluated the models based on accuracy. The hardware and software environments were Ubuntu 18.04, Python 3.8.13, PyTorch 1.12.1+cu113, CUDA 11.3, and a single NVIDIA A100 GPU.
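For reference, a minimal sketch of the augmentation pipeline and optimizer described above is given below; the color-jitter magnitudes and the grayscale probability are not specified in the text and are illustrative assumptions.

```python
import torch
from torchvision import transforms

# Augmentations described above: random resized crop (scale 0.7-1.0, 224x224),
# horizontal flip, color jittering, and random grayscaling.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.3, 0.3, 0.3, 0.3),  # jitter strengths assumed
    transforms.RandomGrayscale(p=0.1),           # probability assumed
    transforms.ToTensor(),
])

# Nesterov SGD with the reported hyperparameters (lr 1e-4, momentum 0.9, wd 5e-4).
def make_optimizer(model, lr=1e-4):
    return torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9,
                           weight_decay=5e-4, nesterov=True)
```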
### Comparison with SOTA Methods
The test accuracies of DAC-P, DAC-SC, and recent DG methods on five benchmarks are reported in Table 1. Specifically, DAC-SC, which exhibited the highest average performance improvement, achieved new SOTA results at 87.5\(\%\)
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Dataset** & **Class** & **Image** & **Domain** \\ \hline VLCS [22] & 7 & 9,991 & 4 \\ PACS [22] & 5 & 10,729 & 4 \\ Office-Home [44] & 65 & 15,588 & 4 \\ DomainNet [34] & 345 & 586,575 & 6 \\ TerraIncognita [2] & 10 & 24,788 & 4 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Description of the five datasets: VLCS, PACS, Office-Home, DomainNet, and TerraIncognita.
Figure 4: Overall architecture of DAC-SC. (a): DAC-SC is composed of ResNet50 with \(CCNorm\) and \(SCNorm\). (b), (c): The proposed \(PCNorm\) and \(CCNorm\) replace BN in the downsample layer, respectively. We use \(CCNorm\) for DAC-SC and \(PCNorm\) for DAC-P. (d): \(SCNorm\) is attached at the end of stage1 to stage3. (e): A residual block is represented.
70.3\(\%\), and 44.9\(\%\) on the PACS, Office-Home, and DomainNet benchmark datasets. These experimental results indicate that there are proper degrees of content and style change for DG. On the TerraIncognita dataset, DAC-P outperformed DAC-SC, which was expected given the dataset characteristics. The TerraIncognita dataset consists of camera-trap images in which objects are captured at four locations. The content is closely distributed across domains, resulting in smaller domain gaps compared to other datasets. In such cases, preserving the content can be more effective than adjusting changes under a slight domain shift. These results suggest that preserving content is critical in DG, but that higher performance can be achieved when the degree of preservation is appropriately controlled. A more detailed discussion is presented in Sec. 5.4.
### Ablation Study
In this section, we conduct various ablation studies focusing on DAC-SC, since it is the main model and adopts the more advanced components.
**Effect of CCNorm and SCNorm.** To investigate the performance gains from the two proposed modules, \(CCNorm\) and \(SCNorm\), we added them to ResNet sequentially and individually and compared their performance (Rows 2 and 3 of Table 3). As described in Sec. 4.4, \(CCNorm\) is applied to the downsample layer and \(SCNorm\) is inserted at the end of each stage. As the two modules are inserted at different positions, we add Column P, which indicates the position of the applied module, where D and E denote the downsample layer and the end of the stage, respectively. Adding only \(CCNorm\) yields an average performance improvement of 1.3% over the baseline ResNet (ERM). Adding both \(CCNorm\) and \(SCNorm\), i.e., DAC-SC, provides a further 0.5\(\%\) improvement in the average performance, with the highest increase of 3.7\(\%\) on the DomainNet dataset.
In contrast, applying \(PCNorm\) and \(SCNorm\) together shows an average performance degradation of 0.2\(\%\) compared to applying \(PCNorm\) alone (Row 5). Considering the performance gains when \(SCNorm\) is combined with \(CCNorm\), this implies that \(SCNorm\) works better
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Model & VLCS & PACS & Office-Home & DomainNet & TerralIncognita & Avg \\ \hline ERM [43] & 77.4 \(\pm\) 0.3 & 85.7 \(\pm\) 0.5 & 67.5 \(\pm\) 0.5 & 41.2 \(\pm\) 0.2 & 47.2 \(\pm\) 0.4 & 63.8 \\ IRM [1] & 78.5 \(\pm\) 0.5 & 83.5 \(\pm\) 0.8 & 64.3 \(\pm\) 2.2 & 33.9 \(\pm\) 2.8 & 47.6 \(\pm\) 0.8 & 61.6 \\ GroupDRO [38] & 76.7 \(\pm\) 0.6 & 84.4 \(\pm\) 0.8 & 66.0 \(\pm\) 0.7 & 33.3 \(\pm\) 0.2 & 43.2 \(\pm\) 1.1 & 60.7 \\ Mixup [49] & 77.4 \(\pm\) 0.6 & 84.6 \(\pm\) 0.6 & 68.1 \(\pm\) 0.3 & 39.2 \(\pm\) 0.1 & 47.9 \(\pm\) 0.8 & 63.4 \\ MLDG [23] & 77.2 \(\pm\) 0.4 & 84.9 \(\pm\) 1.0 & 66.8 \(\pm\) 0.6 & 41.2 \(\pm\) 0.1 & 47.7 \(\pm\) 0.9 & 63.6 \\ CORAL [40] & **78.8 \(\pm\) 0.6** & 86.2 \(\pm\) 0.3 & 68.7 \(\pm\) 0.3 & 41.5 \(\pm\) 0.1 & 47.6 \(\pm\) 1.0 & 64.5 \\ MMD [24] & 77.5 \(\pm\) 0.9 & 84.6 \(\pm\) 0.5 & 66.3 \(\pm\) 0.1 & 23.4 \(\pm\) 9.5 & 42.2 \(\pm\) 1.6 & 58.8 \\ DANN [9] & 78.6 \(\pm\) 0.4 & 83.6 \(\pm\) 0.4 & 65.9 \(\pm\) 0.6 & 38.3 \(\pm\) 0.1 & 46.7 \(\pm\) 0.5 & 62.6 \\ CDANN [25] & 77.5 \(\pm\) 0.1 & 82.6 \(\pm\) 0.9 & 65.8 \(\pm\) 1.3 & 38.3 \(\pm\) 0.3 & 45.8 \(\pm\) 1.6 & 62.0 \\ MTL [3] & 77.2 \(\pm\) 0.4 & 84.6 \(\pm\) 0.5 & 66.4 \(\pm\) 0.5 & 40.6 \(\pm\) 0.1 & 45.6 \(\pm\) 1.2 & 62.9 \\ ARM [51] & 77.6 \(\pm\) 0.3 & 85.1 \(\pm\) 0.4 & 64.8 \(\pm\) 0.3 & 35.5 \(\pm\) 0.2 & 45.5 \(\pm\) 0.3 & 61.7 \\ VREx [20] & 78.3 \(\pm\) 0.2 & 84.9 \(\pm\) 0.6 & 66.4 \(\pm\) 0.6 & 33.6 \(\pm\) 2.9 & 46.4 \(\pm\) 0.6 & 61.9 \\ RSC [15] & 77.1 \(\pm\) 0.5 & 85.2 \(\pm\) 0.9 & 65.5 \(\pm\) 0.9 & 38.9 \(\pm\) 0.5 & 46.6 \(\pm\) 1.0 & 62.7 \\ IIB [21] & 77.2 \(\pm\) 1.6 & 83.9 \(\pm\) 0.2 & 68.6 \(\pm\) 0.1 & 41.5 \(\pm\) 2.3 & 45.8 \(\pm\) 1.4 & 63.4 \\ SelfReg [19] & 77.8 \(\pm\) 0.9 & 85.6 \(\pm\) 0.4 & 67.9 \(\pm\) 0.7 & 42.8 \(\pm\) 0.0 & 47.0 \(\pm\) 0.3 & 64.2 \\ SagNet [29] & 77.8 \(\pm\) 0.5 & 86.3 \(\pm\) 0.2 & 68.1 \(\pm\) 0.1 & 40.3 \(\pm\) 0.1 & 48.6 \(\pm\) 1.0 & 64.2 \\ \hline
**DAC-P (ours)** & 77.0 \(\pm\) 0.6 & 85.6 \(\pm\) 0.5 & 69.5 \(\pm\) 0.1 & 43.8 \(\pm\) 0.3 & **49.8 \(\pm\) 0.2** & 65.1 \\
**DAC-SC (ours)** & 78.7 \(\pm\) 0.3 & **87.5 \(\pm\) 0.1** & **70.3 \(\pm\) 0.2** & **44.9 \(\pm\) 0.1** & 46.5 \(\pm\) 0.3 & **65.6** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with recent DG methods. For the performances of IIB and SelfReg, we refer to each paper. The other performance results are the reported numbers from DomainBed [10]. The best performance values are in bold and the second-best performance values are underlined.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline & Method & P & VL & PA & OH & DN & TI & Avg \\ \hline
1 & ResNet & & 77.4 & 85.7 & 67.5 & 41.2 & 47.2 & 63.8 \\
2 & +\(CCNorm\) & D & 77.1 & 86.0 & 69.9 & 44.7 & 48.1 & 65.1(+1.3) \\
3 & +\(SCNorm\) & E & **78.7** & **87.4** & **70.3** & **44.9** & 46.5 & **65.6(+0.5)** \\ \hline
4 & +\(PCNorm\) & D & 77.0 & 85.6 & 69.5 & 43.8 & **49.8** & 65.1(+1.3) \\
5 & +\(SCNorm\) & E & 77.1 & 86.0 & 68.8 & 44.6 & 48.0 & 64.9(-0.2) \\ \hline
6 & +\(SCNorm\) & D & 77.3 & 86.4 & 69.2 & 44.5 & 45.3 & 64.5(-0.7) \\
7 & +\(SCNorm\) & E & 77.3 & 86.8 & 69.2 & **44.9** & 43.0 & 64.3(-0.2) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study on our methods. VL, PA, OH, DN, and TI denote VLCS, PACS, Office-Home, DomainNet, and TerraIncognita, respectively. P indicates the module position, and D and E in column P denote the downsample layer and the end of the stage, respectively.
with \(CCNorm\) than \(PCNorm\). These experimental results demonstrate that there is an efficient combination of adjusted content and style, which DAC-SC can successfully determine.
**Content Preserving vs. Content Controlling.** Rows 2 and 4 of Table 3 reveal that \(PCNorm\), which perfectly preserves content, and \(CCNorm\), which controls the content change, have the same average performance of 65.1\(\%\). However, looking at the individual datasets, \(CCNorm\) outperforms \(PCNorm\) on the four datasets other than TerraIncognita. This indicates that controlling content changes (\(CCNorm\)) is more beneficial in terms of generalization performance.
**Content Controlling vs. Style Controlling in Downsample Layer.** We replace \(CCNorm\) with \(SCNorm\) in DAC-SC to verify that adjusting content changes is more effective than controlling style changes in the downsample layer. Rows 6 and 7 of Table 3 present the results when \(SCNorm\) is inserted into the downsample layer. Compared to the baseline ResNet, applying \(SCNorm\) to the downsample layer yields a slight average performance improvement of 0.7\(\%\). However, this improvement is small compared to the result of applying \(CCNorm\) or \(PCNorm\) to the downsample layer. From these results, we infer that, although adjusting the style change in the downsample layer helps slightly, regulating the content variation is more appropriate in this layer.
**Which Normalization Fits \(SCNorm\)?** In \(SCNorm\), we apply IN by default. In this experiment, we first compare the performance of \(SCNorm\) with that of simple IN to show that \(SCNorm\) is more effective than the existing IN. Then, we examine the results when the IN in \(SCNorm\) is replaced with LN or BN to verify that IN is the optimal choice among these methods.
As listed in Table 4, applying IN to \(SCNorm\) is more effective on all datasets than using the simple IN (InNorm). Compared with the other two normalization methods (LN and BN) applied to \(SCNorm\), the highest average performance is reached when IN is applied to \(SCNorm\), with the exception of the TerraIncognita dataset. In contrast, \(SCNorm\) with LN consistently performs worse than the other two normalization methods (IN and BN) except on the PACS dataset. These results indicate that IN is the optimal choice for \(SCNorm\).
**Adjusting Style Elimination Degree.** To examine the degree to which style elimination is adjusted by \(SCNorm\), we plot the learned \(\lambda^{s}_{org}\) in Fig. 5, where \(\lambda^{s}_{org}\) is the weight of the original style, determining how much of the original style is retained when it is mixed with the normalized style. Here, \(SCNorm1\), \(SCNorm2\), and \(SCNorm3\) denote the \(SCNorm\) modules at the ends of the first, second, and third stages of DAC-SC, respectively. As shown, the values of \(\lambda^{s}_{org}\) were generally above 0.5, but no clear tendency was found for \(\lambda^{s}_{org}\) of \(SCNorm1\) and \(SCNorm2\). However, \(\lambda^{s}_{org}\) in \(SCNorm3\) had an average value of 0.9898, very close to 1, across all five datasets. That is, the original style \(\alpha\) is nearly preserved at the end of Stage 3.
One might then deduce that adjusting the style elimination is not required in \(SCNorm3\), because the contribution of the normalized style is negligible. To check this, we fixed \(\lambda^{s}_{org}\) in \(SCNorm3\) to 1 and compared the performance. As presented in Fig. 6, the performance consistently drops when \(\lambda^{s}_{org}\) is fixed to 1 in \(SCNorm3\). On the VLCS, Office-Home, and TerraIncognita benchmarks, the performance drops by more than 1.0%. This result confirms that even a small change in \(\lambda^{s}_{org}\) significantly affects performance. Hence, we infer that the degree of style elimination needs to be adjusted for DG, which \(SCNorm\) successfully determines.
## 6 Conclusion
This paper addresses the content change problem of existing normalization methods in DG and suggests an incipient
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Method & VL & PA & OH & DN & TI & Avg \\ \hline InNorm & 76.0 & 87.2 & 65.6 & 44.0 & 42.7 & 63.1 \\ \hline \(SCNorm_{in}\) & **78.7** & **87.4** & **70.3** & **44.9** & **46.5** & **65.6** \\ \(SCNorm_{ln}\) & 75.0 & 87.3 & 67.9 & 44.3 & 44.3 & 63.8 \\ \(SCNorm_{bn}\) & 77.0 & 86.6 & **70.3** & 44.8 & **46.9** & 65.1 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study on base normalization in \(SCNorm\). InNorm is the simple IN without \(SCNorm\). \(SCNorm_{in}\), \(SCNorm_{ln}\), and \(SCNorm_{bn}\) denote IN, LN, and BN applied to \(SCNorm\), respectively. The abbreviations of the datasets are the same as in Table 3.
Figure 5: Comparison of the learned values of \(\lambda^{s}_{org}\) in each \(SCNorm\) layer.
Figure 6: Accuracy comparison of DAC-SC with and without style controlling (\(\lambda^{s}_{org}\)). Fixed \(SCNorm3\) indicates that \(\lambda^{s}_{org}\) is set to a fixed number of 1.
approach that explores improvements from the perspective of the frequency domain. To this end, we provide a pioneering analysis that quantifies the content change through mathematical derivation. Based on this analysis, we propose novel normalization methods for DG: \(PCNorm\), \(CCNorm\), and \(SCNorm\). Built on the proposed methods, the ResNet-variant models DAC-P and DAC-SC both achieve SOTA performance on five DG benchmarks. The experimental results highlight the importance of content preservation in DG and take a further step toward adjusting the degree of variation in existing normalization-based style extraction.
|
2305.00869
|
Estimating the Density Ratio between Distributions with High Discrepancy
using Multinomial Logistic Regression
|
Functions of the ratio of the densities $p/q$ are widely used in machine
learning to quantify the discrepancy between the two distributions $p$ and $q$.
For high-dimensional distributions, binary classification-based density ratio
estimators have shown great promise. However, when densities are well
separated, estimating the density ratio with a binary classifier is
challenging. In this work, we show that the state-of-the-art density ratio
estimators perform poorly on well-separated cases and demonstrate that this is
due to distribution shifts between training and evaluation time. We present an
alternative method that leverages multi-class classification for density ratio
estimation and does not suffer from distribution shift issues. The method uses
a set of auxiliary densities $\{m_k\}_{k=1}^K$ and trains a multi-class
logistic regression to classify the samples from $p, q$, and $\{m_k\}_{k=1}^K$
into $K+2$ classes. We show that if these auxiliary densities are constructed
such that they overlap with $p$ and $q$, then a multi-class logistic regression
allows for estimating $\log p/q$ on the domain of any of the $K+2$
distributions and resolves the distribution shift problems of the current
state-of-the-art methods. We compare our method to state-of-the-art density
ratio estimators on both synthetic and real datasets and demonstrate its
superior performance on the tasks of density ratio estimation, mutual
information estimation, and representation learning. Code:
https://www.blackswhan.com/mdre/
|
Akash Srivastava, Seungwook Han, Kai Xu, Benjamin Rhodes, Michael U. Gutmann
|
2023-05-01T15:10:56Z
|
http://arxiv.org/abs/2305.00869v1
|
Estimating the Density Ratio between Distributions with High Discrepancy using Multinomial Logistic Regression
###### Abstract
Functions of the ratio of the densities \(p/q\) are widely used in machine learning to quantify the discrepancy between the two distributions \(p\) and \(q\). For high-dimensional distributions, binary classification-based density ratio estimators have shown great promise. However, when densities are _well separated_, estimating the density ratio with a binary classifier is challenging. In this work, we show that the state-of-the-art density ratio estimators perform poorly on _well separated_ cases and demonstrate that this is due to distribution shifts between training and evaluation time. We present an alternative method that leverages multi-class classification for density ratio estimation and does not suffer from distribution shift issues. The method uses a set of auxiliary densities \(\{m_{k}\}_{k=1}^{K}\) and trains a multi-class logistic regression to classify the samples from \(p,q\) and \(\{m_{k}\}_{k=1}^{K}\) into \(K+2\) classes. We show that if these auxiliary densities are constructed such that they overlap with \(p\) and \(q\), then a multi-class logistic regression allows for estimating \(\log p/q\) on the domain of any of the \(K+2\) distributions and resolves the distribution shift problems of the current state-of-the-art methods. We compare our method to state-of-the-art density ratio estimators on both synthetic and real datasets and demonstrate its superior performance on the tasks of density ratio estimation, mutual information estimation, and representation learning. Code: [https://www.blackswhan.com/mdre/](https://www.blackswhan.com/mdre/)
## 1 Introduction
Quantification of the discrepancy between two distributions underpins a large number of machine learning techniques. For instance, distribution discrepancy measures known as \(f\)-divergences (Csiszar, 1964), which are defined as expectations of convex functions of the ratio of two densities, are ubiquitous in many domains of supervised and unsupervised machine learning. Hence, density ratio estimation is often a central task in generative modeling, mutual information and divergence estimation, as well as representation learning (Sugiyama et al., 2012; Gutmann & Hyvarinen, 2010; Goodfellow et al., 2014; Nowozin et al., 2016; Srivastava et al., 2017; Belghazi et al., 2018; Oord et al., 2018; Srivastava et al., 2020). However, in most problems
of interest, estimating the density ratio by modeling each of the densities separately is significantly more challenging than directly estimating their ratio for high dimensional densities (Sugiyama et al., 2012). Hence, direct density ratio estimators are often employed in practice.
One of the most commonly used density ratio estimators (DRE) utilizes binary classification via logistic regression (BDRE). Once trained to discriminate between the samples from the two densities, BDREs have been shown to estimate the ground truth density ratio between the two densities (e.g. Gutmann and Hyvarinen, 2010; Gutmann and Hirayama, 2011; Sugiyama et al., 2012; Menon and Ong, 2016). BDREs have been tremendously successful in problems involving the minimization of the density-ratio based estimators of discrepancy between the data and the model distributions even in high-dimensional settings (Nowozin et al., 2016; Radford et al., 2015). However, they do not fare as well when applied to the task of estimating the discrepancy between two distributions _that are far apart or easily separable from each other_. This issue has been characterized recently as the _density-chasm problem_ by Rhodes et al. (2020). We demonstrate this in Figure 1 where we employ a BDRE to estimate the density ratio between two 1-D distributions, \(p=\mathcal{N}(-1,0.1)\) and \(q=\mathcal{N}(1,0.2)\) shown in panel (a). Since \(p\) and \(q\) are considerably far apart from each other, solving the classification problem is relatively simple as illustrated by the visualization of the decision boundary of the BDRE. However, as shown in panel (b), even in this simple setup, BDRE completely fails to estimate the ratio. Kato and Teshima (2021) have also confirmed that most DREs, especially those implemented with deep neural networks, tend to overfit to the training data in some way when faced with the density-chasm problem. Since BDRE-based plug-in estimators are often used in many high-dimensional tasks such as mutual information estimation, representation learning, energy-based modeling, co-variate-shift resolution, and importance sampling (Rhodes et al., 2020; Choi et al., 2021; Sugiyama et al., 2012), resolving density-chasm is an important problem of high practical relevance.
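To make the BDRE recipe concrete, the following is a minimal sketch using logistic regression on quadratic features for two well-separated Gaussians similar to those in Figure 1 (the second parameter is treated as the standard deviation): with balanced classes, the classifier's logit serves as the estimate of \(\log p/q\). On well-separated \(p\) and \(q\) such as these, the resulting estimate is typically far off, which is precisely the density-chasm behaviour discussed above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
xp = rng.normal(-1.0, 0.1, 5000)   # samples from p
xq = rng.normal( 1.0, 0.2, 5000)   # samples from q

# Quadratic features so the logit can represent the true (quadratic) log-ratio.
phi = lambda x: np.stack([x, x**2], axis=1)
X = np.concatenate([phi(xp), phi(xq)])
y = np.concatenate([np.ones(5000), np.zeros(5000)])   # 1 = "from p", 0 = "from q"

clf = LogisticRegression(C=1e6, max_iter=10_000).fit(X, y)

# With equal class sizes, the logit of P(class = p | x) is the BDRE estimate of log p/q;
# averaging it over p-samples gives a KL(p||q) estimate, usually far off in this regime.
print("BDRE KL(p||q) estimate:", clf.decision_function(phi(xp)).mean())
```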
A recently introduced solution to the density-chasm problem, telescopic density-ratio estimation (TRE; Rhodes et al., 2020), tackles it by replacing the easier-to-classify, original logistic regression problem, by a _set_ of harder-to-classify logistic regression problems. In short, TRE constructs a set of \(K\) auxiliary distributions (\(\{m_{k}\}_{k=1}^{K}\)) to bridge the two target distributions (\(p=:m_{0}\) and \(q=:m_{K+1}\)) of interest and then trains a set of \(K+1\) BDREs on every pair of _consecutive distributions_ (\(m_{k-1}\) and \(m_{k}\) for \(k=1,\ldots,K\)), which are assumed to be close enough (i.e. not easily separable) for BDREs to work well. After that, an overall density ratio estimate is obtained by taking the cumulative (telescopic) product of all individual estimates.
In this work, we argue that the aforementioned solution to the density chasm problem has an inherent issue of _distribution shift_ that can lead to significant inaccuracies in the final density ratio estimation. Notice that the \(i\)-th BDRE in the chain of BDREs that TRE constructs is only trained on the samples from distributions \(m_{i}\) and \(m_{i+1}\). However, post-training, it is typically evaluated on regions where the distributions from the original density ratio estimation problem (i.e. \(p\) and \(q\)) have non-negligible mass. If the high-probability regions of \(p\), \(q\) and the auxiliary distributions \(m_{i}\) do not overlap, the training and evaluation distributions for the \(i\)-th BDRE are different. Because of this distribution shift between training and evaluation, the overall density ratio estimation can end up being inaccurate (see Figure 2 and Section 2.1 for further details). We here provide another solution to the density-chasm problem that avoids this distribution shift.
We present Multinomial Logistic Regression based Density Ratio Estimator (MDRE), a novel method for density ratio estimation that solves the density-chasm problem without suffering from distribution shift. This is done by using auxiliary distributions and _multi-class classification_. MDRE replaces the easy binary classification problem with a _single_ harder multi-class classification problem. MDRE first constructs a set of \(K\) auxiliary distributions \(\{m_{k}\}_{k=1}^{K}\) that overlap with \(p\) and \(q\) and then uses multi-class logistic regression on the \(K+2\) distributions to obtain a density ratio estimator of \(\log p/q\). We will show that the multi-class classification formulation avoids the distribution shift issue of TRE.
The key contributions of this work are as follows:
1. We study the state-of-the-art solution to the density-chasm problem (TRE; Rhodes et al., 2020) and identify its limitations arising from distribution shift. We illustrate that this inherent issue can significantly degrade its density ratio estimation performance.
2. We formally establish the link between multinomial logistic regression and density ratio estimation and propose a novel method (MDRE) that uses auxiliary distributions to train a multi-class classifier for density ratio estimation. MDRE resolves the aforementioned distribution shift issue by construction and effectively tackles the density chasm problem.
3. We construct a comprehensive evaluation protocol that significantly extends on benchmarks used in prior works. We conduct a systematic empirical evaluation of the proposed approach and demonstrate the superior performance of our method on a number of synthetic and real datasets. Our results show that MDRE is often markedly better than the current state-of-the-art of density ratio estimation on tasks such as \(f\)-divergence estimation, mutual information estimation, and representation learning in high-dimensional settings.
## 2 Related Work
Telescopic density-ratio estimation (TRE, Rhodes et al., 2020) uses a two step, divide-and-conquer strategy to tackle the density-chasm problem. In the first step, they construct \(K\)_waymark_ distributions \(\{m_{k}\}_{k=1}^{K}\) by gradually transporting samples from \(p\) towards samples from \(q\). Then, they train \(K\) BDREs, one for each consecutive pair of distributions. This allows for estimating the ratio \(r_{p/q}\) as the product of \(K+1\) BDREs, \(r_{p/q}=\frac{p}{q}=\frac{p}{m_{1}}\times\cdots\times\frac{m_{\infty}}{q}\). Rhodes et al. (2020) introduced two schemes for creating waymark distributions that ensure that consecutive pairs of distributions are packed _closely enough_ so that none of the \(K+1\) BDREs suffer from the density-chasm issue. Hence, TRE addresses the density-chasm issue by replacing the ratio between \(p\) and \(q\) with a product of \(K+1\) intermediate density ratios that, by design of the waymark distribution, should not suffer from the density-chasm problem. In a new work, Choi et al. (2021) introduced DRE-\(\infty\), a method that takes the number of waymark distributions in TRE to infinity and derives a limiting objective that leads to a more scalable version of TRE.
F-DRE is another interesting related work from Choi et al. (2021). F-DRE uses a FLOW-based model (Rezende and Mohamed, 2015) that is trained to project samples from a mixture of the two distributions onto a standard Gaussian, and then trains a BDRE in this feature space. It is easy to show that any bijective map preserves the original density ratio \(r_{p/q}\) in the feature space, as the Jacobian correction term simply cancels out. However, due to the bijectivity of the FLOW map, such a method cannot bring the projected distributions any closer than the discrepancy between the original distributions. At best, the method can shift the discrepancy between the original distributions along different moments after projection. Due to this issue, we found that F-DRE did not work well for the problems we considered (see experimental results in Section 4). Recently, Liu et al. (2021) introduced an optimization-based solution to the density-chasm problem in exponential family distributions by using (a) normalized gradient descent and (b) replacing the logistic loss with an exponential loss. Finally, while BDRE remains the dominant method of density ratio
Figure 1: BDRE vs. the proposed MDRE on estimation of the log density ratio where \(p=\mathcal{N}(-1,0.1)\) and \(q=\mathcal{N}(1,0.2)\). For MDRE, the auxiliary distribution \(m\) is Cauchy \(\mathcal{C}(0,1)\). Plots (a) and (c) show the class probabilities \(P(Y|x)\) learned by BDRE and MDRE respectively, overlaid on the plots of \(p\), \(q\) and \(m\). Plots (b) and (d) show the log density-ratio estimated by BDRE and MDRE respectively. Using the auxiliary distribution \(m\) allows MDRE to better estimate the log density-ratio.
estimation in recent literature, prior works, such as Bickel et al. (2008) and Nock et al. (2016), have studied multi-class classifier-based density ratio estimation for estimating ratios between a set of densities against a common reference distribution and its applications in multi-task learning.
### TRE's performance can degrade due to training-evaluation distribution shifts
In supervised learning, distribution shift (Quinonero-Candela et al., 2009) occurs when the training data \((x,y)\sim p_{\text{train}}\) and the test data \((x,y)\sim p_{\text{test}}\) come from two different distributions, i.e. \(p_{\text{train}}\neq p_{\text{test}}\). Common training methods, such as those used in BDRE, only guarantee that the model performs well on unseen data that comes from the same distribution as \(p_{\text{train}}\). Thus, in the case of distribution shift at test time, the model's performance degrades proportionately to the shift. We now show that a similar distribution shift can occur in TRE when distributions \(p\) and \(q\) are sufficiently different. Recall that in TRE, we use BDREs to estimate \(K+1\) density ratios \(p/m_{1},m_{1}/m_{2},\ldots,m_{K}/q\) that are combined in a telescopic product to form the overall ratio \(p/q\). Let us denote the estimates of the \(K+1\) ratios by \(\hat{\eta}_{1},\ldots,\hat{\eta}_{K+1}\).
Given the theoretical properties of BDRE, for any \(i\in\{1,\ldots,K+1\}\), \(\hat{\eta}_{i}\) estimates \(r_{m_{i-1}/m_{i}}\) over the support of \(m_{i}\)(Sugiyama et al., 2012; Gutmann & Hyvarinen, 2010; Menon & Ong, 2016). However, in TRE, when we evaluate the target ratio \(p/q\) on the supports of \(p\) and \(q\), we evaluate the individual \(\hat{\eta}_{i}\) on domains for which we lack guarantees that they perform well. Since the overall estimator for \(p/q\approx\hat{\eta}_{1}*\cdots*\hat{\eta}_{K+1}\) combines multiple ratio estimators, it suffers from the distribution shift issue if _any_ of the individual estimators' performance deteriorates. Thus, if the supports of \(\{m_{i}\}_{i=1}^{K}\), \(p\), and \(q\) are different, or when the samples from \(\{m_{i}\}_{i=1}^{K}\), \(p\), and \(q\) do not overlap well enough, the training and evaluation domains of the \(\hat{\eta}_{i}\) are different and we expect the ratio estimate \(\hat{\eta}_{i}\) and, in turn, the overall estimator for \(p/q\) to be poor. We now illustrate this with a toy example.
We consider estimating the density ratio between \(p=\mathcal{N}(-1,0.1)\) and \(q=\mathcal{N}(1,0.2)\). Since, \(p\) and \(q\) are well separated, we introduce three auxiliary distributions \(m_{1},m_{2},m_{3}\) to bridge them, providing the waymarks that TRE needs. The auxiliary distributions \(m_{1},m_{2},m_{3}\) are constructed with the _linear-mixing_ strategy that will be described in Section 3.2. This setup is shown in the top-left panel of Figure 2. We train 4 BDREs \(\hat{\eta}_{1},\hat{\eta}_{2},\hat{\eta}_{3},\hat{\eta}_{4}\) to estimate ratios \(p/m_{1},m_{1}/m_{2},m_{2}/m_{3}\) and \(m_{3}/q\) respectively. We begin by showing that each of the trained BDREs estimates their corresponding density ratio accurately on their corresponding training distributions. To show this, in panels 2-5 in the first row of Figure 2, we evaluate \(\hat{\eta}_{1},\hat{\eta}_{2},\hat{\eta}_{3},\hat{\eta}_{4}\) on samples from their respective denominator densities \(m_{1},m_{2},m_{3},q\) and plot them via a scatter plot where the x-axis is labeled with the distribution that we draw the samples from and the y-axis is the log-density ratio (red). We plot the true density ratio in blue for comparison. As evident, red and blue scatter plots overlap significantly, indicating the individual ratio estimators are accurate on their respective denominator (training) distributions.
Next, we evaluate the BDREs \(\hat{\eta}_{1},\hat{\eta}_{2},\hat{\eta}_{3},\hat{\eta}_{4}\) on samples from \(p\) and \(q\) instead of their corresponding training distributions as before. Distributions \(p\) and \(q\) are shown in panel 1 of the second row in Figure 2. In the remaining panels (2-5) of the second row, the estimators \(\hat{\eta}_{1},\hat{\eta}_{2},\hat{\eta}_{3},\hat{\eta}_{4}\) are compared to the ground-truth log-density ratios (blue) \(p/m_{1},m_{1}/m_{2},m_{2}/m_{3}\) and \(m_{3}/q\), which are also evaluated on samples from \(p\) and \(q\). Unlike in row 1, the estimated log-density ratios do not match the ground truth. This reflects the training-evaluation distribution-shift issues pointed out above. We now show that this deterioration in accuracy at the level of the individual BDREs results in a deterioration of the overall performance of TRE. To this end, we first recover the TRE estimator by chaining the individually trained BDREs via a telescoping product, i.e. \(\hat{\eta}_{1}*\hat{\eta}_{2}*\hat{\eta}_{3}*\hat{\eta}_{4}\), and then evaluate it on samples from all 5 distributions \(p,m_{1},m_{2},m_{3},q\). The results are shown in panels 1-5 of the third row. The estimated log-density ratios (red) do not match the corresponding ground-truth log-density ratios (blue), which demonstrates that the distribution shift between the training and evaluation distributions of the individual BDREs significantly degrades the overall estimation accuracy of TRE. Additional issues occur when both \(p\) and \(q\) do not have full support, as discussed in Appendix G.
## 3 Density Ratio Estimation using Multinomial Logistic Regression
We propose Multinomial Logistic Regression based Density Ratio Estimator (MDRE) to tackle the density-chasm problem while avoiding the distribution shift issues of TRE. As in TRE, we introduce a set of \(K\geq 1\) auxiliary distributions \(\{m_{k}\}_{k=1}^{K}\). But, in constrast to TRE, we then formulate the problem of estimating \(\log p/q\) as a multi-class classification problem rather than a sequence of binary classification problems. We show that this change leads to an estimator that is accurate on the domain of all \(K+2\) distributions and, therefore, does not suffer from distribution shift.
### Loss function
We here establish a formal link between density ratio estimation and multinomial logistic regression. Consider a set of \(C\) distributions \(\{p_{c}\}_{c=1}^{C}\) and let \(p_{x}(x)=\sum_{c=1}^{C}\pi_{c}p_{c}(x)\) be their mixture distribution, with prior class probabilities \(\pi_{c}\).1 The multi-class classification problem then consists of predicting the correct class \(Y\in\{1,\ldots,C\}\) from a sample from the mixture \(p_{x}\). For this purpose, we consider the model
Footnote 1: In our simulations, we will use a uniform prior over the classes.
\[P(Y=c|x;\theta)=\frac{\pi_{c}\exp(h_{\theta}^{c}(x))}{\sum_{k=1}^{C}\pi_{k} \exp(h_{\theta}^{k}(x))}, \tag{1}\]
Figure 2: TRE for \(p=\mathcal{N}(-1,0.1)\) and \(q=\mathcal{N}(1,0.2)\) from Figure 1. In all scatter plots, the x-axis denotes the sampling distribution and the y-axis denotes the log-density-ratio. The density plot in the first row shows \(p\), \(q\) and the 3 waymarks; the density plot in the second row shows \(p\) and \(q\) only. The scatter plots in the first row show individual density ratio estimators evaluated on samples from their corresponding training data (denominator density), demonstrating accurate estimation on the training set. The scatter plots in the second row show individual density ratio estimators evaluated on samples from \(p\) and \(q\). The estimation accuracy has degraded notably due to the train-eval distribution shift. The last row shows the performance of the overall density ratio estimator on samples from \(p,m_{1},m_{2},m_{3},q\). We see that the overall ratio estimate is significantly affected by the deterioration of the individual ratio estimates, illustrating the sensitivity of TRE to distribution shift problems in the case of well-separated distributions.
where the \(h_{\theta}^{c}(x)\), \(c=1,\ldots,C\) are unnormalized log probabilities parametrized by \(\theta\). We estimate \(\theta\) by minimizing the negative multinomial log-likelihood (i.e. the softmax cross-entropy loss) \(\mathcal{L}(\theta)\)
\[\mathcal{L}(\theta)=-\sum_{c=1}^{C}\pi_{c}\mathbb{E}_{x\sim p_{c}}[\log P(Y=c|x;\theta)]=\sum_{c=1}^{C}\pi_{c}\mathbb{E}_{x\sim p_{c}}\bigg{[}-\log\pi_{c}-h_{\theta}^{c}(x)+\log\sum_{k=1}^{C}\pi_{k}\exp(h_{\theta}^{k}(x))\bigg{]}, \tag{2}\]
where, in practice, the expectations are replaced with a sample average. We denote the optimal parameters by \(\theta^{*}=\arg\min_{\theta}\mathcal{L}(\theta)\). To ease the theoretical derivation, we consider the case where the \(h_{\theta}^{c}(x)\) are parametrized in such a flexible way that we can consider the above loss function to be a functional of \(C\) functions \(h_{1},\ldots,h_{C}\),
\[\mathcal{L}(h_{1},\ldots,h_{C})=\sum_{c=1}^{C}\pi_{c}\mathbb{E}_{x\sim p_{c}}\bigg{[}-\log\pi_{c}-h_{c}(x)+\log\sum_{k=1}^{C}\pi_{k}\exp(h_{k}(x))\bigg{]}. \tag{3}\]
The following proposition shows that minimizing \(\mathcal{L}(h_{1},\ldots,h_{C})\) allows us to estimate the log ratios between any pair of the \(C\) distributions \(p_{c}\).
**Proposition 3.1**.: _Let \(\hat{h}_{1},\ldots,\hat{h}_{C}\) be the minimizers of \(\mathcal{L}(h_{1},\ldots,h_{C})\) in equation 3. Then the density ratio between \(p_{i}(x)\) and \(p_{j}(x)\) for any \(i,j\leq C\) is given by_
\[\log\frac{p_{i}(x)}{p_{j}(x)}=\hat{h}_{i}(x)-\hat{h}_{j}(x) \tag{4}\]
_for all \(x\) where \(p_{x}(x)=\sum_{c}\pi_{c}p_{c}(x)>0\)._
Proof.: We first note that the sum of expectations \(\sum_{c=1}^{C}\pi_{c}\mathbb{E}_{x\sim p_{c}}\) in equation 3 is equivalent to the expectation with respect to the mixture distribution \(p_{x}\). Writing the expectation as an integral we obtain
\[\mathcal{L}(h_{1},\ldots,h_{C})=\sum_{c=1}^{C}\pi_{c}\mathbb{E}_{x\sim p_{c}}[-\log\pi_{c}-h_{c}(x)]+\int p_{x}(x)[\log\sum_{k=1}^{C}\pi_{k}\exp(h_{k}(x))]dx. \tag{5}\]
The functional derivative of \(\mathcal{L}(h_{1},\ldots,h_{C})\) with respect to \(h_{i}\), \(i=1,\ldots,C\), equals
\[\frac{\delta\mathcal{L}}{\delta h_{i}}=-\pi_{i}p_{i}(x)+p_{x}(x)\frac{\pi_{i} \exp(h_{i}(x))}{\sum_{k=1}^{C}\pi_{k}\exp(h_{k}(x))} \tag{6}\]
for all \(x\) where \(p_{x}(x)>0\). Setting the derivative to zero gives the necessary condition for an optimum
\[\frac{\pi_{i}p_{i}(x)}{p_{x}(x)}=\frac{\pi_{i}\exp(h_{i}(x))}{\sum_{k=1}^{C} \pi_{k}\exp(h_{k}(x))},\qquad i=1,\ldots,C,\text{ and for all }x\text{ where }p_{x}(x)>0. \tag{7}\]
The left-hand side of equation 7 equals the true conditional probability \(P^{*}(Y=i|x)=\frac{\pi_{i}p_{i}(x)}{p_{x}(x)}\). Hence, at the critical point, \(\hat{h}_{1},\ldots,\hat{h}_{C}\) are such that \(P^{*}(Y|X)\) is correctly estimated. From equation 7, it follows that for two arbitrary \(i\) and \(j\), we have \((\pi_{i}p_{i})/(\pi_{j}p_{j})=(\pi_{i}\exp(\hat{h}_{i}))/(\pi_{j}\exp(\hat{h}_ {j}))\) i.e.
\[\log\frac{p_{i}(x)}{p_{j}(x)}=\hat{h}_{i}(x)-\hat{h}_{j}(x) \tag{8}\]
for all \(x\) where \(p_{x}(x)>0\), which concludes the proof.
_Remark 3.2_ (Identifiability).: While we have \(C\) unknowns \(h_{1},\ldots,h_{C}\) and \(C\) equations in equation 7, there is a redundancy in the equations because
\[\sum_{i=1}^{C}\frac{\pi_{i}p_{i}(x)}{p_{x}(x)}=\sum_{i=1}^{C}\frac{\pi_{i}\exp (h_{i}(x))}{\sum_{k=1}^{C}\pi_{k}\exp(h_{k}(x))}=\frac{\sum_{i=1}^{C}\pi_{i} \exp(h_{i}(x))}{\sum_{k=1}^{C}\pi_{k}\exp(h_{k}(x))}=1\]
This means that we cannot uniquely identify all \(h_{i}\) by minimising equation 3. However, the difference \(h_{i}-h_{j}\), for \(i\neq j\), can be identified and is equal to the desired log ratio between \(p_{i}\) and \(p_{j}\) per equation 8.
_Remark 3.3_ (Effect of parametrisation and finite sample size).: In practice, we only have a finite amount of training data and the parametrisation introduces constraints on the flexibility of the model. With additional assumptions, e.g. that the true density ratio \(\log p_{i}(x)-\log p_{j}(x)\) can be modeled by the difference of \(h^{i}_{\theta}(x)\) and \(h^{j}_{\theta}(x)\), we show in Appendix A that our ratio estimator is consistent. We here do not dive further into the asymptotic properties of the estimator but focus on the practical applications of the key result in equation 8.
Importantly, equation 8 allows us to estimate \(r_{p/q}\) by formulating our ratio estimation problem as a multinomial nonlinear regression problem as summarized in the following corollary.
**Corollary 3.4**.: _Let the distributions of the first two classes be \(p\) and \(q\), respectively, i.e. \(p_{1}\equiv p,p_{2}\equiv q\), and the remaining \(K\) distributions be equal to the auxiliary distributions \(m_{i}\), i.e. \(p_{3}\equiv m_{1},\dots,p_{K+2}\equiv m_{K}\). Then_
\[\log\hat{r}_{p/q}(x)=\hat{h}_{1}(x)-\hat{h}_{2}(x). \tag{9}\]
_Remark 3.5_ (Free from distribution shift issues).: Since equation 8 holds for all \(x\) where the mixture \(p_{x}(x)>0\), the estimator \(\hat{r}_{p/q}(x)\) in equation 9 is valid for all \(x\) in the union of the domain of \(p,q,m_{1},\dots,m_{K}\). This means that MDRE does not suffer from the distribution shift problems that occur when solving a sequence of binomial logistic regression problems as in TRE. We exemplify this in Section 3.3 after introducing three schemes to construct the auxiliary distributions \(m_{1},\dots,m_{K}\).
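As a minimal sketch of how equations 2 and 9 are used in practice, the snippet below trains a small network with \(C=3\) output heads (playing the roles of the \(h_{\theta}^{c}\)) using the softmax cross-entropy loss on equal-sized samples from \(p\), \(q\), and a single broad auxiliary \(m\) (uniform priors, cf. footnote 1), and then reads off \(\hat{h}_{1}-\hat{h}_{2}\) as the log-ratio estimate. The toy 2-D Gaussians, the shared trunk, and the optimizer settings are illustrative assumptions; the paper's low-dimensional experiments parameterize the \(h_{\theta}^{c}\) separately.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 4000
xp = torch.randn(n, 2) * 0.3 + torch.tensor([-2.0, 0.0])   # p
xq = torch.randn(n, 2) * 0.3 + torch.tensor([ 2.0, 0.0])   # q
xm = torch.randn(n, 2) * 3.0                                # broad auxiliary m
X = torch.cat([xp, xq, xm])
y = torch.cat([torch.zeros(n, dtype=torch.long),            # class 0 = p
               torch.ones(n, dtype=torch.long),             # class 1 = q
               torch.full((n,), 2, dtype=torch.long)])      # class 2 = m

# One network with C = 3 heads; head c plays the role of h_theta^c(x).
h = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                  nn.Linear(64, 64), nn.ReLU(),
                  nn.Linear(64, 3))
opt = torch.optim.Adam(h.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()   # softmax cross-entropy, eq. (2) with uniform priors

for _ in range(2000):
    opt.zero_grad()
    loss = loss_fn(h(X), y)
    loss.backward()
    opt.step()

# Corollary 3.4: log p(x)/q(x) is estimated by h_1(x) - h_2(x).
with torch.no_grad():
    logits = h(xp)
    print("estimated KL(p||q):", (logits[:, 0] - logits[:, 1]).mean().item())
```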
### Constructing the auxiliary distributions
In MDRE, auxiliary distributions need to be constructed such that they have overlapping support with the empirical densities of \(p\) and \(q\). This allows the multi-class classification probabilities to be better calibrated and leads to accurate density ratio estimation. We demonstrate this in panel (c) of Figure 1, where \(p=\mathcal{N}(-1,0.1)\) and \(q=\mathcal{N}(1,0.2)\) and the single auxiliary distribution \(m\) is set to be Cauchy \(\mathcal{C}(0,1)\), which clearly overlaps with the other two distributions. The classification probabilities are shown as the scatter plot overlaid on the empirical densities of these distributions. Compared to the BDRE case in panel (a), which has high confidence in regions without any data, the multi-class classifier assigns high class probabilities for \(p\) and \(q\) only over the support of the data and not where there are barely any data points from these two distributions. Moreover, the auxiliary distribution covers the space where \(p\) and \(q\) have low density well, which provides the training data needed to inform the values of \(\hat{h}_{1}(x)\) and \(\hat{h}_{2}(x)\) in that area and leads to an accurate estimate of the log-density ratio shown in panel (d). This is in contrast to BDRE in panel (a), where the classifier, while constrained enough to get the classification right, is not learned well enough to also get the density ratio right (panel b). This subtle yet important distinction in how MDRE uses auxiliary distributions compared to BDRE and TRE enables MDRE to generalize to out-of-domain samples, as we will demonstrate in Section 4.1.
Next, we briefly describe three schemes to construct auxiliary distributions for MDRE and leave the details to Appendix B.
1. **Overlapping Distribution.** Unlike TRE, the formulation of MDRE does not require "gradually bridging" the two distributions \(p\) and \(q\); hence, we introduce a novel approach to constructing auxiliary distributions. We define \(m_{k}\) as any distribution whose samples overlap with both \(p\) and \(q\), with \(p\ll m_{k}\) and \(q\ll m_{k}\). This includes heavy-tailed distributions (e.g. Cauchy, Student-t), normal distributions, uniform distributions, or their mixtures. We use this scheme in all low-dimensional simulations.
2. **Linear Mixing.** In this scheme, \(m_{k}\) is defined as the distribution of the samples generated by linearly combining samples \(X_{p}=\{x^{i}_{p}\}_{i=1}^{N}\) and \(X_{q}=\{x^{i}_{q}\}_{i=1}^{N}\) from distributions \(p\) and \(q\), respectively. The generative process for a single sample \(x^{i}_{m_{k}}\) from \(m_{k}\) is given by \(x^{i}_{m_{k}}=(1-\alpha_{k})x^{i}_{p}+\alpha_{k}x^{i}_{q}\), with \(x^{i}_{p}\in X_{p},x^{i}_{q}\in X_{q}\) (see the sketch after this list). This construction is similar to the linear combination scheme for auxiliary distributions introduced by Rhodes et al. (2020), with a few key differences that we expand upon in Appendix B. One difference is that \(\alpha_{k}\) is not limited to \(0<\alpha_{k}<1\), which allows for non-convex mixtures that completely surround both \(p\) and \(q\). We use this construction scheme in higher-dimensional simulations.
3. **Dimension-wise Mixing.** This construction scheme was introduced in Rhodes et al. (2020). Samples from the single auxiliary distribution \(m\) are obtained by combining different subsets of dimensions from samples from \(p\) and \(q\). We use this scheme for experiments involving high-dimensional image data.
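A minimal sketch of the linear-mixing scheme from item 2 above is given below; the particular \(\alpha_{k}\) values, including ones outside \((0,1)\), are illustrative.

```python
import numpy as np

def linear_mixing(xp, xq, alphas):
    """Linear-mixing sketch: x_mk^i = (1 - alpha_k) * x_p^i + alpha_k * x_q^i.
    alpha_k outside (0, 1) gives non-convex mixtures surrounding both p and q."""
    return [(1.0 - a) * xp + a * xq for a in alphas]

rng = np.random.default_rng(0)
xp = rng.normal(-1.0, 0.3, size=(1000, 2))
xq = rng.normal( 1.0, 0.3, size=(1000, 2))
m1, m2, m3 = linear_mixing(xp, xq, alphas=[-0.25, 0.5, 1.25])  # illustrative alphas
```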
### Free from distribution-shift problems
We continue with the example task of estimating the density ratio between \(p=\mathcal{N}(-1,0.1)\) and \(q=\mathcal{N}(1,0.2)\) and here illustrate Remark 3.5, namely that MDRE does not suffer from the distribution shift problem identified in Section 2.1. We test MDRE with two types of auxiliary distributions: first, using a heavy-tailed distribution (\(m=\text{Cauchy}(0,1)\)) under the overlapping-distribution scheme, and second, using the waymark distributions \(m_{1},m_{2},m_{3}\) from TRE in Figure 2, constructed with their linear-mixing scheme.
Figure 3 shows the result for the heavy-tailed \(\text{Cauchy}(0,1)\) auxiliary distribution (green, shown in the leftmost panel). We can see that the log-ratio learned by MDRE is accurate even beyond the empirical support of \(p\) and \(q\). This is because MDRE is trained on samples from the mixture of \(p,q\) and \(m\) and hence, per Remark 3.5, does not encounter distribution shift over the support of the mixture distribution. Figure 4 shows the result when using the auxiliary distributions of TRE from Figure 2. We see that the learned log-ratio matches the true log-ratio well on samples from \(p\) and \(q\), as well as on the auxiliary distributions. This can be directly compared to the third row of Figure 2, where TRE suffers from distribution shift problems and does not yield a well-estimated log-ratio. Note that we do not present results corresponding to the second row of Figure 2, since the estimation of the log-ratio in MDRE _does not_ depend on any intermediate density ratios.
Figure 4: MDRE using TRE’s auxiliary distributions. Each scatter plot shows the overall log-density ratio estimates on samples from the distribution on the x-axis (MDRE in red and true ratio in blue). MDRE is capable of accurately estimating ratios on all samples. Contrast with the bottom row of Figure 2.
Figure 3: MDRE with \(m=\text{Cauchy}(0,1)\). The first density plot shows the target densities as well as the auxiliary distribution (in green, hard to see due to its heavy tail and the range of the axes). The three scatter plots show the estimated (red) and true (blue) density ratio \(p/q\) evaluated on samples from \(p,m\), and \(q\). MDRE accurately estimates the ratio across the input domain. Contrast with Figure 2.
## 4 Experiments
We here provide an empirical analysis of MDRE on both synthetic and real data, showing that it performs better than previous methods--BDRE, TRE, and F-DRE--on three different density ratio estimation tasks. We consider cases where numerator and denominator densities differ because their mean is different, i.e. \(p\) and \(q\) exhibit first-order discrepancies (FOD), and cases where the difference stems from different higher-order moments (higher-order discrepancies, HOD). Ratio estimation is closely related to KL divergence and mutual information estimation since the KL divergence is the expectation of the log-ratios under \(p\), and mutual information can be expressed as a KL divergence between joint and product of marginals. Being quantities of core interest in machine learning, we will use them to evaluate the ratio estimation methods.
### 1D Gaussian experiments with large KL divergence
In the following 1D experiments, we consider two scenarios, one where \(p=\mathcal{N}(-1,0.08)\) and \(q=\mathcal{N}(2,0.15)\), and one where the mean of \(p\) is shifted to \(-2\) in order to increase the degree of separation between the two distributions. In both cases, MDRE's auxiliary distribution \(m\) is Cauchy \(\mathcal{C}(0,1)\), so that we have a three-class classification problem (\(p\), \(q\), \(m\)) and three functions \(h^{i}_{\theta}\) that parameterize the classifier of MDRE. The three functions are quadratic polynomials of the form \(w_{1}x^{2}+w_{2}x+b\). For all the methods we set the total number of samples to 100K.2 We provide the exact hyperparameter settings for MDRE and other baselines in Table 5 in Appendix C.
Footnote 2: We found that MDRE’s results are unchanged even when using smaller sample sizes of 1K or 10K, see Table 4 in Appendix C.
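A minimal sketch of this 1D setup is given below: multinomial logistic regression on quadratic features with a Cauchy auxiliary distribution, where the KL estimate is the average of \(\hat{h}_{1}-\hat{h}_{2}\) over samples from \(p\). The solver settings, the (weak) regularization strength, and the clipping of extreme Cauchy samples are illustrative assumptions rather than the exact configuration of Table 5.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000
xp = rng.normal(-1.0, 0.08, n)                         # p
xq = rng.normal( 2.0, 0.15, n)                         # q
xm = np.clip(rng.standard_cauchy(n), -30, 30)          # m ~ Cauchy(0,1), clipped (assumption)

phi = lambda x: np.stack([x, x**2], axis=1)            # quadratic features w1*x^2 + w2*x + b
X = np.concatenate([phi(xp), phi(xq), phi(xm)])
y = np.concatenate([np.zeros(n), np.ones(n), np.full(n, 2)])

# lbfgs minimizes the multinomial (softmax) cross-entropy for the 3 classes;
# C is large so that regularization is negligible (illustrative choice).
clf = LogisticRegression(C=1e6, max_iter=10_000).fit(X, y)

# Corollary 3.4: log p(x)/q(x) = h_p(x) - h_q(x); averaging over p-samples gives KL(p||q).
scores = clf.decision_function(phi(xp))                # one unnormalized score per class
print("KL(p||q) estimate:", (scores[:, 0] - scores[:, 1]).mean())  # ground truth: 200.27
```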
Table 1 shows the results. We can see that MDRE yields more accurate estimates of the KL divergences than the baselines, which are off by a significant margin.
We note that KL estimation only requires evaluating the log-ratios on samples from the numerator distribution \(p\). In Figure 5, we thus show results for all methods where we evaluate the estimated log-ratios on a wide interval (-12, 12). The figure shows that none of the baseline methods can accurately estimate the ratio on the whole interval, while MDRE performs well overall. This is important
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline \(p\) & \(q\) & True KL & BDRE & TRE & F-DRE & MDRE (ours) \\ \hline \(\mathcal{N}(-1,0.08)\) & \(\mathcal{N}(2,0.15)\) & 200.27 & 21.74 \(\pm\) 4.10 & 136.05 \(\pm\) 5.91 & 14.87 \(\pm\) 1.72 & 203.32 \(\pm\) 2.01 \\ \(\mathcal{N}(-2,0.08)\) & \(\mathcal{N}(2,0.15)\) & 355.82 & 20.22 \(\pm\) 3.64 & 208.11 \(\pm\) 18.31 & 14.22 \(\pm\) 5.30 & 360.35 \(\pm\) 1.37 \\ \hline \hline \end{tabular}
\end{table}
Table 1: 1D density ratio estimation task for \(p\) and \(q\) with large first-order and higher-order differences. In all cases, MDRE outperforms all the baselines.
Figure 5: Log density-ratio estimates corresponding to the numbers reported in Table 1. Note that the ground truth and MDRE curves are overlapping, while all the other estimators are significantly worse.
because it means that the ratio is well estimated in regions where \(p\) and \(q\) have little probability mass. These results demonstrate the effectiveness of MDRE with a single auxiliary distribution whose samples overlap with those from both \(p\) and \(q\), in lieu of a chain of BDREs with up to \(K=28\) closely-packed auxiliary distributions as used by TRE. Please see Appendix C for additional results and details.
To provide further clarity into MDRE's density ratio estimation behavior, we analyze the uncertainty of its log ratio estimates using Bayesian analysis. We use a standard normal prior on the classifier parameters and obtain posterior samples with Hamiltonian Monte-Carlo. These posterior samples then yield samples of the density ratio estimates. Figure 6 shows that the high accuracy of MDRE's KL divergence estimates can be attributed to MDRE being confidently accurate around the union of the high density regions of both \(p\) and \(q\). A more detailed analysis is provided in Appendix D.
### High dimensional experiments with large MI
Following Rhodes et al. (2020), we use the MI estimation benchmark from Belghazi et al. (2018); Poole et al. (2019) to evaluate MDRE on a more challenging, higher-dimensional problem. In this task, the goal is to estimate the mutual information between a standard normal distribution and a Gaussian random variable \(x\in\mathbb{R}^{2d}\) with a block-diagonal covariance matrix where each block is \(2\times 2\) with ones on the diagonal and \(\rho\) on the off-diagonal. The correlation coefficient \(\rho\) is computed from the number of dimensions and the target mutual information \(I=-d/2\log(1-\rho^{2})\). Since this problem construction only induces higher-order discrepancies (HOD), we added an additional challenge by moving the means of the two distributions, thus additionally inducing first-order discrepancies (FOD).
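For concreteness, the benchmark's distributions can be built by inverting the stated relation for \(\rho\); a minimal sketch follows, in which pairing adjacent coordinates into the \(2\times 2\) blocks is an assumption about the layout.

```python
import numpy as np

def correlation_from_mi(target_mi, d):
    """Invert I = -(d/2) * log(1 - rho^2) for the per-block correlation rho."""
    return np.sqrt(1.0 - np.exp(-2.0 * target_mi / d))

def block_diag_cov(d, rho):
    """2d x 2d block-diagonal covariance with 2x2 blocks [[1, rho], [rho, 1]]."""
    cov = np.eye(2 * d)
    for i in range(0, 2 * d, 2):
        cov[i, i + 1] = cov[i + 1, i] = rho
    return cov

d, target_mi = 20, 20.0          # the 40-dimensional case of Table 2
rho = correlation_from_mi(target_mi, d)
rng = np.random.default_rng(0)
xp = rng.multivariate_normal(np.zeros(2 * d), block_diag_cov(d, rho), size=10_000)
xq = rng.multivariate_normal(np.zeros(2 * d), np.eye(2 * d), size=10_000)
```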
For MDRE, we model the \(h_{\theta}^{i}\) with quadratic functions of the form \(x^{T}W_{1}x+W_{2}x+b\). We use linear-mixing to construct each \(m_{k}\), where \(K=3\) or \(K=5\). In Appendix E, we provide the exact configurations for MDRE in Table 6 and explain how to choose \(m\) and \(K\) in practice.
Table 2 shows the results for each MI task averaged across 3 runs with different random seeds. MDRE outperforms all baselines in the original MI task where the means of the distribution are the same. The difference between the performance of MDRE and the baselines is particularly stark when the means are allowed to be nonzero. Only MDRE estimates the MI reasonably well while all baselines dramatically underestimate it. We further note that MDRE only uses up to 5 auxiliary distributions, lowering its compute requirements compared to TRE, which is the next best performing method and uses up to 15 auxiliary distributions for its telescoping chain.
We found that the resolution proposed by Kato and Teshima (2021) to overcome the over-fitting issue in Bregman divergence minimization-based DREs does not work well in practice. On the high-dimensional
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Dim & \(\mu_{1},\mu_{2}\) & True MI & BDRE & TRE & F-DRE & MDRE (ours) \\ \hline
40 & 0, 0 & 20 & 10.90 \(\pm\) 0.04 & 14.52 \(\pm\) 2.07 & 14.87 \(\pm\) 0.33 & 18.81 \(\pm\) 0.15 \\ & -1, 1 & 100 & 29.03 \(\pm\) 0.09 & 33.95 \(\pm\) 0.14 & 13.86 \(\pm\) 0.26 & 119.96 \(\pm\) 0.94 \\ \hline
160 & 0, 0 & 40 & 21.47 \(\pm\) 2.62 & 34.09 \(\pm\) 0.21 & 12.89 \(\pm\) 0.87 & 38.71 \(\pm\) 0.73 \\ & -0.5, 0.6 & 136 & 24.88 \(\pm\) 8.93 & 69.27 \(\pm\) 0.24 & 13.74 \(\pm\) 0.13 & 133.64 \(\pm\) 3.70 \\ \hline
320 & 0, 0 & 80 & 23.47 \(\pm\) 9.64 & 72.85 \(\pm\) 3.93 & 9.17 \(\pm\) 0.60 & 87.76 \(\pm\) 0.77 \\ & -0.5, 0.5 & 240 & 24.86 \(\pm\) 4.07 & 100.18 \(\pm\) 0.29 & 10.53 \(\pm\) 0.03 & 217.14 \(\pm\) 6.02 \\ \hline \hline \end{tabular}
\end{table}
Table 2: High-dimensional mutual information estimation task. MDRE is able to accurately estimate the MI often by a very large margins.
setup of row 2 in Table 2, where the ground-truth MI is 100 and MDRE estimates it as \(119.96\pm 0.94\), the best model from Kato and Teshima (2021) yields 1.60, significantly underestimating the true value and being more than a factor of ten smaller than the classifier-based DRE baselines. For further results, such as plots of estimated log ratio vs. ground-truth log ratio, training curves, and more, please see Appendix E.
Above, following prior work, we evaluated the methods on problems where \(p\) and \(q\) are normal distributions. To enhance this analysis, we further evaluate MDRE on the three new experimental setups below. The results are summarized in Table 3.
**Breaking Symmetry** In our high-dimensional experiments reported in Table 2, the means of the Gaussian distributions \(p\) and \(q\) were symmetric around zero in the majority of cases. In order to ensure that this symmetry did not provide an advantage to MDRE, we also evaluate it on Gaussians \(p\) and \(q\) with randomized means. The results are shown in rows 2 and 6 of Table 3. We see that MDRE continues to estimate the ground truth KL divergence accurately, demonstrating that it did not benefit unfairly from the symmetry of distributions around zero.
**Model Mismatch** In rows 4, 5, 7, and 8 of Table 3, we evaluate MDRE by replacing one or both distributions \(p\) and \(q\) with a Student-t distribution of the same scale with randomized means. For the Student-t distributions, we set the degrees of freedom as 5, 10 or 20. These experiments test how well MDRE performs when there is model mismatch, i.e. how MDRE performs using the same quadratic model that was used when \(p\) and \(q\) were set to be Gaussian with lighter tails. We find that MDRE is still able to accurately estimate the ground truth KL in these cases. We found the same to be true for other test distributions such as a Mixture of Gaussians (shown in row 3 of Table 3).
**Finite Support \(p\) and \(q\)** Finally, we test MDRE on another problem where \(p\) and \(q\) are finite support distributions that have both FOD and HOD. This is done by setting \(p\) and \(q\) to be truncated normal distributions, as shown in row 1 of Table 3. We also set \(m\) to be a truncated normal distribution with its scale set to 2 to allow it to have overlap with both \(p\) and \(q\). This setting is similar to the 1D Gaussian example illustrated in Section 3.3 and MDRE manages to estimate the ground-truth KL divergence accurately.
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline Dim & \(p\) & \(q\) & \(m\) & True KL & Est. KL \\ \hline \multirow{3}{*}{1} & Truncated Normal & Truncated Normal & Truncated Normal & & \\ & loc=-1, scale=0.1 & loc=1, scale=0.2 & loc=-1, scale=2 & 50.65 & 52.35 \\ & support=(-1.1,-0.9) & support=(-1.1,1.2) & support=(-1.1,1.2) & & \\ \hline \multirow{3}{*}{160} & Normal & Normal & & & \\ & loc=R(-.5,.5), cov=\(2\times 2\) BD & loc=R(-.5,.5), cov=\(I\) & Linear Mixing & 54.29 & 54.10 \\ \hline \multirow{3}{*}{160} & Normal & MoG: 0.5*Normal(0.9, \(I\)) & & & \\ & loc=-1, cov=\(2\times 2\) BD & + 0.5*Normal(1.1,\(I\)) & Linear Mixing & 105.60 & 98.27 \\ \hline \multirow{3}{*}{160} & Student T loc=R(-.5,.5), & Student T & & & \\ & scale=\(2\times 2\) BD, df=5 & loc=R(-.5,.5), scale=I, df=5 & Linear Mixing & 51.26 & 49.01 \\ \hline \multirow{3}{*}{320} & Student T loc=R(-.5,.5), & Student T & & & \\ & scale=\(2\times 2\) BD, df=10 & loc=R(-.5,.5), scale=I, df=10 & Linear Mixing & 53.82 & 51.03 \\ \hline \multirow{3}{*}{320} & Normal & Normal & & & \\ & loc=R(-1,1), cov=\(2\times 2\) BD & loc=R(-1,1), cov=\(I\) & Linear Mixing & 110.05 & 102.63 \\ \hline \multirow{3}{*}{320} & Student T loc=R(-1,1), & Student T & & & \\ & scale=\(2\times 2\) BD, df=10 & loc=R(-1,1), scale=\(I\), df=10 & Linear Mixing & 103.12 & 113.53 \\ \hline \multirow{3}{*}{320} & Normal & Student T & & & \\ & loc=0, cov=\(2\times 2\) BD & loc=0, scale=\(I\), df=20 & Linear Mixing & 82.02 & 83.63 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Robustness evaluation for MDRE. Here R(a,b) stands for randomized mean vector where each dimension is sampled uniformly from the interval \((a,b)\). MDRE is able to consistently estimate the ground-truth KL with high accuracy in all of the cases.
### Representation learning for SpatialMultiOmniglot
In order to benchmark MDRE on large-scale real-world data, following the setup from Rhodes et al. (2020), we apply MDRE to the task of mutual information estimation and representation learning for the SpatialMultiOmniglot problem (Ozair et al., 2019). The goal is to estimate the mutual information between \(u\) and \(v\), where \(u\) is an \(n\times n\) grid of Omniglot characters from different Omniglot alphabets and \(v\) is an \(n\times n\) grid containing (stochastic) realizations of the next characters of the corresponding characters in \(u\). After learning, we evaluate the representations from the encoder with a standard linear evaluation protocol (Oord et al., 2018). For MDRE, similarly to TRE, we utilize a separable architecture commonly used in the MI-based representation learning literature and model the unnormalized log-scores \(h_{\theta}^{i}\) with functions of the form \(g(u)^{T}Wf(v)\), where \(g\) and \(f\) are 14-layer convolutional ResNets (He et al., 2015). While this model amounts to sharing parameters across the \(h_{\theta}^{i}\), we emphasize that in all preceding examples we did not share parameters among the \(h_{\theta}^{i}\). We construct the auxiliary distributions via dimension-wise mixing.
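To make the separable parameterization concrete, the following is a minimal sketch (ours, not the authors' implementation): tiny convolutional encoders stand in for the 14-layer ResNets, each class (\(p\), \(q\), and the auxiliaries) gets its own bilinear matrix \(W_{i}\) while the encoders \(g\) and \(f\) are shared, and the model is trained with a standard multinomial cross-entropy. The image size and the number of classes are placeholders.

```python
import torch
import torch.nn as nn

class SeparableCritic(nn.Module):
    """Unnormalized log-scores h_i(u, v) = g(u)^T W_i f(v), one per class."""
    def __init__(self, n_classes, embed_dim=64):
        super().__init__()
        def encoder():  # small stand-in for a 14-layer convolutional ResNet
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, embed_dim),
            )
        self.g, self.f = encoder(), encoder()
        # One bilinear form per class; g and f are shared across classes.
        self.W = nn.Parameter(0.01 * torch.randn(n_classes, embed_dim, embed_dim))

    def forward(self, u, v):
        gu, fv = self.g(u), self.f(v)                        # (B, E) each
        return torch.einsum('be,ief,bf->bi', gu, self.W, fv)  # (B, n_classes)

# Hypothetical usage: 1x28x28 character grids, 5 classes (p, q, 3 auxiliaries).
critic = SeparableCritic(n_classes=5)
u, v = torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28)
labels = torch.randint(0, 5, (8,))        # which class each (u, v) pair came from
loss = nn.functional.cross_entropy(critic(u, v), labels)
```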
Here we compare MDRE only to the single-ratio baseline and TRE, because Rhodes et al. (2020, Figure 4) already demonstrated that TRE significantly outperforms both Contrastive Predictive Coding (CPC) (Oord et al., 2018) and Wasserstein Predictive Coding (WPC) (Ozair et al., 2019) on exactly the same task. Please refer to Appendix F for the detailed experimental setup.
As can be seen in Figure 7(a), MDRE performs better than TRE and the single-ratio baseline, exactly matching the ground-truth MI. This improvement in MI estimation is reflected in the representations. Figure 7(b) illustrates that MDRE's encoder learns representations that achieve \(\sim\)100% Omniglot character classification accuracy for both \(d=n^{2}=4,9\). On the other hand, the performance of the single-ratio estimator and of TRE (using the exact same dimension-wise mixing to construct auxiliary distributions) degrades as the complexity of the task increases, with TRE only reaching 91% and 85% for \(d=4\) and \(d=9\), respectively. All models were trained with the same encoder architecture to ensure a fair comparison.
We further studied the effect of varying \(K\) in the \(d=4\) setup. For \(K=1\), we aggregate all the dimension-wise mixed samples into a single class, whereas for \(K=3\) we separate them into their respective classes (corresponding to the number of dimensions mixed). This effect is illustrated in Figure 7(c). In line with the findings of Ma and Collins (2018), increasing \(K\) not only helps MDRE reach the ground-truth MI, but also improves the quality of the representations, raising test classification accuracy from 86.7% to 100%.
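The numpy sketch below shows one plausible reading of the dimension-wise mixing used to build the auxiliary classes; the exact mixing schedule in the paper may differ, so this is only meant to illustrate how mixed samples can be grouped into \(K\) classes according to the number of dimensions taken from \(q\).

```python
import numpy as np

def dimension_wise_mix(x_p, x_q, n_mix):
    """Copy the first `n_mix` coordinates of q-samples into p-samples
    (one plausible form of dimension-wise mixing)."""
    mixed = x_p.copy()
    mixed[:, :n_mix] = x_q[:, :n_mix]
    return mixed

rng = np.random.default_rng(0)
x_p = rng.normal(size=(128, 4))   # samples from p (d = 4, as in the K study)
x_q = rng.normal(size=(128, 4))   # samples from q
# K = 3: keep mixed samples in separate classes, one per number of mixed dims.
aux_classes = [dimension_wise_mix(x_p, x_q, k) for k in (1, 2, 3)]
# K = 1: aggregate all mixed samples into a single auxiliary class.
aux_single = np.concatenate(aux_classes, axis=0)
```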
## 5 Discussion
In this work, we presented the multinomial logistic regression based density ratio estimator (MDRE), a new method for density ratio estimation that showed better finite-sample (non-asymptotic) performance in our simulations than current state-of-the-art methods. We showed that it addresses the distribution-shift sensitivity of the recent method by Rhodes et al. (2020). MDRE works by introducing auxiliary distributions that have overlapping support with the numerator and denominator distributions of the ratio. It then trains a multinomial logistic regression model to estimate the density ratio. We demonstrated that MDRE is both theoretically grounded and empirically strong, and that it sets a new state of the art for high-dimensional density ratio estimation problems.
Figure 7: SpatialMultiOmniglot representation learning results. Plot (a) shows the MI estimated by the three methods; MDRE estimates the ground-truth MI very accurately. Plot (b) shows the resulting classification accuracy, and plot (c) the impact of varying the number of auxiliary distributions on MI estimation with MDRE.
However, there are some limitations. First, while the ratio was well estimated in our empirical studies, we do not provide any bounds on the estimate, meaning that estimated KL divergences or mutual information values may be over- or underestimated. Second, the choice of the auxiliary distribution \(m\) is an important consideration that significantly impacts the performance of MDRE. While we demonstrate the efficacy of three schemes for constructing the auxiliary distribution, our empirical study is by no means exhaustive. We hope to address these issues in future work, including the development of learning-based approaches to auxiliary distribution construction.
|
2310.12003
|
Gromov-Thurston manifolds and anti-de Sitter geometry
|
We consider hyperbolic and anti-de Sitter (AdS) structures on $M\times
(0,1)$, where $M$ is a $d$-dimensional Gromov-Thurston manifold. If $M$ has
cone angles greater than $2\pi$, we show that there exists a "quasifuchsian"
(globally hyperbolic maximal) AdS manifold such that the future boundary of the
convex core is isometric to $M$. When $M$ has cone angles less than $2\pi$,
there exists a hyperbolic end with boundary a concave pleated surface isometric
to $M$.
Moreover, in both cases, if $M$ is a Gromov-Thurston manifold with $2k$
pieces (as defined below), the moduli space of quasifuchsian AdS structures
(resp. hyperbolic ends) satisfying this condition contains a submanifold of
dimension $2k-3$.
When $d=3$, the moduli space of quasifuchsian AdS (resp. hyperbolic)
manifolds diffeomorphic to $M\times (0,1)$ contains a submanifold of dimension
$2k-2$, and extends up to a "Fuchsian" manifold, that is, an AdS (resp.
hyperbolic) warped product of a closed hyperbolic manifold by~$\R$.
We use this construction of quasifuchsian AdS manifolds to obtain new compact
quotients of $\O(2d,2)/\U(d,1)$. The construction uses an explicit
correspondence between quasifuchsian $2d+1$-dimensional AdS manifolds and
compact quotients of $\O(2d,2)/\U(d,1)$ which we interpret as the space of
timelike geodesic Killing fields of $\AdS^{2d+1}$.
|
Daniel Monclair, Jean-Marc Schlenker, Nicolas Tholozan
|
2023-10-18T14:33:14Z
|
http://arxiv.org/abs/2310.12003v1
|
# Gromov-Thurston manifolds and anti-de Sitter geometry
###### Abstract.
We consider hyperbolic and anti-de Sitter (AdS) structures on \(M\times(0,1)\), where \(M\) is a \(d\)-dimensional Gromov-Thurston manifold. If \(M\) has cone angles greater than \(2\pi\), we show that there exists a "quasifuchsian" (globally hyperbolic maximal) AdS manifold such that the future boundary of the convex core is isometric to \(M\). When \(M\) has cone angles less than \(2\pi\), there exists a hyperbolic end with boundary a concave pleated surface isometric to \(M\).
Moreover, in both cases, if \(M\) is a Gromov-Thurston manifold with \(2k\) pieces (as defined below), the moduli space of quasifuchsian AdS structures (resp. hyperbolic ends) satisfying this condition contains a submanifold of dimension \(2k-3\).
When \(d=3\), the moduli space of quasifuchsian AdS (resp. hyperbolic) manifolds diffeomorphic to \(M\times(0,1)\) contains a submanifold of dimension \(2k-2\), and extends up to a "Fuchsian" manifold, that is, an AdS (resp. hyperbolic) warped product of a closed hyperbolic manifold by \(\mathbb{R}\).
We use this construction of quasifuchsian AdS manifolds to obtain new compact quotients of \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\). The construction uses an explicit correspondence between quasifuchsian \(2d+1\)-dimensional AdS manifolds and compact quotients of \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\) which we interpret as the space of timelike geodesic Killing fields of \(\mathrm{AdS}^{2d+1}\).
J.-M. S. was partially supported by FNR project O20/14766753.
4.5 Equilateral polygons with a central symmetry
* 5 Geometrization of Gromov-Thurston manifolds
* 5.1 Hipped hypersurfaces in \(\mathrm{AdS}^{d+1}\) and polygons in \(\mathrm{dS}^{2}\)
* 5.2 Geometrization of Gromov-Thurston cone-manifolds for \(a>1\)
* 5.3 Hipped hypersurfaces in \(\mathbb{H}^{d+1}\) and polygons in \(\mathrm{S}^{2}\)
* 5.4 Geometrization of Gromov-Thurston cone-manifolds for \(a<1\)
* 6 Fuchsian deformations in dimension 3+1
* 6.1 The Hodgson-Kerckhoff deformation theorem
* 6.2 Deformations towards the Fuchsian locus
* 6.3 Deformation to Fuchsian manifolds
* 6.4 Integration of bending deformations
* 6.5 Regularity of the map \(p\mapsto N_{p}\)
* 7 Initial singularities
* 7.1 Dualities
* 7.2 Initial singularities of de Sitter spacetimes
* 7.3 Initial singularities of AdS spacetimes
* 8 Compact Clifford-Klein forms
* 8.1 From GHC manifolds to compact quotients
* 8.2 Geodesic Killing fields
* 8.3 Killing fields orthogonal to a strongly convex hypersurface
* 8.4 Fiber bundle over convex Cauchy hypersurfaces
## 1. Introduction and main results
Gromov and Thurston constructed in [21] families of closed manifolds of dimension at least 4 which carry negatively curved Riemannian metrics but do not admit any locally homogeneous metric.
Roughly speaking, these manifolds are obtained by taking ramified covers and quotients of certain closed hyperbolic manifolds which admit a dihedral group of symmetries generated by two reflections along totally geodesic hypersurfaces, see Section 2. In particular, they carry a natural hyperbolic metric with a cone singularity of rational angle along a totally geodesic submanifold of codimension 2. Using arguments based on Mostow's rigidity in codimension 1, Gromov and Thurston prove that they cannot carry a smooth hyperbolic metric. However, their very geometric origin suggests that one might endow them with geometric structures of a "weaker" type.
Indeed, Kapovich proved in [28] that Gromov-Thurston manifolds with cone singularity of angle less than \(2\pi\) carry a convex projective structure, namely, that they are quotients of a convex open subset of a projective space by a discrete group of transformations. In particular, their fundamental group admits quasi-isometric embeddings in a real linear group.
In a similar spirit, we will show here that, if \(M\) is a Gromov-Thurston manifold with cone angle larger than \(2\pi\), then \(M\times\mathbb{R}\) carries a _globally hyperbolic maximal Cauchy compact anti-de Sitter structure_, abbreviated below as a GHMC AdS structure. Following a recent trend, we will also use the term "quasifuchsian AdS manifold", which brings to mind the analogy between those AdS manifolds and quasifuchsian hyperbolic manifolds (see [34, 1]). This provides exotic examples of such manifolds and answers Questions 5.1 and 5.2 of the survey [3] in the negative. Note that counter-examples to these questions were also constructed by Lee-Marquis [32] in dimensions up to \(8+1\) using reflection groups.
By the work of Gueritaud-Guichard-Kassel-Wienhard [23], our construction of exotic AdS quasifuchsian groups in dimension \(2d+1\) also provides examples of exotic compact quotients of the homogeneous spaces \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\). These are, to our knowledge, the first examples of discrete groups acting properly discontinuously and cocompactly on a homogeneous space of
reductive type which are not isomorphic to a uniform lattice in some other Lie group. We will describe an explicit geometric relation between these two objects in Section 8.
In dimension \(3\), though Gromov-Thurston's construction still makes sense, their manifolds also carry a smooth hyperbolic metric, according (for instance) to Perelman's geometrization theorem. In that case, using Hodgson-Kerckhoff's results on conical hyperbolic metrics in dimension \(3\), we will construct a family of GHMC AdS structures on \(M\times\mathbb{R}\) that interpolates between the "Fuchsian structure" and our general construction. We also show that this family "integrates" linear combinations of infinitesimal "bending" deformations of the representation \(i:\pi_{1}(M)\to\mathrm{SO}(3,1)\) within \(\mathrm{SO}(3,2)\).
### AdS structures associated to Gromov-Thurston manifolds
Recall that a Lorentzian manifold is called _globally hyperbolic Cauchy compact_ if it admits a compact _Cauchy hypersurface_, i.e. a topological hypersurface intersecting any inextendible timelike curve at a single point. It is further called _maximal_ (abbreviated GHMC) if it is maximal for the inclusion among such spaces.
If \(N\) is a GHMC _anti-de Sitter manifold_ (i.e. a GHMC Lorentzian manifold of constant sectional curvature \(-1\)) of dimension \(d+1\), then \(N\) is the quotient of a convex domain of the _anti-de Sitter space_\(\mathrm{AdS}^{d+1}\) by a discrete subgroup \(\Gamma\) of \(\mathrm{Isom}(\mathrm{AdS}^{d+1})\simeq\mathrm{O}(d,2)\) (see [34, 4]). If furthermore \(N\) admits a _convex_ Cauchy hypersurface, then the group \(\Gamma\) is Gromov-hyperbolic, its embedding into \(\mathrm{O}(d,2)\subset\mathrm{GL}(d+2,\mathbb{R})\) has a refined discreteness property called \(P_{1}\)_-Anosov_ (see Theorem 3.29) and, by a theorem of Barbot [5], any continuous deformation of the inclusion in \(\mathrm{Hom}(\Gamma,\mathrm{O}(d,2))\) is again the holonomy of a GHMC AdS manifold homeomorphic to \(N\). In that case, we will call \(N\) a _quasifuchsian AdS manifold_. We give more details on AdS geometry in Section 3.
We recall in Section 2 the construction of Gromov-Thurston cone-manifolds. The main point that we will use here is that those are cone-manifolds of dimension \(d\) (for \(d\geq 3\)), obtained by gluing \(2k\) isometric "pieces" along a manifold of dimension \(d-2\). Each piece is a hyperbolic manifold with "corner", whose boundary is composed of two totally geodesic hypersurfaces meeting along a manifold of codimension \(2\) with an interior dihedral angle of \(\pi/n\). Gromov-Thurston manifolds thus carry a hyperbolic metric with a cone singularity along a totally geodesic submanifold of codimension \(2\), and the angle around this cone singularity can be smaller or greater than \(2\pi\), depending on whether \(k<n\) or \(k>n\).
The main result of this paper is the following:
**Theorem 1.1**.: _Let \((M,g)\) be a \(d\)-dimensional Gromov-Thurston cone-manifold with cone angle larger than \(2\pi\) at the singularity, \(d\geq 3\). Then there is a quasifuchsian AdS spacetime \(N\) of dimension \(d+1\) for which the future boundary of the convex core is isometric to \((M,g)\)._
There is some flexibility in our construction which allows to construct a non-trivial moduli space of such quasifuchsian AdS spacetimes.
**Theorem 1.2**.: _Let \(M\) be a \(d\)-dimensional Gromov-Thurston cone-manifold with \(2k\) pieces, with cone angle larger than \(2\pi\), \(d\geq 3\). Then there is a \(2k-3\) parameter family of quasifuchsian AdS manifolds for which the future boundary of the convex core is isometric to \(M\)._
Theorem 1.1 is partly motivated by Questions 5.1 and 5.2 of the survey [3]. When this survey was written, the only known examples of GHMC AdS manifolds were either deformations of _Fuchsian_ AdS manifolds - those admitting a totally geodesic Cauchy hypersurface - or quotients of an open convex domain of \(\mathrm{AdS}^{d+1}\) by a uniform lattice in \(\mathrm{O}(p,1)\times\mathrm{O}(q,1)\subset\mathrm{O}(d,2)\), \(p+q=d\). In particular, such manifolds are always homeomorphic to \(M\times\mathbb{R}\) with \(M\) a compact quotient of \(\mathbb{H}^{p}\times\mathbb{H}^{q}\) by a uniform lattice. Question 5.1 of the survey [3] asked whether these are all the possible topologies, while Question 5.2 asked whether every GHMC manifold could be deformed to one of these standard ones. In the same direction, Barbot-Merigot [6, Question 8.7] asked whether any AdS quasifuchsian manifold is homeomorphic to the product of a hyperbolic manifold with \(\mathbb{R}\). Note that the answer to these questions is known to be positive in dimension \(2+1\) by the work of Mess (see [34]).
Question 5.1 (and thus Question 5.2) was answered negatively by Lee and Marquis [32] in dimension \(4+1\) to \(8+1\): they constructed Coxeter reflection groups which are not hyperbolic
lattices but admit AdS quasifuchsian representations. However, as often with reflection groups, these can only exist up to a certain dimension. In contrast, Theorem 1.1 provides a negative answer to Question 5.1 in every dimension \(d+1\geq 4+1\). Indeed, Gromov-Thurston manifolds of dimension \(d\geq 4\) are not diffeomorphic to quotients of \(\mathbb{H}^{p}\times\mathbb{H}^{q}\). In fact, we have the stronger result:
**Theorem 1.3** (Gromov-Thurston).: _The fundamental group of a Gromov-Thurston manifold \(M\) of dimension \(d\geq 4\) is not commensurable to a lattice in any Lie group._
_Remark 1.4_.: One easily sees that \(\pi_{1}(M)\) cannot be a lattice in a Lie group with non-trivial solvable radical. Since \(\pi_{1}(M)\) surjects onto a uniform hyperbolic lattice, Margulis superrigidity implies that \(\pi_{1}(M)\) is not a lattice in a higher rank semisimple Lie group either.
With arguments involving Mostow's rigidity, Gromov and Thurston prove that it is not a lattice in \(\operatorname{Isom}(\mathbb{H}^{d})\). They also construct a Riemannian metric on \(M\) with sectional curvature pinched between \(-1-\epsilon\) and \(-1\), which cannot exist on quotients of other rank \(1\) symmetric spaces by a result of Yau-Zheng [42]. To construct such a metric, one needs the additional assumption that the "injectivity radius" of the singular locus is sufficiently large (so that one has enough room to smoothen the singular hyperbolic metric).
However, Giralt proved in her thesis [19] that \(\pi_{1}(M)\) is always cubulable and virtually special. By a theorem of Delzant-Py [17], it is thus not isomorphic to a complex hyperbolic lattice, without any additional geometric assumption. Another consequence is that \(\pi_{1}(M)\) has the Haagerup property (see [14]). This rules out the possibility that \(\pi_{1}(M)\) be a lattice in the remaining rank \(1\) simple Lie groups \(\operatorname{Sp}(n,1)\) and \(\operatorname{F}_{4}^{-20}\), which have Kazhdan's property (T).
As another consequence of Theorem 1.1, one obtains the existence of nice linear representations of fundamental groups of Gromov-Thurston cone-manifolds with cone angle larger than \(2\pi\).
**Corollary 1.5**.: _Let \(M\) be a Gromov-Thurston cone-manifold of dimension \(d\) with cone angle larger than \(2\pi\). Then \(\pi_{1}(M)\) admits a quasi-isometric embedding into \(\operatorname{O}(d,2)\). In particular, \(\pi_{1}(M)\) is linear._
_Remark 1.6_.: Kapovich, on the other hand, constructed _convex projective structures_ on Gromov-Thurston cone-manifolds with cone angle less than \(2\pi\)[28]. Combining his result with ours, we get that fundamental groups of \(d\)-dimensional Gromov-Thurston manifolds embed quasi-isometrically in \(\operatorname{PGL}(d+2,\mathbb{R})\) without any angle condition.
_Remark 1.7_.: In fact, both our representations and those of Kapovich satisfy a stronger form of quasi-isometric property called \(P_{1}\)_-Anosov property_. We refer to [29, 16, 23] for more details on \(P_{1}\)-Anosov representations.
_Remark 1.8_.: Giralt's cubulation theorem combined with the work of Haglund-Wise [24] implies that fundamental groups of Gromov-Thurston manifolds virtually embed into right-angled Artin groups and are thus linear. These arguments, however, give little control on the dimension of a faithful linear representation.
Yet another consequence of Theorem 1.1 is that a quasifuchsian AdS manifold of dimension \(d+1\) does not always contain a Cauchy hypersurface whose geometry is intrinsically locally isometric to the hyperbolic space \(\mathbb{H}^{d}\) when \(d\geq 3\). The fact that this is true for \(d=2\) was crucial in the proof of the rigidity theorem in [20], stating that the _limit set_ of a quasifuchsian AdS manifold of dimension \(2+1\) has _Lorentzian Hausdorff dimension_ smaller than \(1\), with equality only in the Fuchsian case. Such a statement is believed to be true in higher dimension, but Theorem 1.1 confirms that the techniques used in [20] cannot be used in this case.
### Exotic compact Clifford-Klein forms
A _compact Clifford-Klein form_ of a homogeneous space \(G/H\) is a quotient of \(G/H\) by a discrete subgroup \(\Gamma\subset G\) acting properly discontinuously and cocompactly on \(G/H\).
Gueritaud-Guichard-Kassel-Wienhard remarked in [23] that AdS quasifuchsian subgroups \(\Gamma\) of \(\operatorname{O}(2d,2)\) act properly discontinuously and cocompactly on the pseudo-Riemannian symmetric space \(\operatorname{O}(2d,2)/\mathrm{U}(d,1)\). Hence, we obtain as a direct consequence of Theorem 1.1:
**Corollary 1.9**.: _Let \(M\) be a Gromov-Thurston cone-manifold of dimension \(2d\) with cone angle larger than \(2\pi\). Then there exists a faithful representation \(\rho:\pi_{1}(M)\to\mathrm{O}(2d,2)\) such that \(\rho(\pi_{1}(M))\) acts properly discontinuously and cocompactly on \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\)._
Very few pseudo-Riemannian symmetric spaces \(G/H\) are known to admit compact quotients that are _non-standard_, i.e. where \(\Gamma\) is not commensurable to a lattice in a connected subgroup of \(G\). The space \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\) is one of them. So far, however, these non-standard quotients were obtained as deformations of the standard ones (corresponding to AdS Fuchsian manifolds). Together with the work of Lee-Marquis [32], Corollary 1.9 thus provides the first _exotic_ examples of compact Clifford-Klein forms of \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\) (i.e. which are not deformations of standard ones). In fact, to our knowledge, these are the first examples of a compact Clifford-Klein form \(\Gamma\backslash G/H\) for which \(\Gamma\) is not virtually isomorphic to a lattice in some Lie group.
Gueritaud-Guichard-Kassel-Wienhard's argument to associate compact Clifford-Klein forms to AdS quasifuchsian manifolds is rather indirect. In Section 8, we provide a direct, geometric explanation of this correspondence, which also allows for a more precise analysis of the Clifford-Klein forms which are obtained. We start by interpreting \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\) as the space of timelike unit geodesic Killing vector fields in \(\mathrm{AdS}^{2d+1}\). The correspondence follows from the fact that, given a smooth, complete, strictly convex Cauchy hypersurface \(\mathcal{H}\) in a quasifuchsian AdS manifold of dimension \(2d+1\), any unit timelike geodesic Killing vector field on \(\mathrm{AdS}^{2d+1}\) is orthogonal to the lift of \(\mathcal{H}\) to \(\mathrm{AdS}^{2d+1}\) at a unique point. This gives a natural projection of the Clifford-Klein form to the Cauchy hypersurface.
**Theorem 1.10**.: _Let \(N\) be an AdS quasifuchsian manifold of dimension \(2d+1\) with fundamental group \(\Gamma\subset\mathrm{O}(2d,2)\) and \(\mathcal{H}\) a Cauchy hypersurface. Then there exists a smooth fibration \(\pi:\Gamma\backslash\mathrm{O}(2d,2)/\mathrm{U}(d,1)\to\mathcal{H}\) whose fibers are translates of the compact homogeneous subspace \(\mathrm{O}(2d)/\mathrm{U}(d)\)._
This theorem confirms, in the case of \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\), a general conjecture formulated by the third author in [39, Section 8].
### Hyperbolic ends associated to Gromov-Thurston manifolds
Our anti-de Sitter geometrization of Gromov-Thurston manifolds with cone angle larger than \(2\pi\) has a hyperbolic counterpart when the cone angle is smaller than \(2\pi\). In that case, one can realize a Gromov-Thurston manifold \(M\) of dimension \(d\) as the boundary of a _hyperbolic end_ of dimension \(d+1\).
**Definition 1.11**.: A _hyperbolic end_ of dimension \(d+1\) is a manifold with boundary of the form \(M\times[0,+\infty)\) with \(M\) closed of dimension \(d\) equipped with a hyperbolic metric, such that a neighbourhood of \(M\times\{0\}\) is developed to the exterior of a convex hypersurface in \(\mathbb{H}^{d+1}\), and which is maximal (in the sense of inclusion) under this condition.
A simple example of a hyperbolic end is provided by the closure of a connected component of the complement of the convex core in a quasifuchsian hyperbolic \(3\)-dimensional manifold.
The following theorems are the hyperbolic counterparts of Theorems 1.1 and 1.2.
**Theorem 1.12**.: _Let \(M\) be a \(d\)-dimensional Gromov-Thurston cone-manifold with cone angle smaller than \(2\pi\) at the singularity. Then there exists a hyperbolic end \(N\) of dimension \(d+1\) for which the boundary is isometric to \(M\)._
Moreover, if \(k\geq 2\), then there is a non-trivial moduli space of deformations of the hyperbolic ends realizing \(M\) as their concave pleated boundary.
**Theorem 1.13**.: _Let \(M\) be a Gromov-Thurston cone-manifold with \(2k\) pieces, with cone angles smaller than \(2\pi\). Then there is a \(2k-3\) parameter family of hyperbolic ends for which the concave pleated boundary is isometric to \(M\)._
_Remark 1.14_.: The construction of hyperbolic ends associated to Gromov-Thurston manifolds is already suggested in the initial paper of Gromov-Thurston and was a starting point for Kapovich's investigation of the geometry of these manifolds. Though it might be considered folklore knowledge, its details do not seem to appear in the literature, hence our decision to include them here.
A hyperbolic end \(N=M\times[0,+\infty)\) admits a conformal compactification obtained by adding a "boundary at infinity" \(\partial_{\infty}N=M\times\{+\infty\}\). This boundary admits an atlas with charts in \(\partial_{\infty}\mathbb{H}^{d+1}\simeq\mathbb{S}^{d}\) and transition maps in \(\mathrm{O}(d+1,1)\simeq\mathrm{M\ddot{o}b}(\mathbb{S}^{d})\), providing \(M\) with a conformally flat structure. We thus have the following:
**Corollary 1.15**.: _Let \(M\) be a \(d\)-dimensional Gromov-Thurston cone-manifold with cone angle smaller than \(2\pi\). Then \(M\) admits a conformally flat metric._
_Remark 1.16_.: One can show that different hyperbolic ends yield different conformal structures (see Theorem 3.45 and [31]). Hence we also have a \(2k-3\)-dimensional moduli space of flat conformal structures on \(M\).
Finally, each flat conformal structure on \(M\) is also the conformal boundary at infinity of a unique maximal globally hyperbolic _de Sitter structure_ on \(M\times\mathbb{R}\), which is in some sense "dual" to the hyperbolic end. Hence we get:
**Corollary 1.17**.: _Let \(M\) be a \(d\)-dimensional Gromov-Thurston cone-manifold with cone angle smaller than \(2\pi\). Then there exists a GHMC de Sitter spacetime diffeomorphic to \(M\times\mathbb{R}\)._
We refer to Scannell's thesis [36] for more details on the correspondence between hyperbolic ends, conformally flat manifolds and GHMC de Sitter spacetimes.
An important difference between the AdS and hyperbolic settings is the following: for GHMC AdS manifolds \(M\times\mathbb{R}\), the fact that the universal cover \(\widetilde{M}\) is developed to a spacelike hypersurface forces this development to be an embedding and the holonomy \(\rho:\pi_{1}(M)\to\mathrm{O}(d,2)\) to be discrete and faithful. In contrast, if \(M\times[0,+\infty)\) is a hyperbolic end, the development of \(\widetilde{M}\) need not be an embedding and the holonomy representation is not in general discrete and faithful. Gromov and Thurston remark in their paper that, for cone singularities sufficiently close to \(2\pi\) and assuming that the singular locus of \(M\) has a sufficiently large injectivity radius, one could construct hyperbolic ends for which \(\widetilde{M}\) is quasi-isometrically embedded in \(\mathbb{H}^{d+1}\) and the holonomy \(\rho:\pi_{1}(M)\to\mathrm{O}(d+1,1)\) is convex-cocompact. We will not prove this result which is beyond the scope of this paper.
### Deformations in dimension \(3+1\)
In this section we consider the case \(d=3\). In contrast with higher dimensions, Gromov-Thurston manifolds of dimension \(3\) do carry smooth hyperbolic structures. Thanks to the rich deformation theory for hyperbolic cone-manifolds in this dimension [25], we can provide more precise results and show that our AdS spacetimes with Gromov-Thurston Cauchy hypersurfaces (as defined in Section 2) can be deformed continuously to Fuchsian AdS spacetimes.
**Theorem 1.18**.: _Let \(M\) be a 3-dimensional Gromov-Thurston cone-manifold with \(2k\) pieces, with cone angle larger than \(2\pi\). Then there is a connected \(2k-2\) parameter family of quasifuchsian AdS spacetimes diffeomorphic to \(M\times(0,1)\) containing a \(2k-3\)-dimensional family of spacetimes with future boundary of the convex core isometric to \(M\) and a point corresponding to a Fuchsian AdS spacetime._
**Theorem 1.19**.: _Let \(M\) be a 3-dimensional Gromov-Thurston cone-manifold with \(2k\) pieces, with cone angle smaller than \(2\pi\). Then there is a connected \(2k-2\) parameter family of hyperbolic ends diffeomorphic to \(M\times(0,\infty)\) containing a \(2k-3\)-dimensional family of hyperbolic ends with pleated boundary isometric to \(M\) and a point corresponding to a Fuchsian end._
### Integrating bending deformations
In dimension \(d=3\), Theorems 1.18 and 1.19 can be interpreted in terms of integration of infinitesimal deformations of the holonomy representation of a hyperbolic 3-dimensional manifold in \(\mathrm{O}(3,2)\) and \(\mathrm{O}(4,1)\) respectively.
When a \(d\)-dimensional hyperbolic manifold \(M\) contains a 2-sided totally geodesic hypersurface, then its holonomy representation can be deformed into larger Lie groups such as \(\mathrm{O}(d+1,1)\), \(\mathrm{O}(d,2)\) or \(\mathrm{GL}(d+1,\mathbb{R})\), via some generalized "bending" (see for instance [27]). It was already noted in [27] that if \(M\) contains \(r\) such disjoint hypersurfaces, then the deformation space has dimension at least \(r\). On the other hand, when the hypersurfaces intersect, for \(d\geq 3\), the deformation space
might be singular, and some infinitesimal deformations given by sums of infinitesimal bending deformations along two intersecting hypersurfaces may not be integrated into actual deformations.
Theorems 1.18 and 1.19 show that, in dimension \(d=3\), a significant degree of flexibility exists to deform Fuchsian representations of Gromov-Thurston manifolds.
**Theorem 1.20**.: _Let \(M\) be a 3-dimensional Gromov-Thurston manifold with \(2k\) pieces and no cone singularity (that is, total angle \(2\pi\) at the gluing curve). Then there exists a neighbourhood \(U\) of \(0\) in \(\mathbb{R}^{k}\) such that, for every \((\theta_{1},\cdots,\theta_{k})\in U\), there exists a quasifuchsian AdS \(4\)-manifold \(N_{\theta}\) with a Cauchy hypersurface which is a Gromov-Thurston manifold homeomorphic to \(M\), consisting of \(2k\) totally geodesic pieces pleated at angles \(t\theta_{1},t\theta_{2},\cdots,t\theta_{k},t\theta_{1},\cdots,t\theta_{k}\) (in cyclic order around the singular curve)._
**Theorem 1.21**.: _Let \(M\) be a 3-dimensional Gromov-Thurston manifold, with \(2k\) pieces and no cone singularity (that is, total angle \(2\pi\) at the gluing curve). Then there exists a neighbourhood \(U\) of \(0\) in \(\mathbb{R}^{k}\) such that, for every \((\theta_{1},\cdots,\theta_{k})\in U\), there exists a quasifuchsian hyperbolic \(4\)-manifold \(N_{\theta}\) containing a hypersurface which is a Gromov-Thurston manifold homeomorphic to \(M\), consisting of \(2k\) totally geodesic pieces pleated at angles \(t\theta_{1},t\theta_{2},\cdots,t\theta_{k},t\theta_{1},\cdots,t\theta_{k}\) (in cyclic order around the singular curve)._
The holonomy of \(N_{\theta}\) is a representation \(\rho_{\theta}:\pi_{1}(M)\to\operatorname{SO}_{\circ}(3,2)\) or \(\operatorname{SO}_{\circ}(4,1)\). When all but one of the \(\theta_{i}\) vanish, the representation \(\rho_{\theta}\) corresponds to Johnson-Millson "bending deformation" of \(\rho_{0}:\pi_{1}(M)\to\operatorname{SO}_{\circ}(3,1)\). Theorem 1.20 gives a geometric construction of representations which combine several bendings along intersecting hypersurfaces. In particular, it shows that linear combinations of infinitesimal bendings can be integrated (see Section 6.4). Those deformations should be compared to the stamping deformations defined by Apanasov [2] (see also [7]) which seem to be closely related (in the hyperbolic setting).
### Initial singularity of GHMC spacetimes
The results presented above, concerning the induced metrics on the future boundary of the convex cores of quasifuchsian AdS spacetimes, or on the concave boundary of hyperbolic ends, have consequences for the possible geometry of the initial singularity of GHMC AdS or dS spacetimes.
In dimension \(2+1\), the geometry of the initial singularity of an AdS or dS spacetime is rather well understood, thanks to the work of Mess [34, 1]. The initial singularity is the quotient of a real tree by an action of the fundamental group of the spacetime. In some cases, the initial singularity is a finite graph (the quotient of a simplicial tree by the fundamental group of the manifold) but this is rather exceptional.
In higher dimension, however, the geometric structure of the initial singularity is much more mysterious. Here we provide examples of spacetimes for which the initial singularity is remarkably simple. We do not know to what extent this phenomenon is "generic", or whether "generic" spacetimes in dimension \(d+1\), for \(d\geq 3\), have a much more intricate initial singularity.
**Theorem 1.22**.: _Let \(M\) be a \(d\)-dimensional Gromov-Thurston manifold with \(2k\) pieces, with cone angles larger than \(2\pi\) at the singularities. There is a \(2k-3\)-dimensional family of quasifuchsian AdS spacetimes of dimension \(d+1\) with Cauchy hypersurfaces diffeomorphic to \(M\) for which the initial singularity is a 2-dimensional cell complex, with exactly one 2-dimensional cell._
**Theorem 1.23**.: _Let \(M\) be a \(d\)-dimensional Gromov-Thurston manifold with \(2k\) pieces, with cone angles smaller than \(2\pi\) at the singularities. There is a \(2k-3\)-dimensional family of GHMC \(\operatorname{dS}\) spacetimes of dimension \(d+1\) with Cauchy hypersurfaces diffeomorphic to \(M\) for which the initial singularity is a 2-dimensional cell complex, with exactly one 2-dimensional cell._
Those two statements follow from the description of the geometry of the pleated boundary (resp. the future boundary of the convex core) for the hyperbolic ends (resp. AdS manifolds) appearing in Theorem 1.13 and Theorem 1.2, through the duality between hyperbolic and de Sitter space, resp. between the AdS space and itself. This correspondence is briefly recalled in Section 7, where Theorem 1.22 and Theorem 1.23 are proven.
### Outline of the paper
In Section 2, we recall the construction of Gromov-Thurston manifolds. We then recall in Section 3 a number of background definitions and statements that are needed, such as the key definitions of AdS geometry, hyperbolic ends, and properties of hypersurfaces in hyperbolic and AdS manifolds. We explain in particular that the data of a quasifuchsian AdS manifold with Cauchy hypersurface homeomorphic to \(M\) is equivalent to the data of a _space-like embedding structure_ on \(M\) i.e. an atlas of local embeddings of \(M\) as spacelike hypersurfaces in AdS, with coordinate changes in \(\operatorname{Isom}(\operatorname{AdS})\). When \(M\) is a Gromov-Thurston manifold, one is then reduced to prescribing a way to "bend" the hyperbolic pieces in the anti-de Sitter space.
Such bendings are parametrized by their link along the codimension 2 singularity, which is a spacelike polygon in the de Sitter space of dimension 2. Section 4 focuses on the geometry of polygons in the sphere and the de Sitter plane. It starts with a characterization of the infinitesimal variations of lengths and angles of spherical and de Sitter polygons, and further describes various families of polygons (equilateral polygons, polygons with a central symmetry), which give us the material to prove our main theorems.
In Section 5 we prove the main results of the paper concerning the geometrization of Gromov-Thurston manifolds in dimension \(d\), for \(d\geq 4\), while Section 6 contains the proofs of the main results for Gromov-Thurston manifolds in dimension 3. Finally, Section 7 is focused on the initial singularities of de Sitter and anti-de Sitter spacetimes, and Section 8 on the applications to compact Clifford-Klein forms of \(\operatorname{O}(2d,2)/\mathrm{U}(d,1)\).
## 2. Gromov-Thurston (cone-)manifolds
Here we describe a fairly general version of the Gromov-Thurston construction, providing us with a family of cone-manifolds that we will "geometrize", in the sense that we will show that they occur as either the future boundary of the convex core of a quasifuchsian AdS manifold, or the concave boundary of a hyperbolic end.
### Hyperbolic cone-manifolds
We first recall the definition of a hyperbolic cone-manifold given by Thurston [40, Section 3], [10, Def. 3.1]. The definition is recursive in the dimension; we state it here for hyperbolic, Euclidean and spherical cone-manifolds.
* A one-dimensional cone-manifold is simply a one-dimensional Riemannian manifold.
* For \(d\geq 2\), a \(d\)-dimensional spherical (resp. hyperbolic, Euclidean) cone-manifold \(M\) is a compact metric space, together with a singular metric in which every point has a neighborhood isometric to \(N\times[0,\epsilon]\) equipped with the singular metric \(dr^{2}+\sin^{2}(r)h\) (resp. \(dr^{2}+\sinh^{2}(r)h\), \(dr^{2}+r^{2}h\)) where \(N\) is a spherical cone-manifold of dimension \(d-1\) equipped with the singular metric \(h\).
For instance, a 2-dimensional hyperbolic cone-manifold - also called hyperbolic surface with cone singularities - contains a finite set of singular points. It is hyperbolic outside of those singular points, and each singular point has a neighborhood isometric to a "model" which only depends on one parameter, an "angle" which is the length of the 1-dimensional manifold appearing in the definition.
### Dihedral hyperbolic manifolds
Gromov-Thurston's construction starts with the data of a closed oriented hyperbolic manifold \(M\) of dimension \(d\) and two isometric involutions \(\sigma_{1}\) and \(\sigma_{2}\) of \(M\) with the following properties:
* The fixed loci of \(\sigma_{1}\) and \(\sigma_{2}\) are connected embedded totally geodesic hypersurfaces,
* The intersection \(S=\operatorname{Fix}\sigma_{1}\cap\operatorname{Fix}\sigma_{2}\) is connected,
* Fix \(\sigma_{1}\) and Fix \(\sigma_{2}\) intersect along \(S\) with an angle \(\frac{\pi}{n}\).
* Fix \(\sigma_{1}\) and Fix \(\sigma_{2}\) are homologically trivial.
The existence of manifolds \(M\) of any dimension \(d\geq 2\) with those properties is proved in [21]. Under these conditions, \(\sigma_{1}\) and \(\sigma_{2}\) generate a dihedral group of isometries of \(M\) of order \(2n\), denoted \(D_{n}\). We denote by \(R_{n}\) its cyclic subgroup of order \(n\), generated by \(\rho=\sigma_{1}\sigma_{2}\). We call the data of \((M,\sigma_{1},\sigma_{2})\) an \(n\)_-dihedral hyperbolic manifold_.
Let \(H_{1}\subset\operatorname{Fix}\sigma_{1}\) be the closure of a connected component of \(\operatorname{Fix}\sigma_{1}\setminus S\), and \(H_{2}\subset\operatorname{Fix}\sigma_{2}\) the closure of a connected component of \(\operatorname{Fix}\sigma_{2}\setminus S\) chosen so that the oriented angle at \(S\) from \(H_{1}\) to
\(H_{2}\) is \(\frac{\pi}{n}\). We then consider the copies of \(H_{1}\) and \(H_{2}\) under the isometry \(\rho=\sigma_{1}\sigma_{2}\) which we denote \(H_{2i+1}=\rho^{i}(H_{1})\) and \(H_{2i}=\rho^{i-1}(H_{2})\) for \(i=1,\dots,n-1\). Together, they divide \(M\) into \(2n\) pieces \(V_{1},\dots,V_{2n}\) which are fundamental domains for the action of \(D_{n}\). Note that \(\operatorname{Fix}\sigma_{1}=H_{1}\cup H_{n+1}\) and \(\operatorname{Fix}\sigma_{2}=H_{2}\cup H_{n+2}\).
When considering the action of the cyclic subgroup \(R_{n}\), a fundamental domain is given by the union of two of the former small pieces, e.g. the domain bounded by \(H_{1}\) and \(H_{3}\) containing \(H_{2}\) (see Figure 1).
The quotient \(\overline{M}=R_{n}\backslash M\) is a topological manifold and the quotient map \(M\to\overline{M}\) is a ramified covering of degree \(n\): it is \(n\) to \(1\) on the complement of \(S\) and injective in restriction to \(S\). We still denote by \(S\) its image under the quotient map. One can show (see [21]) that \(S\) bounds two codimension \(1\) submanifolds with boundary \(\overline{H}_{1},\overline{H}_{2}\subset\overline{M}\), which are the respective projections of \(H_{1}\) and \(H_{2}\).
### Gromov-Thurston manifolds
**Definition 2.1**.: Let \(M\) be an \(n\)-dihedral hyperbolic manifold. For every \(a\in\frac{1}{n}\mathbb{N}_{>0}\), we define the _Gromov-Thurston_ manifold \(M^{a}\) of ramification \(a\) associated to \(M\) as the cyclically ramified cover of \(\overline{M}\) along \(S\) of degree \(na\).
More visually, \(H_{1},H_{3},\dots,H_{2n-1}\) cut \(M\) into \(n\) copies of the aforementioned fundamental piece, and \(M^{a}\) is obtained by gluing \(na\) copies of this fundamental piece (see Figure 1).
_Example 2.2_.: We have \(M^{1/n}=\overline{M}\) and \(M^{1}=M\). If \(a\) is an integer, then \(M^{a}\) is the cyclically ramified cover of \(M\) along \(S\) of degree \(a\).
The hyperbolic metric \(g_{\mathbb{H}}\) on \(M\) induces a singular hyperbolic metric on \(M^{a}\) with a cone singularity of angle \(2\pi a\) along \(S\subset M^{a}\), the preimage of \(S\subset\overline{M}\) by the covering map. In particular, for \(a\geq 1\), this metric is locally \(CAT(-1)\), implying that the fundamental group \(\pi_{1}(M^{a})\) is Gromov hyperbolic.
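For concreteness, recall the standard local model of such a cone metric: in Fermi coordinates around the singular locus, the singular hyperbolic metric of \(M^{a}\) near \(S\) takes the form
\[dr^{2}+\sinh^{2}(r)\,d\phi^{2}+\cosh^{2}(r)\,g_{S},\qquad\phi\in\mathbb{R}/2\pi a\mathbb{Z},\]
where \(g_{S}\) denotes the induced hyperbolic metric on \(S\). The total angle \(2\pi a\) around \(S\) is the cone angle, and a cone angle of at least \(2\pi\) (i.e. \(a\geq 1\)) is what yields the local \(CAT(-1)\) property mentioned above.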
We will denote by \(H_{1},\dots,H_{2k}\) the lifts of \(\overline{H}_{1}\) and \(\overline{H}_{2}\) to \(M^{a}\) (in cyclic order around \(S\)), and denote by \(V_{i}\) the component of \(M^{a}\setminus\bigcup_{j=1}^{2k}H_{j}\) bounded by \(H_{i}\) and \(H_{i+1}\). (These notations are compatible with the ones introduced in the previous paragraph in the particular case \(k=n\).)
Applying Mostow's rigidity in dimension \(d-1\) (and more precisely to the hypersurfaces \(\overline{H}_{1}\) and \(\overline{H}_{2}\)), Gromov and Thurston show that in dimension \(d\geq 4\) the fundamental group of \(M^{a}\) is never
Figure 1. An \(n\)-dihedral manifold \(M\), its fundamental piece and the quotient \(\overline{M}\).
isomorphic to a hyperbolic lattice when \(a\neq 1\). Gromov-Thurston manifolds in dimension \(d\geq 4\) are thus never homeomorphic to quotients of the hyperbolic space. In fact, their fundamental group is not commensurable to a lattice in any Lie group (see Remark 1.4).
## 3. Globally hyperbolic AdS manifolds and hyperbolic ends
We recall in this section some key notions concerning AdS geometry and more specifically the geometry of globally hyperbolic AdS spacetimes. Additional results can be found e.g. in [34, 1, 6, 12]. We also present hyperbolic ends.
### The anti-de Sitter space
Here we use the hyperboloid model of the anti-de Sitter space. Let \(\mathbb{R}^{d,2}\) denote the real vector space \(\mathbb{R}^{d+2}\) endowed with the standard quadratic form \(\mathbf{q}\) of signature \((d,2)\):
\[\mathbf{q}(x)=x_{1}^{2}+\ldots+x_{d}^{2}-x_{d+1}^{2}-x_{d+2}^{2}\.\]
We denote by \(\langle\cdot,\cdot\rangle\) the associated bilinear form.
**Definition 3.1**.: The anti-de Sitter space of dimension \(d+1\) is the quadric:
\[\mathrm{AdS}^{d+1}=\{x\in\mathbb{R}^{d,2}\mid\mathbf{q}(x)=-1\}\.\]
The restriction of \(\mathbf{q}\) to the tangent bundle of \(\mathrm{AdS}^{d+1}\) endows the anti-de Sitter space with a Lorentzian metric of constant sectional curvature \(-1\), which we denote by \(g_{\mathrm{AdS}}\). This metric is homogeneous under the action of the group \(\mathrm{O}(d,2)\) of linear transformations of \(\mathbb{R}^{d,2}\) preserving \(\mathbf{q}\). We denote by \(\mathrm{SO}_{\circ}(d,2)\) the connected component of the identity in \(\mathrm{O}(d,2)\). This is an index \(4\) subgroup consisting of those isometries of \(\mathrm{AdS}^{d+1}\) preserving an orientation of space and time. We call it for short the group of orientation-preserving isometries.
The subgroup of \(\mathrm{SO}_{\circ}(d,2)\) fixing the point \((0,\ldots,0,1)\in\mathrm{AdS}^{d+1}\) is the group \(\mathrm{SO}_{\circ}(d,1)\) embedded via
\[A\mapsto\begin{pmatrix}A&\\ &1\end{pmatrix}\.\]
The (space and time-oriented) anti-de Sitter space of dimension \(d+1\) can thus be identified with the coset space
\[\mathrm{SO}_{\circ}(d,2)/\mathrm{SO}_{\circ}(d,1)\.\]
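As a quick dimension count confirming this identification:
\[\dim\mathrm{SO}_{\circ}(d,2)-\dim\mathrm{SO}_{\circ}(d,1)=\frac{(d+2)(d+1)}{2}-\frac{(d+1)d}{2}=d+1=\dim\mathrm{AdS}^{d+1}\.\]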
_Boundary._ The space \(\mathrm{AdS}^{d+1}\) can also be identified with an open set of the \(d+1\)-dimensional sphere, seen as the double cover of \(\mathbf{P}\mathbb{R}^{d,2}\), via the map
\[\begin{array}{ccc}\mathrm{AdS}^{d+1}&\to&\mathrm{S}^{d+1}=(\mathbb{R}^{d,2} \setminus\{0\})/\mathbb{R}_{>0}\mathrm{Id}\\ x&\mapsto&\mathbb{R}_{>0}x\.\end{array}\]
Its boundary in \(\mathrm{S}^{d+1}\) is called the _Einstein space_.
**Definition 3.2**.: The Einstein space \(\mathrm{Ein}^{d}\) is defined as
\[\mathrm{Ein}^{d}=\partial_{\infty}\mathrm{AdS}^{d+1}=\{x\in\mathbb{R}^{d,2} \backslash\{0\}\mid\mathbf{q}(x)=0\}/\mathbb{R}_{>0}\mathrm{Id}\.\]
The Einstein space carries a conformally flat Lorentz metric which is conformally invariant under the action of \(\mathrm{SO}_{\circ}(d,2)\).
_Geodesics and causality in \(\mathrm{AdS}^{d+1}\)._ The geodesics of \(\mathrm{AdS}^{d+1}\) are its intersections with \(2\)-planes \(P\) in \(\mathbb{R}^{d,2}\). These are of three kinds:
* If \(\mathbf{q}_{|P}\) is negative definite, then \(P\cap\mathrm{AdS}^{d+1}\) is an ellipse. It is a _timelike geodesic_, i.e. the Lorentz metric is negative along that geodesic.
* If \(\mathbf{q}_{|P}\) is non-positive with \(1\)-dimensional kernel, then \(P\cap\mathrm{AdS}^{d+1}\) consists of two parallel affine lines, each of which is a _lightlike geodesic_, i.e. the Lorentz metric vanishes along that geodesic.
* If \(\mathbf{q}_{|P}\) has signature \((1,1)\), then \(P\cap\mathrm{AdS}^{d+1}\) consists of two branches of hyperbolas, each of which is a _spacelike geodesic_, i.e. the Lorentz metric is positive along that geodesic.
We call two points \(x\) and \(y\) in \(\operatorname{AdS}^{d+1}\)_space (resp. light, time) related_ if they belong to the same spacelike (resp. lightlike, timelike) geodesic. We have the following characterization:
**Proposition 3.3**.: _Two points \(x\) and \(y\in\operatorname{AdS}^{d+1}\) are_
* _space related if and only if_ \(\langle x,y\rangle<-1\)_,_
* _light related if and only if_ \(\langle x,y\rangle=-1\)_,_
* _time related if and only if_ \(-1<\langle x,y\rangle<1\)_._
_Remark 3.4_.: If \(\langle x,y\rangle\geq 1\) then \(x\) and \(y\) do not belong to a common geodesic, but \(x\) and \(-y\) are light or space related. In the projective model \(\operatorname{AdS}^{d+1}/\pm\operatorname{Id}\), any two points are either space, light, or time related.
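As a worked example, consider the curve
\[\gamma(t)=(0,\dots,0,\cos t,\sin t)\in\mathrm{AdS}^{d+1}\,\]
the intersection of \(\mathrm{AdS}^{d+1}\) with the negative definite plane \(\mathrm{Span}(e_{d+1},e_{d+2})\), hence a timelike geodesic: indeed \(\mathbf{q}(\gamma^{\prime}(t))=-\sin^{2}t-\cos^{2}t=-1<0\). For \(t\in(0,\pi)\) we have \(\langle\gamma(0),\gamma(t)\rangle=-\cos t\in(-1,1)\), so \(\gamma(0)\) and \(\gamma(t)\) are time related, in accordance with Proposition 3.3, while \(\gamma(\pi)=-\gamma(0)\) satisfies \(\langle\gamma(0),\gamma(\pi)\rangle=1\), illustrating Remark 3.4.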
_Photons and causality in \(\operatorname{Ein}^{d}\)._ A _photon_ in \(\operatorname{Ein}^{d}\) is the projectivisation of a totally isotropic \(2\)-plane in \(\mathbb{R}^{d,2}\). We call two points \([x]\) and \([y]\in\operatorname{Ein}^{d}\)_light related_ if they belong to the same photon and _space related_ if they are the endpoints of a spacelike geodesic in \(\operatorname{AdS}^{d+1}\). We have again a characterization in terms of scalar products:
**Proposition 3.5**.: _Two points \([x]\) and \([y]\in\operatorname{Ein}^{d}\) are_
* _space related if and only if_ \(\langle x,y\rangle<0\)_,_
* _light related if and only if_ \(\langle x,y\rangle=0\)_._
Note that \([x]\) is always space or light related to either \([y]\) or \([-y]\). Causality thus does not make sense in the projective model \(\operatorname{Ein}^{d}/\pm\operatorname{Id}\). A more robust notion is space relation for triples of points.
**Definition 3.6**.: We call a subset \(S\) of \(\operatorname{Ein}^{d}\)_acausal_ if any two points in \(S\) are space related, and _achronal_ if any two points in \(S\) are space or light related.
**Proposition 3.7**.: _If \(\{[x],[y],[z]\}\subset\operatorname{Ein}^{d}\) is acausal, then the restriction of \(\mathbf{q}\) to \(\operatorname{Span}(x,y,z)\) has signature \((2,1)\)._
_Conversely, if the restriction of \(\mathbf{q}\) to \(\operatorname{Span}(x,y,z)\) has signature \((2,1)\), then there exist unique \(\epsilon_{y}\) and \(\epsilon_{z}\in\{-1,1\}\) such that \(\{[x],[\epsilon_{y}y],[\epsilon_{z}z]\}\) is acausal._
### Spacelike hypersurfaces in \(\operatorname{AdS}^{d+1}\)
A smooth hypersurface \(\mathcal{H}\) in \(\mathrm{AdS}^{d+1}\) is called spacelike when the restriction of the Lorentz metric to \(\mathcal{H}\) is positive definite. Here, we will construct hypersurfaces that are only piecewise geodesic, so it is useful to generalize this definition to lower regularity.
Let \(\mathcal{H}\) be a Lipschitz manifold of dimension \(d\). Let \(d_{\mathcal{H}}\) be a Lipschitz distance on \(\mathcal{H}\) (i.e. a distance which is locally bi-Lipschitz to the Euclidean distance in local coordinates).
**Definition 3.8**.: A map \(i:\mathcal{H}\to\operatorname{AdS}^{d+1}\) is a _spacelike immersion_ if it is locally Lipschitz and if every point in \(\mathcal{H}\) has a neighbourhood \(U\) such that there exists a constant \(c>0\) satisfying
\[\langle i(p),i(p^{\prime})\rangle\leq-1-c\,d_{\mathcal{H}}^{2}(p,p^{\prime})\]
for all \(p,p^{\prime}\in U\).
A Lipschitz hypersurface \(\mathcal{H}\) of \(\operatorname{AdS}^{d+1}\) is called _spacelike_ if the inclusion \(i:\mathcal{H}\to\operatorname{AdS}^{d+1}\) is a spacelike immersion.
By Proposition 3.3 the above condition implies that \(i(p)\) and \(i(p^{\prime})\) are space related when \(p\neq p^{\prime}\in\mathcal{H}\) are sufficiently close. The constant \(c\) prevents the hypersurface from being "tangent" to the light cone through \(p\). In particular, we have:
**Proposition 3.9**.: _If \(\mathcal{H}\) and \(i\) are of class \(\mathcal{C}^{1}\), then \(i\) is a spacelike immersion if and only if \(i^{*}g_{\operatorname{AdS}}\) is positive definite at every point._
Before we prove Proposition 3.9, let us interpret spacelike immersions in terms of graphs in an appropriate model for \(\operatorname{AdS}^{d+1}\). Consider the open hemisphere \(\operatorname{S}^{d}_{+}=\{(x_{0},\dots,x_{d})\in\operatorname{S}^{d}|\,x_{0}>0\}\).
The map
\[\Phi:\begin{array}{ccc}\mathrm{S}^{1}\times\mathrm{S}^{d}_{+}&\to&\mathrm{AdS}^{d+1}\\ (\theta,x)&\mapsto&\left(\frac{x_{1}}{x_{0}},\dots,\frac{x_{d}}{x_{0}},\frac{\cos\theta}{x_{0}},\frac{\sin\theta}{x_{0}}\right)\end{array}\]
is a diffeomorphism, and \(\Phi^{*}g_{\mathrm{AdS}}=x_{0}^{-2}(-d\theta^{2}+g_{\mathrm{S}^{d}})\). An important feature of this model is that \(\mathrm{S}^{d}_{+}\) equipped with \(x_{0}^{-2}g_{\mathrm{S}^{d}}\) is isometric to the hyperbolic space \(\mathbb{H}^{d}\); in particular, the hypersurfaces \(\theta=\mathrm{cst}\) are totally geodesic copies of \(\mathbb{H}^{d}\). Another important feature is that \(\Phi\) extends to the boundary as a conformal map \(\partial\Phi:\mathrm{S}^{1}\times\mathrm{S}^{d-1}\to\partial\mathrm{AdS}^{d+1}=\mathrm{Ein}^{d}\).
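As a quick check that \(\Phi\) indeed takes values in \(\mathrm{AdS}^{d+1}\): since \(x\in\mathrm{S}^{d}_{+}\) satisfies \(x_{0}^{2}+x_{1}^{2}+\dots+x_{d}^{2}=1\), we get
\[\mathbf{q}(\Phi(\theta,x))=\frac{x_{1}^{2}+\dots+x_{d}^{2}}{x_{0}^{2}}-\frac{\cos^{2}\theta+\sin^{2}\theta}{x_{0}^{2}}=\frac{(1-x_{0}^{2})-1}{x_{0}^{2}}=-1\.\]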
**Lemma 3.10**.: _Let \(\mathcal{H}\) be a Lipschitz manifold, and \(i:\mathcal{H}\to\mathrm{AdS}^{d+1}\) a spacelike immersion. Write \(i(p)=\Phi(\theta(p),x(p))\) for \(p\in\mathcal{H}\). Then the map \(x:\mathcal{H}\to\mathrm{S}^{d}_{+}\) is locally bi-Lipschitz._
Proof.: For \((\theta,x),(\theta^{\prime},x^{\prime})\in\mathrm{S}^{1}\times\mathrm{S}^{d}_ {+}\) we find
\[\langle\Phi(\theta,x),\Phi(\theta^{\prime},x^{\prime})\rangle =\frac{x_{1}x_{1}^{\prime}+\dots+x_{d}x_{d}^{\prime}-\cos(\theta- \theta^{\prime})}{x_{0}x_{0}^{\prime}}\] \[=\frac{1-\frac{1}{2}\|x-x^{\prime}\|^{2}-\cos(\theta-\theta^{ \prime})}{x_{0}x_{0}^{\prime}}-1\] \[\geq-\frac{\|x-x^{\prime}\|^{2}}{2x_{0}x_{0}^{\prime}}-1\]
This shows that for \(p,p^{\prime}\in\mathcal{H}\), we have
\[\|x(p)-x(p^{\prime})\|^{2}\geq-2\,x_{0}(p)\,x_{0}(p^{\prime})\big(1+\langle i(p),i(p^{\prime})\rangle\big)\.\]
Since \(p\mapsto x_{0}(p)\) is continuous, it is locally bounded from below by some \(c^{\prime}>0\), and locally we find
\[\|x(p)-x(p^{\prime})\|\geq c^{\prime}\sqrt{2c}\,d_{\mathcal{H}}(p,p^{\prime})\.\]
Lemma 3.10 means that a Lipschitz spacelike hypersurface is locally a graph in the conformal model \(\mathrm{AdS}^{d+1}\approx\mathrm{S}^{1}\times\mathrm{S}^{d}_{+}\). Now given a function from \(\mathrm{S}^{d}_{+}\) to \(\mathrm{S}^{1}\), we wish to know under which condition its graph is a Lipschitz spacelike hypersurface.
**Lemma 3.11**.: _Let \(U\subset\mathrm{S}^{d}_{+}\) be an open subset, and consider a map \(\theta:U\to\mathrm{S}^{1}\). The map \(i:U\to\mathrm{AdS}^{d+1}\) defined by \(i(x)=\Phi(\theta(x),x)\) is a spacelike immersion if and only if \(\theta\) is locally contracting (i.e. every point in \(U\) has a neighbourhood on which \(\theta\) is \(k\)-Lipschitz for some \(k\in(0,1)\))._
_Remark 3.12_.: As a distance on \(\mathrm{S}^{d}_{+}\) we can pick either the spherical distance or the Euclidean distance. The condition on \(\theta\) being locally contracting does not depend on this choice, since for every \(\varepsilon>0\) we can find a neighbourhood of any point on which they are \(1+\varepsilon\)-bi-Lipschitz. We will use the Euclidean distance in the proof.
Proof of Lemma 3.11.: For \(x,x^{\prime}\in U\) we have
\[\langle i(x),i(x^{\prime})\rangle=\frac{1-\frac{1}{2}\|x-x^{\prime}\|^{2}-\cos (\theta(x)-\theta(x^{\prime}))}{x_{0}x_{0}^{\prime}}-1.\]
If \(\theta\) is \(k\)-Lipschitz for some \(k\in(0,1)\), then up to shrinking \(U\) we obtain
\[1-\cos(\theta(x)-\theta(x^{\prime}))\leq\frac{k}{2}\|x-x^{\prime}\|^{2}\]
Since \(x_{0},x_{0}^{\prime}\leq 1\), we find
\[\langle i(x),i(x^{\prime})\rangle\leq-1-\frac{1-k}{2}\|x-x^{\prime}\|^{2}\,\]
hence \(i\) is a spacelike immersion.
Now assume that \(i\) is a spacelike immersion, and let \(c\in(0,\frac{1}{2})\) be a constant given by the definition (note that \(c\) can always be replaced by a smaller constant). The map \(\theta\) is continuous, so we can shrink \(U\) in order to have
\[1-\cos(\theta(x)-\theta(x^{\prime}))\geq\frac{1-c}{2}|\theta(x)-\theta(x^{ \prime})|^{2}\]
for all \(x,x^{\prime}\in U\). This in turn leads to
\[|\theta(x)-\theta(x^{\prime})|^{2}\leq\underbrace{\frac{\frac{1}{2}-c}{\frac {1}{2}-\frac{c}{2}}}_{<1}\|x-x^{\prime}\|^{2}\.\]
Proof of Proposition 3.9.: If \(i\) is \(\mathcal{C}^{1}\), then the map \(x:\mathcal{H}\to\mathrm{S}^{d}_{+}\) from Lemma 3.10 is bi-Lipschitz and \(\mathcal{C}^{1}\), hence a local diffeomorphism. So we may assume that \(\mathcal{H}\) is an open subset of \(\mathrm{S}^{d}_{+}\) and that \(i(x)=\Phi(\theta(x),x)\) where \(\theta:\mathcal{H}\to\mathrm{S}^{1}\) is \(\mathcal{C}^{1}\). Now Lemma 3.11 shows that \(i\) is a spacelike immersion if and only if \(\theta\) is locally contracting, which is equivalent to \(\|d\theta\|<1\).
But \(i^{*}g_{\mathrm{AdS}}\) is in the same conformal class as \(-d\theta^{2}+g_{S^{d}}\), so it is positive definite if and only if \(\|d\theta\|<1\).
More generally, if \(i:\mathcal{H}\to\mathrm{AdS}^{d+1}\) is a spacelike Lipschitz immersion and \(\gamma:[0,1]\to\mathcal{H}\) is a Lipschitz path, then \(i\circ\gamma\) is Lipschitz hence differentiable at Lebesgue almost every point. The derivative of \(i\circ\gamma\) is never timelike, so one can then define the _length_ of \(i(\gamma)\) as
\[L_{\mathrm{AdS}}(i(\gamma))=\int_{0}^{1}\sqrt{g_{\mathrm{AdS}}((i\circ\gamma) ^{\prime}(t),(i\circ\gamma)^{\prime}(t))}\mathrm{d}t\.\]
Finally, for \(x,y\in\mathcal{H}\), set
\[i^{*}d_{\mathrm{AdS}}(x,y)=\inf_{\gamma(0)=x,\gamma(1)=y}L_{\mathrm{AdS}}(i \circ\gamma)\.\]
The spacelike immersion property of \(i\) implies that \(i^{*}d_{\mathrm{AdS}}\) is a Lipschitz distance on \(\mathcal{H}\) (the usual proof for smooth Riemannian metrics can be easily adapted thanks to the graph description of Lemma 3.10). When \(\mathcal{H}\) and \(i\) are \(\mathcal{C}^{1}\), it is simply the Riemannian distance associated to \(i^{*}g_{\mathrm{AdS}}\).
**Definition 3.13**.: The spacelike immersion \(i\) is called _complete_ when the distance \(i^{*}d_{\mathrm{AdS}}\) is complete.
A spacelike hypersurface \(\mathcal{H}\subset\mathrm{AdS}^{d+1}\) is called _complete_ if the inclusion \(i:\mathcal{H}\to\mathrm{AdS}^{d+1}\) is a complete spacelike immersion.
The following proposition brings together a number of key properties of spacelike hypersurfaces in \(\mathrm{AdS}^{d+1}\).
**Proposition 3.14**.: _Let \(\mathcal{H}\) be a connected Lipschitz manifold of dimension \(d\) and \(i:\mathcal{H}\to\mathrm{AdS}^{d+1}\) a complete spacelike immersion. Then_
1. \(\mathcal{H}\) _is homeomorphic to the open ball_ \(\mathrm{B}^{d}\) _of dimension_ \(d\)_,_
2. \(i\) _is an embedding,_
3. \(i(x)\) _and_ \(i(y)\) _are space related for any_ \(x\neq y\in\mathcal{H}\)_,_
4. \(i\) _extends continuously to an achronal embedding_ \[\partial i:\mathrm{S}^{d-1}=\partial\mathrm{B}^{d}\to\mathrm{Ein}^{d}\.\]
Proof.: Write \(i(p)=\Phi(\theta(p),x(p))\) for \(p\in\mathcal{H}\). We have seen in Lemma 3.10 that \(x:\mathcal{H}\to\mathrm{S}^{d}_{+}\) is locally bi-Lipschitz. We can use it to define another distance on \(\mathcal{H}\) by defining the length of a Lipschitz curve \(\gamma:[0,1]\to\mathcal{H}\) as the hyperbolic length \(L_{\mathbb{H}^{d}}(x\circ\gamma)\), and the distance \(x^{*}d_{\mathbb{H}^{d}}\) as the infimum of lengths of paths joining two points.
Note that \(L_{\mathrm{AdS}}(i\circ\gamma)\leq L_{\mathbb{H}^{d}}(x\circ\gamma)\), so \(i^{*}d_{\mathrm{AdS}}\leq x^{*}d_{\mathbb{H}^{d}}\). Completeness of \(i\) and the Hopf-Rinow Theorem for length spaces (see [22, Theorem 1.9] or [13, Proposition I.3.7]) imply that closed balls for \(i^{*}d_{\mathrm{AdS}}\) are compact, so \(x^{*}d_{\mathbb{H}^{d}}\) is also complete. This shows that \(x\) is a local isometry between length spaces, the source being complete, so it is a covering map (see [13, Proposition
I.3.28] for a proof in the context of length spaces). This implies that the hyperbolic metric on \(x(\mathcal{H})\) is complete, so \(x\) is onto. Since it is a covering, it must be a homeomorphism. This proves (1) and (2).
Now consider a lift \(\widetilde{i}:\mathcal{H}\to\widetilde{\operatorname{AdS}}^{d+1}\) to the universal cover. Lemma 3.10 shows that \(\widetilde{i}(\mathcal{H})\) is weakly spacelike as defined in [12, Section 3.2], so property (3) follows from [12, Proposition 3.5]. This in turn implies that \(\mathcal{H}\) is (globally) the graph of a \(1\)-Lipschitz function \(\theta:\mathrm{S}^{d}_{+}\to\mathbb{R}\) (for the spherical distance), so (4) follows from the extendability of Lipschitz functions.
As a consequence of this proof we get the following description of complete spacelike hypersurfaces in \(\operatorname{AdS}^{d+1}\).
**Corollary 3.15**.: _Let \(\mathcal{H}\subset\operatorname{AdS}^{d+1}\) be a complete spacelike hypersurface. There is a distance decreasing function \(\theta:\mathrm{S}^{d}_{+}\to\mathrm{S}^{1}\) (i.e. \(|\theta(x)-\theta(x^{\prime})|<d_{\mathrm{S}^{d}}(x,x^{\prime})\) whenever \(x\neq x^{\prime}\)) such that \(\mathcal{H}=\{\Phi(\theta(x),x)|x\in S^{d}_{+}\}\)._
_Remark 3.16_.: Here it is important to use the spherical distance on \(\mathrm{S}^{d}_{+}\) rather than the Euclidean distance.
### Second fundamental form
Here we recall the classical notion of second fundamental form in a setting which applies both to spacelike hypersurfaces in \(\operatorname{AdS}^{d+1}\) and to \(\operatorname{AdS}^{d+1}\) itself inside the flat pseudo-Riemannian space \(\mathbb{R}^{d,2}\).
Let \((M,g)\) be a smooth oriented pseudo-Riemannian manifold of signature \((p,q)\), and let \(\mathcal{H}\subset M\) be an oriented hypersurface of class \(\mathcal{C}^{2}\) such that the restriction of \(g\) to \(\mathcal{H}\) has signature \((p,q-1)\). Let us denote by \(TM_{|\mathcal{H}}\) the pull-back of the tangent bundle \(TM\) by the inclusion \(\mathcal{H}\hookrightarrow M\). The tangent bundle \(T\mathcal{H}\) is a sub-bundle of \(TM_{|\mathcal{H}}\) and \(TM_{|\mathcal{H}}\) splits orthogonally (with respect to \(g\)) as
\[TM_{|\mathcal{H}}=T\mathcal{H}\oplus\mathbb{R}N\,\]
where \(N\) is the unit normal to \(\mathcal{H}\) (i.e. \(g(N,N)\equiv-1\) and the orientation of \(N\) is compatible with those of \(M\) and \(\mathcal{H}\)).
The Levi-Civita connection \(\nabla^{M}\) of \(M\) restricts to a connection on \(TM_{|\mathcal{H}}\) (that we still denote \(\nabla^{M}\)). Since \(g(N,N)\) is constant, \(\nabla^{M}_{X}N\) is orthogonal to \(N\) and thus tangent to \(\mathcal{H}\) for every vector \(X\) tangent to \(\mathcal{H}\).
**Definition 3.17**.: The _second fundamental form_ of \(\mathcal{H}\) is the bilinear form on \(T\mathcal{H}\) defined by
\[\mathrm{II}_{\mathcal{H}}(X,Y)=g(\nabla^{M}_{X}N,Y)\.\]
The second fundamental form relates the Levi-Civita connection \(\nabla^{\mathcal{H}}\) of \((\mathcal{H},g_{|\mathcal{H}})\) to the ambient connection \(\nabla^{M}\):
**Proposition 3.18**.: _For every vector fields \(X\) and \(Y\) on \(\mathcal{H}\), we have_
\[\nabla^{M}_{X}Y=\nabla^{\mathcal{H}}_{X}Y+\mathrm{II}_{\mathcal{H}}(X,Y)N\.\]
_Remark 3.19_.: The reader familiar with Riemannian geometry will notice a sign difference in the definition of the second fundamental form. This is due to the fact that we assume here that \(g\) is negative in the normal direction.
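As a quick illustration of these conventions (a sketch, using the standard quadric model \(\operatorname{AdS}^{d+1}=\{x\in\mathbb{R}^{d,2}\mid\langle x,x\rangle=-1\}\)): along \(\mathcal{H}=\operatorname{AdS}^{d+1}\) inside \(M=\mathbb{R}^{d,2}\), the position vector field \(N(x)=x\) is a unit normal with \(g(N,N)\equiv-1\), and the flat connection of \(\mathbb{R}^{d,2}\) gives \(\nabla^{M}_{X}N=X\) for every tangent vector \(X\). With this choice of normal we therefore get
\[\mathrm{II}_{\operatorname{AdS}^{d+1}}(X,Y)=\langle X,Y\rangle=g_{\operatorname{AdS}}(X,Y)\,\]
that is, the second fundamental form of \(\operatorname{AdS}^{d+1}\) in \(\mathbb{R}^{d,2}\) is the induced metric itself (up to the sign determined by the choice of co-orientation).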
### Convexity in \(\operatorname{AdS}^{d+1}\)
Let \(V\) be a hyperplane in \(\mathbb{R}^{d,2}\) in restriction to which the quadratic form \(\mathbf{q}\) has signature \((d,1)\). Then \(V\cap\operatorname{AdS}^{d+1}\) is a two-sheeted hyperboloid of dimension \(d\), each connected component of which is a totally geodesic spacelike hypersurface. We call such a connected component a _spacelike hyperplane_. They are the totally geodesic copies of \(\mathbb{H}^{d}\) in \(\operatorname{AdS}^{d+1}\). For any \(\theta_{0}\in\mathrm{S}^{1}\) the set \(\Phi(\{\theta_{0}\}\times\mathrm{S}^{d}_{+})\), where \(\Phi:\mathrm{S}^{1}\times\mathrm{S}^{d}_{+}\to\operatorname{AdS}^{d+1}\) is the diffeomorphism defined above, is a spacelike hyperplane.
If \(W\subset V\cap\operatorname{AdS}^{d+1}\) is a spacelike hyperplane, we denote by \(\overline{W}\) the other component of \(V\cap\operatorname{AdS}^{d+1}\), i.e. the image of \(W\) by \(x\mapsto-x\). We say that a point \(x\in\operatorname{AdS}^{d+1}\backslash\overline{W}\) is _in the past_ of \(W\) if there exists a future-oriented timelike geodesic segment from \(x\) to a point of \(W\) which does not intersect \(\overline{W}\).
**Definition 3.20**.: Let \(\mathcal{H}\subset\mathrm{AdS}^{d+1}\) be a Lipschitz spacelike hypersurface. We say that \(\mathcal{H}\) is _convex_ if for every \(x\in\mathcal{H}\), there exists a spacelike hyperplane \(W\) containing \(x\) such that \(\mathcal{H}\) is contained in the past of \(W\).
The hypersurface \(\mathcal{H}\) is _locally convex_ if every point \(x\in\mathcal{H}\) has an open neighbourhood \(U\) such that \(\mathcal{H}\cap U\) is convex.
For complete hypersurfaces, the local convexity property globalizes.
**Proposition 3.21**.: _Let \(\mathcal{H}\) be a complete Lipschitz spacelike hypersurface. Then \(\mathcal{H}\) is convex if and only if it is locally convex._
Proof.: Let \(p\in\mathcal{H}\) and consider a spacelike hyperplane \(W\) containing \(p\) such that a neighbourhood of \(p\) in \(\mathcal{H}\) is in the past of \(W\).
Up to the action of \(\mathrm{SO}_{\circ}(d,2)\), we may assume that \(W=\Phi(\{0\}\times S^{d}_{+})\), and write \(p=\Phi(0,x)\). We have seen in the proof of Proposition 3.14 that \(\mathcal{H}\) is the graph of a function \(\theta:\mathrm{S}^{d}_{+}\to\mathrm{S}^{1}\) in the conformal model \(\mathrm{AdS}^{d+1}\approx\mathrm{S}^{1}\times\mathrm{S}^{d}_{+}\). Since \(\theta\) is distance decreasing, it is not onto and we may consider \(\theta\) as a real valued map.
Assume by contradiction that \(\mathcal{H}\) is not in the past of \(W\), i.e. \(\theta>0\) at some point \(y\in\mathrm{S}^{d}_{+}\). Along the spherical geodesic \(\gamma:[0,1]\to\mathrm{S}^{d}_{+}\) joining \(x\) and \(y\), the function \(\theta\) possesses a local minimum. If \(z=\gamma(t_{0})\) is such a point of \(\mathrm{S}^{d}_{+}\), consider a spacelike hyperplane \(W^{\prime}\) containing \(\Phi(\theta(z),z)\) such that a neighbourhood of \(\Phi(\theta(z),z)\) in \(\mathcal{H}\) is in the past of \(W^{\prime}\). Note that \(W^{\prime}\) is the graph of a function \(\alpha:\mathrm{S}^{d}_{+}\to S^{1}\). For \(t\) near \(t_{0}\), we have \(\alpha\circ\gamma(t)\geq\theta\circ\gamma(t)\geq\theta(z)\), with equality at \(t=t_{0}\). Hence \((\alpha\circ\gamma)^{\prime}(t_{0})=0\), and the geodesic \(t\mapsto\Phi(\alpha\circ\gamma(t),\gamma(t))\) is tangent to \(\Phi(\{\theta(z)\},\mathrm{S}^{d}_{+})\) at \(\Phi(\theta(z),z)\), therefore \(\alpha\circ\gamma\) is constant. It implies that \(\theta\circ\gamma\) is constant on a neighbourhood of every point where a local minimum is obtained, hence \(\theta\circ\gamma\geq 0\) which is a contradiction.
For hypersurfaces of class \(\mathcal{C}^{2}\), convexity is characterized by the sign of the second fundamental form:
**Proposition 3.22**.: _Let \(\mathcal{H}\) be a spacelike hypersurface of class \(\mathcal{C}^{2}\). Then \(\mathcal{H}\) is locally convex if and only if its second fundamental form is non-negative._
This motivates the following strengthenings:
**Definition 3.23**.: A complete spacelike hypersurface \(\mathcal{H}\) of class \(\mathcal{C}^{2}\) is called _strongly convex_ if its second fundamental form \(\mathrm{II}_{\mathcal{H}}\) is positive definite, and _uniformly strongly convex_ if there exists a constant \(c>0\) such that
\[\mathrm{II}_{\mathcal{H}}\geq c\,g_{|\mathcal{H}}\.\]
### Globally hyperbolic spacetimes
Let \(N\) be a Lorentzian manifold. A \(\mathcal{C}^{1}\) curve on \(N\) is called _causal_ if its tangent direction is nowhere spacelike. Such a curve is called _inextensible_ if it is maximal among causal curves (for the inclusion).
**Definition 3.24**.: A _Cauchy hypersurface_ in \(N\) is a topological hypersurface intersecting every inextensible causal curve at exactly one point. A Lorentzian manifold admitting a Cauchy hypersurface is called _globally hyperbolic_.
A globally hyperbolic Lorentzian manifold \(N\) always admits a _temporal function_, i.e. a function to \(\mathbb{R}\) with no critical points whose level sets are spacelike hypersurfaces, see e.g. [9]. Moreover, all smooth Cauchy hypersurfaces are diffeomorphic, and if \(M\) is a smooth Cauchy hypersurface, then \(N\) is diffeomorphic to \(M\times\mathbb{R}\).
**Definition 3.25**.: A globally hyperbolic Lorentzian manifold is _Cauchy compact_ if its Cauchy hypersurfaces are compact.
**Definition 3.26**.: A globally hyperbolic Cauchy compact AdS manifold is _maximal_ if it is not isometric to a proper subset of another globally hyperbolic AdS manifold.
From now on, we abbreviate "globally hyperbolic Cauchy compact" into "GHC", and "globally hyperbolic maximal Cauchy compact" into "GHMC".
We recall here a description of GHMC AdS spacetimes, due to Mess [34, 1]. (It is only stated in dimension \(2+1\) in [34], but the argument works in higher dimension as pointed out in [4]. For other proofs see [4, Corollary 11.2] and [6, Proposition 4.8].)
**Definition 3.27**.: Let \(\Lambda\) be a closed subset of \(\operatorname{Ein}^{d}\). The _domain of dependence_ of \(\Lambda\) is the open set
\[\Omega(\Lambda)=\{x\in\operatorname{AdS}^{d+1}\mid\langle x,y\rangle<0\text{ for all }[y]\in\Lambda\}\.\]
The following theorem describes GHMC AdS manifolds as quotients of the domain of dependence of the boundary at infinity of a Cauchy hypersurface. It was proved by Mess in dimension 2+1 and extended by Barbot in higher dimensions.
**Theorem 3.28** (Mess).: _Let \(\Gamma\) be a subgroup of \(\operatorname{SO}_{\circ}(d,2)\) acting freely, properly discontinuously and cocompactly on a spacelike hypersurface \(\mathcal{H}\) and let \(\partial_{\infty}\mathcal{H}\) be the boundary of \(\mathcal{H}\) in \(\operatorname{Ein}^{d}\). Then \(\Gamma\) acts properly discontinuously on \(\Omega(\partial_{\infty}\mathcal{H})\), the quotient \(N=\Gamma\backslash\Omega(\partial_{\infty}\mathcal{H})\) is a GHMC AdS manifold and \(\Gamma\backslash\mathcal{H}\subset N\) is a Cauchy hypersurface._
_Conversely, let \(N\) be a GHMC AdS manifold of dimension \(d+1\). Then there exists a discrete subgroup \(\Gamma\) of \(\operatorname{SO}_{\circ}(d,2)\) and a \(\Gamma\)-invariant complete spacelike hypersurface \(\mathcal{H}\subset\operatorname{AdS}^{d+1}\) such that \(\Gamma\) acts properly discontinuously and cocompactly on \(\mathcal{H}\) and \(N\) is isometric to \(\Gamma\backslash\Omega(\partial_{\infty}\mathcal{H})\)._
The following statement combines results from Barbot-Merigot [6], Barbot [5] and Danciger-Gueritaud-Kassel [15].
**Theorem 3.29** (Barbot-Merigot, Barbot, Danciger-Gueritaud-Kassel).: _Let \(N=\Gamma\backslash\Omega\) be a GHMC \(\operatorname{AdS}\) manifold. The following properties are equivalent:_
1. _The group_ \(\Gamma\) _is Gromov hyperbolic,_
2. _The limit set_ \(\Lambda_{\Gamma}\) _is acausal,_
3. _The manifold_ \(N\) _contains a convex Cauchy hypersurface,_
4. _The manifold_ \(N\) _contains a strongly convex Cauchy hypersurface,_
5. _The group_ \(\Gamma\) _acts convex-cocompactly on_ \(\Omega\)_._
We call such a GHMC AdS manifold _quasifuchsian_. It is called _Fuchsian_ if it possesses a totally geodesic Cauchy hypersurface.
Sketch of the proof.: \((v)\Rightarrow(iv)\) is Lemma 6.4 in [15].
\((iv)\Rightarrow(iii)\) is straightforward.
\((iii)\Rightarrow(i)\) follows from Proposition 8.3 in [6].
\((i)\Rightarrow(ii)\) is Theorem 1.4 in [5].
Finally, the main result of [6] is that \((ii)\) is equivalent to \(\Gamma\) being \(P_{1}\)-Anosov, and the latter is equivalent to \((v)\) according to Theorem 1.7 in [15].
### Spacelike \(\operatorname{AdS}\) structures
In this section we introduce a notion of _spacelike AdS structure_ on a manifold \(M\), in a way that emulates the notion of \((G,X)\)-structure. The "developing map" of such a structure is an equivariant spacelike immersion of the universal cover of \(M\), from which one obtains a GHMC AdS manifold homeomorphic to \(M\times\mathbb{R}\). This will allow us to reduce the construction of GHMC manifolds with prescribed Cauchy hypersurfaces to the construction of a spacelike AdS structure on a manifold one dimension lower.
Let \(M\) be a Lipschitz manifold of dimension \(d\).
**Definition 3.30**.: A _spacelike_ AdS _atlas_ on \(M\) is the data of an atlas \((U_{i},\phi_{i})_{i\in I}\) where \((U_{i})_{i\in I}\) is an open cover of \(M\) and \(\phi_{i}:U_{i}\to\operatorname{AdS}^{d+1}\) is a Lipschitz spacelike immersion, such that for all \(i,j\in I\) and all \(x\in U_{i}\cap U_{j}\), there exists an orientation preserving isometry \(g\) of \(\operatorname{AdS}^{d+1}\) such that
\[\phi_{j}=g\circ\phi_{i}\]
in a neighbourhood of \(x\).
A _spacelike_ AdS _structure_ on \(M\) is a maximal spacelike AdS atlas.
Two spacelike AdS atlases \((U_{i},\phi_{i})_{i\in I}\) and \((V_{j},\psi_{j})_{j\in J}\) on \(M\) are _equivalent_ if for every \((i,j)\in I\times J\) and every \(x\in U_{i}\cap V_{j}\), there exists an orientation preserving isometry \(g\) of \(\mathrm{AdS}^{d+1}\) such that
\[\psi_{j}=g\circ\phi_{i}\]
in a neighbourhood of \(x\).
Note that this definition is almost identical to that of a \((G,X)\)-structure, except that the charts are required to be spacelike immersions instead of local homeomorphisms.
Emulating the theory of \((G,X)\)-structures, we want to "patch together" the local charts into an equivariant spacelike immersion of the universal cover. In order to do so, remark first that there is a unique way to patch together two local charts:
**Lemma 3.31**.: _Let \(U\) be a non empty open subset of \(M\), \(\phi:U\to\mathrm{AdS}^{d+1}\) a spacelike immersion, and \(g\) an orientation preserving isometry of \(\mathrm{AdS}^{d+1}\) such that \(g\circ\phi=\phi\). Then \(g=\mathrm{Id}\)._
Proof.: Consider a point \(x\in\phi(U)\) at which \(\phi(U)\) admits a tangent space (such a point exists by Rademacher's Theorem, since \(\phi\) is Lipschitz). The tangent space \(T_{x}\phi(U)\) is spacelike because of Lemma 3.11. Now \(g\) fixes every point of the spacelike hyperplane tangent to \(\phi(U)\) at \(x\), hence \(g=\mathrm{Id}\).
As a consequence, we can still define the holonomy representation and the developing map of a spacelike AdS structure.
**Corollary 3.32**.: _Let \((U_{i},\phi_{i})_{i\in I}\) be a spacelike \(\mathrm{AdS}\) atlas on \(M\). Then there exists a representation \(\rho:\pi_{1}(M)\to\mathrm{SO}_{\circ}(d,2)\) and a \(\rho\)-equivariant spacelike immersion \(\phi:\widetilde{M}\to\mathrm{AdS}^{d+1}\) such that, for all \(x\in\widetilde{M}\) and all \(U_{i}\) containing \(\pi(x)\), there exists \(g\in\mathrm{SO}_{\circ}(d,2)\) such that_
\[\phi_{i}\circ\pi=g\circ\phi\]
_in a neighbourhood of \(x\). (Here, \(\pi:\widetilde{M}\to M\) denotes the covering map.)_
_Moreover, if another pair \((\rho^{\prime},\phi^{\prime})\) satisfies the same properties, then there is a unique \(g\in\mathrm{SO}_{\circ}(d,2)\) such that \(\rho^{\prime}=g\rho g^{-1}\) and \(\phi^{\prime}=g\circ\phi\)._
Proof.: The proof is the same as for \((G,X)\)-structures. Let us write \(G=\mathrm{SO}_{\circ}(d,2)\) and \(H=\mathrm{SO}_{\circ}(d,1)\), so that \(\mathrm{AdS}^{d+1}=G/H\).
Given \(i,j\in I\) and \(x\in U_{i}\cap U_{j}\), denote by \(g_{ij}(x)\in G\) the element such that \(\phi_{i}=g_{ij}(x)\circ\phi_{j}\) on a neighbourhood of \(x\) (it is unique thanks to Lemma 3.31). Because of its uniqueness, it satisfies the cocycle rule \(g_{ik}(x)=g_{ij}(x)g_{jk}(x)\) whenever \(x\in U_{i}\cap U_{j}\cap U_{k}\), and we can consider the \(G\)-principal bundle \(P\) over \(M\) with transitions \(g_{ij}\).
Another consequence of Lemma 3.31 is that the map \(x\mapsto g_{ij}(x)\) is locally constant, so \(P\) inherits a flat connection, and we can consider its holonomy representation \(\rho:\pi_{1}(M)\to G\).
Now let \(E\) be the associated \(G/H\)-bundle over \(M\). The relation \(\phi_{i}=g_{ij}\circ\phi_{j}\) shows that there is a section \(\sigma\) of \(E\) that locally reads as \(\phi_{i}\). This section lifts to a \(\rho\)-equivariant map \(\phi:\widetilde{M}\to G/H=\mathrm{AdS}^{d+1}\) with the required properties.
If another pair \((\rho^{\prime},\phi^{\prime})\) satisfies the same properties, then for any \(x\in\widetilde{M}\) there is an element \(g(x)\in G\) such that \(\phi^{\prime}=g(x)\circ\phi\) on a neighbourhood of \(x\). Using Lemma 3.31 once again, we see that the map \(x\mapsto g(x)\) is locally constant, hence constant. The equivariance implies that \(\rho^{\prime}=g\rho g^{-1}\).
Conversely, a representation \(\rho:\pi_{1}(M)\to\mathrm{SO}_{\circ}(d,2)\) and an equivariant spacelike immersion \(\phi:\widetilde{M}\to\mathrm{AdS}^{d+1}\) define a spacelike AdS structure on \(M\) by choosing an open cover \((U_{i})_{i\in I}\) of \(M\) so that \(\pi:\widetilde{M}\to M\) is invertible on each \(U_{i}\), with \(\phi_{i}:U_{i}\to\mathrm{AdS}^{d+1}\) defined as \(\phi\circ\pi^{-1}\) on \(U_{i}\).
Let now \(\phi:\widetilde{M}\to\mathrm{AdS}^{d+1}\) be a \(\rho\)-equivariant spacelike immersion. Note that the pulled-back distance \(\phi^{*}d_{\mathrm{AdS}}\) (as defined in Section 3.2) is \(\pi_{1}(M)\)-invariant. If \(M\) is moreover compact, then it is complete, and \(\phi\) is thus an embedding (see Proposition 3.14). This implies that \(\rho\) is discrete and faithful and \(\rho(\pi_{1}(M))\) acts properly discontinuously on \(\phi(\widetilde{M})\). By Theorem 3.28, \(\rho\) is thus the holonomy of a GHMC AdS spacetime \(N\) and \(\phi\) embeds \(M\) as a Cauchy hypersurface in \(N\). We thus obtain the following:
**Theorem 3.33**.: _Let \(M\) be a closed manifold of dimension \(d\), let \(\rho\) be a representation of \(\pi_{1}(M)\) into \(\mathrm{SO}_{\circ}(d,2)\) and let \(\phi:\widetilde{M}\to\mathrm{AdS}^{d+1}\) be a \(\rho\)-equivariant spacelike embedding. Then there exists a unique \(\rho\)-invariant open domain \(\Omega_{\rho}\subset\mathrm{AdS}^{d+1}\) such that:_
1. \(\pi_{1}(M)\) _acts properly discontinuously on_ \(\Omega_{\rho}\) _via_ \(\rho\)_,_
2. _the quotient_ \(N_{\rho}=\rho(\pi_{1}(M))\backslash\Omega_{\rho}\) _is a GHMC_ \(\mathrm{AdS}\) _manifold,_
3. _the map_ \(\phi\) _factors to an embedding of_ \(M\) _into_ \(N_{\rho}\) _whose image is a Cauchy hypersurface._
### Convex ruled spacelike AdS structures
**Definition 3.34**.: Let \(N\) be a quasifuchsian AdS spacetime of dimension \(d+1\). A Lipschitz hypersurface \(M\subset N\) is called _spacelike_ if its lift \(\widetilde{M}\) to the universal cover of \(N\) (which is an open subset of \(\mathrm{AdS}^{d+1}\) by Mess's Theorem) is a Lipschitz spacelike hypersurface.
We say that \(M\) is _past-convex_ if \(\widetilde{M}\) is past-convex.
We say that \(M\) is _ruled_ if each \(x\in M\) lies in the relative interior of a geodesic segment of \(N\) which is contained in \(M\).
**Lemma 3.35**.: _Let \(N\) be a quasifuchsian AdS spacetime. Then \(N\) contains a unique past-convex ruled Cauchy hypersurface._
Proof.: For the existence, write \(N=\Gamma\backslash\Omega\), and let \(\Lambda=\partial\Omega\cap\partial_{\infty}\mathrm{AdS}^{d+1}\) be its limit set. We denote by \(\mathrm{Conv}(\Lambda)\subset\mathrm{AdS}^{d+1}\cup\partial_{\infty}\mathrm{AdS}^{d+1}\) the convex hull of \(\Lambda\), and \(C(N)=\Gamma\backslash(\mathrm{Conv}(\Lambda)\setminus\Lambda)\subset N\) the convex core of \(N\). Its boundary \(\partial C(N)\) has two connected components, and we consider the future component \(\partial_{+}C(N)\). It is a Cauchy hypersurface in \(N\) [6, Lemma 4.9], and moreover a spacelike hypersurface thanks to [6, Lemma 3.16] and Lemma 3.11. It is past-convex by definition. Let \(x\in\partial_{+}C(N)\), and consider a lift \(\widetilde{x}\in\partial\mathrm{Conv}(\Lambda)\). This lift does not belong to \(\Lambda\), so it cannot be an extreme point of the convex set \(\mathrm{Conv}(\Lambda)\), it therefore lies in the relative interior of a geodesic segment of \(\mathrm{AdS}^{d+1}\) contained in \(\mathrm{Conv}(\Lambda)\). This geodesic segment is spacelike because \(\partial_{+}C(N)\) is a spacelike hypersurface. It must lie in \(\partial\mathrm{Conv}(\Lambda)\), because otherwise \(\widetilde{x}\) would lie in the interior of \(\mathrm{Conv}(\Lambda)\).
Now for the uniqueness, let \(S\subset N\) be a ruled past-convex Cauchy hypersurface, and \(\widetilde{S}\subset\Omega\) its lift. We then have \(\partial_{\infty}\widetilde{S}=\Lambda\) thanks to [12, Corollary 3.8]. Let \(\mathcal{C}\subset\mathrm{AdS}^{d+1}\cup\partial_{\infty}\mathrm{AdS}^{d+1}\) be the closed convex hull of \(\widetilde{S}\). It is compact and convex, so by the Krein-Milman Theorem it is the convex hull of its set of extreme points \(E\subset\widetilde{S}\cup\Lambda\subset\mathrm{AdS}^{d+1}\cup\partial\mathrm{AdS}^{d+1}\). Since \(S\) is ruled, we have \(\widetilde{S}\cap E=\emptyset\), so \(\widetilde{S}\subset\mathcal{C}=\mathrm{Conv}(\Lambda)\). Since \(S\) is past-convex, we find that \(S=\partial_{+}C(N)\).
**Definition 3.36**.: A spacelike AdS structure \((U_{i},\phi_{i})_{i\in I}\) on a manifold \(M\) is _convex ruled_ if for all \(i\in I\), \(\phi_{i}(U_{i})\) is contained in a past-convex ruled hypersurface.
**Lemma 3.37**.: _Let \(M\) be a closed manifold of dimension \(d\). There is a one-to-one correspondence between convex ruled spacelike AdS structures on \(M\) and quasifuchsian \(\mathrm{AdS}\) spacetimes homeomorphic to \(M\times\mathbb{R}\)._
Proof.: This follows from Lemma 3.35, Theorem 3.33, and characterization \((iii)\) in Theorem 3.29.
### Convex ruled hyperbolic embedding structures and hyperbolic ends
In this section we outline the analog in hyperbolic manifolds of the notion of spacelike AdS structures developed in Sections 3.6 and 3.7.
**Definition 3.38**.: Let \(\mathcal{H}\) be an oriented Lipschitz manifold. A map \(\phi:\mathcal{H}\to\mathbb{H}^{d+1}\) is called a _Lipschitz embedding_ if \((x,y)\mapsto d_{\mathbb{H}^{d+1}}(\phi(x),\phi(y))\) is a Lipschitz distance on \(\mathcal{H}\). It is called a _Lipschitz immersion_ if every point in \(\mathcal{H}\) has a neighbourhood \(U\) such that the restriction of \(\phi\) to \(U\) is a Lipschitz embedding.
A Lipschitz immersion \(\phi:\mathcal{H}\to\mathbb{H}^{d+1}\) is called _ruled_ if for every point \(x\in\mathcal{H}\) there is an injective continuous curve \(\gamma:(-\varepsilon,\varepsilon)\to\mathcal{H}\) such that \(\gamma(0)=x\) and \(\phi\circ\gamma\) is a geodesic segment in \(\mathbb{H}^{d+1}\).
A Lipschitz embedding \(\phi:\mathcal{H}\to\mathbb{H}^{d+1}\) is called _convex_ if there is an open convex set \(V\subset\mathbb{H}^{d+1}\) such that \(\phi(\mathcal{H})\) is an open subset of \(\partial V\) and the orientations induced on \(\partial V\) from \(V\) and \(\mathcal{H}\) coincide. A Lipschitz immersion \(\phi:\mathcal{H}\to\mathbb{H}^{d+1}\) is called _locally convex_ if every point in \(\mathcal{H}\) has a neighbourhood \(U\) such that the restriction of \(\phi\) to \(U\) is a convex Lipschitz embedding.
We can use convex ruled Lipschitz embeddings to define convex ruled hyperbolic embedding structures, the Riemannian counterpart of convex ruled spacelike AdS structures.
**Definition 3.39**.: Let \(M\) be a \(d\)-dimensional oriented manifold. A _hyperbolic embedding atlas_ on \(M\) is the data of an atlas \((U_{i},\phi_{i})_{i\in I}\) where \((U_{i})_{i\in I}\) is an open cover of \(M\) and \(\phi_{i}:U_{i}\to\mathbb{H}^{d+1}\) is a Lipschitz embedding, such that for all \(i,j\in I\) and all \(x\in U_{i}\cap U_{j}\), there exists an orientation preserving isometry \(g\) of \(\mathbb{H}^{d+1}\) such that
\[\phi_{j}=g\circ\phi_{i}\]
in a neighbourhood of \(x\).
A _hyperbolic embedding structure_ is a maximal hyperbolic embedding atlas. It is called _convex ruled_ if the local charts \(\phi_{i}\) are locally convex and ruled.
Two hyperbolic embedding structures \((U_{i},\phi_{i})_{i\in I}\) and \((V_{j},\psi_{j})_{j\in J}\) on \(M\) are _isomorphic_ if for every \((i,j)\in I\times J\) and every \(x\in U_{i}\cap V_{j}\), there exists an orientation preserving isometry \(g\) of \(\mathbb{H}^{d+1}\) such that
\[\psi_{j}=g\circ\phi_{i}\]
in a neighbourhood of \(x\).
The method used for Corollary 3.32 also provides a developing map and a holonomy representation for convex ruled hyperbolic embedding structures.
**Lemma 3.40**.: _Let \((U_{i},\phi_{i})\) be a hyperbolic embedding atlas on \(M\). Then there exists a representation \(\rho:\pi_{1}(M)\to\operatorname{SO}_{\circ}(d+1,1)\) and a \(\rho\)-equivariant Lipschitz immersion \(\phi:\widetilde{M}\to\mathbb{H}^{d+1}\) such that, for all \(x\in\widetilde{M}\) and all \(U_{i}\) containing \(\pi(x)\), there exists \(g\in\operatorname{SO}_{\circ}(d+1,1)\) such that_
\[\phi_{i}\circ\pi=g\circ\phi\]
_in a neighbourhood of \(x\). (Here \(\pi:\widetilde{M}\to M\) denotes the universal covering map.)_
_Moreover, if another pair \((\rho^{\prime},\phi^{\prime})\) satisfies the same properties, then there is a unique \(g\in\operatorname{SO}_{\circ}(d+1,1)\) such that \(\rho^{\prime}=g\rho g^{-1}\) and \(\phi^{\prime}=g\circ\phi\)._
Just as in the AdS case, the converse also holds: a pair \((\rho,\phi)\) where \(\rho:\pi_{1}(M)\to\operatorname{SO}_{\circ}(d+1,1)\) is a representation and \(\phi:\widetilde{M}\to\mathbb{H}^{d+1}\) is a \(\rho\)-equivariant Lipschitz immersion determines a hyperbolic embedding structure on \(M\). Finally, the map \(\phi\) is locally convex and ruled if and only if the corresponding hyperbolic embedding structure is convex ruled.
The hyperbolic manifolds associated to convex ruled hyperbolic embedding structures, namely hyperbolic ends, require a precise definition. In simple terms, a hyperbolic end is a hyperbolic manifold with compact concave boundary which is maximal, in the sense of inclusion, for this condition. In order to give a precise definition, we need to define what we mean by a hyperbolic manifold with concave boundary.
**Definition 3.41**.: Let \(\phi:\mathcal{H}\to\mathbb{H}^{d+1}\) be a convex Lipschitz embedding, and \(V\subset\mathbb{H}^{d+1}\) a convex open set such that \(\phi(\mathcal{H})\) is an open subset of \(\partial V\). We define the set \(\mathcal{N}(\phi)\) of _normal vectors_ to \(\phi\) to be the set of pairs \((x,v)\) where \(x\in\mathcal{H}\) and \(v\in T_{\phi(x)}\mathbb{H}^{d+1}\) is a unit vector pointing outside of \(V\) such that \(v^{\perp}\) is a support hyperplane of \(V\).
The _concave development_ of a convex Lipschitz embedding \(\phi:\mathcal{H}\to\mathbb{H}^{d+1}\) is the set
\[\mathcal{W}(\phi)=\left\{\exp_{\phi(x)}(tv)\,\Big{|}\,(x,v)\in\mathcal{N}( \phi),t\geq 0\right\}.\]
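For instance (an elementary illustrative case): if \(\phi\) is the inclusion of an open subset \(\mathcal{H}\) of a totally geodesic hyperplane \(\mathbb{H}^{d}\subset\mathbb{H}^{d+1}\) bounding an open half-space \(V\), then the only support hyperplane of \(V\) at a boundary point is \(\mathbb{H}^{d}\) itself, so that \(\mathcal{N}(\phi)=\{(x,n(x))\mid x\in\mathcal{H}\}\) with \(n\) the unit normal pointing away from \(V\), and
\[\mathcal{W}(\phi)=\left\{\exp_{\phi(x)}(t\,n(x))\,\middle|\,x\in\mathcal{H},\ t\geq 0\right\}\]
is the union of the normal rays issued from \(\phi(\mathcal{H})\) on the side opposite to \(V\).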
**Definition 3.42**.: A _concave hyperbolic atlas_ on a topological space \(X\) is the data of an atlas \((U_{i},\phi_{i})_{i\in I}\) where \((U_{i})_{i\in I}\) is an open cover of \(X\) and the maps \(\phi_{i}:U_{i}\to\mathbb{H}^{d+1}\) have the following properties:
1. Each \(\phi_{i}\) is a homeomorphism from \(U_{i}\) to \(\phi_{i}(U_{i})\),
2. For each \(i\in I\) there is a convex Lipschitz embedding \(\psi_{i}:\mathcal{H}_{i}\to\mathbb{H}^{d+1}\) such that \(\phi_{i}(U_{i})=\mathcal{W}(\psi_{i})\),
3. For every pair \(i,j\in I\) there is an element \(g_{ij}\in\operatorname{SO}_{\circ}(d+1,1)\) such that \(\phi_{j}=g_{ij}\circ\phi_{i}\) on \(U_{i}\cap U_{j}\).
A _concave hyperbolic manifold_ is the data of a connected Hausdorff second countable topological space \(X\) and a maximal concave hyperbolic atlas on \(X\).
Note that a concave hyperbolic manifold is always a topological manifold with boundary, the interior being a smooth hyperbolic manifold.
**Definition 3.43**.: A _hyperbolic end_ of dimension \(d+1\) is a concave hyperbolic manifold with non empty compact boundary, and which is maximal (in the sense of inclusion) under this condition.
Examples of hyperbolic ends arise naturally as the components of the complement of the convex core in a convex cocompact hyperbolic manifold. Let us focus on the quasifuchsian case.
**Definition 3.44**.: A _quasifuchsian hyperbolic manifold_ of dimension \(d+1\) is a complete hyperbolic manifold \(N\) homeomorphic to \(M\times\mathbb{R}\), with \(M\) a closed manifold of dimension \(d\), and which contains a non-empty compact subset \(K\subset N\) with the following convexity property: for any \(x,y\in K\), any geodesic of \(N\) joining \(x\) and \(y\) is contained in \(K\).
The smallest such compact set \(K\) is called the _convex core_ of \(N\), denoted by \(C(N)\). A quasifuchsian hyperbolic manifold is called _Fuchsian_ if its convex core is a totally geodesic hypersurface.
If \(N\) is a quasifuchsian hyperbolic manifold, then \(N\setminus C(N)\) has two connected components, and the closure of each component is a hyperbolic end. When \(d=1\), a hyperbolic end is just a funnel (the quotient by a hyperbolic isometry of a half hyperbolic plane bounded by the axis of this isometry). When \(d\geq 2\), a hyperbolic end cannot always be obtained as an end of a quasifuchsian hyperbolic manifold. Indeed, a hyperbolic end with boundary homeomorphic to \(M\) induces a holonomy representation \(\rho:\pi_{1}(M)\to\mathrm{SO}_{\circ}(d+1,1)\), which is faithful and discrete for a quasifuchsian manifold, but may fail to be either for a hyperbolic end.
A hyperbolic end \(E\) actually comes with two boundaries. The first one, called the _pleated boundary_ of \(E\) and denoted by \(\partial_{0}E\), is the one already mentioned after Definition 3.42, obtained when \(E\) is seen as a manifold with boundary. We will discuss its geometry further in the next section.
Figure 2. The concave development.

The other boundary is the ideal boundary \(\partial_{\infty}E\). It is homeomorphic to \(M\) and locally modelled on \(\partial_{\infty}\mathbb{H}^{d+1}=\mathbb{S}^{d}\) with the action of \(\mathrm{SO}_{\circ}(d+1,1)\) by Möbius transformations. In other words, \(\partial_{\infty}E\) is equipped with a conformally flat structure. Moreover, this correspondence between hyperbolic ends homeomorphic to \(M\times[0,+\infty)\) and conformally flat structures on \(M\) is a bijection, according to the following:
**Theorem 3.45** (Kulkarni-Pinkall [31]).: _Let \(M\) be an oriented closed manifold of dimension \(d\) whose fundamental group is not virtually abelian.1 Then every conformally flat structure on \(M\) is the ideal boundary of a unique hyperbolic end with interior homeomorphic to \(M\times(0,+\infty)\)._
Footnote 1: This topological condition ensures that conformally flat structures on \(M\) are _hyperbolic_ as defined in [31], and is satisfied by closed hyperbolic manifolds as well as Gromov–Thurston manifolds.
When \(d=2\), the group \(\operatorname{SO}_{\circ}(3,1)\) is isomorphic to \(\operatorname{PSL}(2,\mathbb{C})\), and a theorem of Gallo-Kapovich-Marden [18] states that any non elementary representation of the fundamental group of a closed surface into \(\operatorname{SL}(2,\mathbb{C})\) descends to the holonomy representation of a conformally flat structure, thus providing an abundance of non quasifuchsian hyperbolic ends.
The bridge between convex ruled hyperbolic embedding structures and hyperbolic ends is the pleated boundary. The following lemma can be inferred from [31, Theorems 5.9 and 8.6] (see also [38]), but we can also give a direct proof without going through conformally flat structures.
**Lemma 3.46**.: _If \(E\) is a hyperbolic end, then the pleated boundary \(\partial_{0}E\) carries a convex ruled hyperbolic embedding structure._
Proof.: Following Definition 3.42, we see that the restrictions of the charts of a concave hyperbolic atlas to \(\partial_{0}E\) are convex Lipschitz embeddings. The fact that \(\partial_{0}E\) is ruled comes from the maximality of an end: if \(\partial_{0}E\) were strictly concave, we could "push" it to obtain a larger concave hyperbolic manifold (see Figure 3). More precisely, consider a point \(x\in\partial_{0}E\) at which the property fails, and a concave hyperbolic atlas for which \(x\) is in a unique chart \((U_{i},\phi_{i})\). Then consider a support hyperplane \(H\subset\mathbb{H}^{d+1}\) to \(V_{i}\) at \(\phi_{i}(x)\) (following the notations of Definitions 3.41 and 3.42). The failure of the ruling at \(x\) means that one can push \(H\) slightly to a hyperplane \(H^{\prime}\) that cuts the image of \(\phi_{i}\) in an arbitrarily small neighbourhood of \(\phi_{i}(x)\). In particular, we can assume that this neighbourhood is not contained in any other chart. Replacing \(V_{i}\) with its intersection with the half-space bounded by \(H^{\prime}\) that does not contain \(\phi_{i}(x)\) and keeping the other charts, we get a concave hyperbolic atlas on a larger manifold, thus contradicting the maximality of \(E\).
We now wish to see how any convex ruled hyperbolic embedding structure can be obtained as the pleated boundary of a hyperbolic end.
**Lemma 3.47**.: _Any convex ruled hyperbolic embedding structure can be obtained as the pleated boundary of a hyperbolic end, unique up to isometry._
This could be obtained by [31, Theorem 10.6], but we provide here a direct construction without going through conformally flat structures, inspired by [38, Section 4].
Figure 3. Proof of Lemma 3.46.
Proof of Lemma 3.47.: Let \((U_{i},\phi_{i})_{i\in I}\) be a convex ruled hyperbolic embedding atlas on a compact manifold \(M\). We obtain a concave hyperbolic manifold with pleated boundary isometric to \(M\) by considering:
\[E=\bigsqcup_{i\in I}\mathcal{W}(\phi_{i})/\sim\]
where we identify \(\mathcal{W}(\phi_{i}|_{U_{i}\cap U_{j}})\) with \(\mathcal{W}(\phi_{j}|_{U_{i}\cap U_{j}})\) through the element \(\gamma_{ij}\in\mathrm{SO}_{\circ}(d+1,1)\) such that \(\phi_{i}|_{U_{i}\cap U_{j}}=\gamma_{ij}\circ\phi_{j}|_{U_{i}\cap U_{j}}\).
Consider a concave hyperbolic manifold with compact pleated boundary \(E^{\prime}\supset E\), and let \((x,x^{\prime})\in\partial_{0}E\times\partial_{0}E^{\prime}\) be a pair maximising the distance. In a chart, we find that \(\partial_{0}E^{\prime}\) must be (locally around \(x^{\prime}\)) included in the \(r\)-neighbourhood of a geodesic, where \(r=d(x,x^{\prime})\) (because \(\partial_{0}E\) is ruled). But for \(r>0\), the \(r\)-neighbourhood of a geodesic in \(\mathbb{H}^{d+1}\) is strictly convex, thus \(r=0\), i.e. \(E^{\prime}=E\) and \(E\) is maximal.
Now consider another hyperbolic end \(E^{\prime}\) inducing the same convex ruled hyperbolic embedding structure on the pleated boundary \(\partial_{0}E^{\prime}\approx M\). If \((V_{j},\psi_{j})_{j\in J}\) is a concave hyperbolic atlas on \(E^{\prime}\), then whenever \(V_{j}\cap\partial_{0}E^{\prime}\neq\emptyset\), \(\psi_{j}(V_{j})\) contains a neighbourhood of \(\psi_{j}(V_{j}\cap\partial_{0}E^{\prime})\) in \(\mathcal{W}(\psi_{j}|_{V_{j}\cap\partial_{0}E^{\prime}})\). So \(E^{\prime}\) contains a neighbourhood of \(\partial_{0}E\) in \(E\). By maximality of \(E^{\prime}\), we find that \(E\subset E^{\prime}\), and by maximality of \(E\) we must have \(E=E^{\prime}\), hence the uniqueness.
Note that we can also obtain the flat conformal structure from this construction. If \(\phi:\mathcal{H}\to\mathbb{H}^{d+1}\) is a convex Lipschitz embedding, we can consider the ideal boundary \(\partial_{\infty}\mathcal{W}(\phi)\) of its concave development:
\[\partial_{\infty}\mathcal{W}(\phi)=\overline{\mathcal{W}(\phi)}\cap\partial_ {\infty}\mathbb{H}^{d+1}=\left\{\lim_{t\to+\infty}\exp_{\phi(x)}(tv)\,\bigg{|} \,(x,v)\in\mathcal{N}(\phi)\right\}\subset\partial_{\infty}\mathbb{H}^{d+1}=S^ {d}.\]
One gets a conformally flat manifold by setting:
\[\partial_{\infty}E=\bigsqcup\partial_{\infty}\mathcal{W}(\phi_{i})/\sim\]
where we identify \(\partial_{\infty}\mathcal{W}(\phi_{i}|_{U_{i}\cap U_{j}})\) with \(\partial_{\infty}\mathcal{W}(\phi_{j}|_{U_{i}\cap U_{j}})\) through the element \(\gamma_{ij}\in\mathrm{SO}_{\circ}(d+1,1)\) such that \(\phi_{i}|_{U_{i}\cap U_{j}}=\gamma_{ij}\circ\phi_{j}|_{U_{i}\cap U_{j}}\).
As a consequence of Lemma 3.46 and Lemma 3.47, we get the following hyperbolic version of Lemma 3.37:
**Lemma 3.48**.: _Let \(M\) be a closed manifold. There is a one-to-one correspondence between convex ruled hyperbolic embedding structures on \(M\) up to equivalence and hyperbolic ends \(E\) with pleated boundary \(\partial_{0}E\approx M\) up to isometry._
## 4. Spherical and de Sitter polygons
This section is devoted to the study of polygons in the sphere \(\mathrm{S}^{2}\) and spacelike polygons in the de Sitter space \(\mathrm{dS}^{2}\). We construct a moduli space of such polygons, which is a smooth manifold, and describe various subsets of this moduli space, namely equilateral polygons, and equilateral polygons with a central symmetry.
As we will see in the next sections, spherical and de Sitter polygons parametrize bendings of Gromov-Thurston manifolds in the hyperbolic and anti-de Sitter space respectively. Once this is established, the results of the present section will readily prove the main theorems of the paper.
### Spherical polygons and their deformations
Let us start with the more familiar setting of spherical polygons. We will use the following definition:
**Definition 4.1**.: A spherical \(k\)-gon, \(k\geq 3\), is a tuple \((v_{i})_{i\in\mathbb{Z}/k\mathbb{Z}}\) of pairwise distinct points in \(\mathrm{S}^{2}\) such that
1. \(0<d(v_{i},v_{i+1})<\pi\) for all \(i\in\mathbb{Z}/k\mathbb{Z}\),
2. \((v_{i},v_{i+1})\cap[v_{j},v_{j+1}]=\emptyset\) for \(i\neq j\).
_Remark 4.2_.: Condition 1 guarantees that two consecutive vertices \(v_{i},v_{i+1}\) are never antipodal, so that they are joined by a unique geodesic segment. Condition 2 and the fact that the vertices are pairwise distinct guarantee that our polygons do not have "crossings", so that the union of all edges \(\bigcup_{i\in\mathbb{Z}/k\mathbb{Z}}[v_{i},v_{i+1}]\) is an embedded topological circle.
_Remark 4.3_.: Our polygons are _labelled_, meaning that a polygon is considered different from another one with the same vertices permuted. In particular, the polygon \((v_{1},\ldots,v_{k})\) is different from \((v_{k},\ldots,v_{1})\), so our polygons are also _oriented_.
The set \(\mathcal{U}_{k}(\mathrm{S})\) of spherical \(k\)-gons is clearly an open subset of \((\mathrm{S}^{2})^{k}\) and thus inherits a structure of \(2k\)-dimensional manifold. The group \(\mathrm{SO}(3)\) acts smoothly on \(\mathcal{U}_{k}(\mathrm{S})\) and this action is free (since the stabilizer of a given polygon fixes two non-antipodal points on the sphere). The quotient space
\[\mathcal{P}_{k}(\mathrm{S})\stackrel{\mathrm{def}}{=}\mathrm{SO}(3)\backslash\mathcal{U}_{k}(\mathrm{S})\]
is thus a manifold of dimension \(2k-3\) which we call the _moduli space of (labelled) \(k\)-gons_.
Given \(p=(v_{1},\ldots,v_{k})\) a spherical polygon in \(\mathrm{S}^{2}\), let us introduce the following auxiliary vectors:
* \(u_{i}^{+}\) is the unit vector in \(T_{v_{i}}\mathrm{S}^{2}=v_{i}^{\perp}\) directing the edge \([v_{i},v_{i+1}]\),
* \(u_{i}^{-}\) is the unit vector in \(T_{v_{i}}\mathrm{S}^{2}=v_{i}^{\perp}\) directing the edge \([v_{i},v_{i-1}]\),
* \(w_{i}=v_{i}\times u_{i}^{+}\) is the unit vector completing \((v_{i},u_{i}^{+})\) into an oriented orthonormal basis.
(These vectors depend on \(p\) in the same way the vertices \(v_{i}\) do, but we omit this dependence in order to lighten notation.)
Finally, we define \(l_{i}(p)\) to be the length of the edge \([v_{i},v_{i+1}]\) and \(\theta_{i}(p)\) to be the oriented angle at \(v_{i}\) between \(u_{i}^{+}\) and \(-u_{i}^{-}\), i.e. \(\theta_{i}(p)\in(-\pi,\pi)\) is such that
\[-u_{i}^{-}=\cos(\theta_{i}(p))u_{i}^{+}+\sin(\theta_{i}(p))w_{i}\.\]
With this convention, \(\theta_{i}=0\) if and only if \(v_{i-1}\), \(v_{i}\) and \(v_{i+1}\) are aligned.
The functions \(l_{i}:\mathcal{U}_{k}(\mathrm{S})\to(0,\pi)\) and \(\theta_{i}:\mathcal{U}_{k}(\mathrm{S})\to(-\pi,\pi)\) are clearly smooth and \(\mathrm{SO}(3)\)-invariant. They thus factor to smooth functions on \(\mathcal{P}_{k}(\mathrm{S})\).
**Theorem 4.4**.: _The map_
\[\Phi:\mathcal{P}_{k}(\mathrm{S}) \to(0,\pi)^{k}\times(-\pi,\pi)^{k}\] \[p \mapsto(l_{1}(p),\ldots,l_{k}(p),\theta_{1}(p),\ldots,\theta_{k}( p))\]
_is an embedding, and the image of \(\mathrm{d}\Phi\) at a point \(p\) is the set of tuples \((\dot{l}_{1},\ldots,\dot{l}_{k},\dot{\theta}_{1},\ldots,\dot{\theta}_{k})\) such that_
\[\sum_{i=1}^{k}\dot{\theta}_{i}v_{i}-\dot{l}_{i}w_{i}=0. \tag{1}\]
Proof of Theorem 4.4.: It is well-known that a polygon is characterized up to isometry by its lengths and angles, so that \(\Phi\) is a homeomorphism onto its image. While the fact that \(\Phi\) is an immersion is also quite intuitive, let us prove it with a little more care.
Fix \(p\in\mathcal{U}_{k}(\mathrm{S})\) and \(\dot{p}=(\dot{v}_{1},\ldots\dot{v}_{k})\) a tangent vector at \(p\). Assume that \(\mathrm{d}l_{i}(\dot{p})=\mathrm{d}\theta_{i}(\dot{p})=0\). We need to prove the existence of \(a\in\mathfrak{so}(3)\) such that \(\dot{v}_{i}=av_{i}\) for all \(i\).
For each \(i\), note that \((v_{i},v_{i+1},w_{i})\) is a basis of \(\mathbb{R}^{3}\). Denoting by \(\dot{w}_{i}\) the first order variation of \(w_{i}\), we have
\[\langle\dot{w}_{i},w_{i}\rangle =0\] \[\langle\dot{w}_{i},v_{i+1}\rangle =-\langle w_{i},\dot{v}_{i+1}\rangle\]
since \(w_{i}\) is a unit vector orthogonal to \(v_{i}\) and \(v_{i+1}\). Define \(a_{i}\in\mathrm{End}(\mathbb{R}^{3})\) by
\[a_{i}v_{i} =\dot{v}_{i}\] \[a_{i}v_{i+1} =\dot{v}_{i+1}\] \[a_{i}w_{i} =\dot{w}_{i}\.\]
The identity
\[\langle a_{i}v,w\rangle=-\langle v,a_{i}w\rangle\]
is satisfied on the basis \((v_{i},v_{i+1},w_{i})\), hence \(a_{i}\in\mathfrak{so}(3)\).
Let us now prove that \(a_{i}=a_{i-1}\). By construction, \(a_{i}v_{i}=\dot{v}_{i}=a_{i-1}v_{i}\). Moreover, we have
\[w_{i-1}=\cos(\theta_{i})w_{i}-\sin(\theta_{i})u_{i}^{+}\.\]
Since \(\dot{\theta}_{i}=0\), we deduce that
\[a_{i-1}w_{i-1} =\dot{w}_{i-1}\] \[=\cos(\theta_{i})\dot{w}_{i}-\sin(\theta_{i})\dot{u}_{i}^{+}\] \[=\cos(\theta_{i})a_{i}w_{i}-\sin(\theta_{i})a_{i}u_{i}^{+}\] \[=a_{i}w_{i-1}\.\]
The endomorphism \(a_{i}-a_{i-1}\) is in \(\mathfrak{so}(3)\) and has a kernel of dimension at least \(2\), hence \(a_{i}=a_{i-1}\).
By an immediate induction, we conclude that all the \(a_{i}\) are equal to the same \(a\in\mathfrak{so}(3)\), which then satisfies \(\dot{v}_{i}=av_{i}\) for all \(i\). This proves that the map \(\Phi\) is an embedding.
Let us now characterize the image of \(\mathrm{d}\Phi\). Note that, since the \(v_{i}\) and \(w_{i}\) span \(\mathbb{R}^{3}\), the space of tuples \((\dot{l}_{1},\dots,\dot{l}_{k},\dot{\theta}_{1},\dots,\dot{\theta}_{k})\) satisfying Equation (1) has dimension \(2k-3\). By equality of dimension, it is thus enough to verify that the equation is satisfied on the image of \(\mathrm{d}\Phi\), and by linearity it suffices to prove it for first order deformations where only one vertex, say \(v_{2}\), is moving.
Fix a polygon \(p=(v_{1},\dots,v_{k})\) and assume first that no three consecutive vertices are aligned. Then \((u_{i}^{-},u_{i}^{+})\) form a basis of \(T_{v_{i}}\mathrm{S}^{2}\) for each \(i\), and it is enough to verify that the relation (1) is satisfied for a first order variation of \(p\) when only one of the \(v_{i}\) moves in the direction \(u_{i}^{-}\) or \(u_{i}^{+}\). Let us thus prove that (1) holds when \(\dot{v}_{2}=u_{2}^{+}\) and \(\dot{v}_{i}=0\) for \(i\neq 2\) (this is enough by symmetry of the problem).
We can compute first order variations of \(\theta_{i}\) and \(l_{i}\) and get the following formulae (the third and fourth are checked explicitly after the list):
1. \(\dot{\theta}_{1}=\frac{\sin(\theta_{2})}{\sin(l_{1})}\),
2. \(\dot{\theta}_{2}=-\sin(\theta_{2})\mathrm{cotan}(l_{1})\),
3. \(\dot{l}_{1}=\cos(\theta_{2})\),
4. \(\dot{l}_{2}=-1\),
5. \(\dot{l}_{i}=\dot{\theta}_{i}=0\) for \(i\notin\{1,2\}\).
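As a quick check of the last two formulae (the first two follow from a similar, slightly longer, spherical trigonometry computation): differentiating \(\cos(l_{1})=\langle v_{1},v_{2}\rangle\) and \(\cos(l_{2})=\langle v_{2},v_{3}\rangle\) along this variation, and using \(\dot{v}_{2}=u_{2}^{+}\), \(\dot{v}_{1}=\dot{v}_{3}=0\), \(v_{1}=\cos(l_{1})v_{2}+\sin(l_{1})u_{2}^{-}\), \(v_{3}=\cos(l_{2})v_{2}+\sin(l_{2})u_{2}^{+}\) and \(\langle u_{2}^{-},u_{2}^{+}\rangle=-\cos(\theta_{2})\), we find
\[\dot{l}_{1}=-\frac{\langle v_{1},\dot{v}_{2}\rangle}{\sin(l_{1})}=-\langle u_{2}^{-},u_{2}^{+}\rangle=\cos(\theta_{2})\,\qquad\dot{l}_{2}=-\frac{\langle\dot{v}_{2},v_{3}\rangle}{\sin(l_{2})}=-\langle u_{2}^{+},u_{2}^{+}\rangle=-1\.\]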
Writing coordinates in the orthonormal basis \((v_{2},-u_{2}^{-},w_{1})\), we have:
\[v_{1} = (\cos(l_{1}),-\sin(l_{1}),0)\,\] \[v_{2} = (1,0,0)\,\] \[w_{1} = (0,0,1)\,\] \[w_{2} = (0,\sin(\theta_{2}),\cos(\theta_{2}))\.\]
And we conclude that
\[\sum_{i=1}^{k}\dot{\theta}_{i}v_{i}-\dot{l}_{i}w_{i}=\dot{\theta}_{1}v_{1}+\dot{\theta}_{2}v_{2}-\dot{l}_{1}w_{1}-\dot{l}_{2}w_{2}\]
\[=\frac{\sin(\theta_{2})}{\sin(l_{1})}\,(\cos(l_{1}),-\sin(l_{1}),0)-\sin(\theta_{2})\mathrm{cotan}(l_{1})\,(1,0,0)-\cos(\theta_{2})\,(0,0,1)+(0,\sin(\theta_{2}),\cos(\theta_{2}))\,\]
which is equal to \(0\).
We deduce that Equation (1) holds on the image of \(\mathrm{d}\Phi\) at every \(p\) with no three consecutive vertices aligned, and conclude that it holds on all of \(\mathcal{P}_{k}(\mathrm{S})\) by density.
### Spacelike polygons in \(\mathrm{dS}^{2}\) and their deformations
We now duplicate the above construction for spacelike polygons in \(\mathrm{dS}^{2}\). The results and their proofs are formally the same, and we will only stress the additional technicalities.
The two-dimensional de Sitter space \(\mathrm{dS}^{2}\) is the space of unit spacelike vectors in the \(2+1\) dimensional Minkowski space \(\mathbb{R}^{2,1}\). More precisely, we equip \(\mathbb{R}^{3}\) with the bilinear symmetric form
\[\langle x,y\rangle=x_{0}y_{0}+x_{1}y_{1}-x_{2}y_{2}\.\]
and consider the Lorentzian submanifold
\[\mathrm{dS}^{2}=\{x\in\mathbb{R}^{2,1}\ |\ \langle x,x\rangle=1\}\.\]
A timelike vector in \(\mathbb{R}^{2,1}\) is _future pointing_ if its third coordinate is positive. This defines an orientation of time in \(\mathrm{dS}^{2}\). One can then define an orientation of space in the following way: given \(v\in\mathrm{dS}^{2}\) and \(u\in T_{v}\mathrm{dS}^{2}\) spacelike, let \(w\) be the unit future pointing vector orthogonal to \(v\) and \(u\). We say that \(u\) is _positive_ if \((v,u,w)\) is a direct basis of \(\mathbb{R}^{2,1}\). The group of isometries of \(\mathrm{dS}^{2}\) preserving the orientation of time and space is \(\mathrm{SO}_{\circ}(2,1)\), acting linearly on \(\mathrm{dS}^{2}\subset\mathbb{R}^{2,1}\). Geodesics in \(\mathrm{dS}^{2}\) are intersections of \(\mathrm{dS}^{2}\) with linear planes in \(\mathbb{R}^{2,1}\).
**Definition 4.5**.: A spacelike de Sitter \(k\)-gon is a tuple \((v_{i})_{i\in\mathbb{Z}/k\mathbb{Z}}\) of pairwise distinct points in \(\mathrm{dS}^{2}\) such that
1. \(v_{i}\) and \(v_{i+1}\) are joined by a spacelike segment of length in \((0,\pi)\) for all \(i\in\mathbb{Z}/k\mathbb{Z}\),
2. the vector directing the segment \([v_{i},v_{i+1}]\) at \(v_{i}\) is positive for all \(i\),
3. \((v_{i},v_{i+1})\cap[v_{j},v_{j+1}]=\emptyset\) for \(i\neq j\).
_Remark 4.6_.: Condition (1) guarantees that two consecutive vertices \(v_{i},v_{i+1}\) are never antipodal, so that they are joined by a unique geodesic segment. Condition (3) ensures that our polygons do not have "crossings", so that the union of all edges \(\bigcup_{i\in\mathbb{Z}/k\mathbb{Z}}[v_{i},v_{i+1}]\) is an embedded topological circle. Finally, by Condition (2), this circle is "positively oriented". In particular, its homology class is the positive generator of \(\mathrm{H}_{1}(\mathrm{dS}^{2})=\mathbb{Z}\).
As in the spherical case, the set \(\mathcal{U}_{k}(\mathrm{dS})\) of spacelike de Sitter \(k\)-gons is an open subset of \((\mathrm{dS}^{2})^{k}\) on which the group \(\mathrm{SO}_{\circ}(2,1)\) acts smoothly and freely. Since \(\mathrm{SO}_{\circ}(2,1)\) is not compact anymore, one also needs to remark that this action is proper, which is easy because any element of \(\mathrm{SO}_{\circ}(2,1)\) is entirely characterized by the image of two independent vectors spanning a non-isotropic plane, such as two consecutive vertices of a spacelike polygon. We thus get that the quotient space
\[\mathcal{P}_{k}(\mathrm{dS})\stackrel{\mathrm{def}}{=}\mathrm{SO}_{\circ}(2,1)\backslash\mathcal{U}_{k}(\mathrm{dS})\]
is a manifold of dimension \(2k-3\) which we call the _moduli space of (labelled) spacelike de Sitter \(k\)-gons_.
Given \(p=(v_{1},\dots,v_{k})\) a spacelike polygon in \(\mathrm{dS}^{2}\), we introduce again the auxiliary vectors:
* \(u_{i}^{+}\) the unit vector in \(T_{v_{i}}\mathrm{dS}^{2}=v_{i}^{\perp}\) directing the edge \([v_{i},v_{i+1}]\),
* \(u_{i}^{-}\) the unit vector in \(T_{v_{i}}\mathrm{dS}^{2}=v_{i}^{\perp}\) directing the edge \([v_{i},v_{i-1}]\),
* \(w_{i}=v_{i}\times u_{i}^{+}\) the unit vector completing \((v_{i},u_{i}^{+})\) into an oriented orthonormal basis.
Note that we have \(\langle w_{i},w_{i}\rangle=-1\) since we are in the Minkowski space.
Finally, we define \(l_{i}(p)\) to be the length of the edge \([v_{i},v_{i+1}]\) and \(\theta_{i}(p)\) to be a "Lorentzian angle" at \(v_{i}\) between \(u_{i}^{+}\) and \(-u_{i}^{-}\), i.e. \(\theta_{i}(p)\in\mathbb{R}\) is such that
\[-u_{i}^{-}=\cosh(\theta_{i}(p))u_{i}^{+}+\sinh(\theta_{i}(p))w_{i}\.\]
Again, \(l_{i}:\mathcal{U}_{k}(\mathrm{dS})\to(0,\pi)\) and \(\theta_{i}:\mathcal{U}_{k}(\mathrm{dS})\to\mathbb{R}\) factor to smooth functions on \(\mathcal{P}_{k}(\mathrm{dS})\).
**Theorem 4.7**.: _The map_
\[\Phi:\mathcal{P}_{k}(\mathrm{dS}) \to(0,\pi)^{k}\times\mathbb{R}^{k}\] \[p \mapsto(l_{1}(p),\dots,l_{k}(p),\theta_{1}(p),\dots,\theta_{k}(p))\]
is an embedding, and the image of \(\mathrm{d}\Phi\) at a point \(p\) is the set of tuples \((\dot{l}_{1},\ldots,\dot{l}_{k},\dot{\theta}_{1},\ldots,\dot{\theta}_{k})\) such that_
\[\sum_{i=1}^{k}\dot{\theta}_{i}v_{i}+\dot{l}_{i}w_{i}=0. \tag{2}\]
Proof of Theorem 4.7.: The proof is almost exactly the same as in the spherical case.
First, a polygon is characterized up to isometry by its lengths and angles, so that \(\Phi\) is a homeomorphism onto its image.
To prove that \(\Phi\) is an immersion, we construct \(a_{i}\in\mathfrak{so}(2,1)\) such that \(a_{i}v_{i}=\dot{v}_{i}\) and \(a_{i}v_{i+1}=\dot{v}_{i+1}\), and we prove that all the \(a_{i}\) are equal by showing that \(a_{i}w_{i-1}=a_{i-1}w_{i-1}\).
To characterize the image of \(\mathrm{d}\Phi\), we reduce with the same arguments as in the spherical case to proving that Equation (2) is satisfied at a polygon \(p\) where no three consecutive vertices are aligned and along a first order variation where \(\dot{v}_{2}=u_{2}^{+}\) and \(\dot{v}_{i}=0\) for \(i\neq 2\).
We obtain similar formulae for the first order variation of the lengths and angles, namely
1. \(\dot{\theta}_{1}=\frac{\sinh(\theta_{2})}{\sin(l_{1})}\),
2. \(\dot{\theta}_{2}=-\sinh(\theta_{2})\mathrm{cotan}(l_{1})\),
3. \(\dot{l}_{1}=\cosh(\theta_{2})\),
4. \(\dot{l}_{2}=-1\),
5. \(\dot{l}_{i}=\dot{\theta}_{i}=0\) for \(i\notin\{1,2\}\).
Figure 4. A spacelike polygon in \(\mathrm{dS}^{2}\).
In the orthonormal frame \((v_{2},-u_{2}^{-},w_{1})\), we have:
\[v_{1} = (\cos(l_{1}),-\sin(l_{1}),0)\,\] \[v_{2} = (1,0,0)\,\] \[w_{1} = (0,0,1)\,\] \[w_{2} = (0,-\sinh(\theta_{2}),\cosh(\theta_{2}))\.\]
And we compute again that
\[\sum_{i=1}^{k}\dot{\theta}_{i}v_{i}+\dot{l}_{i}w_{i}=\dot{\theta}_{1}v_{1}+ \dot{\theta}_{2}v_{2}+\dot{l}_{1}w_{1}+\dot{l}_{2}w_{2}=0\.\]
Note the single sign change in the second coordinate of \(w_{2}\), which induces the sign change in Equation (2) compared to Equation (1).
### Convex polygons
**Definition 4.8**.: A spherical or de Sitter polygon \(p\) will be called _convex_ if
\[\theta_{i}(p)\geq 0\]
for all \(i\).
By definition, the union of the edges of a spherical polygon forms an oriented Jordan curve, which separates the sphere into two topological discs. With our (perhaps non-standard) convention, a polygon is convex if and only if the disc _to its right_ is convex.
Similarly, a de Sitter polygon separates \(\mathrm{dS}^{2}\) into two cylindrical domains, and the polygon is convex if and only if the domain _in its past_ is convex.
### Equilateral polygons
**Definition 4.9**.: A spherical or spacelike de Sitter \(k\)-gon \(p\) is called _equilateral_ if
\[l_{1}(p)=\ldots=l_{k}(p)\.\]
Let \(\mathcal{P}_{k}^{eq}(\mathrm{S})\subset\mathcal{P}_{k}(\mathrm{S})\) denote the set of (equivalence classes of) equilateral spherical \(k\)-gons and \(\mathcal{P}_{k}^{l}(\mathrm{S})\subset\mathcal{P}_{k}^{eq}(\mathrm{S})\) the subset of equilateral \(k\)-gons \(p\) with \(l_{i}(p)=l\) for all \(i\).
**Proposition 4.10**.: _The space \(\mathcal{P}_{k}^{eq}(\mathrm{S})\) is a submanifold of \(\mathcal{P}_{k}(\mathrm{S})\) of dimension \(k-2\)._
_For all \(l<\frac{2\pi}{k}\), the subspace \(\mathcal{P}_{k}^{l}(\mathrm{S})\) is a submanifold of \(\mathcal{P}_{k}(\mathrm{S})\) of dimension \(k-3\)._
Proof.: Define
\[\begin{array}{cccc}L:&\mathcal{P}_{k}(\mathrm{S})&\to&\mathbb{R}^{k}\\ &p&\to&(l_{1}(p),\ldots,l_{k}(p))\end{array}\]
and
\[\begin{array}{cccc}D:&\mathcal{P}_{k}(\mathrm{S})&\to&\mathbb{R}^{k-1}\\ &p&\to&(l_{2}(p)-l_{1}(p),\ldots,l_{k}(p)-l_{k-1}(p))\,\end{array}\]
so that \(\mathcal{P}_{k}^{l}(\mathrm{S})=L^{-1}(l,\ldots,l)\) and \(\mathcal{P}_{k}^{eq}(\mathrm{S})=D^{-1}(0,\ldots,0)\).
Let \(p\) be an equilateral polygon of length \(l\). By Theorem 4.4, \((\dot{l}_{1},\ldots,\dot{l}_{k})\) belongs to the image of \(\mathrm{d}L\) if and only if there exist \((\dot{\theta}_{i})_{1\leq i\leq k}\) such that
\[\sum_{i=1}^{k}\dot{\theta}_{i}v_{i}=\sum_{i=1}^{k}\dot{l}_{i}w_{i}\.\]
This is the case as long as the \(v_{i}\) span \(\mathbb{R}^{3}\). Otherwise, all the \(v_{i}\) are aligned along an equator, hence \(kl=2\pi\). We conclude that \(L\) is a submersion along \(\mathcal{P}_{k}^{l}(\mathrm{S})\) for \(l<\frac{2\pi}{k}\) and the second part of the proposition follows.
We also deduce that \(D\) is a submersion at \(p\) unless \(p\) is contained in an equator. Assume now that \(p\) is contained in an equator. Then all the \(w_{i}\) are equal to a fixed unit vector \(w\) orthogonal to this equator. Fix \((\delta_{i})_{1\leq i\leq k-1}\in\mathbb{R}^{k-1}\) and set \(\dot{\theta}_{i}=0\) and \(\dot{l}_{i}=\sum_{j=1}^{i-1}\delta_{j}-s\), where
\[s=\frac{1}{k}\sum_{i=1}^{k}\sum_{j=1}^{i-1}\delta_{j}\.\]
Then we have
\[\dot{l}_{i+1}-\dot{l}_{i}=\delta_{i}\]
and
\[\sum_{i=1}^{k}\dot{\theta}_{i}v_{i}-\sum_{i=1}^{k}\dot{l}_{i}w_{i}=-\sum_{i=1} ^{k}\dot{l}_{i}w=0\.\]
Hence \((\delta_{1},\dots,\delta_{k-1})\) belongs to the image of \(\mathrm{d}D\). We conclude that \(D\) is a submersion along \(\mathcal{P}_{k}^{eq}(\mathrm{S})\) and the first part of the proposition follows.
Similarly, denoting by \(\mathcal{P}_{k}^{eq}(\mathrm{dS})\subset\mathcal{P}_{k}(\mathrm{dS})\) the set of (equivalence classes of) equilateral spacelike de Sitter \(k\)-gons and \(\mathcal{P}_{k}^{l}(\mathrm{dS})\subset\mathcal{P}_{k}^{eq}(\mathrm{dS})\) the subset of equilateral \(k\)-gons of length \(l\), we have
**Proposition 4.11**.: _The space \(\mathcal{P}_{k}^{eq}(\mathrm{dS})\) is a submanifold of \(\mathcal{P}_{k}(\mathrm{dS})\) of dimension \(k-2\)._
_For all \(l>\frac{2\pi}{k}\), the subspace \(\mathcal{P}_{k}^{l}(\mathrm{dS})\) is a submanifold of \(\mathcal{P}_{k}(\mathrm{dS})\) of dimension \(k-3\)._
The proof is identical to that of Proposition 4.10.
The following proposition guarantees in particular the existence of convex equilateral polygons with any length in the appropriate range.
**Proposition 4.12**.: _There exists a smooth \(1\)-parameter family of spherical \(k\)-gons \((p_{l})_{l\in(0,\frac{2\pi}{k}]}\) such that \(p_{l}\) is convex equilateral of length \(l\) and \(\theta_{i}(p_{l})=\theta(l)\) for some homeomorphism_
\[\theta:\left(0,\frac{2\pi}{k}\right]\to\left(\left(1-\frac{2}{k}\right)\pi, \pi\right]\.\]
_There exists a smooth \(1\)-parameter family of spacelike de Sitter \(k\)-gons \((p_{l})_{l\in\mathbb{R}_{\geq 0}}\) such that \(p_{l}\) is convex equilateral of length \(l\) and \(\theta_{i}(p_{l})=\theta(l)\) for some homeomorphism_
\[\theta:\mathbb{R}_{\geq 0}\to\mathbb{R}_{\geq 0}\.\]
Proof.: In the spherical case, fix an orthonormal basis and set \(p_{\alpha}=(v_{j}(\alpha))_{1\leq j\leq k}\) where
\[v_{j}(\alpha)=\left(\cos(\alpha)\cos\left(\frac{2\pi j}{k}\right),\cos(\alpha )\sin\left(\frac{2\pi j}{k}\right),\sin(\alpha)\right)\.\]
In the de Sitter case, fix an orthonormal basis and set \(p_{\alpha}=(v_{j}(\alpha))_{1\leq j\leq k}\) where
\[v_{j}(\alpha)=\left(\cosh(\alpha)\cos\left(\frac{2\pi j}{k}\right),\cosh( \alpha)\sin\left(\frac{2\pi j}{k}\right),\sinh(\alpha)\right)\.\]
The polygon \(p_{\alpha}\) is symmetric under rotation of angle \(\frac{2\pi}{k}\), hence it is equilateral with lengths \(l(\alpha)\) and all angles equal to \(\theta(\alpha)\). A straightforward computation shows that the maps
\[\alpha\mapsto l(p_{\alpha})\quad\text{and}\quad\alpha\mapsto\theta(\alpha)\]
are both homeomorphisms between the appropriate intervals.
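The explicit family above is easy to probe numerically. The following sketch is only an illustration (not part of the argument): it evaluates, in the spherical case, the side length and the interior angle of the regular polygon \(p_{\alpha}\); the paper's convention for the angle functions \(\theta_{i}\) is assumed here to agree with the interior angle, which matches the intervals in the statement.

```python
import numpy as np

def regular_spherical_polygon(k, alpha):
    """Vertices v_j(alpha) of the explicit equilateral spherical k-gon above."""
    j = np.arange(k)
    return np.stack([np.cos(alpha) * np.cos(2 * np.pi * j / k),
                     np.cos(alpha) * np.sin(2 * np.pi * j / k),
                     np.sin(alpha) * np.ones(k)], axis=1)

def side_length_and_angle(k, alpha):
    v = regular_spherical_polygon(k, alpha)
    # Side length: spherical distance between two consecutive vertices.
    l = np.arccos(np.clip(v[0] @ v[1], -1.0, 1.0))
    # Interior angle at v[1]: angle between the unit tangents pointing
    # towards the two neighbouring vertices.
    def unit_tangent(a, b):
        t = a - (a @ b) * b
        return t / np.linalg.norm(t)
    cos_theta = unit_tangent(v[0], v[1]) @ unit_tangent(v[2], v[1])
    return l, np.arccos(np.clip(cos_theta, -1.0, 1.0))

k = 5
for alpha in np.linspace(0.0, np.pi / 2 - 1e-3, 6):
    l, theta = side_length_and_angle(k, alpha)
    print(f"alpha={alpha:.3f}  l={l:.4f}  theta={theta:.4f}")
# As alpha increases, l decreases from 2*pi/k towards 0 while theta decreases
# from pi towards (1 - 2/k)*pi, matching the intervals in Proposition 4.12.
```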
### Equilateral polygons with a central symmetry
Consider the involution \(\sigma\) of \(\mathcal{U}_{2k}(\mathrm{S})\) (respectively, of \(\mathcal{U}_{2k}(\mathrm{dS})\)) given by
\[\sigma(v_{1},\ldots,v_{2k})=(v_{k+1},\ldots,v_{2k},v_{1},\ldots,v_{k})\.\]
The involution commutes with the action of \(\mathrm{SO}(3)\) (resp. \(\mathrm{SO}_{\circ}(2,1)\)) and thus factors to an involution of \(\mathcal{P}_{2k}(\mathrm{S})\) (resp. \(\mathcal{P}_{2k}(\mathrm{dS})\)) that we still denote by \(\sigma\). Since two polygons with the same lengths and angles are congruent, the following properties are equivalent:
* the class of \(p\in\mathcal{P}_{2k}(\mathrm{S})\) (resp. \(\mathcal{P}_{2k}(\mathrm{dS})\)) is fixed by \(\sigma\),
* there exists a unit vector \(w\) (resp. a unit negative vector \(w\)) such that \[\sigma p=s_{w}p\,\] where \(s_{w}\) is the central symmetry with axis \(w\),
* \(l_{i}(p)=l_{i+k}(p)\) and \(\theta_{i}(p)=\theta_{i+k}(p)\) for all \(i\in\mathbb{Z}/2k\mathbb{Z}\).
Note that \(\sigma\) preserves the space of equilateral polygons. We denote by \(\mathcal{P}_{2k}^{sym}(\mathrm{S})\) (resp. \(\mathcal{P}_{2k}^{sym}(\mathrm{dS})\)) the moduli space of equilateral \(2k\)-gons with a central symmetry.
**Proposition 4.13**.: _The set \(\mathcal{P}_{2k}^{sym}(\mathrm{S})\) (resp. \(\mathcal{P}_{2k}^{sym}(\mathrm{dS})\)) is a submanifold of \(\mathcal{P}_{2k}^{eq}(\mathrm{S})\) (resp. \(\mathcal{P}_{2k}^{eq}(\mathrm{dS})\)) of dimension \(k\)._
Proof.: We do the proof in the spherical setting, but the proof in the de Sitter setting is identical.
Since \(\sigma\) is a smooth diffeomorphism of \(\mathcal{P}_{2k}^{eq}(\mathrm{S})\) of finite order, it is linearizable at every fixed point and its fixed locus is a submanifold of local dimension
\[\dim\ker(\mathrm{d}\sigma-\mathrm{Id})\.\]
Let \(p=(v_{1},\ldots,v_{2k})\) be an equilateral polygon with a central symmetry and \(w\) such that
\[v_{i+k}=s_{w}(v_{i})\.\]
Then \(\sigma\) fixes the isomorphism class of \(p\) and the action of \(\mathrm{d}_{p}\sigma\) on \(T_{p}P_{2k}^{eq}(\mathrm{S})\) sends a first order variation of the length and angles \((\dot{\theta}_{1},\ldots,\dot{\theta}_{2k},\dot{l})\) to
\[(\dot{\theta}_{k+1},\ldots\dot{\theta}_{2k},\dot{\theta}_{1},\ldots,\dot{ \theta}_{k},\dot{l})\.\]
By Theorem 4.4, the kernel of \(\mathrm{d}_{p}\sigma+\mathrm{Id}\) is identified with the set of tuples
\[(\dot{\theta}_{1},\ldots,\dot{\theta}_{k},-\dot{\theta}_{1},\ldots,-\dot{ \theta}_{k},0)\]
satisfying the relation
\[\sum_{i=1}^{k}\dot{\theta}_{i}(v_{i}-s_{w}v_{i})=0. \tag{3}\]
Note that the vectors \(v_{i}-s_{w}v_{i}\) are all orthogonal to \(w\). Moreover, they span \(w^{\perp}\), for otherwise all the \(v_{i}\) would be contained in a great circle passing through \(w\), which is absurd because \(s_{w}\) reverses the orientation of this circle while \(\sigma\) preserves the orientation.
Hence the set of \((\dot{\theta}_{i})_{1\leq i\leq k}\) satisfying (3) has dimension \(k-2\). Since \(\mathrm{d}_{p}\sigma\) is an involution, \(T_{p}\mathcal{P}_{2k}^{eq}(\mathrm{S})\) is the direct sum of the kernels of \(\mathrm{d}\sigma-\mathrm{Id}\) and \(\mathrm{d}\sigma+\mathrm{Id}\); since it has dimension \(2k-2\), we deduce that
\[\dim\ker(\mathrm{d}\sigma-\mathrm{Id})=\dim\mathcal{P}_{2k}^{sym}(\mathrm{S} )=k\.\]
Recall that there is (up to isometry) a unique equilateral spherical (resp. spacelike de Sitter) \(2k\)-gon with vanishing angles. This polygon is contained in a (spacelike) geodesic, and its edges divide this geodesic into segments of length \(\frac{\pi}{k}\). In particular, it has a central symmetry. We denote this polygon by \(p_{0}\).
**Proposition 4.14**.: _There exists a neighbourhood \(\mathcal{V}\) of \(p_{0}\) in \(\mathcal{P}_{2k}^{sym}(\mathrm{dS})\) (resp. \(\mathcal{P}_{2k}^{sym}(\mathrm{S})\)) such that the map_
\[\begin{array}{rcl}\Theta:&\mathcal{V}&\rightarrow&\mathbb{R}^{k}\\ &p&\rightarrow&(\theta_{1}(p),\ldots\theta_{k}(p))\end{array}\]
_is a diffeomorphism onto an open neighbourhood of \((0,\ldots,0)\)._
Proof.: The proposition states that \(\Theta\) is a local diffeomorphism at \(p_{0}\). The proof is identical in the spherical and de Sitter case. By Proposition 4.13, the space \(\mathcal{P}^{sym}_{2k}(\mathrm{dS})\) (resp. \(\mathcal{P}^{sym}_{2k}(\mathrm{S})\)) has dimension \(k\), so it suffices to prove that \(\mathrm{d}\Theta\) is injective at \(p_{0}\).
Set \(p_{0}=(v_{1},\ldots v_{2k})\). Since all the \(v_{i}\) belong to the same geodesic, they are orthogonal to the same unit vector \(w\), hence all the auxiliary vectors \(w_{i}\) are equal to \(w\).
By Theorem 4.4, the kernel of \(\mathrm{d}_{p_{0}}\Theta\) identifies with the set of infinitesimal variations of angles and length of the form
\[(0,\ldots,0,\dot{l})\]
satisfying the equation
\[\dot{l}\left(\sum_{i=1}^{2k}w_{i}\right)=0. \tag{4}\]
Since all the \(w_{i}\) are equal to \(w\), (4) implies \(\dot{l}=0\). We conclude that \(\mathrm{d}_{p_{0}}\Theta\) is injective, hence \(\Theta\) is a local diffeomorphism in a neighbourhood of \(p_{0}\).
_Remark 4.15_.: The proof shows more generally that the map \(\Theta\) is immersive at every polygon \(p\) for which \(\sum_{i=1}^{k}w_{i}\neq 0\). One can show that it is the case when \(p\) is a convex spherical polygon and when \(p\) is any spacelike de Sitter polygon.
## 5. Geometrization of Gromov-Thurston manifolds
Thanks to the results of Section 3, in order to prove Theorem 1.1, it is enough to construct a convex ruled spacelike AdS structure on a given Gromov-Thurston manifold \(M^{a}\). The spacelike structure we construct will be totally geodesic away from the hypersurfaces \(H_{i}\) where the spacelike embedding is "folded". Along the codimension 2 locus \(S\), several dihedra with a total angle greater than \(2\pi\) are patched together. A similar construction will be made to prove Theorem 1.12, with the only difference that the cone angle is less than \(2\pi\) along the codimension 2 stratum.
### Hipped hypersurfaces in \(\mathrm{AdS}^{d+1}\) and polygons in \(\mathrm{dS}^{2}\)
In order to obtain a local isometry from a Gromov-Thurston cone-manifold \(M^{a}\) with \(a\geq 1\) into \(\mathrm{AdS}^{d+1}\), we need to understand the polyhedral hypersurfaces in \(\mathrm{AdS}^{d+1}\) that carry the same geometry. We will see that such hypersurfaces can be parametrized by spacelike polygons in \(\mathrm{dS}^{2}\).
**Definition 5.1**.: A _hipped hypersurface_ in \(\mathrm{AdS}^{d+1}\) is a Lipschitz spacelike hypersurface \(H\subset\mathrm{AdS}^{d+1}\) that is a finite union \(H=\bigcup_{i=1}^{k}X_{i}\) of subsets with the following properties:
1. Each \(X_{i}\) is a convex subset of a totally geodesic copy of \(\mathbb{H}^{d}\),
2. The relative boundary of \(X_{i}\) is the union of two half-spaces \(Y_{i}\) and \(Y_{i+1}\) of totally geodesic copies of \(\mathbb{H}^{d-1}\),
3. \(X_{i}\cap X_{i+1}=Y_{i+1}\) for all \(i\in\{1,\ldots,k\}\) (setting \(X_{k+1}=X_{1}\) and \(Y_{k+1}=Y_{1}\)),
4. There is a totally geodesic copy \(Z\subset\mathrm{AdS}^{d+1}\) of \(\mathbb{H}^{d-2}\), called the _stem_ of \(H\), such that \(Y_{i}\cap Y_{i+1}=Z\) for all \(i\in\{1,\ldots,k\}\).
Let us give precise definitions of angles between totally geodesic subspaces of \(\mathrm{AdS}^{d+1}\). First, consider two totally geodesic copies \(X_{1},X_{2}\subset\mathrm{AdS}^{d+1}\) of \(\mathbb{H}^{d}\) intersecting along a totally geodesic copy \(Y\) of \(\mathbb{H}^{d-1}\). Consider an isometry \(g\in\mathrm{SO}_{\circ}(d,2)\) such that \(g(X_{1})=X_{2}\) and \(g\) is the identity on \(Y\). Now \(Y\) corresponds to a vector subspace \(Y\subset\mathbb{R}^{d,2}\) of signature \((d-1,1)\), so \(Y^{\perp}\subset\mathbb{R}^{d,2}\) is a plane of signature \((1,1)\). It follows that the restriction of \(g\) to \(Y^{\perp}\) is conjugate to an element of \(\mathrm{SO}_{\circ}(1,1)\subset\mathrm{SO}_{\circ}(d,2)\), hence of the form \(\begin{pmatrix}\cosh t&\sinh t\\ \sinh t&\cosh t\end{pmatrix}\). The angle between \(X_{1}\) and \(X_{2}\) is the real number \(t\) (it could also be seen as the angle between the normal vectors to \(X_{1}\) and \(X_{2}\) at any point of \(Y\), which is the angle between timelike vectors).
Now consider two half-spaces \(Y_{1},Y_{2}\subset\mathrm{AdS}^{d+1}\) of totally geodesic copies of \(\mathbb{H}^{d-1}\) with common relative boundary \(Z\approx\mathbb{H}^{d-2}\), and consider an isometry \(g\in\mathrm{SO}_{\circ}(d,2)\) such that \(g(Y_{1})=Y_{2}\) and \(g\) is
the identity on \(Z\). The angle between \(Y_{1}\) and \(Y_{2}\) is the unique \(\theta\in\mathbb{R}/2\pi\mathbb{Z}\) such that \(g\) is conjugate in \(\mathrm{SO}_{\circ}(d,2)\) to the matrix
\[\begin{pmatrix}1_{d}&\\ &\cos\theta&-\sin\theta\\ &\sin\theta&\cos\theta\end{pmatrix}.\]
**Definition 5.2**.: Let \(H=\bigcup_{i=1}^{k}X_{i}\subset\mathrm{AdS}^{d+1}\) be a hipped hypersurface. The _dihedral angles_ are the angles between \(X_{i}\) and \(X_{i+1}\), and the _wedge angles_ are the angles between \(Y_{i}\) and \(Y_{i+1}\).
Using the exponential map of \(\mathrm{AdS}^{d+1}\) at a point of the stem, the coordinates we obtain on a hipped hypersurface show that it carries a cone hyperbolic metric whose singular locus is the stem (and is totally geodesic) and whose angle is the sum of the wedge angles.
**Lemma 5.3**.: _A hipped hypersurface in \(\mathrm{AdS}^{d+1}\) is past-convex if and only if the dihedral angles are non negative._
Proof.: If the angle between \(X_{i}\) and \(X_{i+1}\) is negative, consider a point \(x\) in the relative interior of their common intersection \(Y_{i+1}\). One can always find a spacelike geodesic in the future of \(x\) that intersects both \(X_{i}\) and \(X_{i+1}\) transversally, so its intersection with the past of \(H\) is disconnected.
Now assume that all the angles are non-negative. In this case \(H\) is in the past of each of the spacelike hyperplanes containing the faces \(X_{1},\dots,X_{k}\), so it is past-convex.
**Lemma 5.4**.: _Let \(\alpha_{1},\dots,\alpha_{k}\in(0,\pi)\) and \(\theta_{1},\dots,\theta_{k}\in\mathbb{R}\). The set of hipped hypersurfaces \(H=\bigcup_{i=1}^{k}X_{i}\subset\mathrm{AdS}^{d+1}\) with wedge angles \(\alpha_{1},\dots,\alpha_{k}\) and dihedral angles \(\theta_{1},\dots,\theta_{k}\) considered up to isometry is in one-to-one correspondence with spacelike polygons \(p\subset\mathrm{dS}^{2}\) with side lengths \(\alpha_{1},\dots,\alpha_{k}\) and angles \(\theta_{1},\dots,\theta_{k}\) up to isometry. Through this correspondence, convex polygons are associated to past-convex hypersurfaces._
Proof.: Start with a hipped hypersurface \(H=\bigcup_{i=1}^{k}X_{i}\subset\mathrm{AdS}^{d+1}\), let \(Z\approx\mathbb{H}^{d-2}\) be its stem and consider the link \(\mathcal{L}\) of \(Z\). Fix \(z\in Z\), so that \(\mathcal{L}\) identifies with the set of unit spacelike vectors tangent to \(\mathrm{AdS}^{d+1}\) at \(z\) and orthogonal to \(Z\). Notice that since \(T_{z}\mathrm{AdS}^{d+1}=T_{z}Z\oplus(T_{z}Z)^{\perp}\), there is a natural identification between \(\mathcal{L}\) and \(\mathrm{dS}^{2}\) (seen as the set of unit spacelike vectors in \((T_{z}Z)^{\perp}\)).
Figure 5. A hipped hypersurface in \(\mathrm{AdS}^{3}\).
Intersecting \(H\) with \(\mathcal{L}\) yields a spacelike polygon \(p\subset\mathrm{dS}^{2}\). Its vertices \(v_{1},\dots,v_{k}\) are defined by \(v_{i}\in T_{z}Y_{i}\cap(T_{z}Z)^{\perp}\) and \(\exp_{z}(v_{i})\in Y_{i}\). The edges \([v_{i},v_{i+1}]\) correspond to \(T_{z}X_{i}\cap(T_{z}Z)^{\perp}\).
Starting with such a polygon \(p\subset\mathrm{dS}^{2}\) with vertices \(v_{1},\dots,v_{k}\), we consider any point \(z\in\mathrm{AdS}^{d+1}\) and any totally geodesic copy \(Z\subset\mathrm{AdS}^{d+1}\) of \(\mathbb{H}^{d-2}\) containing \(z\). By identifying \(\mathrm{dS}^{2}\) with unit spacelike vectors in \((T_{z}Z)^{\perp}\), we can define:
\[Y_{i} =\{\exp_{z}(u+tv_{i})\,|\,u\in T_{z}Z\,,\ t\geq 0\}\] \[X_{i} =\{\exp_{z}(u+tw)\,|\,u\in T_{z}Z\,,\ w\in[v_{i},v_{i+1}]\,,\ t \geq 0\}\]
Then \(H=\bigcup_{i=1}^{k}X_{i}\subset\mathrm{AdS}^{d+1}\) is a hipped hypersurface, and these two constructions are inverse to each other.
In this correspondence, the length of \([v_{i},v_{i+1}]\) is given by the angle between \(Y_{i}\) and \(Y_{i+1}\), i.e. \(\alpha_{i}\). The angle at \(v_{i}\) equals the angle between \(X_{i}\) and \(X_{i+1}\), namely \(\theta_{i}\). Following Lemma 5.3, we see that the hipped hypersurface \(H\) is past-convex if and only if the spacelike polygon \(p\) is convex.
### Geometrization of Gromov-Thurston cone-manifolds for \(a>1\)
We now consider a Gromov-Thurston manifold \(M^{a}\) with \(a=\frac{k}{n}>1\). Recall that \(M^{a}\) is obtained by gluing \(2k\) "wedges" \(V_{1},\dots,V_{2k}\) of \(\overline{M}\) along \(S\), each making an angle \(\pi/n\) at \(S\). These wedges are bounded by hypersurfaces \(H_{1},\dots,H_{2k}\) with boundary \(S\).
We wish to construct a spacelike AdS structure on \(M^{a}\). Since \(M^{a}\setminus S\) is a hyperbolic manifold, the only real work consists in constructing a Lipschitz spacelike immersion of \(M^{a}\) into \(\mathrm{AdS}^{d+1}\) in a neighbourhood of \(S\). The idea is to construct "folded" spacelike immersions using hipped hypersurfaces in \(\mathrm{AdS}^{d+1}\).
**Definition 5.5**.: A spacelike AdS structure on \(M^{a}\) is _folded_ if the image under the developing map of any connected component in the universal cover of \(M^{a}\) of the complement of the union of the lifts of \(H_{1},\dots,H_{2k}\) is included in a totally geodesic copy of \(\mathbb{H}^{d}\), and the induced metric on \(M^{a}\setminus S\) is the original hyperbolic metric.
The outcome of this discussion is therefore the following lemma.
**Lemma 5.6**.: _Let \(M^{a}\) be a Gromov-Thurston manifold of dimension \(d\) with \(a=\frac{k}{n}>1\). There is a one-to-one correspondence between folded spacelike AdS structures on \(M^{a}\) (up to equivalence) and hipped hypersurfaces in \(\mathrm{AdS}^{d+1}\) with \(2k\) wedges and wedge angles \(\frac{\pi}{n}\) (up to isometry). This correspondence associates a convex spacelike \(\mathrm{AdS}\) structure to a convex hipped hypersurface._
Proof.: Let \(\mathrm{dev}:\widetilde{M}_{a}\to\mathrm{AdS}^{d+1}\) be the developing map of a folded spacelike AdS structure on \(M^{a}\), let \(U\) be a (sufficiently small) neighbourhood of a lift to \(\widetilde{M}^{a}\) of a point of the stem \(S\). Then, by definition of a folded structure, \(\mathrm{dev}(U)\) is an open subset of a hipped hypersurface with \(2k\) wedges, with wedge angles \(\frac{\pi}{n}\).
Conversely, consider a convex hipped hypersurface \(\Sigma=\bigcup_{i=1}^{2k}X_{i}\subset\mathrm{AdS}^{d+1}\) with wedge angles \(\frac{\pi}{n}\). Denote by \(V_{i}\) the connected component of \(M^{a}\setminus\bigcup_{j=1}^{2k}H_{j}\) which is bounded by \(H_{i}\) and \(H_{i+1}\). Define an atlas of spacelike charts on \(M^{a}\) in the following way:
* If \(x\in M^{a}\setminus\bigcup_{i=1}^{2k}H_{i}\), choose a small ball \(U_{x}\) around \(x\) that does not intersect any of the \(H_{i}\) and define a chart \(\phi_{x}:U_{x}\to\mathrm{AdS}^{d+1}\) mapping \(U_{x}\) isometrically onto a spacelike hyperplane in \(\mathrm{AdS}^{d+1}\).
* If \(x\in H_{i}\setminus S\), choose a small ball \(U_{x}\) around \(x\) that does not intersect \(S\), and define a continuous chart \(\phi_{x}:U_{x}\to\mathrm{AdS}^{d+1}\) mapping isometrically \(U_{x}\cap V_{i-1}\) into \(X_{i-1}\), \(U_{x}\cap V_{i}\) into \(X_{i}\) and \(U_{x}\cap H_{i}\) into \(Y_{i}\).
* If \(x\in S\), choose a small ball \(U_{x}\) around \(x\), and define a continuous chart \(\phi_{x}:U_{x}\to\mathrm{AdS}^{d+1}\) mapping isometrically \(U_{x}\cap V_{i}\) into \(X_{i}\) and \(U_{x}\cap H_{i}\) into \(Y_{i}\) for all \(i\).
One easily verifies that the properties of the charts \(\phi_{x}\) characterize them up to an isometry of \(\mathrm{AdS}^{d+1}\). Hence they form the atlas of a folded spacelike AdS structure. Moreover, the neighbourhood of any point \(x\) in the stem is mapped to a neighbourhood of the stem of the
hipped hypersurface \(\Sigma\), showing that this construction is a converse to the previous one.
Finally, through this correspondence, it is clear that a past-convex hipped hypersurface is associated to a locally convex spacelike structure, hence a convex spacelike structure by Proposition 3.21.
We can now combine everything to prove the main theorems of the paper.
Proof of Theorems 1.1 and 1.2.: Let \(M^{a}\) be a Gromov-Thurston manifold of dimension \(d\) with \(a=\frac{k}{n}>1\).
By Proposition 4.12, there exists a convex spacelike polygon \(p\) in \(\mathrm{dS}^{2}\) with \(2k\) sides of length \(\frac{\pi}{n}\). By Lemma 5.4, \(p\) defines a convex hipped hypersurface \(\Sigma_{p}\) in \(\mathrm{AdS}^{d+1}\). By Lemma 5.6, \(\Sigma_{p}\) defines a convex folded spacelike AdS structure \((\mathrm{dev}_{p},\rho_{p})\) on \(M^{a}\). Finally, by Theorem 3.33, this folded spacelike AdS structure defines an embedding of \(M^{a}\) as a Cauchy hypersurface in a GHMC AdS manifold \(N_{\rho_{p}}\). This already proves Theorem 1.1.
We obtain a map \(p\mapsto N_{\rho_{p}}\) from \(\mathcal{P}^{\frac{\pi}{n}}_{2k}(\mathrm{dS})\) to the deformation space of GHMC AdS \(d+1\)-manifolds. Moreover, for each \(p\in\mathcal{P}^{\frac{\pi}{n}}_{2k}(\mathrm{dS})\), the manifold \(N_{\rho_{p}}\) contains a past-convex folded spacelike hypersurface isometric to \(M^{a}\). In dimension \(d+1\geq 4\), folded spacelike hypersurfaces are ruled, hence \((\mathrm{dev}_{p},\rho_{p})\) is a convex ruled spacelike AdS structure. By Lemma 3.35, the image of \(\mathrm{dev}_{p}\) is the future boundary of the convex core of \(N_{\rho_{p}}\) and by Lemma 3.37, the map \(p\mapsto N_{\rho_{p}}\) is injective. Finally, by Proposition 4.11, the space \(\mathcal{P}^{\frac{\pi}{n}}_{2k}(\mathrm{dS})\) is a manifold of dimension \(k-3\). Hence the family of GHMC AdS manifolds \(\left(N_{\rho_{p}},p\in\mathcal{P}^{\frac{\pi}{n}}_{2k}(\mathrm{dS})\right)\) proves Theorem 1.2.
_Remark 5.7_.: Though we did not mention anything about the regularity of the map \(p\mapsto N_{\rho_{p}}\), it is quite clear that this map should be continuous for an appropriate topology on the space of GHMC AdS manifolds homeomorphic to \(M^{a}\times\mathbb{R}\). We will discuss these regularity questions further in Section 6.5.
### Hipped hypersurfaces in \(\mathbb{H}^{d+1}\) and polygons in \(\mathrm{S}^{2}\)
We now move on to establish a Riemannian version of Lemma 5.4.
**Definition 5.8**.: A _hipped hypersurface_ is an oriented topological hypersurface \(H\subset\mathbb{H}^{d+1}\) which is a finite union \(H=\bigcup_{i=1}^{k}X_{i}\) of subsets with the following properties:
1. Each \(X_{i}\) is a convex subset of a totally geodesic copy of \(\mathbb{H}^{d}\),
2. The relative boundary of \(X_{i}\) is the union of two half-spaces \(Y_{i}\) and \(Y_{i+1}\) of totally geodesic copies of \(\mathbb{H}^{d-1}\),
3. \(X_{i}\cap X_{i+1}=Y_{i+1}\) for all \(i\in\{1,\dots,k\}\) (setting \(X_{k+1}=X_{1}\) and \(Y_{k+1}=Y_{1}\)),
4. There is a totally geodesic copy \(Z\subset\mathbb{H}^{d+1}\) of \(\mathbb{H}^{d-2}\), called the _stem_, such that \(Y_{i}\cap Y_{i+1}=Z\) for all \(i\in\{1,\dots,k\}\).
The _dihedral angles_ of \(H\) are the angles between \(X_{i}\) and \(X_{i+1}\), and the _wedge angles_ are the angles between \(Y_{i}\) and \(Y_{i+1}\). A hipped hypersurface is _convex_ if all its dihedral angles are non-negative (or, equivalently, if the component of \(\mathbb{H}^{d+1}\setminus H\) inducing the orientation of \(H\) with the outward pointing normal is convex).
**Lemma 5.9**.: _Let \(\alpha_{1},\dots,\alpha_{k}\in(0,\pi)\) and \(\theta_{1},\dots,\theta_{k}\in(-\pi,\pi)\). The set of hipped hypersurfaces \(H\subset\mathbb{H}^{d+1}\) with wedge angles \(\alpha_{1},\dots,\alpha_{k}\) and dihedral angles \(\theta_{1},\dots,\theta_{k}\) considered up to isometry is in one-to-one correspondence with spherical polygons \(p\subset\mathrm{S}^{2}\) with side lengths \(\alpha_{1},\dots,\alpha_{k}\) and angles \(\theta_{1},\dots,\theta_{k}\) up to isometry. Through this correspondence, convex polygons are associated to convex hipped hypersurfaces._
Proof.: The first part of the proof is almost identical to that of Lemma 5.4. Start with a hipped hypersurface \(H=\bigcup_{i=1}^{k}X_{i}\subset\mathbb{H}^{d+1}\), let \(Z\approx\mathbb{H}^{d-2}\) be its stem and consider the link \(\mathcal{L}\) of \(Z\). Fix
\(z\in Z\), so that \(\mathcal{L}\) identifies with the set of unit vectors tangent to \(\mathbb{H}^{d+1}\) at \(z\) and orthogonal to \(Z\), and thus identify \(\mathcal{L}\) with \(\mathrm{S}^{2}\) as the unit vectors in \((T_{z}Z)^{\perp}\).
The intersection of \(H\) with \(\mathcal{L}\) is a polygon \(p\subset\mathrm{S}^{2}\). Its vertices \(v_{1},\dots,v_{k}\) are defined by \(v_{i}\in T_{z}Y_{i}\cap(T_{z}Z)^{\perp}\) and \(\exp_{z}(v_{i})\in Y_{i}\). The edges \([v_{i},v_{i+1}]\) correspond to \(T_{z}X_{i}\cap(T_{z}Z)^{\perp}\).
Starting with such a polygon \(p\subset\mathrm{S}^{2}\) with vertices \(v_{1},\dots,v_{k}\), we consider any point \(z\in\mathbb{H}^{d+1}\) and any totally geodesic copy \(Z\subset\mathbb{H}^{d+1}\) of \(\mathbb{H}^{d-2}\) containing \(z\). Identify \(\mathrm{S}^{2}\) with unit vectors in \((T_{z}Z)^{\perp}\), and consider the hipped hypersurface \(H=\bigcup_{i=1}^{k}X_{i}\subset\mathbb{H}^{d+1}\) where:
\[X_{i}=\left\{\exp_{z}(u+tw)\,|\,u\in T_{z}Z\,,\ w\in[v_{i},v_{i+1}]\,,\ t\geq 0 \right\}\.\]
Once again, these two constructions are inverse to each other.
In this correspondence, the length of the edge \([v_{i},v_{i+1}]\) is given by the angle between \(Y_{i}\) and \(Y_{i+1}\), i.e. \(\alpha_{i}\). The angle at \(v_{i}\) is equal to the angle between \(X_{i}\) and \(X_{i+1}\), namely \(\theta_{i}\).
Both convexities are equivalent to the non-negativity of \(\theta_{1},\dots,\theta_{k}\), and are therefore equivalent to each other.
### Geometrization of Gromov-Thurston cone-manifolds for \(a<1\)
We now consider a Gromov-Thurston manifold \(M^{a}\) with \(a=\frac{k}{n}<1\). We wish to construct a convex ruled hyperbolic embedding structure on \(M^{a}\). Since \(M^{a}\setminus S\) is a hyperbolic manifold, we only need to construct such a structure on a neighbourhood of \(S\), and we will do so by using a hipped hypersurface.
**Definition 5.10**.: A hyperbolic embedding structure on \(M^{a}\) is _folded_ if the image under the developing map of any connected component in the universal cover of \(M^{a}\) of the complement of the union of the lifts of \(H_{1},\dots,H_{2k}\) is included in a totally geodesic copy of \(\mathbb{H}^{d}\), and the induced metric on \(M^{a}\setminus S\) is the original hyperbolic metric.
The outcome of this discussion is therefore the following lemma.
**Lemma 5.11**.: _Let \(M^{a}\) be a Gromov-Thurston manifold of dimension \(d\) with \(a=\frac{k}{n}<1\). There is a one-to-one correspondence between folded hyperbolic embedding structures on \(M^{a}\) (up to equivalence) and hipped hypersurfaces in \(\mathbb{H}^{d+1}\) with \(2k\) wedges and wedge angles \(\frac{\pi}{n}\) (up to isometry). This correspondence associates a convex hyperbolic embedding structure to a convex hipped hypersurface._
Proof.: The proof is almost identical to that of Lemma 5.6: the developing map of a folded hyperbolic embedding structure on \(M^{a}\) sends a lift to \(\widetilde{M}^{a}\) of the stem \(S\) isometrically into a hipped hypersurface with \(2k\) wedges, with wedge angles \(\frac{\pi}{n}\).
Conversely, given a hipped hypersurface \(H=\bigcup_{i=1}^{2k}X_{i}\) consisting of \(2k\) wedges with wedge angles \(\frac{\pi}{n}\), one constructs a folded hyperbolic embedding structure on \(M^{a}\) mapping locally isometrically \(V_{i}\) into \(X_{i}\), \(H_{i}\) into \(Y_{i}\) and \(S\) into \(\bigcap_{i=1}^{2k}Y_{i}\).
The two constructions are inverse to each other.
Proof of Theorems 1.12 and 1.13.: The proof again follows closely that of Theorems 1.1 and 1.2:
Since every convex folded hyperbolic embedding in dimension \(d+1\geq 4\) is ruled, Lemma 5.11 defines a map
\[p\mapsto(\mathrm{dev}_{p},\rho_{p})\]
Figure 6. A hipped hypersurface in \(\mathbb{H}^{3}\).
from \(\mathcal{P}^{\frac{\pi}{n}}_{2k}(\mathrm{S})\) to the set of convex ruled hyperbolic embeddings which are isometric to \(M^{a}\). By Theorem 3.48, every such hyperbolic embedding is the boundary of a unique hyperbolic end \(N_{\rho_{p}}\). By Proposition 4.12, \(\mathcal{P}^{\frac{\pi}{n}}_{2k}(\mathrm{S})\) is non-empty, proving Theorem 1.12. By Proposition 4.10, \(\mathcal{P}^{\frac{\pi}{n}}_{2k}(\mathrm{S})\) is a manifold of dimension \(k-3\), and the family of hyperbolic ends \(\left(N_{\rho_{p}},p\in\mathcal{P}^{\frac{\pi}{n}}_{2k}(\mathrm{S})\right)\) proves Theorem 1.13.
## 6. Fuchsian deformations in dimension 3+1
Gromov-Thurston 3-manifolds are irreducible, atoroidal and Haken. It thus follows from Thurston or Perelman's hyperbolization theorems that they also carry a smooth hyperbolic structure. Here we will give a simpler proof of this fact, showing moreover that the quasifuchsian AdS manifolds constructed in the previous sections are, in dimension \(3+1\), deformations of Fuchsian manifolds. Barbot's conjecture thus remains open in dimension \(3+1\).
We also prove the same result for deformations to Fuchsian manifolds of the hyperbolic ends defined above (corresponding to Gromov-Thurston cone-manifolds of cone angle smaller than \(2\pi\), still in dimension \(3+1\)). Both proofs are based on Hodgson-Kerckhoff's deformation theorem for conical hyperbolic 3-manifolds.
### The Hodgson-Kerckhoff deformation theorem
We recall first the Hodgson-Kerckhoff theorem, see [25], for 3-dimensional hyperbolic cone-manifolds, which will be a useful tool for us in understanding deformations of quasifuchsian AdS spacetimes (or hyperbolic ends) in dimension \(3+1\).
**Theorem 6.1** (Hodgson-Kerckhoff, [25]).: _Let \(M\) be a 3-dimensional hyperbolic manifold with cone singularities along a link \(\gamma=\gamma_{1}\sqcup\cdots\sqcup\gamma_{n}\), with angle \(\theta_{i}\in(0,2\pi)\) along \(\gamma_{i}\), \(1\leq i\leq n\). Then small deformations of \(M\) among hyperbolic cone-manifolds with constant singular locus are parameterized by the variations of the cone angles \(\theta_{1},\cdots,\theta_{n}\)._
Note that this deformation result was extended to 3-dimensional hyperbolic cone-manifolds with singularities along a graph, still with angles less than \(2\pi\), by Mazzeo and Montcouquiol [33], see also [41].
The Hodgson-Kerckhoff deformation theorem leads quite naturally to a deformation result for the "building blocks" of Gromov-Thurston 3-manifolds. This will in turn be used below to construct deformations of globally hyperbolic AdS spacetimes, or of hyperbolic ends, in dimension 3+1. We will actually need a more precise rigidity result which states that one can prescribe the cone angles as long as we remain in the range \((0,\pi)\).
**Theorem 6.2**.: _Let \(M\) be a 3-dimensional hyperbolic manifold with cone singularities along a link \(\gamma=\gamma_{1}\sqcup\cdots\sqcup\gamma_{n}\), with angle \(\theta_{i}\in(0,\pi)\) along \(\gamma_{i}\), \(1\leq i\leq n\). For every \(\theta^{\prime}_{1},\cdots,\theta^{\prime}_{n}\in(0,\pi)\), there is a unique one-parameter family \((g_{t})_{t\in[0,1]}\) of hyperbolic structures on \(M\) with cone singularities along the \(\gamma_{i}\) of angle \((1-t)\theta_{i}+t\theta^{\prime}_{i}\)._
We refer the reader to [11] or [10, Section 6.2] for a proof. Briefly, one can consider the subset of values of \(t\) which can be achieved. Theorem 6.1 shows that it is open in \([0,1]\), while a separate, compactness argument (using the condition that the angles are in \((0,\pi)\)) shows that it is closed.
### Deformations towards the Fuchsian locus
We now apply the previous results to the cyclic quotients of hyperbolic manifolds containing totally geodesic planes. We follow the notations of Section 2, and consider a hyperbolic 3-manifold \(M\) with a dihedral group of symmetries \(D_{n}\) of order \(2n\). We then denote by \(\overline{M}\) the quotient of \(M\) by the cyclic subgroup \(R_{n}\) of \(D_{n}\). Then \(\overline{M}\) is a hyperbolic cone-manifold, with cone angle \(2\pi/n\) along the projection of \(S\) (which is still denoted by \(S\)). We also fix an integer \(k\geq 2\), and let \(a=k/n\). Theorem 6.2 readily gives the following:
**Corollary 6.3**.: _Under these hypotheses, for \(\epsilon>0\) sufficiently small, there exists a one-parameter family of hyperbolic metrics \((\bar{g}_{\alpha})\) on \(\overline{M}\) with a cone singularity along \(S\) of cone angle \(2\alpha\), for \(\alpha\) ranging in the interval \((0,\frac{\pi}{2}+\epsilon)\). Moreover, this family is unique up to isotopy._
We can then lift this deformation to a deformation of the manifold \(M^{a}\), which is a cover of \(\overline{M}\) of degree \(k\) ramified over \(S\).
**Corollary 6.4**.: _Still with the notations above, for \(\epsilon>0\) sufficiently small, there exists a one-parameter family of hyperbolic metrics \((g_{\alpha})\) on \(M^{\frac{k}{n}}\) with cone singularity along \(S\) such that the hypersurfaces \(H_{i}\) are geodesic, and such that \(H_{i}\) and \(H_{i+1}\) meet with an angle \(\alpha\), for \(\alpha\) ranging in the interval \((\frac{\pi}{k}-\epsilon,\frac{\pi}{n}]\) when \(k>n\) and \([\frac{\pi}{n},\frac{\pi}{k}+\epsilon)\) when \(k<n\)._
Proof.: Note that, for \(k,n\geq 2\), the range of \(\alpha\) in Corollary 6.4 is contained in the range of \(\alpha\) of Corollary 6.3. Thus, for \(\alpha\) in the appropriate range, we can consider the deformation \((\bar{g}_{\alpha})\) of Corollary 6.3. Let \(\sigma\) denote the reflection of \(\overline{M}\) induced by any reflection in \(D_{n}\). Then \((\sigma^{*}\bar{g}_{\alpha})\) is another such deformation which coincides with \((\bar{g}_{\alpha})\) for \(\alpha=\frac{\pi}{n}\). By the uniqueness of Theorem 6.1, up to isotoping \(\bar{g}_{\alpha}\), we can thus assume that \(\sigma^{*}\bar{g}_{\alpha}=\bar{g}_{\alpha}\) for all \(\alpha\).2 Since \(\sigma\) is the reflection along \(\overline{H}_{1}\cup\overline{H}_{2}\), we deduce that \(\overline{H}_{1}\) and \(\overline{H}_{2}\) are totally geodesic for \(\bar{g}_{\alpha}\) and meet along \(S\) with an angle \(\alpha\). Then the pull-back \(g_{\alpha}\) of \(\bar{g}_{\alpha}\) to \(M^{\frac{k}{n}}\) satisfies the required conditions.
Footnote 2: This point is not actually immediate since the uniqueness of the metric is only up to isotopy, but one can keep track of the involution \(\sigma\) in Hodgson–Kerckhoff’s proof to verify that, starting with a \(\sigma\)-invariant metric, it does produce a \(\sigma\)-invariant deformation of the metric.
Note that the metric \(g_{\frac{\pi}{n}}\) is the original cone metric of the Gromov-Thurston manifold \(M^{\frac{k}{n}}\), while the metric \(g_{\frac{\pi}{k}}\) is a smooth hyperbolic metric, since the total angle around \(S\) is then \(2k\cdot\frac{\pi}{k}=2\pi\). This proves in particular that Gromov-Thurston \(3\)-manifolds are hyperbolic.
### Deformation to Fuchsian manifolds
We can now conclude the proof of Theorems 1.18 and 1.19 by showing how, in dimension \(3+1\), the folded convex AdS (resp. hyperbolic) structures that we considered in Section 5 on a manifold \(M^{\frac{k}{n}}\) with \(k>n\) (resp. \(k<n\)) can be deformed to the structure associated to a Fuchsian AdS \(4\)-manifold (resp. a Fuchsian hyperbolic end).
Proof of Theorem 1.18.: Assume \(k>n\). Let \(I\) denote the interval \((\frac{\pi}{k}-\epsilon,\frac{\pi}{n}]\). For every \(\alpha\in I\) there is a metric \(g_{\alpha}\) on \(M^{\frac{k}{n}}\) for which the hypersurfaces \(H_{i}\) are totally geodesic and form angles \(\alpha\) along \(S\). Repeating the proof of Theorem 1.2, we can associate to every spacelike polygon \(p\in\mathcal{P}^{\alpha}_{2k}(\mathrm{dS})\) a folded spacelike AdS structure on \((M^{\frac{k}{n}},g_{\alpha})\), which defines a GHMC AdS manifold \(N_{p}\) by Theorem 3.33. We thus get a map from
\[\mathcal{P}^{I}_{2k}(\mathrm{dS})\stackrel{{\mathrm{def}}}{{=}} \bigsqcup_{\alpha\in I}\mathcal{P}^{\alpha}_{2k}(\mathrm{dS})\]
to the deformation space of quasifuchsian AdS \(4\)-manifolds homeomorphic to \(M^{\frac{k}{n}}\times\mathbb{R}\).
When the polygon \(p\) is convex, the image of the folded spacelike embedding of \((M^{\frac{k}{n}},g_{\alpha})\) in \(N_{p}\) is the future boundary of its convex core, and we deduce that the map \(p\mapsto N_{p}\) is injective in restriction to convex polygons, as in the proof of Theorem 1.2.
Now, by Proposition 4.11, the set \(\mathcal{P}^{I}_{2k}(\mathrm{dS})\) is a manifold of dimension \(2k-2\), which contains the codimension \(1\) submanifold \(\mathcal{P}^{\frac{\pi}{2k}}_{2k}(\mathrm{dS})\). By Proposition 4.12, there is a continuous path \((p_{\alpha})_{\alpha\in I}\) in \(\mathcal{P}^{I}_{2k}(\mathrm{dS})\) such that \(p_{\alpha}\) is convex with side length \(\alpha\). For \(\alpha=\frac{\pi}{k}\), the polygon \(p_{\alpha}\) is a spacelike geodesic in \(\mathrm{dS}^{2}\) divided in \(2k\) segments of length \(\frac{\pi}{k}\), the corresponding folded spacelike embedding is totally geodesic, and the GHMC AdS \(4\)-manifold \(N_{p_{\frac{\pi}{k}}}\) is thus Fuchsian.
Hence the family \(\big{(}N_{p},p\in\mathcal{P}^{I}_{2k}(\mathrm{dS})\big{)}\) satisfies the required properties, proving Theorem 1.18.
Proof of Theorem 1.19.: The proof proceeds in the same way as the proof of Theorem 1.18, except that we now associate hyperbolic ends \(N_{p}\) to equilateral spherical polygons
\[p\in\mathcal{P}^{I}_{2k}(\mathrm{S})\stackrel{{\mathrm{def}}}{{=}} \bigsqcup_{\alpha\in I}\mathcal{P}^{\alpha}_{2k}(\mathrm{S})\,\]
for \(\alpha\) ranging in the interval
\[I=\left[\frac{\pi}{n},\frac{\pi}{k}+\epsilon\right)\.\]
### Integration of bending deformations
Restricting the map \(p\mapsto N_{p}\) to polygons with a central symmetry, one proves Theorems 1.21 and 1.20:
Proof of Theorem 1.20.: The geodesic polygon \(p_{0}\) with side length \(\frac{\pi}{k}\) and angles \(0\) belongs to the interior of \(\mathcal{P}^{I}_{2k}(\mathrm{dS})\), which thus contains an open neighbourhood \(O\) of \(p_{0}\) in \(\mathcal{P}^{sym}_{2k}(\mathrm{dS})\). By Proposition 4.14, the map
\[\begin{array}{ccc}\Theta:&O&\to&\mathbb{R}^{k}\\ &p&\mapsto&(\theta_{i}(p))_{1\leq i\leq k}\end{array}\]
is, up to restricting \(O\), a diffeomorphism onto an open neighbourhood \(U\) of \(0\) in \(\mathbb{R}^{k}\).
For \(\theta=(\theta_{1},\dots,\theta_{k})\), define
\[N_{\theta}=N_{\Theta^{-1}(\theta_{1},\dots,\theta_{k})}\.\]
Then, by construction, \(N_{\theta}\) is a GHMC AdS \(4\)-manifold containing a Cauchy hypersurface homeomorphic to \(M^{a}\) which is piecewise totally geodesic, folded along the \(H_{i}\), and such that, for all \(1\leq i\leq k\), the folding angle at \(H_{i}\) and \(H_{i+k}\) is \(\theta_{i}\). The family \(N_{\theta}\) thus satisfies the required properties.
Proof of Theorem 1.21.: The proof is identical to the proof of Theorem 1.20.
Let us now recall how the families of hyperbolic ends constructed in Theorem 1.21 relate to the "bending deformations" constructed by Johnson and Millson.
Let \(M\) be a hyperbolic manifold of dimension \(d\) containing a smooth totally geodesic connected separating hypersurface \(H\). In [27], Johnson and Millson construct a \(1\)-parameter deformation of the Fuchsian representation \(\rho_{0}:\pi_{1}(M)\to\mathrm{SO}_{\circ}(d,1)\) into \(\mathrm{SO}_{\circ}(d+1,1)\):
Let \(M_{1}\) and \(M_{2}\) denote the two components of \(M\backslash H\). By Van Kampen's theorem, one can write the fundamental group of \(M\) as an amalgamated product
\[\pi_{1}(M)=\pi_{1}(M_{1})\ast_{\pi_{1}(H)}\pi_{1}(M_{2})\.\]
Now, up to conjugation, \(\rho_{0}(\pi_{1}(H))\) is contained in \(\mathrm{SO}(d-1,1)\) and is thus centralized by a rotation subgroup isomorphic to \(\mathrm{SO}(2)\). There is thus a unique representation
\[\rho_{H,t}:\pi_{1}(M)\to\mathrm{SO}_{\circ}(d+1,1)\]
such that
\[\rho_{H,t_{|\pi_{1}(M_{1})}}=\rho_{0_{|\pi_{1}(M_{1})}}\]
and
\[\rho_{H,t_{|\pi_{1}(M_{2})}}=r_{t}\rho_{0_{|\pi_{1}(M_{2})}}r_{-t}\,\]
with \(r_{t}\) the rotation of angle \(t\) commuting with \(\mathrm{SO}_{\circ}(d-1,1)\). This deformation has a geometric interpretation: the representation \(\rho_{H,t}\) is the holonomy of the folded hyperbolic embedding structure on \(M\) which is isometric and totally geodesic on \(M_{1}\) and \(M_{2}\) and is folded along \(H\) with an angle \(t\).
The same construction can be used to deform \(\rho_{0}\) into \(\mathrm{SO}_{\circ}(d,2)\). This time, the subgroup centralizing \(\rho_{0}(\pi_{1}(H))\) is \(\mathrm{SO}(1,1)\) and the deformations are holonomies of folded spacelike embedding structures on \(M\).
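Concretely, in a basis adapted to the orthogonal splitting of \(\mathbb{R}^{d+1,1}\) (respectively of \(\mathbb{R}^{d,2}\)) into the copy of \(\mathbb{R}^{d-1,1}\) preserved by \(\rho_{0}(\pi_{1}(H))\) and its orthogonal complement, of signature \((2,0)\) (respectively \((1,1)\)), the centralizing one-parameter groups can be taken of the block form
\[r_{t}=\begin{pmatrix}1_{d}&\\ &\cos t&-\sin t\\ &\sin t&\cos t\end{pmatrix}\in\mathrm{SO}_{\circ}(d+1,1)\,\qquad r_{t}=\begin{pmatrix}1_{d}&\\ &\cosh t&\sinh t\\ &\sinh t&\cosh t\end{pmatrix}\in\mathrm{SO}_{\circ}(d,2)\,\]
where the \(2\times 2\) block acts on the orthogonal complement; the precise matrices depend on the conjugation chosen above, and only the block shape matters here.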
Let us now return to the setting of Theorem 1.20. The Gromov-Thurston \(3\)-manifold \(M^{a}\) equipped with the smooth metric \(g_{\frac{\pi}{k}}\) admits \(k\) separating totally geodesic hypersurfaces \(\hat{H}_{i}\stackrel{{\mathrm{def}}}{{=}}H_{i}\cup H_{i+k}\), \(1\leq i\leq k\). For each of these one gets a Johnson-Millson deformation \(\rho_{\hat{H}_{i},t}\) which is nothing but the holonomy of the quasifuchsian AdS manifold \(N_{t\theta^{i}}\) where \(\theta^{i}_{j}=\delta_{i,j}\). Theorem 1.20 thus gives an example where Johnson-Millson's bending deformations along \(k\) intersecting hypersurfaces fit into a \(k\)-parameter deformation family.
### Regularity of the map \(p\mapsto N_{p}\)
Until now, we have been very vague about the question of the regularity of our "families of GHMC AdS manifolds", i.e. about the regularity of the map \(p\mapsto N_{p}\). Here we give a bit more precision on this point.
Fix a Gromov-Thurston manifold \(M^{a}\) of dimension \(d\). Let us say that a quasifuchsian AdS manifold \(N\) of dimension \(d+1\) is _marked by \(M^{a}\)_ if it is equipped with an isomorphism \(\pi:\pi_{1}(M^{a})\to\pi_{1}(N)\) induced by a spacelike embedding of \(M^{a}\) in \(N\). Via the holonomy representation, the set of quasifuchsian AdS \(d+1\) manifolds marked by \(M^{a}\) identifies with the open domain of convex-cocompact representations in \(\operatorname{Hom}(\pi_{1}(M^{a}),\operatorname{SO}_{\circ}(d,2))/\mathrm{SO}_{\circ}(d,2)\).3 This gives a natural topology to the space of quasifuchsian AdS \(d+1\) manifolds marked by \(M^{a}\), and even an analytic structure.
Footnote 3: While the quotient \(\operatorname{Hom}(\pi_{1}(M^{a}),\mathrm{SO}_{\circ}(d,2))/\mathrm{SO}_{ \circ}(d,2)\) might not be Hausdorff, one can prove that the subset of convex cocompact representations is Hausdorff.
Now, in the proof of Theorem 1.2, Theorem 1.18 and Theorem 1.20, one associates to every spacelike de Sitter polygon \(p\) in some family \(\mathcal{P}\) (which is proven to be an analytic manifold in Section 4) a certain spacelike AdS structure on \(M^{a}\). Looking closely at the construction (see the proof of Lemma 5.6), one can verify that it can be defined by local charts that depend smoothly on the polygon \(p\). Looking back at the proof of Corollary 3.32, one deduces that this family of spacelike AdS structures is given by a family of pairs \((\mathrm{dev}_{p},\rho_{p})\) where \(\mathrm{dev}_{p}\) is a developing map and \(\rho_{p}\) a holonomy representation depending smoothly on \(p\). The representation \(\rho_{p}\) is the holonomy of the corresponding marked quasifuchsian AdS manifold \(N_{p}\). In this sense we can say that the map \(p\mapsto N_{p}\) is smooth.
Let \(p\) be a point in \(\mathcal{P}\). Denote by \(\mathrm{Ad}_{\rho_{p}}\) the composition of \(\rho_{p}:\pi_{1}(M^{a})\to\mathrm{SO}_{\circ}(d,2)\) with the adjoint representation of \(\mathrm{SO}_{\circ}(d,2)\). The cohomology group
\[\mathrm{H}^{1}(\pi_{1}(M^{a}),\mathrm{Ad}_{\rho_{p}})\]
is the tangent space to the character stack \(\operatorname{Hom}(\pi_{1}(M^{a}),\mathrm{SO}_{\circ}(d,2))/\mathrm{SO}_{ \circ}(d,2)\) at \(\rho_{p}\). In particular, the derivative of the map \(p\mapsto\rho_{p}\) defines a linear map from \(T_{p}\mathcal{P}\) to \(\mathrm{H}^{1}(\pi_{1}(M^{a}),\mathrm{Ad}_{\rho_{p}})\).
Let us now specialize the discussion of the previous paragraph to the situation of Theorem 1.20. We take as \(\mathcal{P}\) a neighbourhood of \(p_{0}\) in the set of equilateral polygons with a central symmetry, which is diffeomorphic to a small neighbourhood \(U\) of \(0\) in \(\mathbb{R}^{k}\). We thus obtain a smooth map
\[\begin{array}{cccc}\mathrm{R}:&U&\to&\operatorname{Hom}(\pi_{1}(M^{a}), \mathrm{SO}_{\circ}(3,2))/\mathrm{SO}_{\circ}(3,2)\\ &\theta&\mapsto&\rho_{\theta}\stackrel{{\mathrm{def}}}{{=}}\rho_{ p_{\theta}}\.\end{array}\]
Its derivative at \(0\) is a linear map
\[\mathrm{d}_{0}\mathrm{R}:\mathbb{R}^{k}\to\mathrm{H}^{1}(\pi_{1}(M^{a}), \mathrm{Ad}_{\rho_{0}})\,\]
where \(\rho_{0}\) is the Fuchsian representation of \(\pi_{1}(M^{a})\).
Denoting as above by \(\theta^{i}\in\mathbb{R}^{k}\) the vector such that \(\theta^{i}_{j}=\delta_{i,j}\), we have in particular that \(\mathrm{d}_{0}\mathrm{R}(\theta^{i})\) is the derivative of Johnson-Millson's bending deformation along \(\hat{H}_{i}\). The image of \(\mathrm{d}_{0}\mathrm{R}\) is the subspace of \(\mathrm{H}^{1}(\pi_{1}(M^{a}),\mathrm{Ad}_{\rho_{0}})\) spanned by these infinitesimal bending deformations.
In general, the character stack \(\operatorname{Hom}(\pi_{1}(M^{a}),\mathrm{SO}_{\circ}(d,2))/\mathrm{SO}_{\circ}(d,2)\) need not be smooth at \(\rho_{0}\), and not every vector in \(\mathrm{H}^{1}(\pi_{1}(M^{a}),\mathrm{Ad}_{\rho_{0}})\) need be the derivative of an actual deformation of \(\rho_{0}\) in \(\mathrm{SO}_{\circ}(d,2)\). At one extreme, one could imagine that \(\operatorname{Hom}(\pi_{1}(M^{a}),\mathrm{SO}_{\circ}(d,2))/\mathrm{SO}_{\circ}(d,2)\) is a union of \(k\) curves intersecting at \(\rho_{0}\), corresponding to the \(k\) bending deformations.
However, Theorem 1.20 shows that it is not the case in dimension \(3+1\). The existence of the map \(\Phi\) shows that any linear combination of the infinitesimal bending deformations along the hypersurfaces \(\hat{H}_{i}\) can be integrated into an actual deformation.
## 7. Initial singularities
### Dualities
Before proving Theorems 1.22 and 1.23, we recall a basic notion of duality (or polarity) in \(\mathrm{AdS}^{d+1}\), or between \(\mathbb{H}^{d+1}\) and \(\mathrm{dS}^{d+1}\). A good description of the duality for hyperbolic polyhedra can be found in [26], and an extension to other constant curvature pseudo-Riemannian spaces can be found e.g. in [37].
_The duality between \(\mathbb{H}^{d+1}\) and \(\mathrm{dS}^{d+1}\)._ Recall that the hyperbolic \(d+1\)-dimensional space can be defined as a quadric in the Minkowski space of dimension \(d+2\), denoted here as \(\mathbb{R}^{d+1,1}\). This Minkowski space is \(\mathbb{R}^{d+2}\) equipped with the bilinear symmetric form:
\[\langle x,y\rangle_{d+1,1}=\sum_{i=1}^{d+1}x_{i}y_{i}-x_{d+2}y_{d+2}\.\]
The hyperbolic space can then be defined as
\[\mathbb{H}^{d+1}=\{x\in\mathbb{R}^{d+1,1}\ |\ \langle x,x\rangle_{d+1,1}=-1\,\ x_ {d+2}>0\}\,\]
equipped with the induced metric. In the same space, we can consider the de Sitter space, defined as
\[\mathrm{dS}^{d+1}=\{x\in\mathbb{R}^{d+1,1}\ |\ \langle x,x\rangle_{d+1,1}=1\}\,\]
again with the induced metric. It is a geodesically complete Lorentzian space of constant curvature \(1\), simply connected if \(d\geq 2\).
Let \(x\in\mathbb{H}^{d+1}\), and let \(x^{\perp}\) be the hyperplane in \(\mathbb{R}^{d+1,1}\) orthogonal to \(x\). Since \(x\) is timelike, its orthogonal \(x^{\perp}\) is spacelike, and its intersection with \(\mathrm{dS}^{d+1}\) is a totally geodesic, spacelike hyperplane, which we denote by \(x^{*}\). Conversely, any spacelike hyperplane \(H\) in \(dS^{d+1}\) is the intersection of \(\mathrm{dS}^{d+1}\) with a hyperplane \(\bar{H}\) of \(\mathbb{R}^{d+1,1}\) containing \(0\). This hyperplane \(\bar{H}\) is orthogonal to a unique unit, future-oriented timelike vector, which we denote \(H^{*}\). This construction provides a one-to-one correspondence between points in \(\mathbb{H}^{d+1}\) and (un-oriented) spacelike totally geodesic hyperplanes in \(\mathrm{dS}^{d+1}\).
Similarly, given \(y\in\mathrm{dS}^{d+1}\), the intersection \(y^{\perp}\cap\mathbb{H}^{d+1}\) is an _oriented_ totally geodesic hyperplane in \(\mathbb{H}^{d+1}\), which we denote by \(y^{*}\). And conversely, if \(H\subset\mathbb{H}^{d+1}\) is any oriented totally geodesic hyperplane, then it is the intersection with \(\mathbb{H}^{d+1}\) of an oriented hyperplane in \(\mathbb{R}^{d+1,1}\) containing \(0\). The oriented unit normal to this hyperplane is a point in \(\mathrm{dS}^{d+1}\), which we denote by \(H^{*}\).
This duality relation has several important consequences.
* Two oriented hyperplanes \(H,H^{\prime}\subset\mathbb{H}^{d+1}\) intersect if and only if the dual points \(H^{*},H^{\prime*}\) are connected by a spacelike geodesic segment. The angle between \(H\) and \(H^{\prime}\) is then the length of the segment connecting \(H^{*}\) to \(H^{\prime*}\).
* The intersection angle between two hyperplanes \(H,H^{\prime}\subset\mathrm{dS}^{d+1}\) is equal to the distance between the dual points \(H^{*},H^{\prime*}\subset\mathbb{H}^{d+1}\).
* For all \(x\in\mathbb{H}^{d+1}\), \((x^{*})^{*}=x\), and similarly for \(y\in\mathrm{dS}^{d+1}\).
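As a quick numerical illustration of the first point above, in the lowest dimension \(d+1=2\) (where hyperplanes of \(\mathbb{H}^{2}\) are geodesics and their duals are points of \(\mathrm{dS}^{2}\)), the following sketch computes the intersection angle of two dual geodesics and compares it with the de Sitter distance between the dual points; orientation conventions are deliberately ignored, so only unoriented angles are compared.

```python
import numpy as np

# Bilinear form of signature (2,1) on R^3, with the convention
# <x,y> = x1*y1 + x2*y2 - x3*y3 used in this section.
J = np.diag([1.0, 1.0, -1.0])

def form(x, y):
    return x @ J @ y

def common_orthogonal(a, b):
    """A nonzero vector orthogonal (for the form) to both a and b."""
    A = np.vstack([J @ a, J @ b])       # rows represent <a, .> and <b, .>
    return np.linalg.svd(A)[2][-1]      # basis vector of the kernel

# Two unit spacelike vectors u, v (points of dS^2) with |<u,v>| < 1,
# so that their dual geodesics in H^2 intersect.
u = np.array([1.0, 0.0, 0.0])
v = np.array([np.cos(0.7), np.sin(0.7), 0.3])
v = v / np.sqrt(form(v, v))
assert abs(form(u, v)) < 1

# Intersection point of the dual geodesics: the normalized timelike
# direction orthogonal to both u and v, taken in the upper sheet of H^2.
p = common_orthogonal(u, v)
p = p / np.sqrt(-form(p, p))
if p[2] < 0:
    p = -p

def unit_tangent(n):
    """Unit tangent vector at p to the geodesic dual to n."""
    t = common_orthogonal(n, p)
    return t / np.sqrt(form(t, t))

tu, tv = unit_tangent(u), unit_tangent(v)
angle = np.arccos(np.clip(abs(form(tu, tv)), 0.0, 1.0))    # angle between the geodesics at p
distance = np.arccos(np.clip(abs(form(u, v)), 0.0, 1.0))   # de Sitter distance between u and v
print(angle, distance)   # the two values agree up to rounding
```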
This duality relation extends to convex polyhedra (see [26]) and to smooth, strictly convex surfaces (see e.g. [37]).
_The duality between points and hyperplanes in \(\mathrm{AdS}^{d+1}\)._ The \(d+1\)-dimensional anti-de Sitter space \(\mathrm{AdS}^{d+1}\) can be defined as a "pseudo-sphere" in the flat space \(\mathbb{R}^{d,2}\) of signature \((d,2)\). Specifically, \(\mathbb{R}^{d,2}\) can be defined as \(\mathbb{R}^{d+2}\) equipped with the bilinear symmetric form
\[\langle x,y\rangle_{d,2}=\sum_{i=1}^{d}x_{i}y_{i}-x_{d+1}y_{d+1}-x_{d+2}y_{d+2 }\,\]
and
\[\mathrm{AdS}^{d+1}=\{x\in\mathbb{R}^{d,2}\ |\ \langle x,x\rangle_{d,2}=-1\}\.\]
It is a geodesically complete Lorentzian space of constant curvature \(-1\).
Let \(x\in\mathrm{AdS}^{d+1}\). Its orthogonal \(x^{\perp}\) is an oriented hyperplane in \(\mathbb{R}^{d,2}\) of signature \((d,1)\), which therefore intersects \(\mathrm{AdS}^{d+1}\) along a spacelike totally geodesic oriented hyperplane, denoted by \(x^{*}\). As above, the same construction works, conversely, to associate to any totally geodesic spacelike oriented hyperplane a dual point.
There is an "intrinsic" definition of this duality: the hyperplane \(x^{*}\) dual to a point \(x\) is the totally geodesic plane composed of points at time distance \(\pi/2\) from \(x\) in the future.
This duality has the same properties as the duality between \(\mathbb{H}^{d+1}\) and \(\mathrm{dS}^{d+1}\).
### Initial singularities of de Sitter spacetimes
GHMC de Sitter spacetimes can be defined in the same way as GHMC anti-de Sitter spacetimes (see section 3.5). We briefly describe the duality between GHMC de Sitter spacetimes and hyperbolic ends (additional details and proofs can be found in [31, 36]), and then outline how this duality leads to the proof of Theorem 1.23.
Let us give more details about the correspondence established in [31] between flat conformal structures on a manifold \(M\) with non virtually Abelian fundamental group and hyperbolic ends with pleated boundary homeomorphic to \(M\) that was mentioned in Section 3.8. Consider first a hyperbolic end \(E\) with pleated boundary homeomorphic to \(M\). Then the ideal boundary \(\partial_{\infty}E\) of \(E\) is diffeomorphic to \(M\) and equipped with a flat conformal structure \(c\). It is proved in [31] that (in dimension \(d\geq 3\)) \(E\) is uniquely determined by \(c\). More specifically, the pleated boundary \(\partial_{0}E\) of \(E\) is equipped with a stratification in ideal polyhedra of varying dimensions between \(1\) and \(d\), while \((M,c)\) also has a natural stratification, with each point contained in the interior of the "convex hull" of the boundary of a unique maximal round ball. There is a natural map from \(\partial_{\infty}E\) to \(\partial_{0}E\), preserving the stratification, in the sense that each stratum of the stratification of \(\partial_{\infty}E\) is sent homeomorphically to a stratum of \(\partial_{0}E\) (but many strata of \(\partial_{\infty}E\) can be sent to the same stratum of \(\partial_{0}E\)). In fact, round balls in \(\partial_{\infty}\mathbb{H}^{d+1}\) are in one-to-one correspondence with oriented hyperplanes in \(\mathbb{H}^{d+1}\), and the strata of \((M,c)\) are in one-to-one correspondence with the support hyperplanes of \(\partial_{0}E\).
A similar construction is provided by Scannell [36, Section 4] for GHMC de Sitter spacetimes. Namely, if \(N\) is such a spacetime, diffeomorphic to \(M\times\mathbb{R}\), where \(M\) is again a closed \(d\)-dimensional manifold, then its future asymptotic boundary is equipped with a flat conformal structure \(c\), and this flat conformal structure again uniquely determines the GHMC structure on \(N\). If the fundamental group of \(M\) is not virtually Abelian, GHMC de Sitter structures on \(N\) are therefore in one-to-one correspondence with hyperbolic ends diffeomorphic to \(M\times\mathbb{R}\).
The stratification of \((M,c)\) is directly related to the initial singularity of \(N\), which we denote by \(\partial_{0}N\). Round disks in \(\partial_{\infty}\mathbb{H}^{d+1}\) are in one-to-one correspondence with points in dS\({}^{d+1}\), and each stratum of \((M,c)\) determines a unique point in the initial singularity of \(N\). The points that are obtained in this way are exactly those where the boundary of \(M\) admits a spacelike supporting hyperplane, see [36, Section 4].
Summing up the relations, each hyperbolic end structure on \(M\times\mathbb{R}\) is determined uniquely by a flat conformal structure on \(M\), which in turn determines a unique GHMC de Sitter spacetime. Hyperbolic ends are therefore in one-to-one correspondence with GHMC de Sitter spacetimes. Moreover, each support hyperplane of \(\partial_{0}E\) corresponds to a point of \(\partial_{0}N\) where it admits a spacelike support plane, and conversely.
This correspondence is somewhat easier to visualize when the developing map \(\operatorname{dev}\) of the conformal structure is injective. Then
\[E=(\mathbb{H}^{d+1}\setminus CH(\partial_{\infty}\mathbb{H}^{d+1}\setminus \operatorname{dev}(\widetilde{M})))/\rho(\pi_{1}M)\,\]
where \(\rho:\pi_{1}M\to SO(d+1,1)\) is the holonomy representation of \((M,c)\), while \(\widetilde{N}\) is a _domain of dependence_, that is, the intersection of the half-spaces containing \(\mathrm{S}^{d}\) and bounded by a hyperplane tangent to \(\mathrm{S}^{d}\) at a point of \(\Lambda_{\rho}\), the limit set of \(\rho\). The initial singularity of \(\widetilde{N}\) is then the set of points dual to the support hyperplanes of \(CH(\partial_{\infty}\mathbb{H}^{d+1}\setminus\operatorname{dev}(\widetilde{M }))\).
Suppose now that \(E\) is a hyperbolic end such that \(\partial_{0}E\) is folded, that is, it is the union of \(2k\)\(d\)-dimensional totally geodesic polyhedra meeting pairwise along a \((d-1)\)-dimensional face and which all share a \((d-2)\)-dimensional face \(S\). Let \(N\) be the dual convex GHMC de Sitter spacetime. The initial singularity of \(N\) is then particularly simple:
* each \(d\)-dimensional polyhedron in \(\partial_{0}E\) corresponds to a vertex of \(\partial_{0}N\),
* each \((d-1)\)-dimensional intersection hypersurface in \(\partial_{0}E\) corresponds to an edge of \(\partial_{0}N\),
* the "spine" \(S\) corresponds to a \(2\)-dimensional face of \(\partial_{0}N\).
Theorem 1.23 follows from Theorem 1.12, and from the construction used in its proof, through this correspondence.
### Initial singularities of AdS spacetimes
A similar description applies in the anti-de Sitter setting. It is somewhat simpler because, in the AdS case, any quasifuchsian AdS spacetime is the quotient of a domain of dependence by the image of a representation into \(\mathrm{SO}_{\circ}(d,2)\), as proved by Mess [34].
A quasifuchsian AdS spacetime \(N\) contains a smallest non-empty closed convex subset, its convex core \(C(N)\). The past of the future boundary \(\partial_{+}C(N)\) of the convex core is the union of the timelike geodesic segments of length \(\pi/2\) orthogonal to support planes of \(C(N)\) along \(\partial_{+}C(N)\) towards the past. Similarly the future of the past boundary \(\partial_{-}C(N)\) is the union of timelike segments of length \(\pi/2\) orthogonal to support planes of \(C(N)\) along \(\partial_{-}C(N)\) towards the future.
Moreover, the universal cover \(\widetilde{N}\) of \(N\) is isometric to a convex domain in \(\mathrm{AdS}^{d+1}\) which is a domain of dependence, that is, the set of points \(x\) in \(\mathrm{AdS}^{d+1}\) such that all timelike geodesics through \(x\) intersect the lift of any Cauchy hypersurface in \(N\). In this picture, \(\widetilde{C}(N)\) is the convex hull of the asymptotic boundary of the lift to \(\mathrm{AdS}^{d+1}\) of any Cauchy surface.
As a consequence, \(\partial_{+}C(N)\) is dual to the initial singularity of \(N\), while \(\partial_{-}C(N)\) is dual to the final singularity of \(N\). As in the de Sitter case, the description is simpler when \(\partial_{+}C(N)\) is folded, since in this case the initial singularity of \(N\) is a \(2\)-dimensional complex with vertices corresponding to the maximal dimension faces of \(\partial_{+}C(N)\), edges corresponding to the hypersurfaces along which the maximal dimension faces meet, and one totally geodesic \(2\)-dimensional face dual to the codimension \(2\) "spine".
Theorem 1.22 follows from Theorem 1.1, and from the construction used in its proof, through this correspondence.
## 8. Compact Clifford-Klein forms
In this section we explain why quasifuchsian AdS manifolds of dimension \(2d+1\) provide compact quotients of the pseudo-Riemannian symmetric space \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\). We prove that these compact quotients admit a smooth fibration over a manifold of dimension \(2d\), with fibers isomorphic to the compact subspace \(\mathrm{O}(2d)/\mathrm{U}(d)\).
### From GHC manifolds to compact quotients
Benoist [8] and Kobayashi [30] independently gave a necessary and sufficient criterion for a discrete subgroup \(\Gamma\) of a semisimple Lie group \(G\) to act properly discontinuously on a reductive homogeneous space \(G/H\), in terms of the Cartan projections of \(\Gamma\) and \(H\). This criterion bears a strong resemblance to the _Anosov property_ of the group \(\Gamma\) as reformulated by Gueritaud-Guichard-Kassel-Wienhard [23] and Kapovich-Leeb-Porti [29]. As an application, the first group of authors remarked the following:
**Theorem 8.1**.: _Let \(\Gamma\backslash\Omega\) be an \(\mathrm{AdS}\) quasifuchsian spacetime of dimension \(2d+1\). Then the group \(\Gamma\) acts properly discontinuously and cocompactly on the pseudo-Riemannian symmetric space \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\)._
Outline of the proof.: The group \(\Gamma\) is a projective Anosov subgroup of \(\mathrm{O}(2d,2)\) (see [6]). In this situation, the Anosov property implies that \(\Gamma\) satisfies the Benoist-Kobayashi criterion and thus acts properly discontinuously on \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\). The cocompactness comes from a cohomological dimension argument: the space \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\) has dimension \(d(d+1)\) and deformation retracts onto the compact symmetric space \(\mathrm{O}(2d)/\mathrm{U}(d)\), of dimension \(d(d-1)\). By a classical application of the Leray-Serre spectral sequence, it follows that a group acting properly discontinuously on \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\) has virtual cohomological dimension at most \(2d\), with equality if and only if its action is cocompact. On the other hand, \(\Gamma\) acts properly discontinuously and cocompactly on a complete spacelike hypersurface in \(\mathrm{AdS}^{2d+1}\), which is diffeomorphic to a disc of dimension \(2d\). Hence \(\Gamma\) has cohomological dimension \(2d\) and thus acts cocompactly on \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\).
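For the reader's convenience, the dimension count used above is the following routine computation (not taken from [6]):
\[\dim\mathrm{O}(2d,2)/\mathrm{U}(d,1)=\frac{(2d+2)(2d+1)}{2}-(d+1)^{2}=(d+1)\bigl[(2d+1)-(d+1)\bigr]=d(d+1)\,\]
\[\dim\mathrm{O}(2d)/\mathrm{U}(d)=\frac{2d(2d-1)}{2}-d^{2}=d\bigl[(2d-1)-d\bigr]=d(d-1)\,\]
so the difference between the dimension of \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\) and that of its compact retract is \(d(d+1)-d(d-1)=2d\), which is the bound on the virtual cohomological dimension.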
Theorem 8.1 provides a motivation to better understand the relationship between the \(\mathrm{AdS}\) quasifuchsian manifold \(\Gamma\backslash\Omega\) and the corresponding compact quotient \(\Gamma\backslash\mathrm{O}(2d,2)/\mathrm{U}(d,1)\). In the following subsections, we explain that \(\Gamma\backslash\mathrm{O}(2d,2)/\mathrm{U}(d,1)\) can be seen as a fiber bundle with "geometric fibers" over a strongly convex Cauchy hypersurface of \(\Gamma\backslash\Omega\).
### Geodesic Killing fields
We start by understanding the relationship between \(\operatorname{AdS}^{2d+1}\) and the space \(\operatorname{O}(2d,2)/\mathrm{U}(d,1)\).4 For this, let us recall the definition and basic properties of a _geodesic Killing field_.
Footnote 4: Note that, since \(\operatorname{SO}_{\circ}(2d,2)\) has finite index in \(\operatorname{O}(2d,2)\), acting properly discontinuously and cocompactly on \(\operatorname{O}(2d,2)/\mathrm{U}(d,1)\) and on \(\operatorname{SO}_{\circ}(2d,2)/\mathrm{U}(d,1)\) are equivalent.
Let \((M,g)\) be a smooth connected pseudo-Riemannian manifold and denote by \(\nabla\) its Levi-Civita connection. Recall that a vector field \(X\) on \(M\) is a _Killing field_ if its flow preserves the metric \(g\). Killing fields are characterized by the following property:
**Proposition 8.2**.: (see for instance [35, Proposition 9.25]) _A vector field \(X\) on \((M,g)\) of class \(\mathcal{C}^{1}\) is a Killing field if and only if the tensor \(\nabla X\in\operatorname{End}(TM)\) is antisymmetric with respect to \(g\), i.e._
\[g(Y,\nabla_{Z}X)=-g(X,\nabla_{Z}Y)\]
_for all vector fields \(Y\) and \(Z\)._
A vector field \(X\) on \((M,g)\) is _geodesic_ if the orbits of its flow are geodesics. Equivalently, \(X\) is geodesic if it satisfies
\[\nabla_{X}X=0\.\]
For Killing fields, we have the following characterization:
**Proposition 8.3**.: _Let \(X\) be a Killing field on \((M,g)\). Then \(X\) is geodesic if and only if \(g(X,X)\) is constant on \(M\)._
Proof.: For any vector field \(Y\), we have
\[d_{Y}g(X,X) = 2g(X,\nabla_{Y}X)\quad\text{since $\nabla$ preserves $g$}\] \[= -2g(Y,\nabla_{X}X)\quad\text{since $X$ is Killing}.\]
Thus \(g(X,X)\) is constant if and only if \(\nabla_{X}X=0\).
Let us now turn to the case where \((M,g)\) is the anti-de Sitter space of dimension \(2d+1\). In that case, there is a natural isomorphism
\[u\mapsto X^{u}\]
between the Lie algebra \(\mathfrak{so}(2d,2)\) of \(\operatorname{O}(2d,2)\) and the Lie algebra of Killing fields on \(\operatorname{AdS}^{2d+1}\). In concrete terms, we see \(\mathfrak{so}(2d,2)\) as a Lie subalgebra of the space of square matrices of size \(2d+2\). Every \(u\in\mathfrak{so}(2d,2)\) defines a linear vector field \(\hat{X}^{u}\) on \(\mathbb{R}^{2d,2}\) by:
\[\hat{X}^{u}(x)=u\cdot x\.\]
This vector field is tangent to the quadric
\[\{\mathbf{q}(x)\stackrel{{\mathrm{def}}}{{=}}x_{1}^{2}+\ldots+ x_{2d}^{2}-x_{2d+1}^{2}-x_{2d+2}^{2}=-1\}\stackrel{{\mathrm{def}}}{{=}} \operatorname{AdS}^{2d+1}\]
and thus restricts to a vector field \(X^{u}\) on \(\operatorname{AdS}^{2d+1}\).
We say that a Killing field \(X\) on \(\operatorname{AdS}^{2d+1}\) is _timelike unitary_ if it satisfies \(g(X,X)=-1\) (it is then necessarily geodesic by Proposition 8.3). The main purpose of this subsection is the following description of timelike unitary Killing fields.
**Lemma 8.4**.: _Let \(u\) be an element of the Lie algebra \(\mathfrak{so}(2d,2)\). Then the corresponding Killing field \(X^{u}\) is timelike unitary if and only if \(u^{2}=-\mathrm{Id}\). The space of timelike unitary Killing fields is therefore equivariantly isomorphic to the homogeneous space \(\operatorname{O}(2d,2)/\mathrm{U}(d,1)\)._
Proof.: Let \(\hat{\nabla}\) denote the standard flat connection on \(\mathbb{R}^{2d,2}\), that is, the Levi-Civita connection of the flat pseudo-Riemannian metric \(\mathbf{q}\). For any \(u,v\in\mathfrak{so}(2d,2)\), we have
\[\hat{\nabla}_{\hat{X}^{u}}\hat{X}^{v}(x) = \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}\hat{X}^{v}(x+tu \cdot x)\] \[= \left.\frac{\mathrm{d}}{\mathrm{d}t}\right|_{t=0}v\cdot x+tvu\cdot x\] \[= vu\cdot x\] \[= \hat{X}^{vu}(x)\.\]
Now, since \(\mathrm{AdS}^{2d+1}\) is a submanifold equipped with the restricted metric, its Levi-Civita connection \(\nabla\) is the orthogonal projection of \(\hat{\nabla}\) to \(T\mathrm{AdS}^{2d+1}\). Since \(T_{x}\mathrm{AdS}^{2d+1}=x^{\perp}\), we get that \(\nabla_{X^{u}}X^{u}(x)=0\) if and only if \(\hat{\nabla}_{\hat{X}^{u}}\hat{X}^{u}(x)=\hat{X}^{u^{2}}(x)=u^{2}\cdot x\) is collinear with \(x\).
Since \(u\) is linear, \(u^{2}\cdot x\) is collinear with \(x\) for every \(x\) if and only if \(u^{2}\in\mathbb{R}\mathrm{Id}\), and we conclude that the Killing field \(X^{u}\) is geodesic if and only if
\[u^{2}=\lambda\mathrm{Id}\]
for some \(\lambda\in\mathbb{R}\).
If \(u^{2}=\lambda\mathrm{Id}\), then at every point \(x\in\mathrm{AdS}^{2d+1}\) we have
\[g_{\mathrm{AdS}}(X^{u}(x),X^{u}(x)) = \langle u\cdot x,u\cdot x\rangle_{2d,2}\] \[= -\langle u^{2}\cdot x,x\rangle_{2d,2}\] \[= -\lambda\langle x,x\rangle_{2d,2}\] \[= \lambda\.\]
Hence \(X^{u}\) is timelike and unitary if and only if \(\lambda=-1\), i.e. \(u^{2}=-\mathrm{Id}\).
Each such \(u\) defines a complex structure on \(\mathbb{R}^{2d,2}\) compatible with the metric \(\mathbf{q}\). There is thus a unique pseudo-Hermitian form \(\mathbf{h}_{u}\) on \((\mathbb{R}^{2d,2},u)\) such that \(\Re(\mathbf{h}_{u})=\mathbf{q}\). This pseudo-Hermitian form has complex signature \((d,1)\), and the subgroup of \(\mathrm{O}(2d,2)\) commuting with \(u\) is the group \(\mathrm{U}(d,1)\).
Finally, given \(u,v\in\mathfrak{so}(2d,2)\) with \(u^{2}=v^{2}=-\mathrm{Id}\), the pseudo-Hermitian spaces \((\mathbb{R}^{2d+2},u,\mathbf{h}_{u})\) and \((\mathbb{R}^{2d+2},v,\mathbf{h}_{v})\) are isomorphic (since they have the same dimension and signature). Hence there exists \(g\in\mathrm{GL}(2d+2,\mathbb{R})\) such that \(gug^{-1}=v\) and \(g^{*}\mathbf{h}_{v}=\mathbf{h}_{u}\). In particular, \(g^{*}\mathbf{q}=\mathbf{q}\), hence \(g\in\mathrm{O}(2d,2)\).
In conclusion, \(\mathrm{O}(2d,2)\) acts transitively on the space of timelike unitary Killing fields, and the centralizer of such a Killing field is isomorphic to \(\mathrm{U}(d,1)\). The space of timelike unitary Killing fields is thus isomorphic to the homogeneous space \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\).
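As a quick sanity check of Lemma 8.4 in the lowest-dimensional case \(d=1\), the following NumPy sketch (purely illustrative; the matrix `u` below is one arbitrary choice of complex structure compatible with \(\mathbf{q}\), not anything singled out in the text) verifies that \(u\in\mathfrak{so}(2,2)\), that \(u^{2}=-\mathrm{Id}\), and that the Killing field \(X^{u}(x)=u\cdot x\) has constant norm \(-1\) along the quadric \(\{\mathbf{q}(x)=-1\}\).

```python
import numpy as np

# Bilinear form of signature (2, 2) on R^{2,2}: q(x) = x1^2 + x2^2 - x3^2 - x4^2.
J = np.diag([1.0, 1.0, -1.0, -1.0])

def q(v, w):
    return v @ J @ w

# One choice of u in so(2,2) with u^2 = -Id (a complex structure compatible with q).
u = np.array([[0., -1., 0., 0.],
              [1.,  0., 0., 0.],
              [0.,  0., 0., -1.],
              [0.,  0., 1.,  0.]])

assert np.allclose(u.T @ J + J @ u, 0)       # u belongs to so(2,2)
assert np.allclose(u @ u, -np.eye(4))        # u^2 = -Id

# The Killing field X^u(x) = u . x is timelike unitary along AdS^3 = {q(x, x) = -1}.
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.normal(size=4)
    if q(x, x) >= 0:
        continue                              # keep only vectors that can be rescaled to the quadric
    x = x / np.sqrt(-q(x, x))                 # now q(x, x) = -1, i.e. x lies on AdS^3
    assert np.isclose(q(u @ x, u @ x), -1.0)  # g(X^u, X^u) = -1, as in Lemma 8.4
print("Lemma 8.4 checks passed for d = 1")
```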
_Remark 8.5_.: Elaborating on the above proof, one could give a complete classification of geodesic Killing fields on \(\mathrm{AdS}^{d+1}\):
* Lightlike Killing fields are given by elements \(u\in\mathfrak{so}(d,2)\) satisfying \(u^{2}=0\) and exist for all \(d\geq 1\),
* Timelike geodesic Killing fields are given by elements \(u\in\mathfrak{so}(d,2)\) satisfying \(u^{2}=\lambda\mathrm{Id}\), \(\lambda<0\) and only exist for even \(d\),
* Spacelike geodesic Killing fields are given by elements \(u\in\mathfrak{so}(d,2)\) satisfying \(u^{2}=\lambda\mathrm{Id}\), \(\lambda>0\) and only exist for \(d=2\).
### Killing fields orthogonal to a strongly convex hypersurface
Let \(\mathcal{H}\) be a complete spacelike hypersurface in \(\mathrm{AdS}^{2d+1}\) and \(N\) its future-pointing unit normal. Let \(g\) denote both the metric of \(\mathrm{AdS}^{2d+1}\) and its restriction to \(\mathcal{H}\). Recall that the _second fundamental form_ of \(\mathcal{H}\) is given by
\[\mathrm{II}(\cdot,\cdot)=g(\nabla.N,\cdot)\.\]
Recall from Definition 3.23 that \(\mathcal{H}\) is uniformly strongly (past) convex if there exists a constant \(c>0\) such that \(\mathrm{II}+cg\) is negative definite.
**Lemma 8.6**.: _Let \(\mathcal{H}\) be a uniformly strongly convex complete spacelike hypersurface in \(\mathrm{AdS}^{2d+1}\) and \(X\) a unitary timelike geodesic Killing field. Then \(X\) is orthogonal to \(\mathcal{H}\) at exactly one point._
Proof.: Up to multiplying \(X\) by \(-1\), we can assume that \(X\) is future pointing, so that \(g(X,N)\leq-1\) with equality exactly where \(X\) is orthogonal to \(\mathcal{H}\).
Let us decompose \(X\) along \(\mathcal{H}\) as
\[X=\bar{X}+fN\]
where \(\bar{X}\) is tangent to \(\mathcal{H}\) and \(f=-g(X,N)\geq 1\). Since \(X\) is unitary, we have
\[g(\bar{X},\bar{X})=f^{2}-1\.\]
Our goal is to prove that \(f:\mathcal{H}\to[1,+\infty)\) achieves the value \(1\) at a unique point. It will follow from the following three facts:
1. the function \(f\) is proper,
2. if \(x\) is a critical point of \(f\), then \(f(x)=1\),
3. the points where \(f=1\) are isolated.
We will then conclude by looking at the gradient flow of \(f\).
* Proof of (a): Fix a point \(x_{0}\in\mathcal{H}\), let \(x\) be a point at distance \(T\) from \(x_{0}\) (for the restricted metric \(g\)) and let \(\gamma:[0,T]\to\mathcal{H}\) be a unit speed geodesic from \(x_{0}\) to \(x\). The equation of geodesics on \(\mathcal{H}\) can be written as: \[\nabla_{\dot{\gamma}}\dot{\gamma}=\mathrm{II}(\dot{\gamma},\dot{\gamma})N\,\] where \(\nabla\) is the ambient Levi-Civita connection of \(\mathrm{AdS}^{2d+1}\). Consider the function \[\begin{array}{cccc}h:&[0,T]&\to&\mathbb{R}\\ &t&\mapsto&g_{\gamma(t)}(\dot{\gamma},X)=g_{\gamma(t)}(\dot{\gamma},\bar{X})\.\end{array}\] By the Cauchy-Schwarz inequality on \(T\mathcal{H}\), we have (5) \[h(t)^{2}\leq g_{\gamma(t)}(\bar{X},\bar{X})=f^{2}(\gamma(t))-1\,\] which we can also write (6) \[f(\gamma(t))\geq\sqrt{h(t)^{2}+1}\,\] Differentiating \(h\) gives \[h^{\prime}(t) = g(\nabla_{\dot{\gamma}}\dot{\gamma},X)+g(\dot{\gamma},\nabla_{ \dot{\gamma}}X)\] \[= g(\nabla_{\dot{\gamma}}\dot{\gamma},X)\quad\text{since $X$ is a Killing field}\] \[= \mathrm{II}(\dot{\gamma},\dot{\gamma})g(N,X)\] \[= -f(\gamma(t))\mathrm{II}(\dot{\gamma},\dot{\gamma})\] \[\geq cf(\gamma(t))\quad\text{by uniform strict convexity of $\mathcal{H}$}\] \[\geq c\sqrt{h(t)^{2}+1}\.\] We deduce that \(h(t)\) is greater than or equal to the solution of the ordinary differential equation \[y^{\prime}=c\sqrt{y^{2}+1}\] with initial condition \(y(0)=h(0)\) (the explicit solution is verified just after the proof), i.e. \[h(t)\geq\sinh\left(ct+\sinh^{-1}(h(0))\right)\.\] Using (6) and (5), we conclude that \[f(x)=f(\gamma(T))\geq\cosh(cT-c^{\prime})\,\] where \[c^{\prime}=\sinh^{-1}(\sqrt{f(\gamma(0))^{2}-1})\.\] This shows that \(f\) is proper.
* Proof of (b): let us compute the derivative of \(f=-g(X,N)\) along \(\bar{X}\). We have \[df(\bar{X})=-g(\nabla_{\bar{X}}X,N)-g(X,\nabla_{\bar{X}}N)\.\] On one side, we have \[g(\nabla_{\bar{X}}X,N)=g(\nabla_{X}X,N)-fg(\nabla_{N}X,N)=0\] since \(X\) is Killing and geodesic. On the other side, we have \[g(X,\nabla_{\bar{X}}N)=g(\bar{X},\nabla_{\bar{X}}N)=\mathrm{II}(\bar{X},\bar{X})\] since \(d_{\bar{X}}g(N,N)=0\). At a critical point of \(f\), we thus have \(\bar{X}=0\), i.e. \(X\) is orthogonal to \(\mathcal{H}\) and \(f=1\).
* Proof of (c): Let \(\bar{\nabla}\) denote the Levi-Civita connection of the induced metric on \(\mathcal{H}\). We have \(\bar{\nabla}\bar{X}=\pi(\nabla\bar{X})\) where \(\pi\) denotes the orthogonal projection to \(T\mathcal{H}\). For every vector \(Y\) tangent to \(\mathcal{H}\), we have \[g_{x}(\bar{\nabla}_{Y}\bar{X},Y) = g_{x}(\nabla_{Y}\bar{X},Y)\] \[= g_{x}(\nabla_{Y}X,Y)-fg(\nabla_{Y}N,Y)-\mathrm{d}f(Y)g(N,Y)\.\] The first and third terms vanish since \(X\) is a Killing field and \(Y\) is tangent to \(\mathcal{H}\). We conclude that \[g(\bar{\nabla}_{Y}\bar{X},Y)=-f\mathrm{II}(Y,Y)\geq cg(Y,Y)\.\] Hence \(g(\bar{\nabla}.\bar{X},\cdot)\) is symmetric and positive definite, which implies that \(\bar{\nabla}\bar{X}\in\mathrm{End}(T\mathcal{H})\) is invertible at every point. In particular, the zeros of \(\bar{X}\), which are exactly the points where \(f=1\), are isolated.
* Conclusion of the proof: Consider the gradient flow of \(f\), i.e. the flow of the vector field \(-\mathrm{Grad}_{g}f\). Since \(f\) is proper, every trajectory of the flow converges to a critical point of \(f\). Since each critical point is a minimum and is isolated, its basin of attraction is non-empty and open. Since \(\mathcal{H}\) is connected and decomposes as the disjoint union of these basins of attraction, there is exactly one such critical point, at which \(f=1\). This is the unique point where \(X\) is orthogonal to \(\mathcal{H}\).
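For completeness, here is the direct verification of the explicit solution of the comparison ODE used in step (a) of the proof above:
\[\frac{\mathrm{d}}{\mathrm{d}t}\sinh\bigl(ct+\sinh^{-1}(h(0))\bigr)=c\cosh\bigl(ct+\sinh^{-1}(h(0))\bigr)=c\sqrt{\sinh^{2}\bigl(ct+\sinh^{-1}(h(0))\bigr)+1}\,\]
and its value at \(t=0\) is \(h(0)\), so it is indeed the solution of \(y^{\prime}=c\sqrt{y^{2}+1}\), \(y(0)=h(0)\), against which \(h\) is compared.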
### Fiber bundle over convex Cauchy hypersurfaces
Let now \(\Gamma\backslash\Omega\) be an AdS-quasifuchsian manifold of dimension \(2d+1\). Let \(\mathcal{H}\) be a smooth strongly convex Cauchy hypersurface in \(\Gamma\backslash\Omega\). Then \(\mathcal{H}\) is the quotient of a \(\Gamma\)-invariant uniformly strongly convex hypersurface \(\widetilde{\mathcal{H}}\) in \(\mathrm{AdS}^{2d+1}\).
Recall that the homogeneous space \(\mathrm{O}(2d,2)/\mathrm{U}(d,1)\) identifies with the space \(\mathrm{Kill}_{-1}(\mathrm{AdS}^{2d+1})\) of timelike unitary Killing fields in \(\mathrm{AdS}^{2d+1}\). To shorten notations, we will write
* \(G=\mathrm{O}(2d,2)\),
* \(H=\mathrm{U}(d,1)\),
* \(K=\mathrm{O}(2d)\times\mathrm{O}(2)\),
* \(L=H\cap K=\mathrm{U}(d)\times\mathrm{U}(1)\).
Define
\[\widetilde{M}=\{(x,X)\in\widetilde{\mathcal{H}}\times\mathrm{Kill}_{-1}( \mathrm{AdS}^{2d+1})\mid X\text{ orthogonal to }\widetilde{\mathcal{H}}\text{ at }x\}\,\]
and let \(p_{1}\) and \(p_{2}\) denote respectively the projections from \(\widetilde{M}\) to \(\widetilde{\mathcal{H}}\) and to \(\mathrm{Kill}_{-1}(\mathrm{AdS}^{2d+1})\). The projections \(p_{1}\) and \(p_{2}\) are clearly both \(\Gamma\)-equivariant.
**Proposition 8.7**.: _Let \(x\) be a point in \(\widetilde{\mathcal{H}}\). Then \(p_{2}(p_{1}^{-1}(x))=gK/L\) for some \(g\in G\)._
Proof.: Since \(G\) acts transitively on pairs \((x,H)\) consisting of a point \(x\in\operatorname{AdS}^{2d+1}\) and a spacelike hyperplane \(H\subset T_{x}\operatorname{AdS}^{2d+1}\), it is enough to prove that the space of timelike unitary Killing fields orthogonal at \(x_{0}=(0,\dots,0,1)\) to the hyperplane \(H_{0}=\{(x_{1},\dots,x_{2d},0,0)\}\) identifies with \(K/L\) inside \(G/H\).
Let \(X^{u}\) be a timelike unitary Killing field orthogonal to \(H_{0}\) at \(x_{0}\), given by a matrix \(u\in\mathfrak{so}(2d,2)\) satisfying \(u^{2}=-\mathrm{Id}\) and \(u(x_{0})\in H_{0}^{\perp}\). Then \(u\) preserves \(H_{0}^{\perp}\) and \(H_{0}\). It is thus conjugated by an element of \(K\) to the standard complex structure on \(\mathbb{R}^{2d,2}\), which is centralized by \(H\). Thus \(X^{u}\) lies in the \(K\)-orbit of the basepoint of \(G/H\). The converse is similar.
We can now conclude the proof of the following theorem.
**Theorem 8.8**.: _Let \(\Gamma\backslash\Omega\) be an \(\operatorname{AdS}\) quasifuchsian manifold of dimension \(2d+1\) and let \(\mathcal{H}\) be a smooth strongly convex Cauchy hypersurface in \(\Gamma\backslash\Omega\). Then there exists a \(\Gamma\)-equivariant fibration from \(G/H\) to \(\widetilde{\mathcal{H}}\), the fibers of which are translates of \(K/L\)._
_In particular, \(\Gamma\) acts freely, properly discontinuously and cocompactly on \(G/H\) and the quotient manifold \(\Gamma\backslash G/H\) is a smooth fiber bundle over \(\mathcal{H}\) with fibers diffeomorphic to \(K/L\)._
Proof.: By Lemma 8.6, every timelike unitary geodesic Killing field is orthogonal to \(\widetilde{\mathcal{H}}\) at a single point. It follows that \(p_{2}:\widetilde{M}\to G/H\) is a \(\Gamma\)-equivariant homeomorphism. Set
\[p\stackrel{{\mathrm{def}}}{{=}}p_{1}\circ p_{2}^{-1}:G/H\to \widetilde{\mathcal{H}}\.\]
Then \(p\) is \(\Gamma\)-equivariant.
Let \(C\) be a closed ball in \(\widetilde{\mathcal{H}}\). Choose continuously, for every \(x\in C\), an element \(g_{x}\in G\) such that \(g_{x}\cdot x_{0}=x\) and \(g_{x}\cdot H_{0}=T_{x}\widetilde{\mathcal{H}}\) (with the notations of Proposition 8.7). By Proposition 8.7, we have that
\[p^{-1}(C)=\bigsqcup_{x\in C}g_{x}K/L\.\]
This shows that \(p\) is a topological fibration whose fibers are translates of the compact subspace \(K/L\). In particular, \(p\) is proper.
Now, since \(\Gamma\) acts freely, properly discontinuously and cocompactly on \(\widetilde{\mathcal{H}}\), we deduce that \(\Gamma\) acts freely, properly discontinuously, and cocompactly on \(G/H\). The equivariant fibration \(p\) then factors to a fibration \(\Gamma\backslash G/H\to\mathcal{H}\) with fibers homeomorphic to \(K/L\).
_Remark 8.9_.: Though we did not discuss the regularity of \(p_{1}\) and \(p_{2}\), a little extra care would easily show that \(p_{1}\circ p_{2}^{-1}\) is a smooth submersion as soon as \(\widetilde{\mathcal{H}}\) is smooth.
_Remark 8.10_.: In particular, we gave a proof, based on the intrinsic geometry of AdS quasifuchsian manifolds, that \(\Gamma\) acts properly discontinuously on \(G/H\); this proof is independent of that of Gueritaud-Guichard-Kassel-Wienhard.
|
2302.04470
|
Pego theorem on compact groups
|
The Pego theorem characterizes the precompact subsets of the
square-integrable functions on $\mathbb{R}^n$ via the Fourier transform. We
prove the analogue of the Pego theorem on compact groups (not necessarily
abelian).
|
Manoj Kumar
|
2023-02-09T07:21:38Z
|
http://arxiv.org/abs/2302.04470v1
|
# Pego theorem on compact groups
###### Abstract.
The Pego theorem characterizes the precompact subsets of the square-integrable functions on \(\mathbb{R}^{n}\) via the Fourier transform. We prove the analogue of the Pego theorem on compact groups (not necessarily abelian).
Key words and phrases: Compact group, Fourier transform, compactness. 2010 Mathematics Subject Classification: Primary 43A30, 43A77; Secondary 22C05.
## 1. Introduction
Characterizing precompact subsets is one of the classical topics in function space theory. It is well known that the Arzela-Ascoli theorem characterizes the precompact subsets of the space of continuous functions over a compact Hausdorff space. Further, the celebrated Riesz-Kolmogorov theorem provides a characterization of precompact subsets of \(L^{p}(\mathbb{R}^{n}).\) We refer to [8] for a historical account of it. Weil [14, Pg. 52] extended it to the Lebesgue spaces over locally compact groups. See [7] for its extension to the Banach function spaces over locally compact groups.
In 1985, Pego [13] used the Riesz-Kolmogorov theorem to find a characterization of precompact subsets of \(L^{2}(\mathbb{R}^{n})\) via a certain decay of the Fourier transform. It reads as follows:
**Theorem 1.1**.: _[_13_, Theorems 2 and 3]_ _Let \(K\) be a bounded subset of \(L^{2}(\mathbb{R}^{n}).\) Then, the following are equivalent:_
1. \(K\) _is precompact._
2. \(\int_{|x|>r}|f(x)|^{2}\,dx\to 0\) _and_ \(\int_{|\xi|>r}|\widehat{f}(\xi)|^{2}\,d\xi\to 0\) _as_ \(r\to\infty,\) _both uniformly for_ \(f\) _in_ \(K.\)__
3. \(\int_{\mathbb{R}^{n}}|f(x+y)-f(x)|^{2}\,dx\to 0\) _as_ \(y\to 0,\) _and_ \(\int_{\mathbb{R}^{n}}|\widehat{f}(\xi+\omega)-\widehat{f}(\xi)|^{2}\,d\xi\to 0\) _as_ \(\omega\to 0,\) _both uniformly for_ \(f\) _in_ \(K.\)__
An application of this theorem to information theory has also been provided in [13].
Pego-type theorems have also been studied via the short-time Fourier and wavelet transforms [2], the Laplace transform [11] and the Laguerre and Hankel transforms [10]. The Pego theorem has been extended to locally compact abelian groups with some technical assumptions [5]. Using the Pontryagin duality and the Arzela-Ascoli theorem, the authors in [6] showed that the technical assumptions are redundant. For an \(L^{1}\)-space analogue of the Pego-type theorem over locally compact abelian groups, see [12].
In Section 2, we present preliminaries on compact groups. In Section 3, using Weil's compactness theorem, we extend Theorem 1.1 to compact groups (not necessarily abelian); see Theorem 3.6.
## 2. Fourier analysis on compact groups
Let \(G\) be a compact Hausdorff group. Assume that \(m_{G}\) denotes the normalized positive Haar measure on \(G.\) Let \(L^{p}(G)\) denote the \(p\)th Lebesgue space w.r.t. the measure \(m_{G}.\) The norm on the space \(L^{p}(G)\) is denoted by \(\|\cdot\|_{p}.\)
We denote by \(\widehat{G}\) the space consisting of all irreducible unitary representations of \(G\) up to unitary equivalence. The set \(\widehat{G}\) is known as the unitary dual of \(G\) and is equipped with the discrete topology. Note that the representation space \(\mathcal{H}_{\pi}\) of \(\pi\in\widehat{G}\) is a finite-dimensional complex Hilbert space. Denote by \(d_{\pi}\) the dimension of \(\mathcal{H}_{\pi}.\)
Let \(\Lambda\subset\widehat{G}.\) Assume that \(\{(X_{\pi},\|\cdot\|_{\pi}):\pi\in\Lambda\}\) is a family of Banach spaces. For \(1\leq p<\infty,\) we denote by \(\ell^{p}\text{-}\underset{\pi\in\Lambda}{\oplus}X_{\pi}\) the Banach space
\[\left\{(x_{\pi})\in\underset{\pi\in\Lambda}{\Pi}X_{\pi}:\underset{\pi\in\Lambda}{\sum}d_{\pi}\|x_{\pi}\|_{\pi}^{p}<\infty\right\}\]
endowed with the norm \(\|(x_{\pi})\|_{\ell^{p}\text{-}\underset{\pi\in\Lambda}{\oplus}X_{\pi}}:=\left(\underset{\pi\in\Lambda}{\sum}d_{\pi}\|x_{\pi}\|_{\pi}^{p}\right)^{1/p}.\) Denote by \(\ell^{\infty}\text{-}\underset{\pi\in\Lambda}{\oplus}X_{\pi}\) the Banach space
\[\left\{(x_{\pi})\in\underset{\pi\in\Lambda}{\Pi}X_{\pi}:\underset{\pi\in\Lambda}{\sup}\|x_{\pi}\|_{\pi}<\infty\right\}\]
endowed with the norm \(\|(x_{\pi})\|_{\ell^{\infty}\text{-}\underset{\pi\in\Lambda}{\oplus}X_{\pi}}:=\underset{\pi\in\Lambda}{\sup}\|x_{\pi}\|_{\pi}.\) Similarly, denote by \(c_{0}\text{-}\underset{\pi\in\Lambda}{\oplus}X_{\pi}\) the space consisting of \((x_{\pi})\) from \(\ell^{\infty}\text{-}\underset{\pi\in\Lambda}{\oplus}X_{\pi}\) such that \(x_{\pi}\to 0\) as \(\pi\rightarrow\infty,\) i.e., for any given \(\epsilon>0\) there exists a finite set \(\Lambda_{\epsilon}\subset\Lambda\) such that \(\|x_{\pi}\|_{\pi}<\epsilon\) for all \(\pi\in\Lambda\setminus\Lambda_{\epsilon}.\) Note that \(c_{0}\text{-}\underset{\pi\in\Lambda}{\oplus}X_{\pi}\) is a closed subspace of \(\ell^{\infty}\text{-}\underset{\pi\in\Lambda}{\oplus}X_{\pi}.\)
For \(1\leq p<\infty,\) let \(\mathcal{B}_{p}(\mathcal{H}_{\pi})\) denote the space of all bounded linear operators \(T\) on \(\mathcal{H}_{\pi}\) such that \(\|T\|_{\mathcal{B}_{p}(\mathcal{H}_{\pi})}:=(\text{tr}(|T|^{p}))^{1/p}<\infty.\) The space \(\mathcal{B}_{2}(\mathcal{H}_{\pi})\) is called the space of the Hilbert-Schmidt operators on the Hilbert space \(\mathcal{H}_{\pi}.\) The space \(\mathcal{B}_{2}(\mathcal{H}_{\pi})\) is Hilbert space endowed with the inner product
\[\langle T,S\rangle_{\mathcal{B}_{2}(\mathcal{H}_{\pi})}:=\text{tr}(TS^{*}).\]
Further, let \(\mathcal{B}(\mathcal{H}_{\pi})\) denote the space consisting of all bounded linear operators on \(\mathcal{H}_{\pi}\) endowed with the operator norm.
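To make the bookkeeping in these definitions concrete, the following short sketch (with synthetic matrices standing in for the operators \(x_{\pi}\in\mathcal{B}_{p}(\mathcal{H}_{\pi})\); the data are purely illustrative) computes the weighted norm \(\bigl(\sum_{\pi}d_{\pi}\|x_{\pi}\|_{\mathcal{B}_{p}(\mathcal{H}_{\pi})}^{p}\bigr)^{1/p}\) for a finite index set.

```python
import numpy as np

# Synthetic "Fourier coefficient" family: one matrix x_pi per representation, of size d_pi.
rng = np.random.default_rng(1)
dims = [1, 2, 3]                                   # the dimensions d_pi
x = [rng.normal(size=(d, d)) for d in dims]        # stand-ins for elements of B_p(H_pi)

def weighted_lp_norm(mats, dims, p):
    """Compute (sum_pi d_pi * ||x_pi||_{B_p}^p)^(1/p), where ||.||_{B_p} is the Schatten p-norm."""
    total = 0.0
    for m, d in zip(mats, dims):
        s = np.linalg.svd(m, compute_uv=False)     # singular values of x_pi
        total += d * np.sum(s ** p)                # d_pi * ||x_pi||_{B_p}^p
    return total ** (1.0 / p)

print("l^2-sum B_2 norm:", weighted_lp_norm(x, dims, 2))
print("l^1-sum B_1 norm:", weighted_lp_norm(x, dims, 1))
```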
Let \(f\in L^{1}(G).\) The Fourier transform of \(f\) is defined by
\[\widehat{f}(\pi)=\int_{G}f(t)\pi(t)^{*}\,dm_{G}(t),\ \pi\in\widehat{G}.\]
The Fourier transform operator \(f\mapsto\widehat{f}\) from \(L^{1}(G)\) into \(\ell^{\infty}\text{-}\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}(\mathcal{H}_{\pi})\) is injective and bounded. By the Riemann-Lebesgue Lemma, we know that \(\widehat{f}\in c_{0}\text{-}\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}(\mathcal{H}_{\pi}).\) The convolution of \(f,g\in L^{1}(G)\) is given by
\[f*g(x)=\int_{G}f(xy^{-1})g(y)\,dm_{G}(y).\]
Then, \(\widehat{f*g}(\pi)=\widehat{g}(\pi)\widehat{f}(\pi),\ \pi\in\widehat{G}.\) For \(y\in G,\) the right translation \(R_{y}\) of \(f\in L^{p}(G)\) is given by \(R_{y}(f)(x)=f(xy),\ x\in G.\) Then, \(\widehat{R_{y}f}(\pi)=\pi(y)\widehat{f}(\pi),\ \pi\in\widehat{G}.\)
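The convolution and translation identities above can be checked numerically on the simplest example of a compact group, the finite cyclic group \(\mathbb{Z}/N\) with normalized counting measure, where every \(d_{\pi}=1\) and the irreducible representations are the characters \(\pi_{k}(t)=e^{2\pi ikt/N}\); the sketch below is only a toy illustration of the formulas, not part of the proofs.

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
f = rng.normal(size=N) + 1j * rng.normal(size=N)
g = rng.normal(size=N) + 1j * rng.normal(size=N)
t = np.arange(N)

def hat(h):
    """Fourier transform on Z/N with normalized Haar (counting) measure:
    hat(h)(k) = (1/N) * sum_t h(t) * conj(pi_k(t)), with pi_k(t) = exp(2*pi*i*k*t/N)."""
    return np.array([(h * np.exp(-2j * np.pi * k * t / N)).mean() for k in range(N)])

def conv(a, b):
    """Convolution (a * b)(x) = (1/N) * sum_y a(x - y) b(y)."""
    return np.array([(a[(x - t) % N] * b).mean() for x in range(N)])

# hat(f * g) = hat(g) hat(f)  (pointwise product; the order does not matter here since d_pi = 1)
assert np.allclose(hat(conv(f, g)), hat(g) * hat(f))

# hat(R_y f)(k) = pi_k(y) hat(f)(k), where (R_y f)(x) = f(x + y)
y = 3
R_y_f = f[(t + y) % N]
assert np.allclose(hat(R_y_f), np.exp(2j * np.pi * t * y / N) * hat(f))
print("Fourier identities verified on Z/%d" % N)
```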
For more information on compact groups, we refer to [4, 9].
Throughout the paper, \(G\) will denote a compact Hausdorff group (not necessarily abelian). The identity of \(G\) is denoted by \(e.\) We will denote by \(I_{d_{\pi}}\) the \(d_{\pi}\times d_{\pi}\) identity matrix.
## 3. Pego theorem on compact groups
In this section, we discuss the characterization of precompact subsets of square-integrable functions on \(G\) in terms of the Fourier transform. We need the following definitions.
Let \(K\subset L^{p}(G).\) Define \(\widehat{K}:=\{\widehat{f}:f\in K\}.\)\(K\) is said to be _uniformly \(L^{p}(G)\)-equicontinuous_ if for any given \(\epsilon>0\) there exists an open neighbourhood \(O\) of \(e\) such that
\[\|R_{y}f-f\|_{p}<\epsilon,\ f\in K\ \text{and}\ y\in O.\]
Let \(F\subset\ell^{p}\)-\(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{p}(\mathcal{H}_{\pi}).\)\(F\) is said to have _uniform \(\ell^{p}\)-\(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{p}(\mathcal{H}_{\pi})\)-decay_ if for any given \(\epsilon>0\) there exists a finite set \(A\subset\widehat{G}\) such that
\[\|\phi\|_{\ell^{p}\text{-}\underset{\pi\in\widehat{G}\setminus A}{\oplus} \mathcal{B}_{p}(\mathcal{H}_{\pi})}<\epsilon,\ \phi\in F.\]
Let us begin with some important lemmas.
**Lemma 3.1**.: _Let \(K\subset L^{p}(G),\) where \(p\in[1,2].\) If \(K\) is uniformly \(L^{p}(G)\)-equicontinuous then \(\widehat{K}\) has uniform \(\ell^{p^{\prime}}\)-\(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{p^{\prime}}(\mathcal{H}_{ \pi})\)-decay._
Proof.: Let \((e_{U})_{U\in\Lambda}\) be a Dirac net on \(G\); see [1, Pg. 28]. By the Riemann-Lebesgue Lemma [9, Theorem 28.40], \(\widehat{e_{U}}\in c_{0}\text{-}\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}(\mathcal{H}_{\pi}).\) Then, there exists a finite set \(A\subset\widehat{G}\) such that
\[\|\widehat{e_{U}}(\pi)\|_{\mathcal{B}(\mathcal{H}_{\pi})}\leq\frac{1}{2},\ \pi\in \widehat{G}\setminus A.\]
Let \(f\in K.\) We denote by \(\widehat{e_{U}}\widehat{f}\) the pointwise product of \(\widehat{e_{U}}\) and \(\widehat{f}.\) Now,
\[\|\widehat{f}\|_{\ell^{p^{\prime}}\text{-}\underset{\pi\in\widehat{G}\setminus A}{\oplus}\mathcal{B}_{p^{\prime}}(\mathcal{H}_{\pi})}\leq \|\widehat{f}-\widehat{e_{U}}\widehat{f}\|_{\ell^{p^{\prime}}\text{-}\underset{\pi\in\widehat{G}\setminus A}{\oplus}\mathcal{B}_{p^{\prime}}(\mathcal{H}_{\pi})}+\|\widehat{e_{U}}\widehat{f}\|_{\ell^{p^{\prime}}\text{-}\underset{\pi\in\widehat{G}\setminus A}{\oplus}\mathcal{B}_{p^{\prime}}(\mathcal{H}_{\pi})}\]
\[\leq \|\widehat{f}-\widehat{f*e_{U}}\|_{\ell^{p^{\prime}}\text{-}\underset{\pi\in\widehat{G}\setminus A}{\oplus}\mathcal{B}_{p^{\prime}}(\mathcal{H}_{\pi})}+\|\widehat{f}\|_{\ell^{p^{\prime}}\text{-}\underset{\pi\in\widehat{G}\setminus A}{\oplus}\mathcal{B}_{p^{\prime}}(\mathcal{H}_{\pi})}\underset{\pi\in\widehat{G}\setminus A}{\sup}\|\widehat{e_{U}}(\pi)\|_{\mathcal{B}(\mathcal{H}_{\pi})}\]
\[\leq \|\widehat{f}-\widehat{f\ast e_{U}}\|_{\ell^{p^{\prime}}\text{-}\underset{\pi\in\widehat{G}\setminus A}{\oplus}\mathcal{B}_{p^{\prime}}(\mathcal{H}_{\pi})}+\frac{1}{2}\|\widehat{f}\|_{\ell^{p^{\prime}}\text{-}\underset{\pi\in\widehat{G}\setminus A}{\oplus}\mathcal{B}_{p^{\prime}}(\mathcal{H}_{\pi})}.\]
Then, applying the Hausdorff-Young inequality [9, Theorem 31.22], we get
\[\|\widehat{f}\|_{\ell^{p^{\prime}}\text{-}\underset{\pi\in\widehat{G}\setminus A}{\oplus}\mathcal{B}_{p^{\prime}}(\mathcal{H}_{\pi})}\leq 2\|\widehat{f-f\ast e_{U}}\|_{\ell^{p^{\prime}}\text{-}\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{p^{\prime}}(\mathcal{H}_{\pi})}\] \[\leq 2\|f-f\ast e_{U}\|_{p}\] \[= 2\left(\int_{G}|f(x)-f\ast e_{U}(x)|^{p}\,dm_{G}(x)\right)^{1/p}\] \[= 2\left(\int_{G}\left|\int_{G}(f(x)-f(xy^{-1}))e_{U}(y)\,dm_{G}(y)\right|^{p}\,dm_{G}(x)\right)^{1/p}.\]
Therefore, using the Minkowski integral inequality, we obtain
\[\|\widehat{f}\|_{\ell^{p^{\prime}-}\underset{\pi\in\widehat{G} \setminus A}{\oplus}\mathcal{B}_{p^{\prime}}(\mathcal{H}_{\pi})}\leq 2\int_{G}\left(\int_{G}|f(x)-f(xy^{-1})|^{p}\,dm_{G}(x)\right)^ {1/p}e_{U}(y)\,dm_{G}(y)\] \[\leq 2\sup_{y\in U}\left(\int_{G}|f(x)-f(xy^{-1})|^{p}\,dm_{G}(x) \right)^{1/p}.\]
Let \(\epsilon>0.\) Since \(K\) is uniformly \(L^{p}(G)\)-equicontinuous, there exists an open neighbourhood \(O\) of \(e\) such that
\[\|R_{y}f-f\|_{p}<\frac{\epsilon}{2},\ f\in K\ \text{and}\ y\in O.\]
By [4, Proposition 2.1 (b)], we get that there exists \(U\in\Lambda\) such that
\[\|R_{y}f-f\|_{p}<\frac{\epsilon}{2},\ f\in K\ \text{and}\ y\in U.\]
Hence,
\[\|\widehat{f}\|_{\ell^{p^{\prime}-}\underset{\pi\in\widehat{G} \setminus A}{\oplus}\mathcal{B}_{p^{\prime}}(\mathcal{H}_{\pi})}<\epsilon,\ f\in K.\qed\]
**Lemma 3.2**.: _Let \(K\subset L^{p^{\prime}}(G),\) where \(p\in[1,2].\) If \(\widehat{K}\) has uniform \(\ell^{p}\)-\(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{p}(\mathcal{H}_{\pi})\)-decay then \(K\) is uniformly \(L^{p^{\prime}}(G)\)-equicontinuous._
Proof.: Let \(\epsilon>0.\) Since \(\widehat{K}\) has uniform \(\ell^{p}\)-\(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{p}(\mathcal{H}_{\pi})\)-decay, there exists a finite set \(A\subset\widehat{G}\) such that
\[\|\widehat{f}\|_{\ell^{p-}\underset{\pi\in\widehat{G}\setminus A}{\oplus} \mathcal{B}_{p}(\mathcal{H}_{\pi})}<\frac{\epsilon}{4},\ f\in K.\]
Let \(f\in K\) and \(y\in G.\) Then, applying [9, Corollary 31.25], we obtain
\[\|R_{y}f-f\|_{p^{\prime}}\leq \|\widehat{R_{y}f}-\widehat{f}\|_{\ell^{p}\text{-}\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{p}(\mathcal{H}_{\pi})}\] \[= \left(\sum_{\pi\in\widehat{G}}d_{\pi}\|\widehat{R_{y}f}(\pi)-\widehat{f}(\pi)\|_{\mathcal{B}_{p}(\mathcal{H}_{\pi})}^{p}\right)^{1/p}\]
\[\leq \left(\sum_{\pi\in A}d_{\pi}\|\pi(y)\widehat{f}(\pi)-\widehat{f}(\pi) \|_{\mathcal{B}_{p}(\mathcal{H}_{\pi})}^{p}\right)^{1/p}\] \[+\left(\sum_{\pi\in\widehat{G}\setminus A}d_{\pi}\|\pi(y)\widehat{ f}(\pi)-\widehat{f}(\pi)\|_{\mathcal{B}_{p}(\mathcal{H}_{\pi})}^{p}\right)^{1/p}\] \[\leq \sup_{\pi\in A}\|\pi(y)-I_{d_{\pi}}\|_{\mathcal{B}(\mathcal{H}_{ \pi})}\left(\sum_{\pi\in A}d_{\pi}\|\widehat{f}(\pi)\|_{\mathcal{B}_{p}( \mathcal{H}_{\pi})}^{p}\right)^{1/p}\] \[+\sup_{\pi\in\widehat{G}\setminus A}\|\pi(y)-I_{d_{\pi}}\|_{ \mathcal{B}(\mathcal{H}_{\pi})}\left(\sum_{\pi\in\widehat{G}\setminus A}d_{ \pi}\|\widehat{f}(\pi)\|_{\mathcal{B}_{p}(\mathcal{H}_{\pi})}^{p}\right)^{1/p}\] \[\leq M\sup_{\pi\in A}\|\pi(y)-I_{d_{\pi}}\|_{\mathcal{B}(\mathcal{H}_{ \pi})}+\frac{\epsilon}{2},\]
where \(M\) is a positive number such that \(\left(\sum_{\pi\in A}d_{\pi}\|\widehat{f}(\pi)\|_{\mathcal{B}_{p}(\mathcal{H} _{\pi})}^{p}\right)^{1/p}\leq M.\)
Let \(\pi\in A.\) Using continuity of \(\pi\), we obtain that there exists a neighbourhood \(O_{\pi}\) of \(e\) such that
\[\|\pi(y)-I_{d_{\pi}}\|_{\mathcal{B}(\mathcal{H}_{\pi})}<\frac{\epsilon}{2M},\ y\in O_{\pi}.\]
Assume that \(O=\cap_{\pi\in A}O_{\pi}.\) Then,
\[\|\pi(y)-I_{d_{\pi}}\|_{\mathcal{B}(\mathcal{H}_{\pi})}<\frac{\epsilon}{2M}, \ \pi\in A\ \text{and}\ y\in O.\]
Hence,
\[\|R_{y}f-f\|_{p^{\prime}}<\epsilon,\ f\in K\ \text{and}\ y\in O.\qed\]
The following corollary is a generalization of [13, Theorem 1], proved on \(\mathbb{R}^{n}\), and of [5, Theorem 1] and [3, Lemma 2.5], proved on locally compact abelian groups. It is also an improvement of the corresponding result on compact abelian groups in the sense that we do not assume boundedness of the subset of \(L^{2}(G).\)
**Corollary 3.3**.: _Let \(K\subset L^{2}(G).\) Then, \(K\) is uniformly \(L^{2}(G)\)-equicontinuous if and only if \(\widehat{K}\) has uniform \(\ell^{2}\)-\(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{2}(\mathcal{H}_{\pi})\)-decay._
Proof.: This is a direct consequence of Lemma 3.1 and Lemma 3.2.
Now, we prove some propositions that we use to establish the Pego theorem on compact groups.
**Proposition 3.4**.: _Let \(K\subset L^{2}(G).\) If \(K\) is precompact then \(\widehat{K}\) has uniform \(\ell^{2}\)-\(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{2}(\mathcal{H}_{\pi})\)-decay._
Proof.: By the Weil theorem [14, Pg. 52] (or see [7, Theorem 3.3]), \(K\) is uniformly \(L^{2}(G)\)-equicontinuous. Then, applying Corollary 3.3, we get that \(\widehat{K}\) has uniform \(\ell^{2}\)- \(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{2}(\mathcal{H}_{\pi})\)-decay.
The following is the converse to the previous proposition.
**Proposition 3.5**.: _Let \(K\) be a bounded subset of \(L^{2}(G).\) If \(\widehat{K}\) has uniform \(\ell^{2}\)- \(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{2}(\mathcal{H}_{\pi})\)-decay then \(K\) is precompact._
Proof.: Using Corollary 3.3, we have that \(K\) is uniformly \(L^{2}(G)\)-equicontinuous. Moreover, since the group \(G\) is itself compact, we may take the compact set in Weil's criterion to be \(G\), so that for any given \(\epsilon>0\) we have that
\[\sup_{f\in K}\|f\chi_{G\setminus G}\|_{2}=0<\epsilon.\]
Since \(K\) is bounded, it follows by the Weil theorem [14, Pg. 52] (or see [7, Theorem 3.1]) that \(K\) is precompact.
Now, we present an analogue of the Pego theorem over compact groups. Combining Corollary 3.3, Proposition 3.4 and Proposition 3.5 gives the following theorem.
**Theorem 3.6**.: _Let \(K\) be a bounded subset of \(L^{2}(G).\) Then, the following are equivalent:_
1. \(K\) _is precompact._
2. \(\widehat{K}\) _has uniform_ \(\ell^{2}\)_-_ \(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{2}(\mathcal{H}_{\pi})\)_-decay._
3. \(K\) _is uniformly_ \(L^{2}(G)\)_-equicontinuous._
The following gives an example of a set \(K\subset L^{2}(G)\) which is not precompact but \(K\) is uniformly \(L^{2}(G)\)-equicontinuous and \(\widehat{K}\) has uniform \(\ell^{2}\)- \(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{2}(\mathcal{H}_{\pi})\)-decay.
**Example 3.7**.: _Consider the set \(K=\{n\chi_{G}:n\in\mathbb{N}\}\subset L^{2}(G)\) as given in [7, Example 4.2]. Since \(K\) consists of only constant functions, it is clear that \(K\) is uniformly \(L^{2}(G)\)-equicontinuous. By Corollary 3.3, \(\widehat{K}\) has uniform \(\ell^{2}\)-\(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{2}(\mathcal{H}_{\pi})\)-decay. Since \(K\) is not bounded, \(K\) is not precompact._
Now, with the help of our main result Theorem 3.6, we show that certain subsets of \(L^{2}(G)\) are precompact.
**Example 3.8**.:
1. _Let_ \(r\in\mathbb{R}.\) _Consider the set_ \(K=\{\frac{r}{n}\chi_{G}:n\in\mathbb{N}\}\subset L^{2}(G).\) _Since_ \(\{\frac{r}{n}:n\in\mathbb{N}\}\) _is bounded and_ \(K\) _consists of only constant functions, it follows that_ \(K\) _is bounded and uniformly_ \(L^{2}(G)\)_-equicontinuous. Therefore, by Theorem_ 3.6_,_ \(K\) _is precompact._
2. _Let_ \(A\) _be a finite subset of_ \(\widehat{G}.\) _Assume that_ \(K\) _is a bounded subset of the linear span of the set consisting of matrix entries_ _[_4_, Pg. 139]_ _of elements in_ \(A.\) _Since the matrix
entries are bounded functions in \(L^{2}(G),\)\(K\) is a bounded subset of \(L^{2}(G).\) For \(f\in K,\) using the Schur orthogonality relations [4, Theorem 5.8] we obtain that_
\[\|\widehat{f}\|_{\ell^{2}\text{-}\underset{\pi\in\widehat{G}\setminus A}{\oplus}\mathcal{B}_{2}(\mathcal{H}_{\pi})}=0.\]
_Thus, \(\widehat{K}\) has uniform \(\ell^{2}\)-\(\underset{\pi\in\widehat{G}}{\oplus}\mathcal{B}_{2}(\mathcal{H}_{\pi})\)-decay. Hence, by Theorem 3.6, \(K\) is precompact. In particular, the convex hull of the set consisting of matrix entries of elements in \(A\) is precompact._
## Acknowledgment
The author is supported by the NBHM post-doctoral fellowship with Ref. number: 0204/3/2021/R&D-II/7356 from the Department of Atomic Energy (DAE), Government of India. The author is grateful to Sundaram Thangavelu for his useful comments.
|
2308.11740
|
Exploration of superconducting multi-mode cavity architectures for
quantum computing
|
Superconducting radio-frequency (SRF) cavities coupled to transmon circuits
have proven to be a promising platform for building high-coherence quantum
information processors. An essential aspect of this realization involves
designing high quality factor three-dimensional superconducting cavities to
extend the lifetime of quantum systems. To increase the computational
capability of this architecture, we are exploring a multimode approach. This
paper presents the design optimization process of a multi-cell SRF cavity to
perform quantum computation based on an existing design developed in the scope
of particle accelerator technology. We perform parametric electromagnetic
simulations to evaluate and optimize the design. In particular, we focus on the
analysis of the interaction between a nonlinear superconducting circuit known
as the transmon and the cavity. This parametric design optimization is
structured to serve as a blueprint for future studies on similar systems.
|
Alessandro Reineri, Silvia Zorzetti, Tanay Roy, Xinyuan You
|
2023-08-22T19:02:23Z
|
http://arxiv.org/abs/2308.11740v1
|
# Exploration of superconducting multi-mode cavity architectures for quantum computing
###### Abstract
Superconducting radio-frequency (SRF) cavities coupled to transmon circuits have proven to be a promising platform for building high-coherence quantum information processors. An essential aspect of this realization involves designing high quality factor three-dimensional superconducting cavities to extend the lifetime of quantum systems. To increase the computational capability of this architecture, we are exploring a multi-mode approach. This paper presents the design optimization process of a multi-cell SRF cavity to perform quantum computation based on an existing design developed in the scope of particle accelerator technology. We perform parametric electromagnetic simulations to evaluate and optimize the design. In particular, we focus on the analysis of the interaction between a nonlinear superconducting circuit known as the transmon and the cavity. This parametric design optimization is structured to serve as a blueprint for future studies on similar systems.
Quantum, Multi-mode, Cavity, Qubit, Simulation, Energy-participation Ratio
## I Introduction
Superconducting radio-frequency (SRF) cavity and transmon coupled systems have the potential to be a primary architecture for the realization of high-coherence quantum information processors [1]. Moreover, such systems show good scalability toward the metric of quantum volume [2, 3]. The information is typically encoded in the lowermost \(N\) states of a cavity mode, forming a more complex object called a "qudit". The control of the cavity states is performed by the transmon [4], a superconducting nonlinear oscillator. One way of scaling up the amount of information encoded in these systems is to use higher-order modes of the cavity, which usually show a lower coherence time than the fundamental mode because of the difference in the modes' quality factor. A better approach is to use a multi-cell cavity containing several modes with nearly the same quality factor, all lying within a single passband. Multi-cell cavities were originally developed in high-energy physics for particle acceleration purposes. Consisting of a repetition of the single-cell cavity geometry, the multi-cell cavity is an intrinsically multi-modal resonator with a number of high quality factor modes equal to the number of cells. Additionally, such a resonator geometry provides a larger Hilbert space in which to implement more complex quantum algorithms.
At Fermilab, the superconducting multi-cell architecture has been studied and manufactured since the year 2000 [6]. Among the most widespread multi-cell designs, the TESLA-shape cavity has shown remarkably high \(Q_{0}\) values for all its fundamental modes, up to \(10^{10}\) [5], where \(Q_{0}\) stands for the cavity's internal quality factor, i.e. the number of oscillations a cavity mode's electric or magnetic field undergoes before being dissipated. However, since this cavity geometry was developed for particle acceleration purposes, the strength of interaction between the transmon and the different cavity modes varies by orders of magnitude. This ultimately poses a major challenge in controlling the transmon-cavity coupled system with the available driving protocols [8, 9]. Therefore, the multi-cell cavity shape has to be modified to decrease the difference in transmon-mode interactions among all the cavity fundamental \(\mathrm{TM}_{010}\) modes. Yet, given the novelty of the architecture, there is no established cavity design optimization process for multi-mode resonator shapes to be used in the field of quantum computation.
In this paper, we present an example of multi-mode cavity design along with the optimization workflow. The goal is to find the most suitable design for quantum computation purposes. The optimization, based on finite-element electromagnetic simulations implemented with the software CST Studio Suite® and Ansys® High-Frequency Structure Solver (HFSS™), aims to determine the cavity geometric parameters of interest and their influence on the cavity-transmon interaction strength for each fundamental eigenmode. At first, by carefully varying the identified parameters, the workflow allows us to obtain a multi-cell resonator shape that meets the desired requirement. Then, we evaluate the interaction between the fields and the transmon by inserting a representative sample of a transmon chip inside the cavity and performing a second series of electromagnetic simulations.
To better illustrate the design optimization workflow, we also include in the paper a preliminary example of a multi-cell cavity design developed using this process. Specifically, the presented design is obtained starting from the aforementioned, already-established TESLA-shape structure. We chose to limit our investigation to a relatively small 3-cell cavity to keep the number of modifiable geometric parameters low, though the process can be generalized to resonators with more complex geometry.
The cavity-transmon interaction parameters we find with this second set of simulations are compatible with some of the available control protocols [8, 9]. Moreover, these parameter ranges also cover the values recently obtained in an experimental study using the same transmon positioning method, though with a different cavity design [14].
## II The TESLA-shape geometry
The starting cavity geometry we considered is the multi-cell superconducting TESLA shape. It is based on the single-cell structure originally developed in the field of particle accelerator technology [6]. The single-cell outline is composed of two elliptical arcs joined together by their common tangent. The cell is then realized by rotating the contour around the beam axis and adding the resulting half-cell to another half-cell (Fig. 1a - 1b). This particular cell shape allows for extremely high quality factor values, up to \(10^{10}\) for the fundamental \(\mathrm{TM}_{010}\) mode of a niobium single-cell cavity [7]. The multi-cell cavity is then realized by combining together several single-cell structures through geometric interfaces called _irises_ (Fig. 1c).
From the electromagnetic point of view, the multi-cell cavity can be modeled as a series of LC circuits linked with coupling capacitances (Fig. 1d). This way, the eigenvalues problem is analytically solved yielding the following expression for the \(n\)-th mode
\[\left(\frac{\nu_{n}}{\nu_{0}}\right)^{2}=1+2k_{cc}\left[1-\cos\left(\frac{n \pi}{N}\right)\right], \tag{1}\]
where \(\nu_{0}\) is the resonant frequency of the LC circuit representing a single cell, \(N\) is the number of single LC elements and the constant \(k_{cc}\), called the _cell-to-cell coupling_, is equal to the ratio of each LC circuit's capacitance over the coupling one, \(k_{cc}=\frac{C}{C_{b}}\). Equation (1) describes well the actual cavity's fundamental \(\mathrm{TM}_{010}\) band: the single, high quality factor mode of the single-cell architecture splits into \(N\) high quality factor modes, as many as the number of cells, which still show the same electric and magnetic field orientations as \(\mathrm{TM}_{010}\) modes [10].
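As an illustration of Eq. (1), the short sketch below computes the fundamental passband of an \(N\)-cell cavity from a single-cell frequency and a cell-to-cell coupling; the numerical values of \(\nu_{0}\) and \(k_{cc}\) are representative placeholders, not simulation results.

```python
import numpy as np

def tm010_band(nu0_ghz, k_cc, n_cells):
    """Frequencies of the fundamental passband from Eq. (1):
    (nu_n / nu_0)^2 = 1 + 2 * k_cc * (1 - cos(n * pi / N)), n = 1..N (n = N is the pi-mode)."""
    n = np.arange(1, n_cells + 1)
    return nu0_ghz * np.sqrt(1.0 + 2.0 * k_cc * (1.0 - np.cos(n * np.pi / n_cells)))

# Representative single-cell frequency and a TESLA-like cell-to-cell coupling (illustrative only).
band = tm010_band(nu0_ghz=5.77, k_cc=0.02, n_cells=3)
print("TM010 modes [GHz]:", np.round(band, 4))
print("passband width [MHz]:", round(1e3 * (band[-1] - band[0]), 1))
```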
At the cavity level, the frequencies are determined by acting on the half-cell geometric parameters. In particular, the last mode's frequency, often referred to as the \(\pi\)-mode frequency, is related to the half-cell length by
\[l=\frac{c}{2\nu_{\pi}}, \tag{2}\]
\(c\) being the speed of light in vacuum. The \(\pi\)-mode frequency also determines the half-cell equatorial radius through the empirical relation
\[r_{0}=\frac{c}{\nu_{\pi}}. \tag{3}\]
On the other hand, the fundamental modes' bandwidth is related to the cell-to-cell coupling factor \(k_{cc}\) via
\[k_{cc}=\frac{\nu_{\pi}^{2}-\nu_{0}^{2}}{\nu_{0}^{2}}. \tag{4}\]
This parameter, at the cavity level, gauges the amount of electromagnetic energy exchanged between each cell. Its magnitude depends on the geometric shape of the iris connecting two subsequent cells, i.e. on its radius \(r_{i}\). For iris radius values of \(0.4\,r_{0}\), characterizing the TESLA-shape design, \(k_{cc}\) is very small, around \(0.02\). Consequently, the spectral width of the fundamental band of cavities operating in the GHz regime is limited to a few tens of MHz.
Moreover, the bandwidth remains constant as the number of cells is increased, resulting in more fundamental modes being inserted in the same frequency range, further reducing the modes' spacing. A very small frequency separation between the modes is not ideal: when resonantly driving one of the cavity modes, the same drive could also off-resonantly drive other modes, causing unwanted classical cross-talk [8].
By inserting a single transmon inside a multi-cell TESLA-shape cavity it is possible to realize a multi-qudit processor. However, the coupling strengths between the transmon and the cavity modes show sizable differences in magnitude. This is due to the variation of the electric field magnitude among the modes at the ends of the cavity, where the transmon is usually placed. As the transmon interacts with the cavity through an electric dipole term, differences in the electric field component parallel to the transmon dipole moment are reflected and amplified into variations among the parameters characterizing each transmon-cavity mode interaction. The differences in the interaction strengths can even span orders of magnitude, resulting in the inability to drive all the eligible modes of the transmon-cavity coupled system with the available control protocols.

Fig. 1: TESLA-shaped SRF cavities. (a) A single-cell cavity. (b) The shape is determined by a set of geometric parameters defining the half-cell outline. (c) Individual cells are joined using irises to make a multi-cell cavity. (d) Electrical circuit model of the cavity, consisting of capacitively-coupled LC circuits.
Contrary to the case of the bandwidth and modes' separation, there is no direct connection in the literature between a target mode's electric field distribution within a portion of the cavity volume and a specific cavity geometric parameter. Nonetheless, over several simulation iterations, we noticed a slight dependence of the electric field variation on the cell-to-cell coupling, as will be discussed in the next section.
## III Cavity design optimization process
In order to meet the aforementioned goals for the multi-cell cavity design improvement we started a design optimization process. The process is based on finite-element electromagnetic simulations involving the cavity alone and it is divided into two parts. In the first one, through the use of a CAD model of the cavity with all the half-cells parameterized independently, the simulations are set up with an eigenmode solver and prompted to compute the cavity's first \(N\) eigenmodes for the considered geometry. Then, to study the effect of the iris radius modification on the bandwidth and modes' separation, the \(r_{i}\) of all irises, all equal, are changed by the same amount and the eigenmode solver is rerun. In the second part of the optimization process, another series of eigenmode simulations are performed to evaluate the distribution of the electric field component parallel to the transmon dipole moment for all the \(\mathrm{TM}_{010}\) modes. The analysis focuses on the neighborhood of the cavity entrances, along a line in which the transmon is typically inserted (Fig. 3a - 3b). Then, by modifying the two ellipses' semi-axis oriented in the \(\hat{\mathrm{u}}_{z}\) direction by the same quantity for every half-cell, the electric field component is re-evaluated and compared to the former TESLA-shape design to assess eventual improvements in its variation among the modes.
All these simulations are performed by modeling the cavity walls as a perfect conductor, effectively neglecting any electromagnetic loss. Additionally, the choice of modifying the geometric parameters of interest by the same quantity for all the half-cells is not a mandatory requirement. Indeed, it is possible to change each variable differently, all the half-cells being parameterized independently. However, modifying them uniformly allows us to keep the cavity geometry simple and maintain the symmetry of the electric field distribution at both ends of the cavity, yet achieving the desired improvements.
We start the optimization process considering a three-cell TESLA-shape design sized to have a \(\mathrm{TM}_{010}\)\(\pi\)-mode frequency of \(\nu_{\pi}=6\ \mathrm{GHz}\), i.e. with a half-cell length of \(l=12.5\ \mathrm{mm}\), an equatorial radius \(r_{0}=23.55\ \mathrm{mm}\), an iris radius \(r_{i}=10\ \mathrm{mm}\) and a radius ratio of \(\frac{r_{i}}{r_{0}}=0.42\). With this choice of parameters, the frequencies of the fundamental modes become 5.77 GHz, 5.88 GHz and 6.08 GHz, resulting in a 200 MHz wide \(\mathrm{TM}_{010}\) passband with a maximum modes' separation of 110 MHz. By increasing the iris radius relative to \(r_{0}\) and evaluating the fundamental band bandwidth, we notice that \(\Delta\nu_{\mathrm{TM}_{010}}\) follows a parabolic relation as a function of \(\frac{r_{i}}{r_{0}}\) (Fig. 2a). Consequently, with the number of modes remaining constant, the modes' separation increases as the iris is enlarged. In addition, for every iris radius value analyzed, all the fundamental \(\mathrm{TM}_{010}\) modes keep their electromagnetic features, always showing the electric field oriented parallel to \(\hat{\mathrm{u}}_{z}\). To avoid an overly sharp and pointy cavity design that would affect its quality factor, we choose a radius ratio value of 0.63, which still provides a sizable improvement in the bandwidth, which becomes 900 MHz wide, and in the modes' separation, respectively 270 MHz between the \(\frac{\pi}{3}\) and the \(\frac{2}{3}\pi\) modes and 630 MHz between the \(\frac{2}{3}\pi\) and the \(\pi\)-mode (Table I).
For the second step of the optimization process, as mentioned before, we act on the parameters \(r_{0}\) and \(a_{2}\), slightly modifying them from the original values of the TESLA-shape design to see how the electric field's \(z\)-component distribution among the modes is affected. With the chosen radius ratio value from the first optimization step, setting \(r_{0}=25.5\ \mathrm{mm}\) for all the half-cells, \(a_{2}=2.7\ \mathrm{mm}\) for the half-cells next to the beam stubs and \(a_{2}=3.8\ \mathrm{mm}\) for the other half-cells, we notice a reduction of 35% in the \(E_{z}\) variation among all the \(\mathrm{TM}_{010}\) modes in the designated neighborhoods of the cavity entrances along the line \(l=(-4,0,z)\), compared to the initial TESLA-shape design (Fig. 3). We refer to the optimized design as _TESLA-like_ for its resemblance to the original TESLA-shape one.
In modifying the cavity geometry, not only the \(\mathrm{TM}_{010}\) band and modes are affected but also the higher-order \(\mathrm{TE}_{111}\) ones. In particular, the two bands, while being fairly far apart in the case of the initial TESLA-shape design, intersect one another in the case of the TESLA-like design, resulting in the \(\mathrm{TE}_{111}\)\(\frac{2}{3}\pi\)-mode being found between the \(\mathrm{TM}_{010}\)\(\frac{2}{3}\pi\)-mode and the \(\pi\)-mode (Fig. 4). Despite that, the unwanted \(\mathrm{TE}_{111}\) modes are not expected to interfere with the correct functioning of the cavity when the transmon is inserted. That is both due to a still fairly large separation between the \(\mathrm{TE}_{111}\)\(\frac{2}{3}\pi\) mode and the nearest \(\mathrm{TM}_{010}\) mode, around 100 MHz (Table I), and to the fact that, being transverse-electric modes, their electric field is oriented almost perpendicularly to the transmon dipole. There might be a small \(E_{z}\) component that couples with the transmon, since the cavity geometry makes the fields bend slightly in the vicinity of the cavity walls, though it is not expected to have a sizable effect on the system.

Fig. 2: \(\mathrm{TM}_{010}\) bandwidth and inter-mode spacing as a function of the \(\frac{r_{i}}{r_{0}}\) ratio. (a) The bandwidth shows a parabolic dependence. (b) Minimum and maximum modes' separation as a function of \(\frac{r_{i}}{r_{0}}\).
The resulting design meets the initial requirements of larger bandwidth and larger modes' separation. Moreover, we also obtain some improvement in the \(E_{z}\) component variation among the modes. None of these modifications changed the nature of the \(\mathrm{TM}_{010}\) modes, which still show their original electric field orientation. Figure 5 shows that the electric field orientation is the same along the \(x=0\) cutting plane and that each mode keeps the number of field intensity maxima unchanged between the two designs.
## IV Transmon-cavity interaction evaluation
The last step of the multi-cell cavity design optimization process involves assessing the interaction between the optimized TESLA-like design and the transmon. In particular, we want to evaluate all the parameters defining the Hamiltonian in the dispersive regime which, for an intrinsic multi-modal electromagnetic environment, reads [11]
\[\begin{split}\hat{H}_{\mathrm{disp}}\simeq&\sum_{m} \hbar\left(\omega_{m}+\gamma_{m}\right)\hat{a}_{m}^{\dagger}\hat{a}_{m}\\ &+\frac{1}{2}\sum_{m}\hbar K_{m}\left(\hat{a}_{m}^{\dagger} \right)^{2}\hat{a}_{m}^{2}\\ &+\sum_{m>n}\hbar\chi_{m,n}\hat{a}_{m}^{\dagger}\hat{a}_{m}\hat{ a}_{n}^{\dagger}\hat{a}_{n},\end{split} \tag{5}\]
where \(\left\{\hat{a}_{m},\hat{a}_{m}^{\dagger}\right\}_{m\in\mathbb{N}}\) are the \(m\)-th mode's annihilation and creation operators (including the transmon ones) and \(\left\{\omega_{m}\right\}_{m\in\mathbb{N}}\) are the uncoupled system's eigenfrequencies (including the transmon one). The knowledge of the dispersive regime parameters \(\left\{\gamma_{m}\right\}_{m\in\mathbb{N}}\), \(\left\{K_{m}\right\}_{m\in\mathbb{N}}\) and \(\left\{\chi_{m,n}\right\}_{m,n\in\mathbb{N},m\neq n}\), named respectively _linear corrections_,
| **Mode** | \(\boldsymbol{\nu}\) **(TESLA-like) [GHz]** | \(\boldsymbol{\nu}\) **(TESLA-shape) [GHz]** |
| --- | --- | --- |
| \(\mathrm{TM}_{010}\) \(\frac{\pi}{3}\) | 5.1813 | 5.7666 |
| \(\mathrm{TM}_{010}\) \(\frac{2}{3}\pi\) | 5.4597 | 5.8780 |
| \(\mathrm{TM}_{010}\) \(\pi\) | 6.0780 | 5.9519 |
| \(\mathrm{TE}_{111}\) \(\frac{\pi}{3}\) | 4.8957 | 6.6127 |
| \(\mathrm{TE}_{111}\) \(\frac{2}{3}\pi\) | 5.9663 | 6.8589 |
| \(\mathrm{TE}_{111}\) \(\pi\) | 7.6470 | 7.4106 |

TABLE I: Band data comparison between TESLA-like and TESLA-shape designs.
Fig. 4: Band diagram comparison between TESLA-shape and TESLA-like cavity designs. The \(\mathrm{TM}_{010}\) and \(\mathrm{TE}_{111}\) bands, well separated in the first case, intersect one another in the second case.
Fig. 5: Electric field distribution for the \(\mathrm{TM}_{010}\) modes along the \(x=0\) plane. (a), (b), (c) show the three fundamental modes for the TESLA-shape cavity and (d), (e), (f) show the same modes for the TESLA-like design. All the \(\mathrm{TM}_{010}\) modes maintain their electromagnetic properties, i.e. their electric and magnetic field orientations, and the number of antinodes remains the same.
Fig. 3: (a) TESLA-shape and (b) TESLA-like CAD models. The black lines with the red segments show the axis \(l=(-4,0,z)\) along which the comparison of \(E_{z}\) variation among the \(\mathrm{TM}_{010}\) modes has been made. The plots (c) and (d) show that, although the \(E_{z}\) mean value stays almost the same, there is a sizable reduction of the component variation in the optimized design case at both ends of the cavity, around 35%.
_self-Kerrs_ and _cross-Kerrs_, allows for a complete assessment of the transmon-cavity interaction in the dispersive-regime working region [4]. The parameters are not all independent of one another; instead, the cross-Kerr interactions can be used to express the other quantities as follows:
\[\gamma_{m} =\frac{1}{2}\sum_{n}\chi_{m,n}, \tag{6}\] \[K_{m} =\frac{\chi_{m,m}}{2}. \tag{7}\]
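For illustration, the following minimal sketch assembles the dispersive Hamiltonian (5) for a two-mode truncation (the transmon plus a single \(\mathrm{TM}_{010}\) mode) using the QuTiP library; all numerical values are assumed for the sake of the example and are not outputs of the simulations described below.

```python
from qutip import destroy, qeye, tensor

# Two-mode truncation of Eq. (5): transmon (mode 0) plus one TM010 cavity mode (mode 1).
# Energies are expressed as frequencies (H / h) in GHz; all values are assumed.
n_fock = 6
a = tensor(destroy(n_fock), qeye(n_fock))   # transmon annihilation operator
b = tensor(qeye(n_fock), destroy(n_fock))   # cavity-mode annihilation operator

omega = [4.5, 5.18]         # bare eigenfrequencies omega_m
gamma = [-1.0e-2, -1.0e-3]  # linear corrections gamma_m
K     = [-2.0e-1, -5.0e-4]  # self-Kerrs K_m
chi01 = -1.0e-3             # transmon-cavity cross-Kerr chi_{0,1}

H = chi01 * a.dag() * a * b.dag() * b                    # cross-Kerr term (m > n)
for m, op in enumerate([a, b]):
    H += (omega[m] + gamma[m]) * op.dag() * op           # corrected harmonic term
    H += 0.5 * K[m] * op.dag() * op.dag() * op * op      # self-Kerr term

print(H.eigenenergies()[:5])   # lowest dressed levels of the truncated model
```

The same construction extends directly to all three \(\mathrm{TM}_{010}\) modes by tensoring additional Fock spaces.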
In addition, evaluating the cross-Kerr interactions is fundamental to implementing quantum computation algorithms on a coupled transmon-cavity system [8, 9]. A last, yet important, parameter that does not pertain directly to the dispersive-regime approximation is the _Rabi coupling_. It measures the strength of the dipole interaction between each cavity \(\mathrm{TM}_{010}\) mode and the transmon mode and can be calculated from the parameters in equation (5) as [11]
\[g_{m}=\sqrt{\frac{-\chi_{0,m}\Delta_{m}\left(\Delta_{m}-\frac{E_{C}}{h}\right)} {\frac{E_{C}}{h}}}. \tag{8}\]
where \(\Delta_{m}\) is the frequency difference between the transmon and the \(m\)-th cavity mode, \(\chi_{0,m}\) is the cross-Kerr interaction strength between the transmon and the \(m\)-th mode, and \(E_{C}\) is the transmon capacitive energy, related to the transmon self-Kerr interaction which, in the literature, is called the _anharmonicity_ [4].
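As a numerical companion to equations (6)-(8), the sketch below computes the linear corrections, self-Kerrs and Rabi couplings from an assumed cross-Kerr matrix, charging energy and mode frequencies; the cross-Kerr and \(E_{C}\) values are illustrative placeholders rather than results of the energy-participation ratio analysis, while the mode frequencies are taken from Table I.

```python
import numpy as np

# Assumed symmetric cross-Kerr matrix chi[m, n] in GHz for the modes
# (0: transmon, 1-3: TM010 pi/3, 2pi/3 and pi modes). Values are placeholders.
chi = np.array([
    [-2.0e-1, -1.0e-3, -9.0e-4, -1.0e-4],
    [-1.0e-3, -5.0e-6,  0.0,     0.0   ],
    [-9.0e-4,  0.0,    -4.0e-6,  0.0   ],
    [-1.0e-4,  0.0,     0.0,    -1.0e-7],
])

gamma = 0.5 * chi.sum(axis=1)   # Eq. (6): linear corrections gamma_m
K = 0.5 * np.diag(chi)          # Eq. (7): self-Kerrs K_m (K[0] plays the role of the anharmonicity)

# Eq. (8): Rabi couplings from the transmon-mode cross-Kerrs chi[0, m], the
# detunings Delta_m and an assumed charging energy E_C / h.
E_C_over_h = 0.200                               # GHz (assumed)
nu_transmon = 4.5                                # GHz
nu_modes = np.array([5.1813, 5.4597, 6.0780])    # TM010 frequencies, Table I (TESLA-like)
Delta = nu_transmon - nu_modes

g = np.sqrt(-chi[0, 1:] * Delta * (Delta - E_C_over_h) / E_C_over_h)
print("gamma [GHz]:", gamma)
print("K     [GHz]:", K)
print("g     [GHz]:", g)
```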
To obtain all the mentioned parameters we use the energy-participation ratio analysis method [12]. This protocol is based on finite-element eigenmode simulations and requires inserting a transmon element inside the cavity volume. It evaluates the fraction of electromagnetic energy stored in the transmon for each of the system's eigenmodes from the \(\mathbf{E}\) and \(\mathbf{H}\) distributions in the volume and, through that, returns all the parameters characterizing equation (5). The simulations are built by inserting a transmon at one end of the cavity on a loss-free silicon rod. The transmon geometry is similar to those already in the literature and consists of a pair of antenna pads, a pair of qubit pads and a rectangle for the actual transmon junction, all modeled as perfect conductors (Fig. 6). An additional boundary condition is set up on the junction rectangle by defining an integration line across it and assigning it a lumped inductance value that corresponds to the transmon linear Josephson inductance \(L_{J_{0}}\). The chosen value \(L_{J_{0}}=11.57~{}\mathrm{nH}\) for this series of simulations corresponds to a transmon frequency of \(\nu_{0}=4.5~{}\mathrm{GHz}\).
Afterwards, we initialize the eigenmode solver with the number of eigenmodes for which the electric and magnetic fields are to be calculated. At this stage, it is of the utmost importance to seed a mesh refinement both on the entire CAD model and locally around the transmon junction, since its small physical dimensions can prevent the solver from detecting its impedance and cause the simulations to fail. Finally, we set up a parametric sweep over the relative position between the cavity end and the transmon to study the behavior of the parameters in the neighborhood where the reduction of the \(E_{z}\) variation among the modes is observed in going from the TESLA-shape to the TESLA-like design.
With the parameter values obtained from the cavity-transmon relative position sweep we plot Fig. 7, showing the behavior of each mode's parameters as a function of the transmon position. Starting from the transmon-mode cross-Kerr interactions, we notice that the parameters \(\chi_{01}\) and \(\chi_{02}\), corresponding to the \(\mathrm{TM}_{010}\)\(\frac{\pi}{3}\)-mode and \(\frac{2}{3}\pi\)-mode, are very close to each other for the considered transmon positions. The cross-Kerr \(\chi_{03}\), instead, is much lower than the other two, by about one order of magnitude at every transmon position. As expected, all the cross-Kerr magnitudes decrease in absolute value as the transmon is moved out of the cavity, owing to the smaller electric field. Moreover, all the modes' cross-Kerrs remain negative throughout the entire position sweep. The sweep also shows that the cavity-transmon interaction can be tuned by repositioning the nonlinear circuit element with respect to the cavity entrance. In this way, the coupled system remains flexible with respect to the applicable driving protocol simply by moving the transmon chip while retaining the multi-cell cavity design: the cross-Kerr interactions go from the order of MHz to tens of kHz, taking the transmon-cavity interaction from the strong dispersive regime to the weak dispersive regime. This allows the system to be driven by control protocols developed for either strongly coupled [8] or weakly coupled [9] systems.
The second set of parameters we evaluate as a function of the transmon position is the self-Kerr set, including the transmon anharmonicity. Since, as expected, the anharmonicity is considerably larger than the cavity modes' self-Kerrs, we report their absolute values in a logarithmic plot (Fig. 8). As highlighted before for the cross-Kerr interactions, the self-Kerr
Fig. 6: CAD model of the transmon inserted at one side of the cavity for EPR simulations.
parameters of the first two \(\mathrm{TM}_{010}\) modes are very similar for every transmon position, whereas the third mode's self-Kerr is roughly one order of magnitude less than the other two throughout the whole position sweep. All the self-Kerr magnitudes and the transmon anharmonicity, although reported in absolute value in the plots, are negative for any transmon position and decrease in absolute value as the transmon is moved outward. The cavity modes' self-Kerr magnitudes are considerably less than the corresponding modes' cross-Kerr values. Consequently, their effect on the coupled system behavior is very weak per electromagnetic excitation of a cavity mode, as stated by equation (5).
Proceeding with the energy-participation ratio analysis, the third set of dispersive-regime parameters we examine as a function of the transmon position is that of the linear corrections. As in the self- and cross-Kerr cases, the linear corrections of the first two \(\mathrm{TM}_{010}\) modes are fairly similar throughout the entire position sweep (Fig. 9). This supports the validity of Eq. (6), which links together all the dispersive-regime parameters. Moreover, the plot shows that for each mode the correction is almost equal to half the sum of the self-Kerr and transmon-mode cross-Kerr magnitudes, implying that the cross-Kerr interactions between the cavity modes are negligible.
Finally, the cross-Kerr interaction values are used to calculate the Rabi couplings through equation (8), and their behavior as a function of the transmon position is plotted. Contrary to all the previous parameters, the Rabi couplings differ from one another throughout the entire transmon position sweep (Fig. 10). However, this does not contradict the similarities between the first two modes' parameters: as can be seen from equation (8), the frequency difference \(\Delta_{m}\) can compensate for the variation in \(g_{m}\), yielding an almost equal cross-Kerr magnitude for the two modes. This fact also explains the smaller magnitude of the parameters characterizing the \(\pi\)-mode: its Rabi coupling is not large enough to compensate for its frequency detuning from the transmon. Consequently, the \(\pi\)-mode cross-Kerr, self-Kerr and linear correction are smaller than those characterizing the other \(\mathrm{TM}_{010}\) modes' interactions.
## V Conclusions
To summarize, we optimize the geometry of a high-coherence multi-mode cavity to improve its performance as a quantum processor. First of all, we systematically broaden the spectral width of the fundamental \(\mathrm{TM}_{010}\) band, consequently increasing the spacing between modes and potentially
Fig. 8: Absolute values of the self-Kerr interactions as a function of transmon position for (a) cavity’s \(\mathrm{TM}_{010}\) modes and (b) transmon.
Fig. 10: Rabi coupling magnitude as a function of transmon position for all \(\mathrm{TM}_{010}\) modes.
Fig. 7: Cross-Kerr interaction between the transmon and the cavity modes as a function of transmon position for all cavity \(\mathrm{TM}_{010}\) modes. The transmon position is measured from a coordinate origin placed at the beginning of the cavity beam stub, so that larger values of \(d\) correspond to the transmon sitting deeper inside the cavity.
Fig. 9: Linear correction magnitude to cavity modes as a function of transmon position for all cavity \(\mathrm{TM}_{010}\) modes.
resolving the issue of frequency crowding. Afterward, we establish a qualitative connection between some of the cavity geometric parameters and the electric field distribution of the fundamental modes. The results allow for more efficient optimization of the cavity design. The parameters found with the transmon position sweep in the second set of simulations allow a certain flexibility in the choice of driving protocol, tuning the transmon-cavity interaction from strongly dispersive to weakly dispersive while retaining the same cavity design. Future work will focus on experiments to test the developed geometry with a prototype TESLA-like 3-cell cavity, which will provide guidance for further scaling up the quantum processor.
|
2306.16583
|
On qualitative aspects of the quantitative subspace theorem
|
We deduce Diophantine arithmetic inequalities for big linear systems and with
respect to finite extensions of number fields. Our starting point is the
Parametric Subspace Theorem, for linear forms, as formulated by Evertse and
Ferretti \cite{Evertse:Ferretti:2013}. Among other features, this viewpoint
allows for a partitioning of the linear scattering, for the Diophantine
Exceptional set, that arises in the Subspace Theorem. Our perspective builds on
our work \cite{Grieve:points:bounded:degree}, combined with earlier work of
Evertse and Ferretti, \cite{Evertse:Ferretti:2013},
Evertse and Schlickewei, \cite{Evertse:Schlickewei:2002}, and others. As an
application, we establish a novel linear scattering type result for the
Diophantine exceptional set that arises in the main Diophantine arithmetic
inequalities of Ru and Vojta \cite{Ru:Vojta:2016}. This result expands, refines
and complements our earlier works (including \cite{Grieve:2018:autissier} and
\cite{Grieve:points:bounded:degree}). A key tool to our approach is the concept
of \emph{linear section} with respect to a linear system. This was defined in
\cite{Grieve:points:bounded:degree}. Another point, which we develop in this
article, is a notion of logarithmic \emph{twisted height functions} for local
Weil functions and linear systems. As an additional observation, which is also
of an independent interest, we use the theory of Iitaka fibrations to determine
the asymptotic nature of such linear sections.
|
Nathan Grieve
|
2023-06-28T22:28:08Z
|
http://arxiv.org/abs/2306.16583v1
|
# On qualitative aspects of the quantitative subspace theorem
###### Abstract.
We deduce Diophantine arithmetic inequalities for big linear systems and with respect to finite extensions of number fields. Our starting point is the Parametric Subspace Theorem, for linear forms, as formulated by Evertse and Ferretti [6]. Among other features, this viewpoint allows for a partitioning of the linear scattering, for the Diophantine Exceptional set, that arises in the Subspace Theorem. Our perspective builds on our work [13], combined with earlier work of Evertse and Ferretti, [6], Evertse and Schlickewei, [7], and others. As an application, we establish a novel linear scattering type result for the Diophantine exceptional set that arises in the main Diophantine arithmetic inequalities of Ru and Vojta [26]. This result expands, refines and complements our earlier works (including [10] and [13]). A key tool to our approach is the concept of _linear section_ with respect to a linear system. This was defined in [13]. Another point, which we develop in this article, is a notion of logarithmic _twisted height functions_ for local Weil functions and linear systems. As an additional observation, which is also of an independent interest, we use the theory of Iitaka fibrations to determine the asymptotic nature of such linear sections.
_Mathematics Subject Classification (2020): 11J87, 14G05, 11G50. Key Words: Parametric Subspace Theorem, Weil and height functions, Diophantine approximation, Geometry of Numbers, Linear Series._
I hold grants RGPIN-2021-03821 and DGECR-2021-00218 from the Natural Sciences and Engineering Research Council of Canada.
Date: June 30, 2023. File: qual-quant-subspace-scattering-22-June-2023.tex
## 1. Introduction
Our starting point here is the refinement of the Quantitative Subspace Theorem, which was given by Evertse and Ferretti [6]. It improved on earlier work of Evertse and Schlickewei, [7], and Evertse, [5], and was derived as a consequence of the Absolute Parametric Subspace Theorem ([6, Theorems 2.1, 2.2 and 2.3]).
In this article, our purpose is to improve upon qualitative aspects of these works. We are motivated by the recent progress in our understanding of Diophantine approximation and K-stability for projective varieties. (See for example [1], [23], [26], [9], [27], [10], [19], [22], [12], [11], [13], [14], [16], [15], [18] and the references therein.)
Building on the viewpoint of Schmidt [28] and Evertse [5], here, we formulate a concept of _density_ for rational points with respect to a linear system. Briefly, a collection of rational
points is _dense_, with respect to a given linear system, if it is contained in no finite union of the linear system's proper linear sections. We refer to Section 4, see also [13, Definition 3.1], for details regarding the notion of linear sections and density of rational points with respect to a given linear system. In Section 2.2, we give a construction of local Weil functions, with respect to a given extension of number fields, via presentations of Cartier divisors. This builds on [10, Section 2], [11, Section 3] and [2, Chapter 2].
Our first result is a novel logarithmic form of the Parametric Subspace Theorem (see Theorem 1.2). It gives inequalities that involve _twisted logarithmic height functions_ for big line bundles on projective varieties. The main context that we consider is Setting 1.1 below. It resembles that of [26, Theorem 2.10], [1, Proposition 4.2] and [10, Proposition 2.1].
**Setting 1.1**.: Let \(\mathbf{K}\) be a number field, \(M_{\mathbf{K}}\) its set of places and \(S\subset M_{\mathbf{K}}\) a finite set. Let \(\overline{\mathbf{K}}\) be an algebraic closure of \(\mathbf{K}\) and \(\mathbf{F}/\mathbf{K}\) a finite extension field \(\mathbf{K}\subseteq\mathbf{F}\subseteq\overline{\mathbf{K}}\).
Let \(L\) be a big line bundle on a geometrically irreducible projective variety \(X\). Assume that both \(X\) and \(L\) are defined over \(\mathbf{K}\). Let
\[V:=\mathrm{H}^{0}(X,L)\]
and set \(n:=\dim V-1\). Unless stated otherwise, we always assume that \(n\geqslant 1\).
Respectively, denote by \(X_{\mathbf{F}}\) and \(L_{\mathbf{F}}\) the base change of \(X\) and \(L\) with respect to the field extension \(\mathbf{F}/\mathbf{K}\). Put
\[V_{\mathbf{F}}:=V\otimes_{\mathbf{K}}\mathbf{F}=\mathrm{H}^{0}(X_{\mathbf{F}},L_{\mathbf{F}}).\]
If \(v\in M_{\mathbf{K}}\) and if \(D\) is a Cartier divisor on \(X\) and defined over \(\mathbf{F}\), then \(\lambda_{\mathcal{D}}(\cdot,v)\) denotes a local Weil function with respect to the place \(v\) and with respect to a fixed choice of presentation \(\mathcal{D}\) defined over \(\mathbf{F}\). For the particular case that \(s\in V_{\mathbf{F}}\) and \(D=\operatorname{div}(s)\) we often write \(\lambda_{s}(\cdot,v)\) in place of \(\lambda_{\mathcal{D}}(\cdot,v)\). (We refer to Section 2.2 for more details.)
We deduce Theorems 1.3 and 1.4 from the following novel logarithmic formulation of the Parametric Subspace Theorem for big linear systems and with respect to a finite extension of number fields. This is the content of Theorem 1.2. We prove it in Section 5. It is a consequence of a more robust relative formulation of [6, p. 515]. (See Theorem 3.1.)
**Theorem 1.2** (Parametric Subspace Theorem for big linear systems).: _Consider the situation of Setting 1.1 and for each \(v\in S\) choose a collection of linearly independent global sections_
\[s_{v0},\dots,s_{vn}\in V_{\mathbf{F}}.\]
_Suppose that \(\epsilon>0\) is a fixed positive real number. Fix a collection of real numbers \(c_{vi}\in\mathbb{R}\) which has the property that_
\[\sum_{i=0}^{n}c_{vi}=0\text{ for all }v\in S.\]
_Then there exist a real number \(Q_{0}>1\) and a finite collection of proper linear sections_
\[\Lambda_{1},\dots,\Lambda_{t}\subsetneq X\]
_with respect to the linear series \(|V|\), which are defined over \(\mathbf{K}\) and which have the property that for all \(Q\geqslant Q_{0}\) there is a linear section_
\[\Lambda_{j_{Q}}\in\{\Lambda_{1},\ldots,\Lambda_{t}\}\]
_which contains all \(\mathbf{K}\)-rational points_
\[x\in\left(X\setminus\left(\operatorname{Bs}(|V|)\bigcup\bigcup_{\begin{subarray}{c}v\in S\\ i=0,\ldots,n\end{subarray}}\operatorname{Supp}(s_{vi})\right)\right)(\mathbf{K})\]
_that satisfy the inequalities that_
\[\sum_{v\in S}\left(\lambda_{s_{vi}}(x,v)+c_{vi}\cdot\log(Q)\right)\geqslant h _{L}(x)+\epsilon\cdot\log(Q)+\operatorname{O}(1)\]
_for all \(i=0,\ldots,n\)._
_In particular, the collection of such points is not dense with respect to \(|V|\)._
As an application of Theorem 1.2, we deduce a complementary form of a celebrated theorem of Faltings and Wüstholz [8, Theorem 8.1].
**Theorem 1.3** (Faltings and Wüstholz inequalities for big linear systems).: _Consider the situation of Setting 1.1 and for each \(v\in S\) choose a collection of linearly independent global sections_
\[s_{v0},\ldots,s_{vn}\in V_{\mathbf{F}}.\]
_Finally, fix a collection of real numbers \(d_{vi}\), for all \(v\in S\) and all \(i=0,\ldots,n\), which have the property that_
\[\sum_{v\in S}\sum_{i=0}^{n}d_{vi}>n+1.\]
_Then the set of solutions_
\[x\in\left(X\setminus\left(\operatorname{Bs}(|V|)\bigcup\bigcup_{\begin{subarray}{c}v\in S\\ i=0,\ldots,n\end{subarray}}\operatorname{Supp}(s_{vi})\right)\right)(\mathbf{K})\]
_of the system of inequalities_
\[\lambda_{s_{vi}}(x,v)-d_{vi}\cdot h_{L}(x)+\operatorname{O}_{v}(1)\geqslant 0\]
_for all \(i=0,\ldots,n\) and all \(v\in S\) is not dense with respect to \(|V|\)._
As an application of Theorem 1.3, we obtain a _qualitative linear scattering_ type result which improves upon our understanding of the Subspace Theorem. It provides a qualitative perspective to the quantitative work of Evertse [5] and Evertse and Schlickewei [7]. We
refer to [5, p. 240] for the concept of _linear scattering_ in the classical case of the Subspace Theorem.
The proof of Theorem 1.4, see Section 7, expands on the approach of [7, Section 21]. It follows the suggestion given in [6, p. 514]. A key role is played by Lemma 7.1 which is a logarithmic form of [4, Lemma 4].
**Theorem 1.4** (Linear scattering and the Subspace Theorem for big linear systems).: _Consider the situation of Setting 1.1 and for each \(v\in S\) fix a collection of linearly independent global sections_
\[s_{v0},\ldots,s_{vn}\in V_{\mathbf{F}}.\]
_Fix a positive and sufficiently small real number \(\epsilon>0\) and consider the set of solutions_
\[x\in\left(X\setminus\left(\operatorname{Bs}(|V|)\bigcup\bigcup_{\begin{subarray} {c}v\in S\\ i=0,\ldots,n\end{subarray}}\operatorname{Supp}(s_{vi})\right)\right)(\mathbf{ K})\]
_with sufficiently large height_
\[h_{L}(x)\gg 0\]
_to the inequality_
\[\sum_{v\in S}\sum_{i=0}^{n}\lambda_{s_{vi}}(x,v)\geqslant(n+1+\epsilon)h_{L}( x)+\operatorname{O}(1). \tag{1.1}\]
_Then this solution set is not dense with respect to \(|V|\)._
_In more specific terms, the solution set admits a decomposition into finitely many subsets such that for each subset there exists a collection of real numbers \(e_{vi}\), for each \(v\in S\) and all \(i=0,\ldots,n\), such that_
\[\sum_{v\in S}\sum_{i=0}^{n}e_{vi}>n+1\]
_and such that all solutions in this subset satisfy the inequalities that_
\[\lambda_{s_{vi}}(x,v)-e_{vi}\cdot h_{L}(x)+\operatorname{O}_{v}(1)\geqslant 0\]
_for all \(v\in S\) and all \(i=0,\ldots,n\)._
_In particular, the collection of solutions to (1.1) that are in the subset corresponding to the weights \(e_{vi}\) is contained in a finite union of proper linear sections of \(X\) with respect to the linear series \(|V|\)._
Theorem 1.4 implies a more general result, which is in the spirit of Vojta's formulation of the Subspace Theorem for hyperplanes in general position. (See for instance [2, Theorem 7.2.9].) We formulate this form of the Subspace Theorem, for big linear systems and global sections in _linearly general position_, as Corollary 1.5 below.
**Corollary 1.5** (Subspace Theorem for big linear systems and global sections in linearly general position).: _Consider the situation of Setting 1.1 and for all \(v\in S\) let_
\[s_{v0},\ldots,s_{vn_{v}}\in V_{\mathbf{F}}\]
_be a collection of global sections of \(L_{\mathbf{F}}\), with \(n_{v}\geqslant n\), which have the property that all subsets of cardinality not exceeding \(n+1\) are \(\mathbf{F}\)-linearly independent. Fix a positive and sufficiently small real number \(\epsilon>0\) and consider the set of solutions_
\[x\in\left(X\setminus\left(\mathrm{Bs}(|V|)\bigcup\bigcup_{\begin{subarray}{c} v\in S\\ i=0,\ldots,n\end{subarray}}\mathrm{Supp}(s_{vi})\right)\right)(\mathbf{K})\]
_with sufficiently large height_
\[h_{L}(x)\gg 0\]
_to the inequality_
\[\sum_{v\in S}\sum_{i=0}^{n_{v}}\lambda_{s_{vi}}(x,v)\geqslant(n+1+\epsilon)h_ {L}(x)+\mathrm{O}(1). \tag{1.2}\]
_Then this solution set is not dense with respect to \(|V|\)._
_In more specific terms, the solution set is a finite union of proper linear sections with respect to \(|L|\). Furthermore, it admits a decomposition into finitely many subsets such that for each subset there exist real numbers \(e_{vi}\), for each \(v\in S\) and all \(i=0,\ldots,n\), such that_
\[\sum_{v\in S}\sum_{i=0}^{n}e_{vi}>n+1\]
_and such that for some linearly independent collection of sections_
\[\{s_{vj_{0}},\ldots,s_{vj_{n}}\}\subseteq\{s_{v0},\ldots,s_{vn_{v}}\}\]
_all except for perhaps finitely many solutions in this subset satisfy the inequalities that_
\[\lambda_{s_{vj_{i}}}(x,v)-e_{vi}\cdot h_{L}(x)+\mathrm{O}_{v}(1)\geqslant 0\]
_for all \(v\in S\) and all \(i=0,\ldots,n\)._
_In particular, the collection of solutions to (1.2) that are in the subset corresponding to the weights \(e_{vi}\) are contained in a finite union of proper linear sections of \(X\) with respect to the linear series \(|V|\)._
As an application of Theorem 1.4, in the form of Corollary 1.5, we deduce qualitative scattering information that arises in the conclusion of the Ru-Vojta Arithmetic General Theorem ([26, p. 964]). This is the content of Theorem 1.6. In its conclusion, the description of the Diophantine exceptional set refines, in the case of \(\mathbf{K}\)-rational points, and builds on our main result from [13]. The statement of Theorem 1.6 requires the notion of stable base locus. We refer to Section 2.3 for further details.
**Theorem 1.6** (Arithmetic General Theorem with linear scattering).: _Working over a base number field \(\mathbf{K}\), fix a finite set of places \(S\subset M_{\mathbf{K}}\). Let \(L\) be a big line bundle on a geometrically irreducible projective variety \(X\). Assume that \(X\) and \(L\) are both defined over \(\mathbf{K}\). Let \(D_{1},\ldots,D_{q}\) be a collection of nonzero properly intersecting effective Cartier divisors on \(X\) and defined over \(\mathbf{F}\). Then there are optimal constants \(\eta(L,D_{i})\), for \(i=1,\ldots,q\), which are such that if \(\epsilon>0\) is a sufficiently small real number, then the inequality_
\[\sum_{i=1}^{q}\eta(L,D_{i})m_{S}(x,D_{i})\leqslant(1+\epsilon)h_{L}(x)\]
_holds true for all \(x\in X(\mathbf{K})\) outside of a Zariski closed subset \(Z\subsetneq X\). Here, \(m_{S}(\cdot,D_{i})\), for \(i=1,\ldots,q\), is the proximity function of \(D_{i}\) with respect to \(S\)._
_Moreover, the Diophantine exceptional set \(Z\) may be described as_
\[Z=\operatorname{Bs}(L)\bigcup\left(\bigcup_{i=1}^{q}\operatorname{Supp}(D_{i })\right)\bigcup\left(\Lambda_{1}\bigcup\ldots\bigcup\Lambda_{\ell}\right)\]
_for \(\operatorname{Bs}(L)\) the stable base locus of \(L\) and \(\Lambda_{1},\ldots,\Lambda_{\ell}\) linear sections of the complete linear system \(|L^{\otimes m}|\) for some suitably large positive integer \(m\)._
_Finally, all but perhaps finitely many points in the union of the linear sections \(\Lambda_{1}\bigcup\ldots\bigcup\Lambda_{\ell}\), admit a linear scattering type decomposition into finitely many subsets such that the following is true: for each subset there exist real numbers \(e_{vi}\), for all \(v\in S\) and all \(i=0,\ldots,n_{m}\), such that_
\[\sum_{v\in S}\sum_{i=0}^{n_{m}}e_{vi}>n_{m}+1\]
_and for each \(v\in S\), linearly independent sections_
\[s_{vj_{0}},\ldots,s_{vj_{n_{m}}}\in\operatorname{H}^{0}(X_{\mathbf{F}},L_{\mathbf{F}}^{\otimes m})\]
_which are such that all solutions in this subset satisfy the inequalities that_
\[\lambda_{s_{vj_{i}}}(x,v)-e_{vi}h_{L^{\otimes m}}(x)+\operatorname{O}_{v}(1) \geqslant 0\]
_for all \(v\in S\) and all \(i=0,\ldots,n_{m}\)._
Theorem 1.6 complements our results from [13]. Its proof is given in Section 8 and is based on the Ru-Vojta filtration construction (which has origins in the work of Corvaja-Zannier [3], Levin [21], Autissier [1], Ru [25] and others). In [13], this is exposed in detail and expanded upon to treat the case of points of bounded degree. Here, our novel description of the Diophantine exceptional set that arises in its conclusion is made possible by applying the Subspace Theorem, in the form of Theorem 1.4, which we derive here from its parametric formulation (Theorem 1.2).
It is also important to make note of the bigness assumption in the statement of Theorems 1.3, 1.4 and 1.6 and Corollary 1.5. Indeed, as is indicated in the proof of these results, the
Northcott property for big line bundles plays an important role. On the other hand, recall that there exist Subspace Theorem inequalities for linear systems without the assumption of bigness for the given linear system. (See for instance [26, Theorem 2.10] and [13, Theorem 3.3].) Here the bigness assumption is used to deduce the linear scattering Subspace Theorem result (Theorem 1.4) from Theorem 1.2.
As some final observations, and to help place matters into perspective, in Section 9 we apply the theory of Iitaka fibrations to determine the _asymptotic nature_ of the _linear sections_ that are associated to a linear series. We refer to Section 4, see Definition 4.1, for a precise definition of our concept of linear section with respect to a linear system. It builds on [13, Definition 3.1].
### Acknowledgements
I thank the Natural Sciences and Engineering Research Council of Canada for their support via my grants RGPIN-2021-03821 and DGECR-2021-00218. This work benefited from trips to BIRS, Banff, during the Summer of 2022. It is my pleasure to thank colleagues for their interest, encouragement and engagement on related topics. Finally, I thank anonymous referees for carefully reading my manuscript and for offering helpful suggestions.
## 2. Preliminaries
In this article, our conventions and notations closely resemble those of [2] and [20]. We briefly indicate some of the main points here.
### Number fields, absolute values and multiplicative projective heights
Let \(\mathbf{K}\) be a number field with set of places \(M_{\mathbf{K}}\) and fix an algebraic closure \(\overline{\mathbf{K}}\).
If \(v\in M_{\mathbf{K}}\), then \(|\cdot|_{v}\) is its normalized absolute value. Thus, as in [2, p. 11], if \(v\in M_{\mathbf{K}}\) lies above \(p\in M_{\mathbb{Q}}\), then the restriction of \(|\cdot|_{v}\) to \(\mathbb{Q}\) is \(|\cdot|_{p}^{[\mathbf{K}_{v}:\mathbb{Q}_{p}]/[\mathbf{K}:\mathbb{Q}]}\). Here, \(\mathbf{K}_{v}\) and \(\mathbb{Q}_{p}\) are the respective completions of \(\mathbf{K}\) and \(\mathbb{Q}\) at \(v\) and \(p\). By these conventions, the product formula holds true with multiplicities equal to one. Explicitly
\[\prod_{v\in M_{\mathbf{K}}}|\alpha|_{v}=1\text{ for }\alpha\in\mathbf{K}^{\times}.\]
If \(\mathbf{F}/\mathbf{K}\) is a finite extension field, \(\mathbf{K}\subseteq\mathbf{F}\subseteq\overline{\mathbf{K}}\), and \(v\in M_{\mathbf{K}}\), then choose \(w\in M_{\mathbf{F}}\) with \(w\mid v\) and put
\[|\cdot|_{v,\mathbf{K}}=|\cdot|_{v,\mathbf{F}/\mathbf{K}}:=\left|\mathrm{N}_{ \mathbf{F}_{w}/\mathbf{K}_{v}}(\cdot)\right|_{v}^{\frac{1}{[\mathbf{F}_{w}: \mathbf{K}_{v}]}}.\]
Then \(|\cdot|_{v,\mathbf{K}}\) is an extension of \(|\cdot|_{v}\) to \(\mathbf{F}\).
In terms of multiplicative projective heights, recall, that if
\[\mathbf{x}=[x_{0}:\cdots:x_{n}]\in\mathbb{P}^{n}(\mathbf{K})\]
then its _multiplicative height_ with respect to the tautological line bundle \(\mathcal{O}_{\mathbb{P}^{n}}(1)\) is defined to be
\[H_{\mathcal{O}_{\mathbb{P}^{n}}(1)}(\mathbf{x})=\prod_{v\in M_{\mathbf{K}}}| \mathbf{x}|_{v}\text{ where }|\mathbf{x}|_{v}=\max_{i=0,\ldots,n}|x_{i}|_{v}.\]
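As a concrete illustration in the simplest case \(\mathbf{K}=\mathbb{Q}\): by the product formula, after clearing denominators and dividing by the greatest common divisor, only the archimedean factor contributes, so the multiplicative height may be computed as in the following minimal sketch (the example point is an assumed illustrative choice).

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def multiplicative_height_Q(coords):
    """Multiplicative height of [x_0 : ... : x_n] in P^n(Q) w.r.t. O(1).

    Clear denominators and divide by the gcd; by the product formula the
    nonarchimedean factors then all equal 1, so H is the maximum |x_i|.
    """
    coords = [Fraction(c) for c in coords]
    lcm = reduce(lambda a, b: a * b // gcd(a, b), (c.denominator for c in coords), 1)
    ints = [int(c * lcm) for c in coords]
    g = reduce(gcd, (abs(i) for i in ints))
    return max(abs(i // g) for i in ints)

# Example: [1/2 : 3 : -5] = [1 : 6 : -10], so the height is 10.
print(multiplicative_height_Q([Fraction(1, 2), 3, -5]))
```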
### Local Weil and logarithmic height functions
Consider the situation of Setting 1.1. Building on the approach of [10, Section 2] and [11, Section 3] we develop further the theory of local Weil functions with respect to the field extension \(\mathbf{F}/\mathbf{K}\).
Let \(D\) be a Cartier divisor on \(X_{\mathbf{F}}\) with line bundle \(\mathcal{O}_{X_{\mathbf{F}}}(D)\) and meromorphic section \(s=s_{D}\). By a slight abuse of notation we also say that \(D\) is a Cartier divisor on \(X\) and defined over \(\mathbf{F}\). Further, we understand the set \((X\setminus\operatorname{Supp}(D))(\mathbf{K})\) to mean the set of those \(\mathbf{K}\)-rational points \(x\in X(\mathbf{K})\) whose image in \(X_{\mathbf{F}}(\mathbf{F})\) does not lie in the support of \(D\). When no confusion is likely, in what follows, we employ variants of this notation.
Fixing globally generated line bundles \(N\) and \(M\) on \(X_{\mathbf{F}}\), with the property that
\[\mathcal{O}_{X_{\mathbf{F}}}(D)\simeq N\otimes M^{-1}\]
together with a choice of respective generating sections
\[\mathbf{s}:=(s_{0},\ldots,s_{k})\text{ and }\mathbf{t}:=(t_{0},\ldots,t_{\ell})\]
yields the data of a _presentation_ for \(D\) (and defined over \(\mathbf{F}\)). (Compare with [2, SS2.2.1].)
For example, as in [17, Exercise II.7.5], if \(N\) is very ample and if for some positive integer \(m>0\) the line bundle
\[N^{\otimes m}\otimes\mathcal{O}_{X_{\mathbf{F}}}(D)\]
is globally generated, then setting
\[M:=N^{\otimes(m+1)}\]
we may write
\[\mathcal{O}_{X_{\mathbf{F}}}(D)\simeq M^{-1}\otimes\left(N^{\otimes(m+1)} \otimes\mathcal{O}_{X_{\mathbf{F}}}(D)\right);\]
in doing so, we obtain an expression of \(\mathcal{O}_{X_{\mathbf{F}}}(D)\) as a difference of two very ample line bundles.
Denoting the data of such a presentation as
\[\mathcal{D}=(s_{D};N,\mathbf{s};M,\mathbf{t}) \tag{2.1}\]
the corresponding _local Weil function for \(D\)_ with respect to a place \(v\in M_{\mathbf{K}}\) is given by
\[\lambda_{\mathcal{D}}(x,v)=\lambda_{s}(x,v):=\max_{j=0,\ldots,k}\min_{i=0,\ldots,\ell}\log\left|\frac{s_{j}}{t_{i}s_{D}}(x)\right|_{v,\mathbf{K}} \tag{2.2}\]
for
\[x\in\left(X\setminus\operatorname{Supp}(D)\right)(\mathbf{K}). \tag{2.3}\]
In (2.2), we have fixed \(w\in M_{\mathbf{F}}\) with \(w\mid v\).
Note that in (2.3) if \(D\) is strictly defined over \(\mathbf{F}\), in the sense that \(D\) does not descend with respect to the field extension \(\mathbf{F}/\mathbf{K}\), then
\[\left(X\setminus\operatorname{Supp}(D)\right)(\mathbf{K})=X(\mathbf{K}).\]
In either case, the intuitive content of (2.2) is that it measures the \(v\)-adic size of, or rather the negative logarithmic \(v\)-adic distance to, the meromorphic section \(s_{D}\), which is defined over \(\mathbf{F}\) and not necessarily over \(\mathbf{K}\), when evaluated at the \(\mathbf{K}\)-points of \(X\).
Recall, the finite set of places \(S\subset M_{\mathbf{K}}\). The _proximity function_ of \(D\) with respect to \(S\) is defined to be
\[m_{S}(x,D):=\sum_{v\in S}\lambda_{\mathcal{D}}(x,v).\]
As in [2, Theorem 2.2.11], if \(\mathcal{D}^{\prime}\) and \(\mathcal{D}\) are two presentations of \(D\), then the corresponding local Weil functions with respect to a place \(v\in M_{\mathbf{K}}\) are related by
\[\lambda_{\mathcal{D}^{\prime}}(\cdot,v)=\lambda_{\mathcal{D}}(\cdot,v)+ \operatorname{O}(1).\]
Further, as in [2, Proposition 2.3.9], if \(D\) is an effective Cartier divisor on \(X_{\mathbf{F}}\), then there exists a presentation \(\mathcal{D}\) which has the property that
\[\lambda_{\mathcal{D}}(x,v)\geqslant 0\text{ for all }x\in\left(X\setminus \operatorname{Supp}(D)\right)(\mathbf{K}).\]
The concept of presentation for Cartier divisors is significant from the viewpoint of locally bounded metrics and local Weil functions. (See [2, SS2.2-2.7] for more details.) This is illustrated by the following example, which is important to what we do here.
**Example 2.1** (Compare with [2, Examples 2.7.4 and 2.7.7]).: Fix a place \(v\in M_{\mathbf{K}}\). On projective \(n\)-space \(\mathbb{P}^{n}\), the tautological line bundle \(\mathcal{O}_{\mathbb{P}^{n}}(1)\) has the standard metric \(||\cdot||_{v,\mathbf{K}}\). It is _locally bounded_, in the sense of [2, Definition 2.7.1], and is defined by the condition that
\[||\ell(\mathbf{x})||_{v,\mathbf{K}}=||\ell(\mathbf{x})||_{v,\mathbf{F}/ \mathbf{K}}:=\frac{|\ell(\mathbf{x})|_{v,\mathbf{K}}}{\max\limits_{0\leqslant j \leqslant n}|x_{j}|_{v,\mathbf{K}}} \tag{2.4}\]
for each linear form
\[\ell(x)\in\mathbf{F}[x_{0},\dots,x_{n}]. \tag{2.5}\]
Each such linear form (2.5) determines a presentation
\[\mathcal{H}:=(\ell(x);\mathcal{O}_{\mathbb{P}^{n}_{\mathbf{F}}}(1),(x_{0}, \dots,x_{n});\mathcal{O}_{\mathbb{P}^{n}_{\mathbf{F}}},(1)) \tag{2.6}\]
of the hyperplane
\[H:=\operatorname{div}(\ell(x))\]
that it defines. There is a _local Weil function_
\[\lambda_{\mathcal{H}}(\mathbf{x},v)=\lambda_{\ell(x)}(\mathbf{x},v)\]
for \(H\) with respect to the presentation (2.6) and the place \(v\). It has domain the set of points
\[\mathbf{x}\in\left(\mathbb{P}^{n}\setminus\mathrm{Supp}(H)\right)(\mathbf{K})\]
and is defined by the condition that
\[\lambda_{\mathcal{H}}(\mathbf{x},v) =-\log||\ell(\mathbf{x})||_{v,\mathbf{K}}\] \[=\max_{j=0,\ldots,n}\log\left|\frac{x_{j}}{\ell(\mathbf{x})} \right|_{v,\mathbf{K}}. \tag{2.7}\]
The local Weil function (2.7) is the local Weil function that is determined by \(||\cdot||_{v,\mathbf{K}}\), the locally bounded metric (2.4) on the tautological line bundle \(\mathcal{O}_{\mathbb{P}^{n}}(1)\), with respect to the presentation (2.6).
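For instance, with \(\mathbf{K}=\mathbf{F}=\mathbb{Q}\), \(v=p\) a prime, \(\ell(x)=x_{0}-x_{1}\) and the illustrative point \(\mathbf{x}=[1:1+p^{k}:1]\in\mathbb{P}^{2}(\mathbb{Q})\), one computes
\[\lambda_{\mathcal{H}}(\mathbf{x},p)=\max_{j=0,1,2}\log\left|\frac{x_{j}}{\ell(\mathbf{x})}\right|_{p}=\log\left|\frac{1}{-p^{k}}\right|_{p}=k\log p,\]
so the local Weil function grows as \(\mathbf{x}\) approaches the hyperplane \(H=\{x_{0}=x_{1}\}\) \(p\)-adically, while \(\lambda_{\mathcal{H}}(\mathbf{x},\infty)\) remains bounded in \(k\).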
Returning to general considerations, recall the construction of logarithmic height functions from the viewpoint of local Weil functions [2, SS2.3.3]. Let \(D\) be a Cartier divisor on \(X\) defined over \(\mathbf{K}\) and with fixed presentation
\[\mathcal{D}:=(s_{D};N,\mathbf{s};M,\mathbf{t})\]
defined over \(\mathbf{K}\). Then for each \(\mathbf{K}\)-point \(x\in X(\mathbf{K})\) there exist global sections
\[s_{i}\in\mathrm{H}^{0}(X,N)\text{ and }t_{j}\in\mathrm{H}^{0}(X,M)\]
which have the property that
\[s_{i}(x)\neq 0\text{ and }t_{j}(x)\neq 0.\]
As a consequence, the line bundle \(\mathcal{O}_{X}(D)\) admits a meromorphic section \(s:=s_{i}\otimes t_{j}^{-1}\), so that if \(D(s):=\mathrm{div}(s)\) is the Cartier divisor that corresponds to \(s\), then \(x\not\in\mathrm{Supp}(D(s))\).
In particular,
\[\mathcal{D}(s):=(s;N,\mathbf{s};M,\mathbf{t})\]
is a presentation of the Cartier divisor \(D(s)\).
In this way, up to a constant term \(\mathrm{O}(1)\), the _logarithmic height function_\(h_{\mathcal{O}_{X}(D)}(\cdot)\) may be described as
\[h_{\mathcal{O}_{X}(D)}(x):=\sum_{v\in M_{\mathbf{K}}}\lambda_{\mathcal{D}(s)} (x,v)+\mathrm{O}(1).\]
We conclude this subsection by recalling the description of logarithmic projective heights via the viewpoint of presentations of Cartier divisors.
**Example 2.2** ([2, Example 2.3.2]).: The coordinate hyperplane
\[H:=\{\mathbf{x}\in\mathbb{P}^{n}(\mathbf{K}):x_{0}=0\}\subseteq\mathbb{P}_{ \mathbf{K}}^{n}\]
admits the presentation
\[\mathcal{H}=(x_{0};\mathcal{O}_{\mathbb{P}^{n}}(1),(x_{0},\ldots,x_{n}); \mathcal{O}_{\mathbb{P}^{n}},(1)).\]
Thus, if \(\mathbf{x}\in\mathbb{P}^{n}(\mathbf{K})\) and \(x_{0}\neq 0\), then
\[h_{\mathcal{O}_{\mathbb{P}^{n}}(1)}(\mathbf{x}) =\sum_{v\in M_{\mathbf{K}}}\max_{k}\log\left|\frac{x_{k}}{x_{0}} \right|_{v}\] \[=\sum_{v\in M_{\mathbf{K}}}\lambda_{\mathcal{H}}(\mathbf{x},v).\]
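For a concrete instance, take \(\mathbf{x}=[2:6]=[1:3]\in\mathbb{P}^{1}(\mathbb{Q})\); then
\[h_{\mathcal{O}_{\mathbb{P}^{1}}(1)}(\mathbf{x})=\sum_{v\in M_{\mathbb{Q}}}\max_{k}\log\left|\frac{x_{k}}{x_{0}}\right|_{v}=\log 3+\max\{0,\log|3|_{3}\}+\cdots=\log 3,\]
where only the archimedean place contributes, in agreement with \(H_{\mathcal{O}_{\mathbb{P}^{1}}(1)}(\mathbf{x})=3\).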
### Asymptotics of linear series
We fix some notation and conventions and recall a few concepts that pertain to asymptotic aspects of linear series. Our approach follows that of [20] closely.
In what follows \(X\) is a geometrically irreducible projective variety over a number field \(\mathbf{K}\). We let \(L\) denote a line bundle on \(X\) and defined over \(\mathbf{K}\). We denote by \(X_{\overline{\mathbf{K}}}\) and \(L_{\overline{\mathbf{K}}}\) their base change to \(\overline{\mathbf{K}}\). Note that the context that we consider here is slightly more general than that of Setting 1.1.
Recall the _semigroup_ of \(L\)
\[\mathrm{N}(X,L):=\{m\geqslant 0:\mathrm{H}^{0}(X,L^{\otimes m})\neq 0\}.\]
When \(\mathrm{N}(X,L)\neq(0)\), all sufficiently large elements of \(\mathrm{N}(X,L)\) are multiples of a largest single natural number \(e=e(L)\geqslant 1\); this is \(L\)'s _exponent_.
Let
\[\pi\colon Y\to X_{\overline{\mathbf{K}}}\]
be the normalization of \(X_{\overline{\mathbf{K}}}\). Then the _Iitaka dimension_ of \(L\) can be described as
\[\kappa(X,L)=\kappa(Y,\pi^{*}L):=\max_{m\in\mathrm{N}(Y,\pi^{*}L)}\{\dim\phi_{| \pi^{*}L^{\otimes m}|}(Y)\}.\]
Here
\[\phi_{m}=\phi_{|\pi^{*}L^{\otimes m}|}\colon Y\dasharrow\mathbb{P}^{n_{m}} \tag{2.8}\]
is the rational map that is defined by the complete linear series \(|\pi^{*}L^{\otimes m}_{\overline{\mathbf{K}}}|\). In what follows, let \(Y_{m}\) be the closure of the image of the rational map (2.8).
Recall that \(L\) is said to be _big_ when
\[\kappa(X,L)=\dim X.\]
When \(\kappa(X,L)\geqslant 0\), given \(m\geqslant 1\), let \(\mathrm{Bs}(|mL_{\overline{\mathbf{K}}}|)\) be the base locus of \(|L^{\otimes m}_{\overline{\mathbf{K}}}|\). (If \(|L^{\otimes m}|=\varnothing\), then \(\mathrm{Bs}(|mL_{\overline{\mathbf{K}}}|)=X_{\overline{\mathbf{K}}}\).) The _stable base locus_ of \(L\) is the Zariski closed subset
\[\mathrm{Bs}(L)=\bigcap_{m\geqslant 1}\mathrm{Bs}(|mL_{\overline{\mathbf{K}}}|);\]
there exists a positive integer \(m_{0}>0\) which has the property that
\[\mathrm{Bs}(L)=\mathrm{Bs}(|mm_{0}L_{\overline{\mathbf{K}}}|)\]
for all \(m\gg 0\), [20, Proposition 2.1.21].
For later use, we recall the main theorem about _Iitaka fibrations_. (See, for instance [20, Theorem 2.1.33] or [24, Lemma 1.2].) In particular, if \(\kappa(X,L)>0\), then for all sufficiently large \(m\in\mathrm{N}(X,L)\), the rational mappings
\[\phi_{m}=\phi_{|\pi^{*}L^{\otimes m}|}\colon Y\dasharrow Y_{m}\subseteq \mathbb{P}^{n_{m}}\]
are all birationally equivalent to some _algebraic fibre space_
\[\phi_{\infty}\colon X_{\infty}\to Y_{\infty}\]
between normal projective varieties \(X_{\infty}\) and \(Y_{\infty}\). Especially, the morphism \(\phi_{\infty}\) is surjective and has connected fibres. It is unique up to birational equivalence and is called the _Iitaka fibration_ of \(L\).
## 3. A formulation of the Parametric Subspace Theorem
Fix a finite subset \(S\subset M_{\mathbf{K}}\) and for each place \(v\in S\) fix a collection of \(\mathbf{F}\)-linearly independent linear forms
\[\ell_{v0}(x),\ldots,\ell_{vn}(x)\in\mathbf{F}[x_{0},\ldots,x_{n}]. \tag{3.1}\]
Fix a real number \(Q\geqslant 1\) together with a collection of real numbers \(c_{vi}\in\mathbb{R}\), for all \(v\in S\) and \(i=0,\ldots,n\), which have the property that
\[\sum_{i=0}^{n}c_{vi}=0.\]
For points of projective \(n\)-space \(\mathbf{x}\in\mathbb{P}^{n}(\mathbf{K})\) put
\[H_{Q}(\mathbf{x}):=\left(\prod_{v\in S}\left(\max_{0\leqslant i\leqslant n}|| \ell_{vi}(\mathbf{x})||_{v,\mathbf{K}}\cdot Q^{-c_{vi}}\right)\right)\cdot H _{\mathcal{O}_{\mathbb{P}^{n}}(1)}(\mathbf{x}). \tag{3.2}\]
This is the _twisted multiplicative height_ of \(\mathbf{x}\in\mathbb{P}^{n}(\mathbf{K})\), with respect to the linear forms (3.1) and the real numbers \(Q\), \(c_{vi}\), for \(v\in S\) and \(i=0,\ldots,n\). Here \(||\ell_{vi}(\mathbf{x})||_{v,\mathbf{K}}\) is defined as in (2.4).
Note that the twisted multiplicative height (3.2) may also be described as
\[H_{Q}(\mathbf{x})=\prod_{v\in S}\left(\max_{0\leqslant i\leqslant n}|\ell_{vi }(\mathbf{x})|_{v,\mathbf{K}}\cdot Q^{-c_{vi}}\right)\cdot\prod_{v\not\in S}| \mathbf{x}|_{v}. \tag{3.3}\]
(Compare with [6, Equation (1.3), p. 514].)
That the twisted height function (3.3) can be expressed in the form (3.2) is a key point to what follows.
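To make the equality of (3.2) and (3.3) explicit in a toy case, the following sketch evaluates both expressions over \(\mathbf{K}=\mathbb{Q}\) with assumed data (a point of \(\mathbb{P}^{1}(\mathbb{Q})\) with coprime integer coordinates, \(S=\{\infty,2\}\), integer exponents \(c_{vi}\) summing to zero at each place, and \(Q=3\)); the absolute values are the normalized ones of Section 2.1.

```python
from fractions import Fraction

def v_abs(q, p):
    """Normalized absolute value over Q: p is a prime or the string 'inf'."""
    q = Fraction(q)
    if p == 'inf':
        return abs(q)
    e, num, den = 0, q.numerator, q.denominator
    while num % p == 0:
        num //= p
        e += 1
    while den % p == 0:
        den //= p
        e -= 1
    return Fraction(1, p) ** e

# Assumed toy data on P^1(Q).
x = [Fraction(3), Fraction(5)]                      # coprime integer coordinates
S = ['inf', 2]
forms = {'inf': [lambda x: x[0] + x[1], lambda x: x[0] - x[1]],
         2:     [lambda x: x[0],        lambda x: 2 * x[0] + x[1]]}
c = {'inf': [1, -1], 2: [2, -2]}                    # c_{v0} + c_{v1} = 0 at each place
Q = Fraction(3)
other_primes = [3, 5]   # all primes outside S at which some |x_i|_v differs from 1

# Expression (3.3): prod_{v in S} max_i |l_vi(x)|_v Q^{-c_vi}  *  prod_{v not in S} |x|_v.
H33 = Fraction(1)
for v in S:
    H33 *= max(v_abs(f(x), v) * Q ** (-c[v][i]) for i, f in enumerate(forms[v]))
for p in other_primes:
    H33 *= max(v_abs(xi, p) for xi in x)

# Expression (3.2): prod_{v in S} max_i ||l_vi(x)||_v Q^{-c_vi}  *  H_{O(1)}(x).
H_std = Fraction(1)
for v in ['inf', 2] + other_primes:
    H_std *= max(v_abs(xi, v) for xi in x)
H32 = H_std
for v in S:
    xv = max(v_abs(xi, v) for xi in x)
    H32 *= max(v_abs(f(x), v) / xv * Q ** (-c[v][i]) for i, f in enumerate(forms[v]))

print(H32, H33, H32 == H33)   # both equal 54
```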
In the proof of our main results, our starting point is the following expanded form of the Parametric Subspace Theorem from [6, p. 515].
**Theorem 3.1** (Parametric Subspace Theorem [6]).: _With the notation and hypothesis as above let \(\epsilon>0\). Then there exists a real number \(Q_{0}>1\) and a finite collection of proper linear subspaces_
\[T_{1},\ldots,T_{t}\subsetneq\mathbb{P}^{n}_{\mathbf{K}}\]
_which are such that for all \(Q\geqslant Q_{0}\) there is a subspace_
\[T_{j_{Q}}\in\{T_{1},\ldots,T_{t}\}\]
_which contains all \(\mathbf{x}\in\mathbb{P}^{n}(\mathbf{K})\) which verify the twisted height inequality_
\[H_{Q}(\mathbf{x})\leqslant Q^{-\epsilon}. \tag{3.4}\]
_Here, \(H_{Q}(\mathbf{x})\) is the twisted height of \(\mathbf{x}\) as defined in (3.2)._
Proof.: The case that each of the linear forms \(\ell_{vi}(x)\) has coefficients in \(\mathbf{K}\) follows as a special case of the Absolute Parametric Subspace Theorem [6, p. 515]. To treat the more general case, where the \(\ell_{vi}(x)\) have coefficients in \(\mathbf{F}\), we argue as in [2, Remark 7.2.3]. (See also the arguments given in [10, Proposition 2.1] and [9, Proof of Theorem 5.2].)
First, there is no loss in generality by assuming that \(\mathbf{F}/\mathbf{K}\) is a Galois extension. Let \(S^{\prime}\subset M_{\mathbf{F}}\) be defined by the condition that
\[S^{\prime}:=\{w\in M_{\mathbf{F}}:w\mid v\text{ and }v\in S\}.\]
For each \(v\in S\), fix \(v^{\prime}\in S^{\prime}\) with \(v^{\prime}\mid v\) and consider the absolute value
\[|\cdot|_{v^{\prime},\mathbf{K}}:=\left|\operatorname{N}_{\mathbf{F}_{v^{\prime}}/\mathbf{K}_{v}}(\cdot)\right|_{v}^{\frac{1}{[\mathbf{F}_{v^{\prime}}:\mathbf{K}_{v}]}}.\]
Now, if \(w\in S^{\prime}\) and \(w\mid v\), then there exists \(\sigma\in\operatorname{Gal}(\mathbf{F}/\mathbf{K})\) with \(w=v^{\prime}\circ\sigma^{-1}\); set
\[\ell^{\prime}_{wi}:=\sigma(\ell_{vi})\text{ for }i=0,\ldots,n.\]
Fix \(\epsilon>0\) and consider for real numbers \(Q\geqslant 1\) the twisted height function
\[H^{\prime}_{Q}(\mathbf{x})=\prod_{w\in S^{\prime}}\left(\max_{0\leqslant i \leqslant n}|\ell_{vi}(\mathbf{x})|_{w}\cdot Q^{-c_{wi}}\right)\cdot\prod_{w \not\in S^{\prime}}|\mathbf{x}|_{w}\]
defined for points \(\mathbf{x}\in\mathbb{P}^{n}(\mathbf{F})\).
Working over \(\mathbf{F}\) the conclusion of the Parametric Subspace Theorem, [6, p. 515], is that there exists a real number \(Q_{0}>1\) and a finite collection of proper linear subspaces
\[T^{\prime}_{1},\ldots,T^{\prime}_{t}\subsetneq\mathbb{P}^{n}_{\mathbf{F}}\]
such that for all \(Q\geqslant Q_{0}\) there is a subspace
\[T^{\prime}_{j_{Q}}\in\{T^{\prime}_{1},\ldots,T^{\prime}_{t}\}\]
which contains those \(\mathbf{x}\in\mathbb{P}^{n}(\mathbf{F})\) which have the property that
\[H^{\prime}_{Q}(\mathbf{x})\leqslant Q^{-\epsilon}.\]
On the other hand, if \(\mathbf{x}\in\mathbb{P}^{n}(\mathbf{K})\) then
\[H_{Q}(\mathbf{x})=H_{Q}^{\prime}(\mathbf{x}).\]
The conclusion desired by Theorem 3.1 then follows by replacing each of the linear subspaces
\[T_{i}^{\prime}\subsetneq\mathbb{P}_{\mathbf{F}}^{n}\]
for \(i=1,\ldots,t\), by
\[T_{i}:=\operatorname{span}_{\mathbf{K}}\left\{\mathbf{x}\in\mathbb{P}^{n}( \mathbf{K}):\mathbf{x}\in T_{i}^{\prime}\right\}.\]
## 4. Linear systems, rational maps and \(v\)-adic distances
In this section we give a construction and define auxiliary concepts which we require to formulate our logarithmic Parametric Subspace Theorem for big linear systems (Theorem 1.2). They build on several viewpoints including [1, Proposition 4.2], [26, Theorem 2.10], [10, Proposition 2.1], or [13, Theorem 3.3].
Working over a base number field \(\mathbf{K}\) let
\[0\neq V\subseteq\operatorname{H}^{0}(X,L)\]
be a nonzero subspace for \(L\) an effective line bundle on a geometrically irreducible projective variety \(X\). Let \(\operatorname{Bs}(|V|)\) be the base locus of the linear system \(|V|\) and \(\mathcal{I}\) its ideal sheaf. Fix a basis \(s_{0},\ldots,s_{n}\) for \(V\).
Let
\[\phi\colon X\dasharrow\mathbb{P}_{\mathbf{K}}^{n}\]
be the rational map that is determined by the sections \(s_{0},\ldots,s_{n}\) and
\[\phi^{\prime}\colon X^{\prime}\to\mathbb{P}_{\mathbf{K}}^{n}\]
its extension for
\[\pi\colon X^{\prime}:=\operatorname{Bl}_{\mathcal{I}}(X)\to X\]
the blowing-up of \(X\) along \(\mathcal{I}\).
Note that the sections \(s_{0},\ldots,s_{n}\) generate \(L\) over the Zariski open subset
\[U:=X\setminus\operatorname{Bs}(|V|).\]
If \(x\in X\), then
\[\mathcal{I}_{x}\simeq\mathcal{O}_{X,x}\]
if and only if \(x\in U.\) There is the following commutative diagram
[commutative diagram relating \(\pi\colon X^{\prime}\to X\), \(\phi^{\prime}\colon X^{\prime}\to\mathbb{P}^{n}_{\mathbf{K}}\) and \(\phi\colon X\dasharrow\mathbb{P}^{n}_{\mathbf{K}}\)] (4.1)
The following concept is from [13].
**Definition 4.1** ([13, Definition 3.1]).: The _proper linear sections_
\[\Lambda\subsetneq X\]
of \(X\) with respect to \(|V|\) are described by
\[\Lambda=\pi\left(\phi^{\prime\,-1}(T)\right)\]
for proper linear subspaces
\[T\subsetneq\mathbb{P}^{n}_{\mathbf{K}}.\]
For later use, we also define the concept of _density for rational points with respect to a given linear system_. It builds on earlier ideas of Schmidt [28, p. 706] and Evertse [5, p. 240].
**Definition 4.2**.: Consider a non-empty subset of \(X(\mathbf{K})\). It is called _dense_ with respect to the linear system \(|V|\) if it is contained in no finite union of \(|V|\)'s proper linear sections.
Returning to the topic of resolving the locus of indeterminacy of the linear system \(|V|\), given a finite extension field \(\mathbf{F}/\mathbf{K}\) consider base change of the commutative diagram (4.1)
[commutative diagram: the base change of (4.1) to \(\mathbf{F}\), relating \(\pi_{\mathbf{F}}\colon X^{\prime}_{\mathbf{F}}\to X_{\mathbf{F}}\), \(\phi^{\prime}_{\mathbf{F}}\colon X^{\prime}_{\mathbf{F}}\to\mathbb{P}^{n}_{\mathbf{F}}\) and \(\phi_{\mathbf{F}}\colon X_{\mathbf{F}}\dasharrow\mathbb{P}^{n}_{\mathbf{F}}\)] (4.2)
Recall that the global sections
\[\pi^{*}s_{i}\in\mathrm{H}^{0}(X^{\prime},\pi^{*}L)\]
generate a line bundle \(L^{\prime}\) on \(X^{\prime}\). It is a coherent subsheaf of \(\pi^{*}L\). In what follows, let \(s^{\prime}_{i}\) be the global section of \(L^{\prime}\) that is determined by \(\pi^{*}s_{i}\).
Now the line bundle \(L^{\prime}\) and the global sections
\[s^{\prime}_{i}\in\mathrm{H}^{0}(X^{\prime},L^{\prime})\]
define the morphism \(\phi^{\prime}\), in (4.1), and determine the morphism \(\phi^{\prime}_{\mathbf{F}}\) in the diagram (4.2). The restriction of \(\phi^{\prime}\) to \(\pi^{-1}(U)\) corresponds to \(\phi\) via the natural isomorphism
\[\pi\colon\pi^{-1}(U)\xrightarrow{\sim}U.\]
As a consequence for local Weil functions, if
\[\ell(x)=a_{0}x_{0}+\ldots+a_{n}x_{n}\in\mathrm{H}^{0}(\mathbb{P}_{\mathbf{F}}^{n },\mathcal{O}_{\mathbb{P}_{\mathbf{F}}^{n}}(1)),\]
for \(a_{i}\in\mathbf{F}\) and \(i=0,\ldots,n\), is a linear form that pulls back to
\[s^{\prime}=a_{0}s^{\prime}_{0}+\ldots+a_{n}s^{\prime}_{n}\in\mathrm{H}^{0}(X^{ \prime}_{\mathbf{F}},L^{\prime}_{\mathbf{F}})\]
and corresponds to
\[s=a_{0}s_{0}+\ldots+a_{n}s_{n}\in V_{\mathbf{F}}\]
then, for each place \(v\in S\)
\[\lambda_{s}(x,v)=\lambda_{\pi_{\mathbf{F}}^{*}s}(x^{\prime},v)=\lambda_{s^{ \prime}}(x^{\prime},v)=\lambda_{\ell(x)}(\phi^{\prime}_{\mathbf{F}}(x^{\prime }),v) \tag{4.3}\]
for all
\[x\in\left(X\setminus\left(\mathrm{Bs}(|V|)\bigcup\mathrm{Supp}(\mathrm{div}(s ))\right)\right)(\mathbf{K}) \tag{4.4}\]
where
\[x^{\prime}=\pi^{-1}(x)\in\left(X^{\prime}\setminus\pi^{-1}\left(\mathrm{Bs}(| V|)\bigcup\mathrm{Supp}(\mathrm{div}(s))\right)\right)(\mathbf{K}). \tag{4.5}\]
Finally, in terms of height functions, for all such \(x\) and \(x^{\prime}\), as above in (4.4) and (4.5), respectively, it holds true that
\[h_{L}(x)=h_{L^{\prime}}(x^{\prime})+\mathrm{O}(1)=h_{\mathcal{O}_{\mathbb{P} ^{n}}(1)}(\phi^{\prime}(x^{\prime}))+\mathrm{O}(1). \tag{4.6}\]
## 5. Proof of Theorem 1.2
The first step in the proof of Theorem 1.2 is to consider the logarithmic form of Theorem 3.1. We express its conclusion in terms of local Weil and logarithmic height functions.
**Proposition 5.1**.: _Fix a finite subset \(S\subset M_{\mathbf{K}}\). For each \(v\in S\) and all \(i=0,\ldots,n\), fix a collection of linearly independent linear forms_
\[\ell_{vi}(x)\in\mathbf{F}[x_{0},\ldots,x_{n}]\]
_together with a collection of real numbers \(c_{vi}\in\mathbb{R}\), for \(v\in S\) and \(i=0,\ldots,n\), which have the property that_
\[\sum_{i=0}^{n}c_{vi}=0\]
_for all \(v\in S\). For each of the linear forms \(\ell_{vi}(x)\), let \(\lambda_{\ell_{vi}(x)}(\cdot,v)\) be the local Weil function with respect to \(v\) given by (2.7)._
_Let \(\delta>0\). Then there exist a real number \(Q_{0}>1\) and a finite collection of proper linear subspaces_
\[T_{1},\ldots,T_{t}\subsetneq\mathbb{P}_{\mathbf{K}}^{n}\]
_such that for all \(Q\geqslant Q_{0}\) there is a subspace_
\[T_{j_{Q}}\in\{T_{1},\ldots,T_{t}\}\]
_which contains all_
\[\mathbf{x}\in\left(\mathbb{P}^{n}\setminus\left(\bigcup_{\begin{subarray}{c}v\in S\\ i=0,\ldots,n\end{subarray}}\mathrm{Supp}(\ell_{vi})\right)\right)(\mathbf{K})\]
_which verify the logarithmic twisted height inequality_
\[\sum_{v\in S}\min_{0\leqslant i\leqslant n}\left(\lambda_{\ell_{vi}(x)}( \mathbf{x},v)+c_{vi}\cdot\log(Q)\right)\geqslant h_{\mathcal{O}_{\mathbb{P}^{ n}}(1)}(\mathbf{x})+\delta\log(Q).\]
Proof.: Apply \(-\log(\cdot)\) to the multiplicative twisted height inequality (3.4), which is given in Theorem 3.1. The result is that
\[-\log\left(H_{Q}(\mathbf{x})\right)\geqslant\delta\cdot\log(Q)\geqslant 0. \tag{5.1}\]
The conclusion desired by Proposition 5.1 then follows from the conclusion of Theorem 3.1 since
\[-\log\left(H_{Q}(\mathbf{x})\right)=-\log\left(\prod_{v\in S}\left(\max_{0 \leqslant i\leqslant n}\frac{|\ell_{vi}(\mathbf{x})|_{v,\mathbf{K}}}{| \mathbf{x}|_{v,\mathbf{K}}}\cdot Q^{-c_{vi}}\right)\right)-h_{\mathcal{O}_{ \mathbb{P}^{n}}(1)}(\mathbf{x}).\]
Indeed, the quantity
\[-\log\left(\prod_{v\in S}\left(\max_{0\leqslant i\leqslant n}\frac{|\ell_{vi }(\mathbf{x})|_{v,\mathbf{K}}}{|\mathbf{x}|_{v,\mathbf{K}}}\cdot Q^{-c_{vi}} \right)\right)\]
can be rewritten as
\[\sum_{v\in S}\min_{0\leqslant i\leqslant n}\left(-\log\left(\frac{|\ell_{vi }(\mathbf{x})|_{v,\mathbf{K}}}{|\mathbf{x}|_{v,\mathbf{K}}}\right)+c_{vi} \cdot\log(Q)\right).\]
In light of this, the twisted height inequality (5.1) can thus be rewritten in the form
\[\sum_{v\in S}\min_{0\leqslant i\leqslant n}\left(\lambda_{\ell_{vi}(x)}( \mathbf{x},v)+c_{vi}\cdot\log(Q)\right)\geqslant h_{\mathcal{O}_{\mathbb{P}^ {n}}(1)}(\mathbf{x})+\delta\cdot\log(Q).\]
We now use Proposition 5.1 to prove Theorem 1.2.
Proof of Theorem 1.2.: Fix a basis \(s_{0},\ldots,s_{n}\) for \(V\). Let \(\mathrm{Bs}(|V|)\) be the base locus of the linear system \(|V|\). We now apply the considerations of Section 4 within our current context, especially the relations (4.3) and (4.6).
Fix linear forms
\[\ell_{vi}(x)\in\mathrm{H}^{0}(\mathbb{P}^{n}_{\mathbf{F}},\mathcal{O}_{ \mathbb{P}^{n}_{\mathbf{F}}}(1))\]
for all \(v\in S\) and all \(i=0,\ldots,n\), which pull back to \(s^{\prime}_{vi}\) under \(\phi^{\prime}_{\mathbf{F}}\) and correspond to \(\pi^{*}_{\mathbf{F}}s_{vi}\). Then we may also write for each place \(v\in S\) and each \(i=0,\ldots,n\)
\[\lambda_{s_{vi}}(x,v)=\lambda_{\pi_{\mathbf{F}}^{*}s_{vi}}(x^{\prime},v)=\lambda_{s^{\prime}_{vi}}(x^{\prime},v)=\lambda_{\ell_{vi}(x)}(\phi^{\prime}_{\mathbf{F}}(x^{\prime}),v) \tag{5.2}\]
where
\[x\in\left(X\setminus\left(\operatorname{Bs}(|V|)\bigcup\operatorname{Supp}( \operatorname{div}(s_{vi}))\right)\right)(\mathbf{K})\]
and
\[x^{\prime}=\pi^{-1}(x)\in\left(X^{\prime}\setminus\pi^{-1}\left(\operatorname{ Bs}(|V|)\bigcup\operatorname{Supp}(\operatorname{div}(s_{vi}))\right)\right)( \mathbf{K}).\]
Theorem 1.2 then follows from the relation (5.2) together with Proposition 5.1 applied to the linear forms
\[\ell_{v0}(x),\ldots,\ell_{vn}(x)\in\mathbf{F}[x_{0},\ldots,x_{n}]\text{ for all }v\in S.\]
## 6. Proof of Theorem 1.3
Similar to the approach of [6, p. 514], the conclusion of Theorem 1.3 is implied by that of Theorem 1.2.
Proof of Theorem 1.3.: Put
\[\epsilon:=-1+\frac{1}{n+1}\left(\sum_{v\in S}\sum_{i=0}^{n}d_{vi}\right).\]
For each \(v\in S\) and all \(i=0,\ldots,n\) set
\[c_{vi}:=-d_{vi}+\frac{1}{n+1}\sum_{j=0}^{n}d_{vj}.\]
Then \(\epsilon>0\) and
\[\sum_{i=0}^{n}c_{vi}=0\text{ for all }v\in S.\]
Let
\[x\in\left(X\setminus\left(\operatorname{Bs}(|V|)\bigcup\bigcup_{ \begin{subarray}{c}v\in S\\ i=0,\ldots,n\end{subarray}}\operatorname{Supp}(s_{vi})\right)\right)( \mathbf{K}) \tag{6.1}\]
be a solution of the system of inequalities
\[\lambda_{s_{vi}}(x,v)-d_{vi}\cdot h_{L}(x)+\operatorname{O}_{v}(1)\geqslant 0\]
for all \(i=0,\ldots,n\) and all \(v\in S\), and put
\[Q=\exp(h_{L}(x)).\]
Then the inequalities
\[\sum_{v\in S}\left(\lambda_{s_{vi}}(x,v)+c_{vi}\cdot\log(Q)\right)\geqslant h _{L}(x)+\epsilon\cdot\log(Q)+\operatorname{O}(1)\]
for \(i=0,\ldots,n\) are valid. Thus, by considering all such solutions \(x\), as above in (6.1), with sufficiently large height
\[h_{L}(x)\geqslant\log(Q_{0})\gg 0\]
for a suitable real number \(Q_{0}>1\), the conclusion of Theorem 1.3 follows from that of Theorem 1.2. Here, we employ the Northcott theorem, for big line bundles, compare with [2, Theorem 2.4.9], in order to conclude that there will be only finitely many solutions \(x\) (and hence finitely many exceptional subspaces to add) which have sufficiently small height.
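Indeed, spelling out the displayed step above: for each fixed \(i\) and every solution \(x\) as in (6.1),
\[\sum_{v\in S}\bigl(\lambda_{s_{vi}}(x,v)+c_{vi}\log Q\bigr)\geqslant\sum_{v\in S}\bigl(d_{vi}+c_{vi}\bigr)h_{L}(x)+\mathrm{O}(1)=\frac{1}{n+1}\Bigl(\sum_{v\in S}\sum_{j=0}^{n}d_{vj}\Bigr)h_{L}(x)+\mathrm{O}(1)=(1+\epsilon)h_{L}(x)+\mathrm{O}(1),\]
which, since \(\log Q=h_{L}(x)\), is exactly \(h_{L}(x)+\epsilon\log Q+\mathrm{O}(1)\); moreover \(\sum_{i=0}^{n}c_{vi}=-\sum_{i=0}^{n}d_{vi}+\sum_{j=0}^{n}d_{vj}=0\) for each \(v\in S\), as required in Theorem 1.2.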
## 7. Proof of Theorem 1.4
By modifying the arguments of [7, Section 21], as suggested in [6, p. 514], Theorem 1.4 follows from Theorem 1.3, in light of Lemma 7.1 below. Lemma 7.1 is a special case of the logarithmic form of [4, Lemma 4].
The proof of Theorem 1.4 is interesting in that it involves a partitioning of the solution set into subsets, where the points in each subset satisfy a certain simultaneous system of Diophantine arithmetic inequalities. Since only finitely many such subsets are required, Theorem 1.4 follows upon repeated application of Theorem 1.3. This reasoning is made explicit throughout the proof of Theorem 1.4. We prove Corollary 1.5 after first proving Theorem 1.4.
**Lemma 7.1** (Compare with [7, Lemma 21.1] or [4, Lemma 4]).: _Fix a sufficiently small positive real number \(c\), \(0<c\ll 1\), and let \(I\) be a finite set. Then the set_
\[\mathcal{R}(c):=\left\{\mathbf{c}=(c_{i})_{i\in I}:c_{i}\in\mathbb{R}_{\geqslant 0 }\text{ and }\sum_{i\in I}c_{i}=c\right\}\]
_admits a finite subset \(\mathcal{S}(c)\subseteq\mathcal{R}(c)\) which has the property that for all \(\mathbf{b}=(b_{i})_{i\in I}\) with \(b_{i}\in\mathbb{R}_{\geqslant 0}\) there exists \(\mathbf{a}=(a_{i})_{i\in I}\in\mathcal{S}(c)\) which has the property that_
\[b_{j}\geqslant a_{j}\left(\sum_{i\in I}b_{i}\right)\text{ for all }j\in I.\]
Proof.: This is a special case of the logarithmic formulation of [4, Lemma 4].
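To illustrate the content of Lemma 7.1, the following sketch realizes one possible (assumed) construction in a toy case with \(|I|=3\) and \(c=7/10\): take \(\mathcal{S}(c)\) to be the finitely many tuples with coordinates in a grid \(\delta\mathbb{Z}_{\geqslant 0}\) summing to \(c\), where \(\delta\leqslant(1-c)/|I|\) and \(c/\delta\in\mathbb{Z}\); rounding the normalized weights \(b_{i}/\sum_{j}b_{j}\) down to the grid and trimming the excess then produces the required tuple. This is only a numerical check, not the proof.

```python
from fractions import Fraction
import random

I = range(3)                 # index set, |I| = 3
c = Fraction(7, 10)          # assumed illustrative value of c
delta = Fraction(1, 10)      # grid spacing with delta <= (1 - c)/|I| and c/delta an integer

def witness(b):
    """Return a grid tuple a with sum(a) = c and b_j >= a_j * sum(b) for all j."""
    total = sum(b)
    t = [bj / total for bj in b]                 # normalized weights, sum(t) = 1
    a = [delta * (tj // delta) for tj in t]      # round down to the grid, so a_j <= t_j
    excess = sum(a) - c                          # a nonnegative multiple of delta (sum(a) >= c)
    j = 0
    while excess > 0:                            # remove multiples of delta until sum(a) = c
        if a[j] > 0:
            a[j] -= delta
            excess -= delta
        j = (j + 1) % len(a)
    return a

random.seed(0)
for _ in range(1000):
    b = [Fraction(random.randint(1, 100)) for _ in I]
    a = witness(b)
    assert sum(a) == c and all(bj >= aj * sum(b) for aj, bj in zip(a, b))
print("Lemma 7.1 witnesses found for 1000 random nonnegative tuples b.")
```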
We now use Lemma 7.1 to establish Theorem 1.4.
Proof of Theorem 1.4.: Fix a sufficiently small positive real number \(\epsilon>0\). Our aim is to ascertain qualitative features, that are expressed in terms of \(|L|\)'s linear sections, of the collection of those solutions
\[x\in\left(X\setminus\left(\operatorname{Bs}(|V|)\bigcup\bigcup_{ \begin{subarray}{c}v\in S\\ i=0,\ldots,n\end{subarray}}\operatorname{Supp}(s_{vi})\right)\right)( \mathbf{K}) \tag{7.1}\]
to the inequality
\[\sum_{v\in S}\sum_{i=0}^{n}\lambda_{s_{vi}}(x,v)\geqslant(n+1+\epsilon)h_{L}(x)+ \operatorname{O}(1) \tag{7.2}\]
which have sufficiently large height \(h_{L}(x)\gg 0\).
In our study of the system (7.2), similar to the approach of [7, Section 21], we distinguish amongst two classes of solutions.
* **Type I**: are those solutions which admit an index \(i\), \(0\leqslant i\leqslant n\), which is such that \[\sum_{v\in S}\lambda_{s_{vi}}(x,v)\geqslant(n+1+\epsilon)h_{L}(x)+ \operatorname{O}(1);\]
* **Type II**: are those solutions which satisfy the inequalities \[\sum_{v\in S}\lambda_{s_{vi}}(x,v)<(n+1+\epsilon)h_{L}(x)+\operatorname{O}(1) \text{ for all }i=0,\dots,n.\]
In our study of the **Type I** and **Type II** solutions, by adjusting the constant \(\operatorname{O}(1)\), if necessary, there is no loss in generality by assuming that
\[\lambda_{s_{vi}}(x,v)\geqslant 0 \tag{7.3}\]
for all \(i=0,\dots,n\), all \(v\in S\) and all solutions \(x\) of the form (7.1).
### Simultaneous Type I inequalities
Fix a **Type I** solution \(x\). Then, there exists an index \(i\), with \(0\leqslant i\leqslant n\) and
\[\sum_{v\in S}\lambda_{s_{vi}}(x,v)\geqslant(n+1+\epsilon)h_{L}(x)+ \operatorname{O}(1).\]
Fix such an index \(i\) and define the tuple
\[\mathbf{b}=(b_{v})_{v\in S} \tag{7.4}\]
by the condition that
\[\lambda_{s_{vi}}(x,v)=b_{v}h_{L}(x)+\operatorname{O}(1).\]
The tuple (7.4), which depends on \(i\) and \(x\), has the property that
\[b_{v}\geqslant 0\text{ for each }v\in S\]
and
\[\sum_{v\in S}b_{v}\geqslant n+1+\epsilon.\]
We now consider consequences of Lemma 7.1 applied to the sufficiently small positive real number
\[c=1-\frac{\epsilon}{4(n+1)}\]
and the finite set \(S\).
Indeed, we deduce from Lemma 7.1 that the collection of **Type I** solutions can be partitioned into finitely many subsets \(\mathcal{S}_{\mathbf{a}}\) where each subset corresponds to a fixed tuple
\[\mathbf{a}=(a_{v})_{v\in S} \tag{7.5}\]
of nonnegative real numbers with the property that
\[\sum_{v\in S}a_{v}=1-\frac{\epsilon}{4(n+1)}. \tag{7.6}\]
A given subset \(\mathcal{S}_{\mathbf{a}}\) with corresponding weight vector (7.5) consists of those **Type I** solutions \(x\) whose vector (7.4) satisfies the condition that
\[b_{v}\geqslant a_{v}\sum_{w\in S}b_{w}\geqslant a_{v}(n+1+\epsilon).\]
As a consequence, it follows that
\[\lambda_{s_{vi}}(x,v)>a_{v}(n+1+\epsilon)h_{L}(x)+\mathrm{O}(1)\]
for all such **Type I** solutions \(x\) in the subset corresponding to the tuple (7.5).
We now rewrite the sections \(s_{vi}\), working on the blow-up of the base locus \(\mathrm{Bs}(|V|)\) as in Section 4. We view these sections \(s_{vi}\) as having the property that \(\pi^{*}s_{vi}\), their pullbacks to \(X^{\prime}\), correspond with the sections \(s^{\prime}_{vi}\) which are the pullbacks, with respect to \(\phi^{\prime}_{\mathbf{F}}\), of the linear forms
\[\ell_{vi}(x)=a_{vi0}x_{0}+\ldots+a_{vin}x_{n}\in\mathbf{F}[x_{0},\ldots,x_{n}] \text{ for all }v\in S\text{ and }i=0,\ldots,n.\]
Given a pair \((i,v)\), where \(v\in S\) and \(0\leqslant i\leqslant n\), pick \(j=j(i,v)\) with \(0\leqslant j(i,v)\leqslant n\) and
\[\left|a_{vij}\right|_{v}=\max\{|a_{vi0}|_{v},\ldots,|a_{vin}|_{v}\}.\]
In fact, by homogeneity of the linear forms, and by adjusting the constant \(\mathrm{O}(1)\), if necessary, there is no loss in generality by assuming that each form \(\ell_{vi}(x)\) has the property that
\[a_{vij(i,v)}=1.\]
Consider, for each \(v\in S\), the collection of linear forms \(\ell_{vi}(x)\), \(x_{k}\) for \(k=0,\ldots,n\) and \(k\neq j(i,v)\). Relabel this collection of linear forms as
\[m_{v0}(x)=\ell_{vi}(x),m_{v1}(x),\ldots,m_{vn}(x).\]
Then for each tuple (7.5) define the tuple of real numbers
\[(e_{vi})_{\begin{subarray}{c}v\in S\\ i=0,\ldots,n\end{subarray}} \tag{7.7}\]
by the condition that
\[e_{vi}=\begin{cases}a_{v}(n+1+\epsilon)&\text{ for }i=0;\text{ and }\\ 0&\text{ for }i=1,\ldots,n.\end{cases} \tag{7.8}\]
Upon adjusting the constant \(\mathrm{O}(1)\), if necessary, the tuple (7.7) and the linear forms
\[m_{v0}(x),\ldots,m_{vn}(x)\]
are such that all **Type I** solutions in the subset \(\mathcal{S}_{\mathbf{a}}\) satisfy the inequality
\[-\log\left(\frac{|m_{vi}(x)|_{v,\mathbf{K}}}{|x|_{v,\mathbf{K}}}\right)\geqslant e _{vi}h_{\mathcal{O}_{\mathbb{P}^{n}}(1)}(x)+\mathrm{O}(1) \tag{7.9}\]
for each pair \((i,v)\), where \(v\in S\) and \(i=0,\ldots,n\).
Now, observe that, by (7.6), the tuple (7.7) satisfies the condition that
\[\sum_{v\in S}\sum_{i=0}^{n}e_{vi}=(n+1+\epsilon)\left(\sum_{v\in S}a_{v}\right) >n+1.\]
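Indeed, for \(\epsilon\) sufficiently small (for instance \(\epsilon\leqslant n+1\)),
\[(n+1+\epsilon)\left(1-\frac{\epsilon}{4(n+1)}\right)=n+1+\epsilon\left(1-\frac{n+1+\epsilon}{4(n+1)}\right)\geqslant n+1+\frac{\epsilon}{2}>n+1.\]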
Thus, to summarize, we have shown that the collection of **Type I** solutions \(x\) in (7.1) admits a decomposition into finitely many subsets such that if (7.7) is the tuple corresponding to a given subset \(\mathcal{S}_{\mathbf{a}}\), defined by (7.8), then all solutions in this subset are solutions of the simultaneous system (7.9). The **Type I** solutions are thus described by finitely many applications of Theorem 1.3.
### Simultaneous Type II inequalities
Fix a **Type II** solution \(x\) in (7.1). Recall that these are the solutions for which
\[\sum_{v\in S}\lambda_{s_{vi}}(x,v)<(n+1+\epsilon)h_{L}(x)+\mathrm{O}(1)\text{ for all }i=0,\ldots,n. \tag{7.10}\]
Again, we rewrite the sections \(s_{vi}\), working on the blow-up of the base locus \(\mathrm{Bs}(|V|)\) as in Section 4. Thus, as in our study of the **Type I** solutions, we view these sections \(s_{vi}\) as having the property that \(\pi^{*}s_{vi}\), their pullbacks to \(X^{\prime}\), correspond to sections \(s^{\prime}_{vi}\) which are the pullbacks, with respect to \(\phi^{\prime}_{\mathbf{F}}\), of the linear forms
\[\ell_{vi}(x)=a_{vi0}x_{0}+\ldots+a_{vin}x_{n}\in\mathbf{F}[x_{0},\ldots,x_{n}] \text{ for }v\in S\text{ and }i=0,\ldots,n.\]
Now, by adjusting the constant \(\mathrm{O}(1)\), without loss of generality, we may assume that
\[0\leqslant\mathrm{O}(1)<n(n+1+\epsilon)h_{L}(x)\]
for all **Type II** solutions \(x\) of the form (7.1). Moreover, we fix a real number \(A\geqslant 0\), which has the property that
\[\mathrm{O}(1)=n(n+1+\epsilon)A.\]
Further, for each place \(v\in S\), we may choose nonnegative real numbers \(d_{v}\) so that
\[\sum_{v\in S}d_{v}=1.\]
Finally, fix a scaled nonnegative additive decomposition of the constant \(\mathrm{O}(1)\); assume that this decomposition is indexed by the set of places \(S\). In more precise terms
\[\mathrm{O}(1)=\frac{1}{n+1}\sum_{v\in S}\delta_{v}\text{ where }\delta_{v}\geqslant 0.\]
Then we may assume that the nonnegative real numbers \(d_{v}\) satisfy the relation that
\[\delta_{v}=d_{v}n(n+\epsilon+1)A\text{ for each }v\in S.\]
Note that we may assume that
\[0\leqslant A\ll h_{L}(x)\]
for some sufficiently large constant \(A\). (By assumption, we are considering those **Type II** solutions which have sufficiently large height.)
Now, for each place \(v\in S\), put
\[b_{v}=\frac{1}{n+1}d_{v}n(n+1+\epsilon).\]
Then
\[\sum_{v\in S}\sum_{i=0}^{n}b_{v}=n(n+1+\epsilon).\]
Moreover
\[\frac{1}{n+1}\delta_{v}\geqslant b_{v}h_{L}(x)\text{ for all }v\in S.\]
Combined, the above discussion, together with (7.2), implies that for all **Type II** solutions with sufficiently large height
\[\sum_{v\in S}\sum_{i=0}^{n}\lambda_{s_{vi}}(x,v)\geqslant(n+1)(n+1+\epsilon)h _{L}(x)+\sum_{v\in S}\delta_{v}-\sum_{v\in S}\sum_{i=0}^{n}b_{v}h_{L}(x).\]
Now, consider the implications of Lemma 7.1 applied to the sufficiently small positive real number
\[c=1-\frac{\epsilon}{4(n+1)^{2}}\]
and the finite set \(S\times\{0,\ldots,n\}\). The conclusion is that the collection of **Type II** solutions admits a partition into finitely many subsets so that the following is true:
For each such subset \(\mathcal{S}_{\mathbf{b}}\), there exists a tuple of nonnegative numbers
\[\mathbf{b}=(b_{vi})_{\stackrel{{ v\in S}}{{i=0,\ldots,n}}} \tag{7.11}\]
which has the property that
\[\sum_{v\in S}\sum_{i=0}^{n}b_{vi}=1-\frac{\epsilon}{4(n+1)^{2}}\]
and which is such that if \(x\), as in (7.1), is a **Type II** solution that lies in this subset \(\mathcal{S}_{\mathbf{b}}\) then
\[\lambda_{s_{vi}}(x,v)\geqslant b_{vi}(n+1)(n+1+\epsilon)h_{L}(x)+\frac{1}{n+1} \delta_{v}-b_{v}h_{L}(x)\]
for all \(v\in S\) and all \(i=0,\ldots,n\).
Now, given such a tuple (7.11), define the tuple
\[(e_{vi})_{\begin{subarray}{c}v\in S\\ i=0,\ldots,n\end{subarray}} \tag{7.12}\]
by the condition that
\[e_{vi}=b_{vi}((n+1)(n+1+\epsilon))-b_{v}.\]
Then the **Type II** solutions that lie in the subset \(\mathcal{S}_{\mathbf{b}}\) satisfy the inequality that
\[\lambda_{s_{vi}}(x,v)\geqslant e_{vi}h_{L}(x)+\frac{1}{n+1}\delta_{v}.\]
Moreover, note that, by construction, the tuple (7.12) has the property that
\[\sum_{v\in S}\sum_{i=0}^{n}e_{vi} =(n+1)(n+1+\epsilon)\left(1-\frac{\epsilon}{4(n+1)^{2}}\right)-n (n+1+\epsilon)\] \[>n+1.\]
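As in the **Type I** case, the final inequality holds for \(\epsilon\) sufficiently small, since the left-hand side factors as
\[(n+1+\epsilon)\left((n+1)\left(1-\frac{\epsilon}{4(n+1)^{2}}\right)-n\right)=(n+1+\epsilon)\left(1-\frac{\epsilon}{4(n+1)}\right),\]
and this quantity was already seen to exceed \(n+1\).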
Thus, after adjusting the given constant \(\mathrm{O}(1)\), if necessary, the above discussion implies that the collection of **Type II** solutions is described by finitely many applications of Theorem 1.3.
Having established the case that \(n=n_{v}\) for all \(v\in S\), in the form of Theorem 1.4, the case that \(n_{v}>n\) then follows by adapting the argument of [2, Proof of Theorem 7.2.9] to the language of linear systems. In particular, Corollary 1.5 is established by successive application of Theorem 1.4.
Proof of Corollary 1.5.: We reduce from the case that \(n_{v}>n\) to finitely many applications of the case that \(n_{v}=n\). This is achieved by partitioning the solutions to the inequality
\[\sum_{v\in S}\sum_{i=0}^{n_{v}}\lambda_{s_{vi}}(x,v)\geqslant(n+1+\epsilon)h_{L}(x)+\mathrm{O}(1)\]
into finitely many classes such that, after reindexing the sections \(s_{v0},\ldots,s_{vn_{v}}\) if needed and setting \(m_{vi}=s_{vi}\) for \(0\leqslant i\leqslant n\), there exists a constant \(C\) such that
\[\sum_{v\in S}\sum_{i=0}^{n}\lambda_{m_{vi}}(x,v)\geqslant\sum_{v\in S}\sum_{i =0}^{n_{v}}\lambda_{s_{vi}}(x,v)-\log(C)>(n+1+\epsilon)h_{L}(x)-\log(C).\]
By the Northcott property for big line bundles, adjusting \(\epsilon\) and excluding at most a finite number of solutions, if required, the desired conclusion of Theorem 1.4, for each fixed partition described above, follows from the case that \(n=n_{v}\).
## 8. Proof of Theorem 1.6
Proof of Theorem 1.6.: The proof of Theorem 1.6 is based on the Ru-Vojta filtration construction. We briefly recall the most important points here and refer to the presentation given in [13] for further details.
Let \(d:=\dim X\), set
\[\Sigma:=\left\{\sigma\subseteq\{1,\ldots,q\}:\bigcap_{j\in\sigma}\operatorname{Supp}(D_{j})\neq\varnothing\right\}\]
and fix \(m\gg 0\), such that \(\operatorname{Bs}(L)=\operatorname{Bs}(|L^{\otimes m}_{\overline{\mathbf{K}}}|)\).
For each \(i=1,\ldots,q\), and each \(v\in S\), let \(\lambda_{\mathcal{D}_{i}}(\cdot,v)\) be a local Weil function for \(D_{i}\) with respect to \(v\) and a fixed choice of presentation.
Observe now that, as noted in [26, p. 985], each of the quantities \(m_{S}(x,D_{i})/h_{L}(x)\), for \(i=1,\ldots,q\), is bounded for all \(x\in X(\mathbf{K})\) outside of some proper Zariski closed subset. In more precise terms, similar to the reduction step from [13, p. 8], since \(L\) is assumed to be big, it follows from the Northcott property, for big line bundles, that there exist constants \(A\) and \(B\) which are such that for all but perhaps finitely many points
\[x\in X(\mathbf{K})\setminus\left(\operatorname{Bs}(L)\bigcup\bigcup_{i=1}^{ q}\operatorname{Supp}(D_{i})\right)\]
if \(h_{L}(x)>B\), then
\[\sum_{v\in S}\lambda_{\mathcal{D}_{i}}(x,v)<Ah_{L}(x)\text{ for all }i=1, \ldots,q.\]
Thus, to prove Theorem 1.6, following the approach of [13, p. 8] and adjusting \(\epsilon>0\) if necessary, we fix suitable sufficiently small rational numbers \(\beta_{1},\ldots,\beta_{q}\in\mathbb{Q}\), which are such that if
\[\gamma(L,D_{i}):=\limsup_{m\to\infty}\frac{mh^{0}(X,L^{\otimes m})}{\sum_{ \ell\geqslant 1}h^{0}(X_{\mathbf{F}},L^{\otimes m}_{\mathbf{F}}\otimes \mathcal{O}_{X_{\mathbf{F}}}(-\ell D_{i}))},\]
then \(\beta_{i}<\gamma(L,D_{i})^{-1}\).
Finally, we also fix a positive integer \(b>0\) and a sufficiently small positive number \(\epsilon_{1}>0\) which are such that the inequality
\[\left(1+\frac{d}{b}\right)\max_{1\leqslant i\leqslant q}\frac{\beta_{i}mh^{0} (X,L^{\otimes m})+m\epsilon_{1}}{\sum_{\ell\geqslant 1}h^{0}(X_{\mathbf{F}},L^{ \otimes m}_{\mathbf{F}}\otimes\mathcal{O}_{X_{\mathbf{F}}}(-\ell D_{i}))}<1+\epsilon \tag{8.1}\]
holds true.
Now, for each \(\sigma\in\Sigma\), let
\[\Delta_{\sigma}:=\left\{\mathbf{a}=(a_{i})\in\prod_{i\in\sigma}\beta_{i}^{-1} \mathbb{N}:\sum_{i\in\sigma}\beta_{i}a_{i}=b\right\}\]
and for each \(\mathbf{a}\in\Delta_{\sigma}\) define for all \(t\in\mathbb{R}_{\geqslant 0}\)
\[\mathcal{I}(t):=\sum_{\begin{subarray}{c}\mathbf{b}\in\mathbb{N}^{\#\sigma}\\ \sum_{i\in\sigma}a_{i}b_{i}\geqslant t\end{subarray}}\mathcal{O}_{X_{\mathbf{F }}}\left(-\sum_{i\in\sigma}b_{i}D_{i}\right);\]
set
\[\mathcal{F}(\sigma;\mathbf{a})_{t}=\mathrm{H}^{0}(X_{\mathbf{F}},L_{\mathbf{F }}^{\otimes m}\otimes\mathcal{I}(t))\subseteq\mathrm{H}^{0}(X_{\mathbf{F}},L_ {\mathbf{F}}^{\otimes m}).\]
In what follows, we let \(\mathcal{B}_{\sigma;\mathbf{a}}\) be a basis of \(\mathrm{H}^{0}(X_{\mathbf{F}},L_{\mathbf{F}}^{\otimes m})\) which is adapted to the filtration \(\{\mathcal{F}(\sigma;\mathbf{a})_{t}\}_{t\in\mathbb{R}_{\geqslant 0}}\).
Now, the key technical point, which is exposed in [13, pp. 9-12], is that, since the divisors \(D_{1},\ldots,D_{q}\) intersect properly, over \(\mathbf{F}\), compare with [26, Definition 2.1 (b)], the above filtration construction produces a collection of sections
\[\{s_{1},\ldots,s_{k_{2}}\}\subseteq\mathrm{H}^{0}(X_{\mathbf{F}},L_{\mathbf{F }}^{\otimes m})\]
which have the property that if \(v\in S\), then
\[\frac{b}{b+d}\left(\min_{1\leqslant i\leqslant q}\sum_{\ell \geqslant 0}\frac{h^{0}(X_{\mathbf{F}},L_{\mathbf{F}}^{\otimes m}\otimes \mathcal{O}_{X_{\mathbf{F}}}(-\ell D_{i}))}{\beta_{i}}\right)\sum_{i=1}^{q} \beta_{i}\lambda_{\mathcal{D}_{i}}(\cdot,v)\\ \leqslant\max_{1\leqslant i\leqslant k_{1}}\sum_{j\in J_{i}} \lambda_{s_{j}}(\cdot,v)+\mathrm{O}_{v}(1). \tag{8.2}\]
Here \(J_{i}\subseteq\{1,\ldots,k_{1}\}\) are chosen such that \(\mathcal{B}_{i}=\{s_{j}:j\in J_{i}\}\) where
\[\bigcup_{\sigma;\mathbf{a}}\mathcal{B}_{\sigma;\mathbf{a}}=\mathcal{B}_{1} \bigcup\cdots\bigcup\mathcal{B}_{k_{1}}=\{s_{1},\ldots,s_{k_{2}}\}.\]
On the other hand, it follows from the Subspace Theorem, with linear scattering (Theorem 1.4) in the form of Corollary 1.5, that
\[\sum_{v\in S}\max_{J}\sum_{j\in J}\lambda_{s_{j}}(x,v)\leqslant\left(h^{0}(X, L^{\otimes m})+\epsilon_{1}\right)h_{L^{\otimes m}}(x)+\mathrm{O}(1) \tag{8.3}\]
for all \(x\in X(\mathbf{K})\) outside of a Zariski closed subset \(Z\). This subset \(Z\) may be taken to be in the form desired by the conclusion of Theorem 1.6. (In (8.3), the maximum is taken over all \(J\subseteq\{1,\ldots,k_{1}\}\) for which the sections \(s_{j}\), \(j\in J\), are linearly independent.)
Combining the above three inequalities, (8.1), (8.2) and (8.3), and using the fact that \(h_{L^{\otimes m}}(x)=mh_{L}(x)\), it then follows that
\[\sum_{i=1}^{q}\beta_{i}m_{S}(x,D_{i})\leqslant\\ \left(1+\frac{d}{b}\right)\max_{1\leqslant i\leqslant q}\left( \frac{\beta_{i}h^{0}(X,L^{\otimes m})+\epsilon_{1}}{\sum_{\ell\geqslant 1}h^{0}(X_{ \mathbf{F}},L_{\mathbf{F}}^{\otimes m}\otimes\mathcal{O}_{X_{\mathbf{F}}}(- \ell D_{i}))}\right)h_{L^{\otimes m}}(x)+\mathrm{O}(1). \tag{8.4}\]
This final inequality (8.4), may be written in the form
\[\sum_{i=1}^{q}\beta_{i}m_{S}(x,D_{i})\leqslant(1+\epsilon)h_{L}(x)+\mathrm{O}(1).\]
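To pass from (8.4) to this form, note that \(h_{L^{\otimes m}}(x)=mh_{L}(x)\), so that the coefficient of \(h_{L}(x)\) coming from (8.4) equals
\[\left(1+\frac{d}{b}\right)\max_{1\leqslant i\leqslant q}\frac{\beta_{i}mh^{0}(X,L^{\otimes m})+m\epsilon_{1}}{\sum_{\ell\geqslant 1}h^{0}(X_{\mathbf{F}},L_{\mathbf{F}}^{\otimes m}\otimes\mathcal{O}_{X_{\mathbf{F}}}(-\ell D_{i}))},\]
which is smaller than \(1+\epsilon\) by the choice (8.1).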
We have proved existence of constants \(\beta(L,D_{i})\), for \(i=1,\ldots,q\), which are such that the conclusion of Theorem 1.6 holds true. To complete the proof, we replace the \(\beta(L,D_{i})\) by smaller real numbers as required.
## 9. Asymptotic nature of linear sections
We now discuss _asymptotic aspects_ of the concept of linear section with respect to a linear series. To this end, let \(X\) be a geometrically irreducible and geometrically normal projective variety over a base number field \(\mathbf{K}\). Let \(L\) be a line bundle on \(X\) which has the property that \(\kappa(X,L)>0\).
For simplicity we assume that \(L\) has exponent \(e=e(L)\) equal to \(1\). For sufficiently large integers \(m\in\mathrm{N}(X,L)\), with the property that
\[n_{m}:=\dim|L^{\otimes m}|>1,\]
let \(X_{m}\subseteq\mathbb{P}^{n_{m}}\) be the closure of the image of the rational map
\[\phi_{m}=\phi_{|L^{\otimes m}|}\colon X\dashrightarrow\mathbb{P}^{n_{m}}.\]
Moreover, denote by
\[\pi_{m}\colon X_{m}^{\prime}\to X\]
a resolution of indeterminacies of \(\phi_{|L^{\otimes m}|}\).
Fix a suitable sufficiently large integer \(\ell_{0}\in\mathrm{N}(X,L)\) which has the property that
\[\dim X_{\ell_{0}}=\kappa(X,L).\]
Fix two relatively prime positive integers \(p\) and \(q\) with the property that
\[\dim X_{p}=\dim X_{q}=\kappa(X,L).\]
Observe that all sufficiently large positive integers \(m\gg 0\) can be written in the form
\[m=bp^{\ell_{0}}+cq^{\ell_{0}}.\]
Here, \(b,c\geqslant 1\) are suitable positive integers.
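Indeed, since \(p\) and \(q\) are relatively prime, so are \(p^{\ell_{0}}\) and \(q^{\ell_{0}}\), and the classical Sylvester-Frobenius bound shows that every integer \(m>p^{\ell_{0}}q^{\ell_{0}}\) admits a representation \(m=bp^{\ell_{0}}+cq^{\ell_{0}}\) with integers \(b,c\geqslant 1\).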
It then follows, via the theory of Iitaka fibrations, [20], that for all such sufficiently large integers \(m\gg 0\), there exists a commutative diagram
Here, \(\phi_{\infty}\) is an Iitaka fibration for \(L\), defined over some finite extension \(\mathbf{F}\) of the base number field \(\mathbf{K}\); the morphism \(\mu_{m}\) is generically finite, whereas the morphisms \(u_{m}\) and \(u_{\infty}\) are birational.
Thus, for all sufficiently large positive integers
\[m=bp^{\ell_{0}}+cq^{\ell_{0}}\gg 0,\]
the proper linear sections of \(X\), with respect to \(L\), _stabilize_ in the sense that each linear section of \(X\) with respect to \(|L^{\otimes m}|\) can be described as
\[\pi_{m}(\phi_{m}^{\prime\,-1}(T))=u_{\infty}(u_{m}^{-1}(\phi_{m}^{\prime\,-1}(T))),\]
for some proper linear subspace \(T\subsetneq\mathbb{P}_{\mathbf{K}}^{n_{m}}\).
|
2310.04661
|
Integrable systems on rectangular $\mathcal{W}$-superalgebras via super
Adler-type operators
|
In this paper, we introduce a class of super Adler-type operators associated
with the Lie superalgebra $\mathfrak{gl}(m|n)$. We show that these operators
generate Poisson vertex superalgebras which are isomorphic to the classical
$\mathcal{W}$-superalgebras associated with $\mathfrak{gl}(m|n)$ and some
rectangular nilpotent elements. We use this isomorphism to construct integrable
hierarchies on these rectangular $\mathcal{W}$-superalgebras.
|
Sylvain Carpentier, Gahng Sahn Lee, Uhi Rinn Suh
|
2023-10-07T02:53:35Z
|
http://arxiv.org/abs/2310.04661v1
|
# Integrable systems on rectangular \(\mathcal{W}\)-superalgebras
###### Abstract.
In this paper, we introduce a class of super Adler-type operators associated with the Lie superalgebra \(\mathfrak{gl}(m|n)\). We show that these operators generate Poisson vertex superalgebras which are isomorphic to the classical \(\mathcal{W}\)-superalgebras associated with \(\mathfrak{gl}(m|n)\) and some rectangular nilpotent elements. We use this isomorphism to construct integrable hierarchies on these rectangular \(\mathcal{W}\)-superalgebras.
S. Carpentier thanks the National Research Foundation of Korea (NRF) for the grant funded by the Korean government (MSIT) (No. 2020R1A5A1016126). G.S. Lee and U.R. Suh thank the NRF for the grant #2022R1C1C1008698 and Seoul National University for the Creative-Pioneering Researchers Program.
where the elements \(E_{\alpha}\) form a basis of the Borel subalgebra generated by \(e_{i}\) and \(h_{i}\) and \(\mathcal{V}(\mathfrak{b})\) is the algebra of differential polynomials generated by the indeterminates \(q_{\alpha}\). They constructed integrable hierarchies on the algebra \(\mathcal{V}(\mathfrak{b})\) and showed that these are Hamiltonian for each of the following Lie brackets defined on the quotient spaces \(\mathcal{V}(\mathfrak{b})/\partial\mathcal{V}(\mathfrak{b})\)
\[\big{[}\,\int f\,,\,\int g\,\big{]}_{H}\quad\text{and}\quad\big{[}\,\int f\,,\,\int g\,\big{]}_{K},\qquad f,g\in\mathcal{V}(\mathfrak{b}).\]
\(i,j,k\in I\). If \(m\) is the number of even elements in \(\mathcal{B}\) and \(n=|I|-m\), it is clear that \(\mathfrak{gl}(I)\) is isomorphic to \(\mathfrak{gl}(m|n)\). To this data, we attach the even matrix differential operator
\[L(\partial)=\sum_{i\in I}e_{ii}\otimes\partial^{N}+\sum_{M=0}^{N-1}\sum_{i,j \in I}e_{ij}\otimes u_{M,ij}\partial^{M}\quad\in\mathfrak{gl}(I)\otimes \mathcal{V}_{I}^{N}, \tag{1.8}\]
where the \(u_{M,ij}\)'s are generators of a superalgebra of differential polynomials \(\mathcal{V}_{I}^{N}\), and the elements \(e_{ij}\) form the natural basis of \(\mathfrak{gl}(I)\). Note that both \(e_{ij}\) and \(u_{M,ij}\) have parity \(\bar{i}+\bar{j}\). We prove in Section 3.3 that the following identity defines a unique PVsA structure on \(\mathcal{V}_{I}^{N}\)
\[\begin{split}\{L_{ij}(z)_{\lambda}L_{hk}(w)\}=&(- 1)^{\bar{i}\bar{j}+\bar{i}\bar{h}+\bar{j}\bar{h}}L_{hi}(w+\lambda+\partial)(z- w-\lambda-\partial)^{-1}(L_{ik})^{*}(\lambda-z)(1)\\ &-(-1)^{\bar{i}\bar{j}+\bar{i}\bar{h}+\bar{j}\bar{h}}L_{hj}(z)( z-w-\lambda-\partial)^{-1}L_{ik}(w)(1),\end{split} \tag{1.9}\]
for \(i,j,h,k\in I\). We call this identity the _super Adler identity_ which is a super analogue of [1].
We show in Proposition 3.9 that the \(\lambda\)-bracket (1.9) on the PVsA \(\mathcal{V}_{I}^{N}\) is connected to the following quadratic super Gelfand-Dickey Lie bracket
\[\int\{f_{\,\lambda}\,g\}_{\lambda=0}=\text{ Res }\text{str}\int(L\star\frac{\delta f}{\delta L})_{+}\star L \star\frac{\delta g}{\delta L}-L\star(\frac{\delta f}{\delta L}\star L)_{+} \star\frac{\delta g}{\delta L} \tag{1.10}\]
on \(\mathcal{V}_{I}^{N}/\partial\mathcal{V}_{I}^{N}\), for \(L\) in (1.8) and \(f,g\in\mathcal{V}_{I}^{N}.\) In this formula, the supertrace \(\text{str}\) is defined by \(\text{str}(e_{ii})=(-1)^{\bar{i}}\) and the \(\star\)-product is given by
\[(a\otimes v)\star(b\otimes w):=(-1)^{\bar{b}\bar{v}+\bar{a}\bar{b}}ab\otimes vw. \tag{1.11}\]
Additionally, this \(\lambda\)-bracket can be deformed by adding constant multiples of the identity matrix to \(L\). The deformed bracket \([\ _{\lambda}\ ]\) reduces on the quotient \(\mathcal{V}_{I}^{N}/\partial\mathcal{V}_{I}^{N}\) to the linear super Gelfand-Dickey Lie bracket
\[\int[f_{\,\lambda}\,g]_{\lambda=0}=\text{ Res }\text{str}\int\Big{(}L\star\frac{ \delta f}{\delta L}-\frac{\delta f}{\delta L}\star L\Big{)}_{+}\star\frac{ \delta g}{\delta L}. \tag{1.12}\]
More generally, an _Adler-type PVsA_ is defined to be a PVsA which is generated as a differential superalgebra by the coefficients of some matrix pseudo-differential operator and whose \(\lambda\)-brackets between generators are given by the super Adler identity for that operator. We call such an operator \(A(\partial)\) a _super Adler-type operator_ (sATO). We prove in Theorem 3.7 that if \(\mathcal{V}\) is a differential superalgebra generated by the coefficients of some even matrix pseudo-differential operator \(A(\partial)\) and if the super Adler identity (1.9) is well-defined, then the axioms of a PVsA are satisfied.
In Section 4, we connect Adler-type PVsAs to the theory of classical \(\mathcal{W}\)-superalgebras. After noting that when \(N=1\) the corresponding Adler-type PVsA is the affine PVsA \(\mathcal{V}^{-1}(\mathfrak{gl}(m|n))\) of level \(-1\), we prove the following theorem (Theorem 4.8 in Section 4).
**Theorem 1.1**.: _The differential superalgebra \(\mathcal{V}_{I}^{N}\) endowed with the \(\lambda\)-bracket (1.9) is isomorphic to the classical \(\mathcal{W}\)-superalgebra \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\) of level \(-1\) associated with the so-called \(N\times(m|n)\) rectangular nilpotent element \(f\)._
To prove Theorem 1.1, we consider an \((Nm|Nn)\times(Nm|Nn)\) matrix of the form
\[P(\partial)=\left[\begin{array}{ccccc}\mathbb{1}_{(m|n)}\partial+P_{[11]}& \mathbb{1}_{(m|n)}&0&\cdots&0\\ P_{[21]}&\mathbb{1}_{(m|n)}\partial+P_{[22]}&\mathbb{1}_{(m|n)}&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ P_{[N-1\,1]}&P_{[N-1\,2]}&P_{[N-1\,3]}&\cdots&\mathbb{1}_{(m|n)}\\ P_{[N1]}&P_{[N2]}&P_{[N3]}&\cdots&\mathbb{1}_{(m|n)}\partial+P_{[NN]}\end{array} \right], \tag{1.13}\]
where the \(P_{[i\beta]}\)'s are even matrices of size \((m|n)\times(m|n)\) whose coefficients are the generators of some superalgebra of differential polynomials. Next, we take the last \((m+n)\) rows and first \((m+n)\) columns of \(P(\partial)\) and define the corresponding _quasi-determinant_ matrix \(\operatorname{qdet}(P(\partial))\in\mathfrak{gl}(m|n)\otimes\mathcal{V}(\partial)\). We emphasize that the quasi-determinant is defined via the \(\star\)-product (1.11). Finally, we show in Theorem 4.9 that the generators of the \(\mathcal{W}\)-superalgebra \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\) are given by the coefficients of this quasi-determinant.
In Section 5, we construct integrable hierarchies on these rectangular \(\mathcal{W}\)-superalgebras. By Theorem 1.1, it is equivalent to build an integrable hierarchy on an Adler-type PVsA \(\mathcal{V}_{I}^{N}\) associated with \(L(\partial)\) in (1.8). The main ingredient in this section is the infinite family of Hamiltonians
\[h_{k}=\frac{N}{k}\mathrm{Res\;str}L^{\frac{k}{N}}(\partial)\in\mathcal{V}_{I}^ {N}, \tag{1.14}\]
for a positive integer \(k\). A key point is that the fractional power \(L^{\frac{k}{N}}(\partial)\) is computed with respect to a different product
\[(a\otimes v)\circ(b\otimes w):=(-1)^{\tilde{b}\tilde{v}}ab\otimes vw \tag{1.15}\]
on \(\mathfrak{gl}(I)\otimes\mathcal{V}_{I}^{N}\), while the super Gelfand-Dickey brackets (1.10) and (1.12) can only be expressed using the \(\star\)-product (1.11). We check in Lemmas 5.5 and 5.6 that \(\{h_{k\,\lambda}\,u\}\,|_{\lambda=0}=\{h_{k+N\,\lambda}\,u\}\,|_{\lambda=0}\) for a positive integer \(k\) and \(\{h_{k^{\prime}\,\lambda}\,u\}\,|_{\lambda=0}=0\) for \(k^{\prime}=1,2,\cdots,N\). Finally, by the Lenard-Magri scheme, we obtain the following theorem.
**Theorem 1.2** (Proposition 5.7 and Theorem 5.8).: _Let \(f\) be the \(N\times(m|n)\) rectangular nilpotent in the Lie superalgebra \(\mathfrak{gl}(Nm|Nn)\). For positive integers \(k\), the family of equations_
\[\frac{du}{dt_{k}}=\{h_{k\;\lambda}\,u\}\,|_{\lambda=0}\quad\Leftrightarrow \quad\frac{dL}{dt_{k}}=(L^{\frac{k}{N}})_{+}\circ L-L\circ(L^{\frac{k}{N}})_{+} \tag{1.16}\]
_is an integrable system on \(\mathcal{V}_{I}^{N}\simeq\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\). In addition, the local functionals \(\int h_{k^{\prime}}\) for positive integers \(k^{\prime}\) are integrals of motion for any equation in (1.16)._
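As a simple consistency check, for \(k=N\) the fractional power is \(L^{\frac{N}{N}}=L\) itself; since \(L(\partial)\) is a differential operator, \((L)_{+}=L\) and the right-hand side of the Lax equation in (1.16) vanishes, in agreement with the fact, noted above, that \(\{h_{k^{\prime}\,\lambda}\,u\}\,|_{\lambda=0}=0\) for \(k^{\prime}=1,\ldots,N\).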
This system is a specialization of noncommutative KdV [10] to the algebra \((\mathfrak{gl}(I)\otimes\mathcal{V}_{I}^{N},\,\circ)\). Finally, we stress that for the construction of the Hamiltonians and for the Lax formulation of the hierarchy, the \(\circ\)-product must be used, while the super Gelfand-Dickey brackets (1.10) and (1.12) can only be expressed using the \(\star\)-product.
## 2. Preliminaries
In this section, we present an overview of some basic notions used throughout the paper. A _vector superspace_ is a \(\mathbb{Z}/2\mathbb{Z}\)-graded vector space \(V=V_{\bar{0}}\oplus V_{\bar{1}}\). We only consider vector superspaces and tensor products over \(\mathbb{C}\), the field of complex numbers. A nonzero element \(a\in V\) is called _homogeneous_ if it belongs either to \(V_{\bar{0}}\) or to \(V_{\bar{1}}\), and each homogeneous element \(a\in V_{\bar{i}}\) is assigned a _parity_ \(\tilde{a}=i\in\{0,1\}\). The direct summand \(V_{\bar{0}}\) (resp. \(V_{\bar{1}}\)) of \(V\) is called the _even_ (resp. _odd_) subspace of \(V\), and an element in \(V_{\bar{0}}\) (resp. \(V_{\bar{1}}\)) is said to be _even_ (resp. _odd_). A _superalgebra_ \((\mathcal{A},\cdot)\) consists of a vector superspace \(\mathcal{A}\) and a bilinear product on it satisfying \(\mathcal{A}_{\bar{i}}\cdot\mathcal{A}_{\bar{j}}\subseteq\mathcal{A}_{\bar{i}+ \bar{j}}\) for \(i,j\in\{0,1\}\). For two vector superspaces \(V\) and \(W\), \(\mathrm{Hom}(V,W)\) is also a vector superspace by letting \(\phi\in\mathrm{Hom}(V,W)_{\bar{i}}\) if and only if \(\phi(V_{\bar{j}})\subseteq W_{\bar{i}+\bar{j}}\). For the sake of simplicity, it is assumed that every element is homogeneous unless stated otherwise.
### Poisson vertex superalgebras
A _Lie superalgebra_\(\mathfrak{g}=\mathfrak{g}_{\bar{0}}\oplus\mathfrak{g}_{\bar{1}}\) is a vector superspace equipped with an even linear map \([\,\ ]:\mathfrak{g}\otimes\mathfrak{g}\rightarrow\mathfrak{g}\) which satisfies
(skew-symmetry) \([a,b]=-(-1)^{\tilde{a}\tilde{b}}[b,a]\),
(Jacobi identity) \([a,[b,c]]=[[a,b],c]+(-1)^{\tilde{b}\tilde{c}}[b,[a,c]]\),
for \(a,b,c\in\mathfrak{g}\). The space of endomorphisms \(\mathrm{End}(V)\) on a vector superspace \(V=V_{\bar{0}}\oplus V_{\bar{1}}\) is one of the fundamental examples of Lie superalgebras whose bracket is defined by \([\phi,\psi]:=\phi\psi-(-1)^{\tilde{\phi}\tilde{\psi}}\psi\phi\) for \(\phi,\psi\in\mathrm{End}(V)\). It is called the _general linear Lie superalgebra_\(\mathfrak{gl}(V)\). For any integers \(m,n\geq 0\), the general linear Lie superalgebra \(\mathfrak{gl}(m|n)\) is given by \(\mathfrak{gl}(\mathbb{C}^{(m|n)})\), where \(\mathbb{C}^{(m|n)}\) is the vector superspace with an ordered basis whose first \(m\) elements are even and last \(n\) elements are odd.
Let \(\mathcal{V}\) be a _differential superalgebra_, i.e. a superalgebra endowed with an even derivation \(\partial:\mathcal{V}\rightarrow\mathcal{V}\). We assume that \(\mathcal{V}\) is associative and supercommutative. The derivation can be naturally extended to the algebra \(\mathcal{V}[\lambda]\) of polynomials in the even indeterminate \(\lambda\). An even linear map \(\{\ _{\lambda}\ \}:\mathcal{V}\otimes\mathcal{V}\rightarrow\mathcal{V}[\lambda]\) satisfying the following conditions
(sesquilinearity) \(\{\partial a_{\lambda}b\}=-\lambda\{a_{\lambda}b\}\) and \(\{a_{\lambda}\partial b\}=(\lambda+\partial)\{a_{\lambda}b\}\),
(left Leibniz rule)\(\{a_{\lambda}bc\}=\{a_{\lambda}b\}c+(-1)^{\tilde{a}\tilde{b}}b\{a_{ \lambda}c\}\),
(right Leibniz rule) \(\{ab_{\lambda}c\}=(-1)^{\tilde{b}\tilde{c}}\{a_{\lambda+\partial}c\}_{ \rightarrow}b+(-1)^{\tilde{a}(\tilde{b}+\tilde{c})}\{b_{\lambda+\partial}c\}_ {\rightarrow}a\),
for \(a,b,c\in\mathcal{V}\), is called a \(\lambda\)-_bracket_ on \(\mathcal{V}\). We use the notation \(\{a_{\lambda}b\}=\sum_{n\in\mathbb{Z}_{+}}\lambda^{n}a_{(n)}b\). In the right Leibniz rule, the right arrow means that the operator \(\partial\) acts only on \(b\). More precisely, \(\{a_{\lambda+\partial}c\}_{\rightarrow}b:=\sum_{n\in\mathbb{Z}_{+}}a_{(n)}c \,(\lambda+\partial)^{n}b\).
**Definition 2.1**.: _A Poisson vertex superalgebra (PVsA) is a differential superalgebra \((\mathcal{V},\partial)\) with a \(\lambda\)-bracket \(\{\ _{\lambda}\ \}\) satisfying the following two additional axioms:_
(skew-symmetry) \(\{a_{\lambda}b\}=-(-1)^{\tilde{a}\tilde{b}}\{b_{-\lambda-\partial}a\}\)_,_
(Jacobi identity) \(\{a_{\lambda}\{b_{\mu}c\}\}=\{\{a_{\lambda}b\}_{\lambda+\mu}c\}+(-1)^{ \tilde{a}\tilde{b}}\{b_{\mu}\{a_{\lambda}c\}\}\)_,_
_for \(a,b,c\in\mathcal{V}\). In the skew-symmetry, \(\{b_{-\lambda-\partial}a\}:=\sum_{n\in\mathbb{Z}_{+}}(-\lambda-\partial)^{n}\,b _{(n)}a\) and the Jacobi identity axiom holds in \(\mathcal{V}[\lambda,\mu]\). Note that the sesquilinearity axioms (resp. left and right Leibniz rules) are compatible with the skew-symmetry axiom._
Now, let us present some useful results on \(\lambda\)-brackets. Consider a vector superspace \(U\). By the Leibniz rule, there is a unique derivation on the superalgebra \(\mathcal{V}(U)=S(\mathbb{C}[\partial]\otimes U)\) extending the action of \(\partial\) on the \(\mathbb{C}[\partial]\)-module \(\mathbb{C}[\partial]\otimes U\). Recall that the supersymmetric algebra of a vector superspace \(V\) is \(S(V):=S(V_{\bar{0}})\otimes\bigwedge(V_{\bar{1}})\), where \(S(V_{\bar{0}})\) is the symmetric algebra of \(V_{\bar{0}}\) and \(\bigwedge(V_{\bar{1}})\) is the exterior algebra of \(V_{\bar{1}}\). If we fix a homogeneous basis \(\{u_{i}\}_{i\in I}\) of \(U\), then as a differential superalgebra
\[\mathcal{V}(U)=\mathbb{C}[u_{i}^{(n)}|i\in I,\,n\in\mathbb{Z}_{+}], \tag{2.1}\]
where the even derivation \(\partial\) is given by \(u_{i}^{(n)}=\partial^{n}u_{i}\) for \(n\in\mathbb{Z}_{+}\). The partial derivatives \((\frac{\partial}{\partial u_{i}^{(m)}})_{i\in I,m\geq 0}\) are the derivations of \(\mathcal{V}(U)\) of the same parity as \(u_{i}\) and are defined by \(\frac{\partial}{\partial u_{i}^{(m)}}u_{j}^{(n)}=\delta_{ij}\delta_{mn}\), for \(i\in I\) and \(m\in\mathbb{Z}_{+}\).
**Theorem 2.2** ([14, 15]).: _Let \(U\) be a vector superspace and \(\mathcal{V}(U)\) be the superalgebra of differential polynomials given in (2.1) endowed with a \(\lambda\)-bracket \(\{\ \lambda\ \}\). Then the following properties hold._
(a) _(master formula) For \(f,g\in\mathcal{V}(U)\), we have_
\[\{f_{\lambda}g\}=\sum_{i,j\in I,m,n\in\mathbb{Z}_{\geq 0}}C_{i,j}^{f,g}\frac{ \partial g}{\partial u_{j}^{(n)}}(\lambda+\partial)^{n}\{u_{i\lambda+\partial }u_{j}\}_{\rightarrow}(-\lambda-\partial)^{m}\frac{\partial f}{\partial u_{i }^{(m)}}, \tag{2.2}\]
_where \(C_{i,j}^{f,g}=(-1)^{\tilde{f}\tilde{g}+\tilde{u}_{i}\tilde{u}_{j}+\tilde{g}\tilde{u}_{j}}\)._
(b) _The_ \(\lambda\)_-bracket satisfies the skew-symmetry axiom if and only if_
\[\{u_{i\lambda}u_{j}\}=-(-1)^{\tilde{u}_{i}\tilde{u}_{j}}\{u_{j-\lambda- \partial}u_{i}\} \tag{2.3}\]
_holds for any_ \(i,j\in I\)_._
(c) _Assume that the_ \(\lambda\)_-bracket satisfies the skew-symmetry axiom. Then the differential superalgebra_ \(\mathcal{V}(U)\) _endowed with this_ \(\lambda\)_-bracket is a PVsA, provided that_
\[\{u_{i\lambda}\{u_{j\mu}u_{k}\}\}=\{\{u_{i\lambda}u_{j}\}_{\lambda+\mu}u_{k}\} +(-1)^{\tilde{u}_{i}\tilde{u}_{j}}\{u_{j\mu}\{u_{i\lambda}u_{k}\}\} \tag{2.4}\]
_holds for any_ \(i,j,k\in I\)_._
Proof.: For a detailed proof, refer to Theorem 1.15 in [14] and Propositions 2.4 and 2.5 in [15].
In other words, Theorem 2.2 (a) says that a \(\lambda\)-bracket on a superalgebra of differential polynomials \(\mathcal{P}\) is completely determined by its values on pairs of elements in a generating set \(S\). Moreover, in order to see if the \(\lambda\)-bracket on \(\mathcal{P}\) is a PVsA \(\lambda\)-bracket, it is enough to check the skew-symmetry and Jacobi identity axioms between the elements of \(S\). The following example is one of the most fundamental examples of PVsAs and will be used widely in this paper.
**Example 2.3** (Affine Poisson vertex superalgebra).: _Let \(\mathfrak{g}\) be a Lie superalgebra with an even supersymmetric invariant bilinear form \((\ |\ )\) and let \(k\in\mathbb{C}\). Consider the \(\lambda\)-bracket on \(\mathcal{V}(\mathfrak{g})\) defined by_
\[\{a_{\lambda}b\}=[a,b]+k\lambda(a|b)\quad\text{for}\quad a,b\in\mathfrak{g} \tag{2.5}\]
_and Theorem 2.2 (a). We can also check the skew-symmetry and Jacobi identity axioms by Theorem 2.2 (b) and (c). The differential superalgebra \(\mathcal{V}(\mathfrak{g})\) endowed with the bracket (2.5) is called the affine Poisson vertex superalgebra (affine PVsA) associated with \(\mathfrak{g}\). To emphasize the role of \(k\) in the definition of the bracket (2.5), we sometimes denote the affine PVsA by \(\mathcal{V}^{k}(\mathfrak{g})\) and call \(k\) its level._
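For instance, combining (2.5) with the sesquilinearity axiom gives, for \(a,b\in\mathfrak{g}\),
\[\{a_{\lambda}\partial b\}=(\lambda+\partial)\{a_{\lambda}b\}=\partial([a,b])+\lambda[a,b]+k\lambda^{2}(a|b),\]
since the scalar \((a|b)\in\mathbb{C}\) is annihilated by \(\partial\).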
### Classical \(\mathcal{W}\)-superalgebras
Let \(\mathfrak{g}\) be a basic classical Lie superalgebra and \(f\) be an even nilpotent element in an \(\mathfrak{sl}_{2}\)-triple \((e,h,f)\) in \(\mathfrak{g}\). Consider the nondegenerate invariant supersymmetric even bilinear form \((\ |\ )\) on \(\mathfrak{g}\) normalized by \((e|f)=\frac{1}{2}(h|h)=1\) and the \(\frac{1}{2}\mathbb{Z}\)-graded decomposition \(\mathfrak{g}=\bigoplus_{i\in\frac{1}{2}\mathbb{Z}}\mathfrak{g}_{i}\), where \(\mathfrak{g}_{i}=\{\,a\in\mathfrak{g}\,|\,[h,a]=2ia\,\}.\) We often denote subspaces of \(\mathfrak{g}\) by \(\mathfrak{g}_{\geq j}=\bigoplus_{i\geq j}\mathfrak{g}_{i}\), \(\mathfrak{g}_{\leq j}=\bigoplus_{i\leq j}\mathfrak{g}_{i}\) and \(\mathfrak{g}_{<j}=\bigoplus_{i<j}\mathfrak{g}_{i}.\) In particular, we let
\[\mathfrak{n}:=\mathfrak{g}_{\geq\frac{1}{2}},\qquad\mathfrak{p}:=\mathfrak{g}_{< 1}. \tag{2.6}\]
Define the differential superalgebra homomorphism \(\rho:\mathcal{V}(\mathfrak{g})\rightarrow\mathcal{V}(\mathfrak{p})\) determined by \(a\mapsto\pi_{\mathfrak{p}}(a)-(f|a)\) for \(a\in\mathfrak{g}\), where \(\pi_{\mathfrak{p}}:\mathfrak{g}\rightarrow\mathfrak{p}\) is the canonical projection map. Extend the map \(\rho\) to the linear map
\(\rho:\mathcal{V}(\mathfrak{g})[\lambda]\to\mathcal{V}(\mathfrak{p})[\lambda]\) such that \(\rho(A\lambda^{n})=\rho(A)\lambda^{n}\) for \(A\in\mathcal{V}(\mathfrak{g})\) and consider the subspace of \(\mathcal{V}(\mathfrak{p})\):
\[\mathcal{W}^{k}(\mathfrak{g},f)=\{a\in\mathcal{V}(\mathfrak{p})|\,\rho(\{n_{ \lambda}a\}_{\mathrm{Aff}})=0\text{ for all }n\in\mathfrak{n}\}, \tag{2.7}\]
where \(\{\ _{\lambda}\ \}_{\mathrm{Aff}}\) is the \(\lambda\)-bracket of the affine PVsA \(\mathcal{V}^{k}(\mathfrak{g})\) in Example 2.3. One can show that \(\mathcal{W}^{k}(\mathfrak{g},f)\) is a differential subalgebra of \(\mathcal{V}(\mathfrak{p})\) and it can be given a PVsA structure for the \(\lambda\)-bracket defined by
\[\{A_{\lambda}B\}:=\rho(\{A_{\lambda}B\}_{\mathrm{Aff}}) \tag{2.8}\]
for \(A,B\in\mathcal{W}^{k}(\mathfrak{g},f)\). The PVsA \(\mathcal{W}^{k}(\mathfrak{g},f)\) is called the _classical \(\mathcal{W}\)-superalgebra associated with \(\mathfrak{g}\) and \(f\)_. For later usage, we note that if the images \(\rho(A),\rho(B)\) of \(A,B\in\mathcal{V}(\mathfrak{g})\) are both in \(\mathcal{W}^{k}(\mathfrak{g},f)\), then
\[\{\rho(A)_{\lambda}\rho(B)\}=\rho(\{\rho(A)_{\lambda}\rho(B)\}_{\mathrm{Aff}}) =\rho(\{A_{\lambda}B\}_{\mathrm{Aff}}). \tag{2.9}\]
The first equality follows from (2.8), and the second equality follows from Corollary 3.3 [13] and Proposition 3.7 [16].
**Proposition 2.4** ([13, 14]).: _Let \(\{v_{i}|i\in I\}\) be a basis of \(\mathfrak{g}^{f}:=\ker(\mathrm{ad}f)\subset\mathfrak{g}\). Then there is a free generating set \(\{w_{i}|i\in I\}\) of the \(\mathcal{W}\)-superalgebra \(\mathcal{W}^{k}(\mathfrak{g},f)\) as a differential superalgebra satisfying_
\[w_{i}-v_{i}\in\partial(\mathbb{C}[\partial]\otimes\mathfrak{p})\oplus\bigoplus _{m\geq 2}(\mathbb{C}[\partial]\otimes\mathfrak{p})^{\otimes m} \tag{2.10}\]
_for \(i\in I\). Moreover, if a subset \(\{w_{i}|i\in I\}\subset\mathcal{W}^{k}(\mathfrak{g},f)\) consists of elements which (i) satisfy the property (2.10) and (ii) are homogeneous with respect to the conformal grading, then it is a free generating set. The conformal grading \(\Delta\) on the differential superalgebra \(\mathcal{V}(\mathfrak{p})\) is defined inductively by_
\[\Delta_{a}=1-j_{a},\quad\Delta_{\partial A}=\Delta_{A}+1,\quad\Delta_{AB}= \Delta_{A}+\Delta_{B}, \tag{2.11}\]
_where \(a\in\mathfrak{p}\cap\mathfrak{g}_{j_{a}}\) and \(A,B\in\mathcal{V}(\mathfrak{p})\) are homogeneous elements for that grading._
Proof.: We refer to Proposition 3.12 [16], and Proposition 2.10 and Remark 2.11 [16].
There are several known methods to describe a generator subset of \(\mathcal{W}^{k}(\mathfrak{g},f)\) explicitly. In particular, when \(\mathfrak{g}\) is a classical finite simple Lie algebra, a generating set can be obtained by the so-called _generalized quasi-determinants_ and _generalized Adler-type operators_[13]. In [13], this method has proved crucial in constructing integrable systems on the corresponding \(\mathcal{W}\)-algebras as it connects the \(\lambda\)-brackets to the Gelfand-Dickey theory. As discussed in the introduction, our goal is to develop an analogous method for \(\mathcal{W}\)-superalgebras. Let us give an example of the quasi-determinant method for \(\mathcal{W}^{k}(\mathfrak{g}\mathfrak{l}_{3},f)\) when \(f\) is the principal nilpotent element.
**Example 2.5**.: _Let \(\mathfrak{g}=\mathfrak{gl}_{3}\) and \(f=e_{21}+e_{32}\). Consider the matrix with entries in \(\mathbb{C}[\partial]\ltimes\mathcal{V}(\mathfrak{p})\):_
\[\mathcal{L}=\left[\begin{array}{ccc}k\partial+q_{11}&q_{21}&q_{31}\\ -1&k\partial+q_{22}&q_{32}\\ 0&-1&k\partial+q_{33}\end{array}\right]. \tag{2.12}\]
_Recall that \(\mathcal{V}(\mathfrak{p})\) is the differential algebra of polynomials in the indeterminates \(q_{ij}\). Consider the quasi-determinant of \(\mathcal{L}\) with respect to the top right corner \(e_{13}\) and denote its coefficients by \(w_{1},w_{2},w_{3}\in\mathcal{V}(\mathfrak{p})\):_
\[|\mathcal{L}|_{13}=q_{31}-[k\partial+q_{11}\ q_{21}]\left[\begin{array}{cc}-1 &-k\partial-q_{22}\\ 0&-1\end{array}\right]\left[\begin{array}{c}q_{32}\\ k\partial+q_{33}\end{array}\right]=:k^{3}\partial^{3}+w_{1}k^{2}\partial^{2}+ w_{2}k\partial+w_{3}. \tag{2.13}\]
_The set \(\{w_{1},w_{2},w_{3}\}\) generates the classical \(\mathcal{W}\)-algebra \(\mathcal{W}^{k}(\mathfrak{g},f)\). In particular, we have \(w_{1}=q_{11}+q_{22}+q_{33}\) and \(w_{2}=q_{21}+q_{32}+q_{11}q_{22}+q_{11}q_{33}+q_{22}q_{33}+k\partial(q_{22})+2k \partial(q_{33})\)._
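For the reader's convenience, we recall the general quasi-determinant formula (in the sense of Gelfand-Retakh) being used here: for a square matrix \(A=(a_{ij})\) over a possibly noncommutative ring, whenever the submatrix \(A^{ij}\) obtained by deleting the \(i\)-th row and the \(j\)-th column is invertible, one sets
\[|A|_{ij}=a_{ij}-r_{i}^{j}\,(A^{ij})^{-1}\,c_{j}^{i},\]
where \(r_{i}^{j}\) is the \(i\)-th row of \(A\) with its \(j\)-th entry removed and \(c_{j}^{i}\) is the \(j\)-th column of \(A\) with its \(i\)-th entry removed. The expression (2.13) is exactly this formula for the entry \((1,3)\) of \(\mathcal{L}\), with the inverse of the relevant \(2\times 2\) submatrix written out explicitly.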
The following example is the simplest classical \(\mathcal{W}\)-superalgebra with nontrivial odd part. In this case, we can find generators using generalized Drinfeld-Sokolov reduction (See Example 3.15 in [11]).
**Example 2.6**.: _Let \(\mathfrak{g}=\mathfrak{osp}(1|2)\) and \((e,h,f)\) be the unique \(\mathfrak{sl}_{2}\)-triple in \(\mathfrak{g}\). Additionally, there are two independent odd elements \(f_{\text{od}}\in\mathfrak{g}_{-1/2}\) and \(e_{\text{od}}\in\mathfrak{g}_{1/2}\). After a proper normalization, we have \([f,e_{\text{od}}]=f_{\text{od}}\), \([e,f_{\text{od}}]=e_{\text{od}}\), \([f_{\text{od}},f_{\text{od}}]=-2f\), \([e_{\text{od}},e_{\text{od}}]=2e\), and \([e_{\text{od}},f_{\text{od}}]=-h\). Then the two elements,_
\[w_{1}=f_{\text{od}}-\frac{1}{2}e_{\text{od}}h-k\partial(e_{\text{ od}})\text{ \ and }\] \[w_{2}=f+\frac{1}{2}f_{\text{od}}e_{\text{od}}-\frac{1}{4}h^{2}+ \frac{1}{4}e_{\text{od}}\partial(e_{\text{od}})-\frac{1}{2}k\partial(h)\]
_generate \(\mathcal{W}^{k}(\mathfrak{g},f)\). Moreover, \(\mathcal{W}^{k}(\mathfrak{g},f)\) is known to be isomorphic to the Neveu-Schwarz PVA._
The last example in this section is one of the main ingredients of this paper: a classical \(\mathcal{W}\)-superalgebra associated with the algebra \(\mathfrak{gl}(Nm|Nn)\) where \(N,m,n\geq 2\) and its _rectangular nilpotent_\(f\). Explicitly, \(f\) is the even nilpotent element in \(\mathfrak{gl}(Nm|Nn)\) such that its corresponding Jordan blocks are parameterized by the partition \((\underbrace{N,\cdots,N}_{m-\text{copies}}|\underbrace{N,\cdots,N}_{n-\text{ copies}})\). We denote this partition by \(N\times(m|n)\) (see Definition 3.4 [15]).
**Example 2.7**.: _Let \(\mathfrak{g}=\mathfrak{gl}(Nm|Nn)\) for three positive integers \(m,n,N\) such that \(N\geq 2\). (i) After a proper change of basis, we get the following presentation of \(\mathfrak{gl}(Nm|Nn)\) :_
\[\mathfrak{gl}(Nm|Nn)=\left\{\left[\begin{array}{cccc}A_{[11]}&A_{[12]}&\cdots &A_{[1N]}\\ A_{[21]}&A_{[22]}&\cdots&A_{[2N]}\\ \vdots&\vdots&\ddots&\vdots\\ A_{[N1]}&A_{[N2]}&\cdots&A_{[NN]}\end{array}\right]\right|\text{ }A_{[ij]}\in \mathfrak{gl}(m|n)\text{ for }i,j=1,2,\cdots,N\text{ }\right\} \tag{2.14}\]
_(ii) Take \(f\in\mathfrak{gl}(Nm|Nn)\) by letting \(A_{[21]}=A_{[32]}=\cdots=A_{[N\,N-1]}=\mathbbm{1}_{(m|n)}\), where \(\mathbbm{1}_{(m|n)}\) is the identity matrix in \(\mathfrak{gl}(m|n)\), and \(A_{[ij]}=0\) otherwise. Using the matrix presentation in (2.14),_
\[f=\left[\begin{array}{ccccc}0&0&0&\cdots&0\\ \mathbbm{1}_{(m|n)}&0&0&\cdots&0\\ 0&\mathbbm{1}_{(m|n)}&0&\cdots&0\\ \vdots&\vdots&&\ddots&\vdots\\ 0&0&0&\cdots&0\end{array}\right]. \tag{2.15}\]
_Then the classical \(\mathcal{W}\)-superalgebra \(\mathcal{W}^{k}(\mathfrak{g},f)\) is freely generated by \(N\cdot(m+n)^{2}\) elements and it is called a (classical) rectangular \(\mathcal{W}\)-superalgebra. We will omit the term "classical" from now on._
### Integrable Hamiltonian systems
In this section, we briefly review some notions on integrable Hamiltonian systems associated with a PVsA.
**Definition 2.8**.: _Let \(\mathcal{P}\) be a PVsA with a \(\lambda\)-bracket \(\{\,{}_{\lambda}\,\}\). A Hamiltonian equation associated with \(\mathcal{P}\) is an evolutionary equation_
\[\frac{d\phi}{dt}=\{\,h\,_{\lambda}\,\phi\,\}\Big{|}_{\lambda=0} \tag{2.16}\]
_for some \(h\in\mathcal{P}_{\bar{0}}\). Note that by the Leibniz rule, the equation (2.16) uniquely defines a derivation of \(\mathcal{P}\)._
Consider the quotient space \(\int\mathcal{P}:=\mathcal{P}/\partial\mathcal{P}\). The image of \(a\in\mathcal{P}\) in \(\int\mathcal{P}\) is denoted by \(\int a\) and called a _local functional_. By the skew-symmetry and the Jacobi identity of PVsA, the space \(\mathcal{P}/\partial\mathcal{P}\) of local functionals is a Lie superalgebra for the bracket:
\[[\,\int f\,,\,\int g\,]:=\int\{f\,_{\lambda}\,g\}\,\big{|}_{\lambda=0}\quad\text{ for}\quad f,g\in\mathcal{P}. \tag{2.17}\]
A local functional \(\int f\in\int\mathcal{P}\) is an _integral of motion_ of the Hamiltonian equation (2.16) if and only if
\[[\int f,\int h]=0. \tag{2.18}\]
Moreover by the PVsA axioms, we have
\[[\int f,\int h]=0\iff\text{the derivations}\ \{f\,_{\lambda}\,\cdot\}|_{\lambda=0} \text{ and }\{h\,_{\lambda}\,\cdot\}|_{\lambda=0}\text{ of }\mathcal{P}\text{ commute}. \tag{2.19}\]
**Definition 2.9**.: _In this paper, an integrable system is an infinite-dimensional supercommutative Lie subalgebra of \(\mathcal{P}/\partial\mathcal{P}\), or equivalently an infinite family of compatible Hamiltonian equations._
A well-known method of constructing such an abelian subalgebra, or equivalently an infinite family of pairwise commuting derivations of \(\mathcal{P}\), is the so-called _Lenard-Magri scheme_[14].
**Proposition 2.10** (Lenard-Magri scheme).: _Suppose that \(\mathcal{P}\) is a PVsA endowed with two distinct \(\lambda\)-brackets \(\{\ _{\lambda}\ \}_{K}\) and \(\{\ _{\lambda}\ \}_{H}\). If there exists a set of linearly independent even local functionals \(\{\int h_{i}|\,i\in\mathbb{Z}_{+}\}\) such that_
\[[\,\int f\,,\,\int h_{i+1}\,]_{K}=[\,\int f\,,\,\int h_{i}\,]_{H} \tag{2.20}\]
_for any \(f\in\mathcal{P}\) and \(i\in\mathbb{Z}_{+}\), then_
\[\frac{d\phi}{dt_{i}}=\{\,h_{i}\,_{\lambda}\,\phi\,\}_{H}\Big{|}_{\lambda=0}, \ i\in\mathbb{Z}_{+} \tag{2.21}\]
_is an integrable system. Moreover, all the local functionals \(\int h_{i}\) are integrals of motion of the equations (2.21)._
In the case of the affine PVA \(\mathcal{V}^{k}(\mathfrak{gl}_{n})\), there are two distinct \(\lambda\)-brackets \(\{\ _{\lambda}\ \}_{H}\) and \(\{\ _{\lambda}\ \}_{K}\) defined by
\[\{\,a\,_{\lambda}\,b\,\}_{H}=[a,b]+k\lambda(a|b),\quad\{\,a\,_{\lambda}\,b\,\} _{K}=(\mathbb{1}_{n}|[a,b]) \tag{2.22}\]
for any \(a,b\in\mathfrak{gl}_{n}\) and the \(n\times n\) identity matrix \(\mathbb{1}_{n}\). Moreover one can find a sequence \(\int h_{i}\) satisfying the conditions of Proposition 2.10[10]. As for the \(\mathcal{W}\)-superalgebra \(\mathcal{W}^{k}(\mathfrak{g},f)\), besides the \(\lambda\)-bracket \(\{\ _{\lambda}\ \}_{H}\) inherited from the affine PVsA bracket (2.5), there is another \(\lambda\)-bracket induced from the bracket on the differential superalgebra \(\mathcal{V}(\mathfrak{g}):\)
\[\{\,a\,_{\lambda}\,b\,\}_{K}=(s|[a,b])\quad\text{ for }\quad a,b\in \mathfrak{g} \tag{2.23}\]
where \(s\in\ker(\operatorname{ad}\mathfrak{n})\). If \(f+zs\in\mathfrak{g}(\!(z^{-1})\!)\) is semisimple, integrable systems on \(\mathcal{W}^{k}(\mathfrak{g},f)\) can be constructed using a super analogue of the Drinfeld-Sokolov construction ([11], [10]). Finally, let us come back to the Example 2.6.
**Example 2.11** (super KdV).: _Let \(\mathfrak{g}=\mathfrak{osp}(1|2)\) and \(f\) be the even nilpotent element in Example 2.6. Then \(\mathcal{W}^{k}(\mathfrak{g},f)\) is endowed with two \(\lambda\)-brackets. The first \(\{\ _{\lambda}\ \}_{H}\), which is induced from \(\mathcal{V}^{k}(\mathfrak{g})\), and the second \(\{\ _{\lambda}\ \}_{K}\) defined by (2.23) with \(s:=e\). Since \(f+zs\) is semisimple, the methods of [11] can be applied and lead to the so-called super KdV integrable system:_
\[\begin{cases}\,\frac{dw_{1}}{dt}=4k^{3}\partial^{3}w_{1}-6k\partial(w_{1})w_ {2}-3kw_{1}\partial w_{2},\\ \,\frac{dw_{2}}{dt}=-k^{3}\partial^{3}w_{2}-6k\partial(w_{2})w_{2}+12k^{2}w_{ 2}\partial^{2}w_{1},\end{cases} \tag{2.24}\]
_where \(w_{1}\) and \(w_{2}\) are generators of \(\mathcal{W}^{k}(\mathfrak{g},f)\) in Example 2.6. Note that the system of equations (2.24) is equivalent to the derivation_
\[\frac{d\phi}{dt}=\big{\{}\,w_{2}^{2}+4k\partial(w_{1})w_{1}\ _{\lambda}\ \phi\,\big{\}}_{H}\Big{|}_{\lambda=0}. \tag{2.25}\]
## 3. Adler-type operators associated with \(\mathfrak{gl}(m|n)\)
### Super Adler-type operators
In this subsection, we introduce a super-analog of the so-called Adler-type operators which were first defined in [10], motivated by [1]. Recall that we only consider associative, supercommutative and unital differential superalgebras with even derivations. A _pseudo-differential operator_ on a differential superalgebra \(\mathcal{V}\) is an element in \(\mathcal{V}(\!(\partial^{-1})\!)=\mathcal{V}\otimes\mathbb{C}(\!(\partial^{-1 })\!)\). The space \(\mathcal{V}(\!(\partial^{-1})\!)\) is a superalgebra for the product defined by \(\partial v=v^{\prime}+v\partial\) and
\[\partial^{-1}v=\sum_{m\in\mathbb{Z}_{+}}(-1)^{m}v^{(m)}\partial^{-m-1},\]
for \(v\in\mathcal{V}\) and \(v^{(m)}=\partial^{m}(v)\). The subspace \(\mathcal{V}[\partial]\) is a differential subalgebra of \(\mathcal{V}(\!(\partial^{-1})\!)\) whose elements are called _differential operators_ on \(\mathcal{V}\).
For a nonzero pseudo-differential operator \(A(\partial)=\sum_{k\in\mathbb{Z}}a_{k}\partial^{k}\) on \(\mathcal{V}\), the _order_ of \(A(\partial)\) is defined by \(\operatorname{ord}(A(\partial))=N\in\mathbb{Z}\), where \(N\) is the maximal integer such that \(a_{N}\neq 0\). An operator \(A(\partial)\) can be uniquely decomposed as
\[A(\partial)=A(\partial)_{+}+A(\partial)_{-}.\]
Here \(A(\partial)_{+}=\sum_{k\in\mathbb{Z}_{+}}a_{k}\partial^{k}\) (resp. \(A(\partial)_{-}=A(\partial)-A(\partial)_{+}\)) is called the _differential part_ (resp. _integral part_) of \(A(\partial)\). In addition, the _residue_ of the pseudo-differential operator \(A(\partial)\) is defined by
\[\operatorname{Res}(A(\partial))=a_{-1}\in\mathcal{V}. \tag{3.1}\]
The _symbol_ of a pseudo-differential operator \(A(\partial)\) is the formal Laurent series \(A(z)=\sum_{k\in\mathbb{Z}}a_{k}z^{k}\), for an even indeterminate \(z\). The symbol of a product satisfies
\[(AB)(z)=A(z+\partial)(B(z)). \tag{3.2}\]
In the RHS, we expand \((z+\partial)^{-1}\) as \(\sum_{n\in\mathbb{Z}_{+}}(-1)^{n}z^{-n-1}\partial^{n}\) in such a way that only nonnegative powers of \(\partial\) appear and act naturally on \(B(z)\). Finally, the _adjoint_\(A^{*}(\partial)\) of \(A(\partial)\) is the pseudo-differential operator
\[A^{*}(\partial)=\sum_{k\in\mathbb{Z}}(-\partial)^{k}a_{k}. \tag{3.3}\]
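For instance, for \(A(\partial)=\partial+q\) one has \(A^{*}(\partial)=-\partial+q\), so that \(A^{*}(\lambda-z)=z-\lambda+q\); and if \(B(\partial)=v\) denotes multiplication by some \(v\in\mathcal{V}\), then \(A(\partial)B(\partial)=v\partial+v^{\prime}+qv\), in agreement with (3.2):
\[(AB)(z)=A(z+\partial)\big(B(z)\big)=(z+\partial+q)(v)=zv+v^{\prime}+qv.\]
These identities reappear in Example 3.2 below.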
Now we introduce a special family of pseudo-differential operators called _Adler-type_, which is one of the main ingredients of this paper. In the scalar case, our definition of Adler-type operator is identical to the nonsuper setting in [10] (this class of operators first appeared in [10]).
**Definition 3.1**.: _Let \(\mathcal{V}\) be a differential superalgebra endowed with a \(\lambda\)-bracket \(\{\,{}_{\lambda}\,\}\). An even pseudo-differential operator \(A(\partial)\) on \(\mathcal{V}\) is called super Adler-type (sATO) if the following identity:_
\[\{A(z)_{\lambda}A(w)\}=A(w+\lambda+\partial)\iota_{z}(z-w-\lambda-\partial)^ {-1}A^{*}(\lambda-z)-A(z)\iota_{z}(z-w-\lambda-\partial)^{-1}A(w), \tag{3.4}\]
_holds in \(\mathcal{V}[\lambda](\!(z^{-1},w^{-1})\!)\) where \(\iota_{z}(z-w-\lambda-\partial)^{-1}\) is obtained by taking the geometric expansion of \((z-w-\lambda-\partial)^{-1}\) for large \(|z|\). In the RHS of (3.4), we mean explicitly_
\[\Big{(}A(w+\lambda+\partial)\iota_{z}(z-w-\lambda-\partial)^{-1}\Big{)}(A^{*} (\lambda-z))-A(z)\Big{(}\iota_{z}(z-w-\lambda-\partial)^{-1}\Big{)}(A(w)). \tag{3.5}\]
_For the sake of simplicity, we will keep this slight abuse of notations in the rest of our paper._
We call such an operator an sATO to emphasize that some of its coefficients may have odd parity, as opposed to an Adler-type operator (ATO), whose coefficients are all even. Let us first recall two elementary examples of ATOs.
**Example 3.2**.: _Let \(\mathcal{V}=\mathbb{C}[q^{(n)}|n\in\mathbb{Z}_{+}]\) be the (even) differential algebra generated by \(q\) and consider the \(\lambda\)-bracket given by \(\{q_{\lambda}q\}=\lambda\). Note that it defines a PVA structure on \(\mathcal{V}\) by Theorem 2.2. One can check
_explicitly that \(A(\partial)=\partial+q\) is an ATO since_
\[A(w+\lambda+\partial)\iota_{z}(z-w-\lambda-\partial)^{-1}A^{*}( \lambda-z)\] \[= (w+\lambda+\partial-z+z+q)\iota_{z}(z-w-\lambda-\partial)^{-1}(z- \lambda-\partial-w+w+q)(1)\] \[= -(z-\lambda+q)+(z+q)+(z+q)\iota_{z}(z-w-\lambda-\partial)^{-1}(w+q)\] \[= \{A(z)_{\lambda}A(w)\}+A(z)\iota_{z}(z-w-\lambda-\partial)^{-1}A( w).\]
**Example 3.3**.: _Consider the differential operator \(A(\partial)=\partial^{2}+u\partial+v.\) One can check that it is an ATO for the \(\lambda\)-bracket defined on the generators by_
\[\{u\,_{\lambda}\,u\}=2\lambda,\ \{u\,_{\lambda}\,v\}=\lambda^{2}+u\lambda,\ \{v\,_{\lambda}\,v\}=-\lambda^{3}+(u^{2}-2v+2u^{\prime})\lambda+u^{\prime \prime}+uu^{\prime}-v^{\prime}.\]
_In this PVA, there is a one-parameter family of Virasoro, or conformal, elements_
\[\omega_{\alpha}=-v+\frac{1}{2}u^{2}+\alpha u^{\prime},\ \alpha\in\mathbb{C}.\]
_Indeed they satisfy the relations_
\[\{\omega_{\alpha}\,_{\lambda}\,\omega_{\alpha}\}=(\partial+2\lambda)\omega_{ \alpha}+(-2\alpha^{2}+2\alpha-1)\lambda^{3}. \tag{3.6}\]
In order to connect sATOs with the Gelfand-Dickey brackets, we need the following lemma.
**Lemma 3.4**.: _Let \(A\) and \(B\) be two differential operators of order at most \(N\) on a differential superalgebra \(\mathcal{V}\). Then for any \(\lambda,z,w\in\mathbb{C}\) the following three expressions are equal:_
1. \(A(z)\iota_{z}(z-w-\lambda-\partial)^{-1}(B(w))-A(\partial+\lambda+w)\iota_{z} (z-w-\lambda-\partial)^{-1}(B^{*}(\lambda-z))\)_,_
2. \(\big{(}A(z)(z-w-\lambda-\partial)^{-1}B(\partial+w)-A(\partial+\lambda+w)(z-w- \lambda-\partial)^{-1}B^{*}(\lambda-z)\big{)}(1)\)_,_
3. \(\sum_{i,j=0}^{N-1}z^{i}w^{j}\mathrm{Res}\big{(}(A(\partial+\lambda)( \partial+\lambda)^{-i-1})_{+}B(\partial)\partial^{-j-1}-A(\partial+\lambda)(( \partial+\lambda)^{-i-1}B(\partial))_{+}\partial^{-j-1}\big{)}\)_._
Proof.: We show first that \((i)=(iii)\). If \((i,j)\) is not in \(\{0,\cdots,N-1\}^{2}\subset\mathbb{Z}^{2}\), then we can easily see that
\[\mathrm{Res}\big{(}(A(\partial+\lambda)(\partial+\lambda)^{-i-1})_{+}B( \partial)\partial^{-j-1}-A(\partial+\lambda)((\partial+\lambda)^{-i-1}B( \partial))_{+}\partial^{-j-1}\big{)}=0.\]
Hence we can rewrite \((iii)\) as a double infinite sum
\[(iii) =\sum_{i,j\in\mathbb{Z}}z^{i}w^{j}\mathrm{Res}\big{(}(A(\partial+ \lambda)(\partial+\lambda)^{-i-1})_{+}B(\partial)\partial^{-j-1}-A(\partial+ \lambda)((\partial+\lambda)^{-i-1}B(\partial))_{+}\partial^{-j-1}\big{)} \tag{3.7}\] \[=\sum_{i\in\mathbb{Z}}\big{(}(A(\partial+\lambda)z^{i}(\partial+ \lambda)^{-i-1})_{+}B(\partial)-A(\partial+\lambda)(z^{i}(\partial+\lambda)^{- i-1}B(\partial))_{+}\big{)}(w)\] \[=\sum_{i\in\mathbb{Z}}\big{(}(A(z)z^{i}(\partial+\lambda)^{-i-1}) _{+}B(\partial)-A(\partial+\lambda)(z^{i}(\partial+\lambda)^{-i-1}B^{*}( \lambda-z))_{+}\big{)}(w).\]
The third line follows from the fact that the formal delta distribution \(\delta(z,w):=\sum_{i\in\mathbb{Z}}z^{i}w^{-i-1}\) satisfies
\[(z-\partial-\lambda)\delta(z,\partial+\lambda)=\delta(z,\partial+\lambda)(z- \partial-\lambda)=0.\]
Hence we have
\[(iii) =\sum_{i\in\mathbb{Z}}\big{(}A(z)(z^{i}(\partial+\lambda)^{-i-1})_{+ }B(\partial)-A(\partial+\lambda)(z^{i}(\partial+\lambda)^{-i-1})_{+}B^{*}( \lambda-z)\big{)}(w)\] \[=\sum_{i\in\mathbb{Z}_{+}}\big{(}A(z)(z^{-i-1}(\partial+\lambda)^ {i})B(\partial)-A(\partial+\lambda)(z^{-i-1}(\partial+\lambda)^{i})B^{*}( \lambda-z)\big{)}(w)\] \[=\sum_{i\in\mathbb{Z}_{+}}A(z)(z^{-i-1}(\partial+w+\lambda)^{i})( B(w))-A(\partial+w+\lambda)(z^{-i-1}(\partial+w+\lambda)^{i})(B^{*}(\lambda-z))=(i).\]
We now prove that \((ii)=(iii)\).
\[(iii) =\sum_{i\in\mathbb{Z}_{+},j\in\mathbb{Z}}\text{Res}\big{(}(A( \partial+\lambda)z^{i}(\partial+\lambda)^{-i-1})_{+}B(\partial)\partial^{-j- 1}-A(\partial+\lambda)(z^{i}(\partial+\lambda)^{-i-1}B(\partial))_{+}\partial ^{-j-1}\big{)}w^{j}\] \[=\sum_{i\in\mathbb{Z}_{+}}z^{i}\big{(}(A(\partial+\lambda)( \partial+\lambda)^{-i-1})_{+}B(\partial)-A(\partial+\lambda)(z^{i}(\partial+ \lambda)^{-i-1}B(\partial))_{+}\big{)}(w)\] \[=\big{(}(A(\partial+\lambda)(\partial+\lambda-z)^{-1})_{+}B( \partial)-A(\partial+\lambda)((\partial+\lambda-z)^{-1}B(\partial))_{+}\big{)} (w)\] \[=\big{(}A(z)(z-\partial-\lambda)^{-1}B(\partial)-A(\partial+ \lambda)(z-\partial-\lambda)^{-1}B^{*}(\lambda-z)\big{)}(w)\] \[=\big{(}A(z)(z-\partial-w-\lambda)^{-1}B(\partial+w)-A(\partial+ \lambda+w)(z-\partial-w-\lambda)^{-1}B^{*}(\lambda-z)\big{)}(1)=(ii).\]
To deduce the fourth line above, we have used the identities
\[(A(\partial+\lambda)(\partial+\lambda-z)^{-1})_{+}=A(\partial+ \lambda)(\partial+\lambda-z)^{-1}-A(z)(\partial+\lambda-z)^{-1}\;\;\text{and}\] \[((\partial+\lambda-z)^{-1}B(\partial))_{+}=(\partial+\lambda-z)^ {-1}B(\partial)-(\partial+\lambda-z)^{-1}B^{*}(\lambda-z).\]
### Matrix super Adler-type operators
Let \(I\) be a finite set equipped with a parity map \(p:I\to\{0,1\}\), \(i\mapsto\tilde{i}\). We say that an index \(i\in I\) is even (resp. odd) if \(\tilde{i}=0\) (resp. \(\tilde{i}=1\)). We define the superalgebra \(\mathfrak{gl}(I)\) as the \(\mathbb{C}\)-vector superspace generated by the elements \(e_{ij}\) for \(i,j\in I\) with the parity \(p(e_{ij})\equiv\tilde{i}+\tilde{j}\;(\text{mod }2)\) and relations \(e_{ij}e_{kl}=\delta_{jk}e_{il}\). This superalgebra is, in particular, a Lie superalgebra for the bracket
\[[e_{ij},e_{kl}]=\delta_{jk}e_{il}-(-1)^{(\tilde{i}+\tilde{j})(\tilde{k}+ \tilde{l})}\delta_{il}e_{kj}. \tag{3.8}\]
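For instance, if \(\tilde{i}=0\) and \(\tilde{j}=1\), then (3.8) gives \([e_{ij},e_{ji}]=e_{ii}+e_{jj}\): the bracket of two odd elements is their anticommutator, as expected in a Lie superalgebra.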
This Lie superalgebra is isomorphic to \(\mathfrak{gl}(m|n)\), where \(m\) is the number of even elements of \(I\) and \(n=|I|-m\) the number of odd ones, but it is useful for us to keep this flexibility in the notation, as will become clear later on. Given any differential superalgebra \(\mathcal{V}\), we can extend the superalgebra \(\mathfrak{gl}(I)\) to \(\mathfrak{gl}(I)\otimes\mathcal{V}\) by letting, for \(a,b\in\mathfrak{gl}(I)\) and \(v,w\in\mathcal{V}\),
\[(a\otimes v)\circ(b\otimes w)=(-1)^{\tilde{b}\tilde{v}}ab\otimes vw. \tag{3.9}\]
In addition, we define another product \(\star\) on the space \(\mathfrak{gl}(I)\otimes\mathcal{V}\) by
\[(a\otimes v)\star(b\otimes w)=(-1)^{\tilde{b}\tilde{v}+\tilde{a}\tilde{b}}ab \otimes vw \tag{3.10}\]
for \(a,b\in\mathfrak{gl}(I)\) and \(v,w\in\mathcal{V}\). Note that if \(a\otimes v\) is even, there is no sign in (3.10) and both products are associative. We stress here that one of the main differences between the constructions in this paper and the analogue results in the nonsuper setting is the need to use these two products.
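To illustrate the difference between the two products, let \(a,b\in\mathfrak{gl}(I)\) be odd and \(v,w\in\mathcal{V}\) be even. Then (3.9) gives \((a\otimes v)\circ(b\otimes w)=ab\otimes vw\), while (3.10) gives \((a\otimes v)\star(b\otimes w)=-ab\otimes vw\); the two products coincide precisely when \(\tilde{a}\tilde{b}=0\).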
Let \(A(\partial)=\sum_{k\in\mathbb{Z}}a_{k}\partial^{k}\in(\mathfrak{gl}(I)\otimes\mathcal{V})(\!(\partial^{-1})\!)\) be a matrix pseudo-differential operator over \(\mathcal{V}\). By definition, \(A(\partial)\) is a square matrix whose \(ij\)-th entry \(A_{ij}(\partial)\) is in \(\mathcal{V}(\!(\partial^{-1})\!)\). A matrix pseudo-differential operator over \(\mathcal{V}\) is called _monic_ of order \(N\) if \(\text{ord}(A_{ij}(\partial))\leq N\) for any \(i,j\in I\) and \(a_{N}=\mathbb{1}_{I}=\sum_{i\in I}e_{ii}\). We now define the super-analogue of matrix-valued Adler-type operators; matrix Adler-type operators in the nonsuper setting were introduced in [10].
**Definition 3.5**.: _Let \(\mathcal{V}\) be a differential superalgebra endowed with a \(\lambda\)-bracket \(\{\,{}_{\lambda}\,\}\). An even pseudo-differential operator \(A(\partial)\in\left(\mathfrak{gl}(I)\otimes\mathcal{V}\right)(\!(\partial^{-1})\!)\) is said to be a matrix super Adler-type operator (sATO) on \(\mathcal{V}\) if it satisfies the set of identities_
\[\begin{split}\{A_{ij}(z)_{\lambda}A_{hk}(w)\}=&(-1)^{\bar{i}\bar{j}+\bar{i}\bar{k}+\bar{j}\bar{h}}A_{hj}(w+\lambda+\partial)\iota_{z}(z-w-\lambda-\partial)^{-1}(A_{ik})^{*}(\lambda-z)\\ &-(-1)^{\bar{i}\bar{j}+\bar{i}\bar{h}+\bar{j}\bar{h}}A_{hj}(z)\iota_{z}(z-w-\lambda-\partial)^{-1}A_{ik}(w)\end{split} \tag{3.11}\]
_for all \(i,j,h,k\in I\). In particular, Definition 3.5 recovers Definition 2.1 of [10] for matrix Adler-type operators when \(I\) is a purely even index set._
**Example 3.6**.: _Let us consider the affine Poisson vertex superalgebra \(\mathcal{V}:=\mathcal{V}^{-1}(\mathfrak{gl}(I))\) of level \(-1\) and the matrix pseudo-differential operator \(A(\partial)\in(\mathfrak{gl}(I)\otimes\mathcal{V})(\!(\partial^{-1})\!)\) whose \(ij\)-th entry is \(A_{ij}(\partial)=\delta_{ij}\partial+(-1)^{\bar{i}}q_{ij}\). One can check that_
\[\begin{split}\{(-1)^{\bar{i}}q_{ij}\,{}_{\lambda}\,(-1)^{\bar{h}}q_{hk}\}_{\rm Aff}&=(-1)^{\bar{i}+\bar{h}}(\delta_{jh}q_{ik}-(-1)^{(\bar{i}+\bar{j})(\bar{h}+\bar{k})}\delta_{ik}q_{hj}-(-1)^{\bar{i}}\delta_{hj}\delta_{ik}\lambda)\\ &=(-1)^{\bar{i}\bar{j}+\bar{i}\bar{h}+\bar{j}\bar{h}}\left((-1)^{\bar{i}}\delta_{jh}q_{ik}-(-1)^{\bar{h}}\delta_{ik}q_{hj}-\delta_{hj}\delta_{ik}\lambda\right)\\ &=(-1)^{\bar{i}\bar{j}+\bar{i}\bar{h}+\bar{j}\bar{h}}\left((\delta_{hj}z+(-1)^{\bar{h}}q_{hj})\iota_{z}(z-w-\lambda-\partial)^{-1}(\delta_{ik}w+(-1)^{\bar{i}}q_{ik})\right.\\ &-\left.(\delta_{hj}(w+\lambda+\partial)+(-1)^{\bar{h}}q_{hj})\iota_{z}(z-w-\lambda-\partial)^{-1}(\delta_{ik}(z-\lambda)+(-1)^{\bar{i}}q_{ik})\right).\end{split} \tag{3.12}\]
_The first equality is the definition of the \(\lambda\)-bracket of the affine PVsA, while the second and third equalities can be checked by direct computation. Hence \(A(\partial)\) is a matrix sATO on \(\mathcal{V}^{-1}(\mathfrak{gl}(I))\)._
In Definition 3.5, we are given a differential superalgebra \(\mathcal{V}\) together with a \(\lambda\)-bracket. In the following theorem, we show that the equality (3.11) implies the skew-symmetry and Jacobi-identity for this \(\lambda\)-bracket restricted to the subalgebra of \(\mathcal{V}\) generated by the coefficients of the matrix sATO.
**Theorem 3.7**.: _Let \(A(\partial)=(A_{ij}(\partial))_{i,j\in I}\) be a matrix sATO on the differential superalgebra \(\mathcal{V}\) endowed with the \(\lambda\)-bracket \(\{\,{}_{\lambda}\,\}\). For \(a,b,c,d,e,f\in I\), we have_
\[\begin{split}&\text{(skew-symmetry)}\quad\{A_{ab}(z)_{\lambda}A_{cd} (w)\}=-(-1)^{(\bar{a}+\bar{b})(\bar{c}+\bar{d})}\{A_{cd}(w)_{-\lambda-\partial }A_{ab}(z)\},\\ &\text{(Jacobi-identity)}\end{split}\]
\[\begin{split}&\{A_{ab}(z_{1})_{\lambda}\{A_{cd}(z_{2})_{\mu}A_{ef}(z_{3})\}\}\\ &=\{\{A_{ab}(z_{1})_{\lambda}A_{cd}(z_{2})\}_{\lambda+\mu}A_{ef}(z_{3})\}+(-1)^{(\bar{a}+\bar{b})(\bar{c}+\bar{d})}\{A_{cd}(z_{2})_{\mu}\{A_{ab}(z_{1})_{\lambda}A_{ef}(z_{3})\}\}.\end{split}\]
_Hence if the entries of \(A(\partial)\) are differentially algebraically independent, the differential superalgebra \(\mathcal{U}\) generated by these entries is a PVsA for the \(\lambda\)-bracket \(\{\,{}_{\lambda}\,\}.\) We call such a PVsA an Adler-type PVsA._
Proof.: For any \(u,v\in\mathcal{V}\) and \(n\in\mathbb{Z}_{+}\), we use the notation
\[\left(u(\lambda+\partial)^{n}v\right)_{-\lambda-\partial}:=\left((-\lambda- \partial)^{n}u\right)v. \tag{3.13}\]
From equation (3.11), we have
\[\begin{split}&(-1)^{\bar{b}\bar{d}+\bar{a}\bar{b}+\bar{a}\bar{d}} \{A_{cd}(w)_{-\lambda-\partial}A_{ab}(z)\}\\ &=\left(A_{ad}(w)\sum_{n\in\mathbb{Z}_{+}}\frac{(z+\lambda+ \partial)^{n}}{w^{n+1}}A_{cb}(z)\right)_{-\lambda-\partial}-\left(A_{ad}(z+ \partial+\lambda)\sum_{n\in\mathbb{Z}_{+}}\frac{(z+\lambda+\partial)^{n}}{w^{n +1}}A_{cb}^{*}(-w+\lambda)\right)_{-\lambda-\partial}\\ &=\sum_{n\in\mathbb{Z}_{+}}\frac{(z-\lambda-\partial)^{n}}{w^{n+1}}( A_{ad}(w))A_{cb}(z)-\sum_{n\in\mathbb{Z}_{+}}\frac{(z-\lambda-\partial)^{n}}{w^{n +1}}(A_{ad}^{*}(\lambda-z))A_{cb}(\lambda+\partial+w).\end{split}\]
Hence
\[(-1)^{\tilde{a}\tilde{c}+\tilde{c}\tilde{d}+\tilde{a}\tilde{d}}\{A_{ cd}(w)_{-\lambda-\partial}A_{ab}(z)\}\] \[=A_{cb}(z)\sum_{n\in\mathbb{Z}_{+}}\frac{(z-\lambda-\partial)^{n}}{ w^{n+1}}(A_{ad}(w))-A_{cb}(\lambda+\partial+w)\sum_{n\in\mathbb{Z}_{+}}\frac{(z- \lambda-\partial)^{n}}{w^{n+1}}(A_{ad}^{*}(\lambda-z))\] \[=-A_{cb}(z)i_{w}(z-w-\lambda-\partial)^{-1}(A_{ad}(w))+A_{cb}( \lambda+\partial+w)i_{w}(z-w-\lambda-\partial)^{-1}(A_{ad}^{*}(\lambda-z)).\]
This concludes the proof of the skew-symmetry property, keeping in mind that in the defining equation of Adler-type operators the expansion \(\iota_{w}(z-w-\lambda-\partial)^{-1}\) can be replaced by \(\iota_{z}(z-w-\lambda-\partial)^{-1}\) due to the properties of the delta distribution and differential operators. Similarly, the proof of the Jacobi identity can be derived from Lemma 3.5 and Lemma 3.2 in [10]. Finally, by Theorem 2.2, we conclude that the differential subalgebra \(\mathcal{U}\) generated by the coefficients of \(A\) is a PVsA, provided that its coefficients are differentially algebraically independent.
### Matrix sATOs and Gelfand-Dickey brackets
Given an index set \(I\) with parity map and a positive integer \(N\), we consider the differential superalgebra
\[\mathcal{V}_{I}^{N}:=\mathbb{C}[u_{M,ab}^{(l)}|\,M=0,1,\cdots,N-1,\,a,b\in I, \text{ and }l\in\mathbb{Z}_{+}], \tag{3.14}\]
where \(\tilde{u}_{M,ab}^{(l)}=\tilde{a}+\tilde{b}\) for all \(a,b\in I\) and \(\partial(u_{M,ab}^{(l)})=u_{M,ab}^{(l+1)}\). We define the matrix differential operator
\[L(\partial)=\sum_{a\in I}e_{aa}\otimes\partial^{N}+\sum_{M=0}^{N-1}\sum_{a,b \in I}e_{ab}\otimes u_{M,ab}\partial^{M}\,\in\,\mathfrak{gl}(I)\otimes \mathcal{V}_{I}^{N}(\!(\partial^{-1})\!) \tag{3.15}\]
whose coefficients are the generators of \(\mathcal{V}_{I}^{N}\).
**Lemma 3.8**.: _There is a unique \(\lambda\)-bracket on the superalgebra \(\mathcal{V}_{I}^{N}\) which makes \(\mathcal{V}_{I}^{N}\) the Adler-type PVsA associated with \(L(\partial)\). We call this \(\lambda\)-bracket the generic bracket associated to \(I\) and \(N\). The superalgebra \(\mathcal{V}_{I}^{N}\) is indeed a PVsA for the \(\lambda\)-bracket (3.11) by Theorem 3.7._
Proof.: The coefficients of the operator \(L(\partial)\) are differentially algebraically independent, hence we only need to check that equation (3.11) is well defined, which is the case since the exponents of both \(z\) and \(w\) appearing in its RHS are bounded by \(N-1\) by Lemma 3.4.
The PVsA structure constructed above is none other than the lifting of the well-known quadratic Gelfand-Dickey Poisson bracket (1.4) on the space of functions on matrix pseudo-differential operators of degree \(N\). Let us first recall the definition of the variational derivative \(\frac{\delta f}{\delta L}\) for an element \(f\in\mathcal{V}_{I}^{N}\)
\[\frac{\delta f}{\delta L}=\sum_{a,b\in I}\sum_{k=0}^{N-1}(-1)^{\tilde{a}}e_{ba }\otimes\partial^{-k-1}\frac{\delta f}{\delta u_{k,ab}}. \tag{3.16}\]
This definition is justified by the property
\[\frac{d}{d\epsilon}\int f(L+\epsilon A)=\int A\star\frac{\delta f}{\delta L}= \sum_{a,b\in I}\sum_{k=0}^{N-1}\int A_{k,ab}\frac{\delta f}{\delta u_{k,ab}} \tag{3.17}\]
for any even matrix differential operator \(A\in\mathfrak{gl}(I)\otimes\mathcal{V}_{I}^{N}\) and \(f\in\mathcal{V}_{I}^{N}\). Note that \(\mathcal{V}_{I}^{N}\) is a superalgebra which is why the explicit form of \(\frac{\delta f}{\delta L}\) is different from (1.4). However, in both cases the universal property (3.17) is satisfied. The _supertrace_ is the linear map from \(\mathfrak{gl}(I)\otimes\mathcal{V}_{I}^{N}\) to \(\mathcal{V}_{I}^{N}\) defined by
\[\text{str }(e_{ij}\otimes f):=(-1)^{\tilde{i}}\delta_{ij}f\]
for all \(i,j\in I\) and \(f\in\mathcal{V}_{I}^{N}\).
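For instance, \(\text{str}(\mathbb{1}_{I}\otimes 1)=\sum_{i\in I}(-1)^{\tilde{i}}=|I_{\bar{0}}|-|I_{\bar{1}}|\).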
**Proposition 3.9**.: _The Gelfand-Dickey Lie bracket on \(\mathcal{V}_{I}^{N}/\partial\mathcal{V}_{I}^{N}\) coincides with the Lie bracket induced by the generic \(\lambda\)-bracket on \(\mathcal{V}_{I}^{N}\). More precisely, for all \(f,g\in\mathcal{V}_{I}^{N}\), we have_
\[\int\{f\,_{\lambda}\,g\}\Big{|}_{\lambda=0}=\ \mathrm{Res}\ \mathrm{str}\int(L \star\frac{\delta f}{\delta L})_{+}\star L\star\frac{\delta g}{\delta L}-L \star(\frac{\delta f}{\delta L}\star L)_{+}\star\frac{\delta g}{\delta L}. \tag{3.18}\]
Proof.: First, it follows immediately from the super master formula (2.2) that
\[\int\{f\,_{\lambda}\,g\}\Big{|}_{\lambda=0}=\sum_{a,b,c,d\in I}\sum_{k,l=0}^{N -1}(-1)^{(\tilde{f}+\tilde{a}+\tilde{b})(\tilde{c}+\tilde{d})}\int\Big{(}\{u _{k,ab\,\partial}\,u_{l,cd}\}\to\frac{\delta f}{\delta u_{k,ab}}\Big{)}\frac{ \delta g}{\delta u_{l,cd}}. \tag{3.19}\]
On the other hand, we have
\[\mathrm{Res}\ \mathrm{str}\int(L\star\frac{\delta f}{\delta L})_{+} \star L\star\frac{\delta g}{\delta L}-L\star(\frac{\delta f}{\delta L}\star L)_ {+}\star\frac{\delta g}{\delta L}\] \[=\mathrm{Res}\ \mathrm{str}\int\sum_{k,l=0}^{n-1}\sum_{a,b,c,d, \in I}\Big{(}e_{cb}\otimes L_{cb}\star(-1)^{\tilde{a}}e_{ba}\otimes\partial^{ -k-1}\frac{\delta f}{\delta u_{k,ab}}\Big{)}_{+}\star e_{ad}\otimes L_{ad} \star(-1)^{\tilde{c}}e_{dc}\otimes\partial^{-l-1}\frac{\delta g}{\delta u_{l,cd}}\] \[-\mathrm{Res}\ \mathrm{str}\ \int\sum_{k,l=0}^{n-1}\sum_{a,b,c,d, \in I}e_{cb}\otimes L_{cb}\star\Big{(}(-1)^{\tilde{a}}e_{ba}\otimes\partial^{ -k-1}\frac{\delta f}{\delta u_{k,ab}}\star e_{ad}\otimes L_{ad}\Big{)}_{+} \star(-1)^{\tilde{c}}e_{dc}\otimes\partial^{-l-1}\frac{\delta g}{\delta u_{l,cd}}\] \[=\mathrm{Res}\ \mathrm{str}\int\sum_{k,l=0}^{n-1}\sum_{a,b,c,d, \in I}\Big{(}(-1)^{\tilde{a}}e_{ca}\otimes L_{cb}\partial^{-k-1}\frac{\delta f }{\delta u_{k,ab}}\Big{)}_{+}\star(-1)^{\tilde{c}}e_{ac}\otimes L_{ad} \partial^{-l-1}\frac{\delta g}{\delta u_{l,cd}}\] \[-\mathrm{Res}\ \mathrm{str}\int\sum_{k,l=0}^{n-1}\sum_{a,b,c,d, \in I}e_{cb}\otimes L_{cb}\star\Big{(}(-1)^{\tilde{a}+\tilde{f}(\tilde{a}+ \tilde{d})}e_{bd}\otimes\partial^{-k-1}\frac{\delta f}{\delta u_{k,ab}}L_{ad} \Big{)}_{+}\star(-1)^{\tilde{c}}e_{dc}\otimes\partial^{-l-1}\frac{\delta g}{ \delta u_{l,cd}}\] \[-\int\sum_{k,l=0}^{n-1}\sum_{a,b,c,d,\in I}(-1)^{\tilde{a}+ \tilde{c}+\tilde{f}(\tilde{a}+\tilde{c})}\mathrm{Res}\,L_{cb}\Big{(}\partial^ {-k-1}\frac{\delta f}{\delta u_{k,ab}}L_{ad}\Big{)}_{+}\partial^{-l-1}\frac{ \delta g}{\delta u_{l,cd}}.\]
Hence we need to check the following differential operator identities for all \(a,b,c,d\in I\) and \(k,l\in\{0,\cdots,N-1\}\)
\[(-1)^{\tilde{a}\tilde{b}+\tilde{a}\tilde{c}+\tilde{b}\tilde{c}}\{u_{k,ab \partial}\,u_{l,cd}\}\to(F)=(-1)^{\tilde{F}(\tilde{a}+\tilde{d})}\mathrm{Res} \Big{(}\left(L_{cb}\partial^{-k-1}F\right)_{+}L_{ad}\partial^{-l-1}-L_{cb} \Big{(}\partial^{-k-1}FL_{ad}\Big{)}_{+}\partial^{-l-1}\Big{)},\]
for all \(F\in\mathcal{V}_{I}^{N}\). Taking the symbol of both sides yields
\[(-1)^{\tilde{a}\tilde{b}+\tilde{a}\tilde{c}+\tilde{b}\tilde{c}} \{u_{k,ab\lambda}\,u_{l,cd}\}\] \[=\mathrm{Res}\Big{(}\left(L_{cb}(\partial+\lambda)(\partial+\lambda) ^{-k-1}\right)_{+}L_{ad}(\partial)\partial^{-l-1}-L_{cb}(\partial+\lambda) \Big{(}(\partial+\lambda)^{-k-1}L_{ad}(\partial)\Big{)}_{+}\partial^{-l-1} \Big{)}.\]
We are done since this equality holds by Lemma 3.4 and the definition (3.11) of the \(\lambda\)-bracket.
### Second compatible bracket
For all \(\epsilon\in\mathbb{C}\), there exists a unique \(\lambda\)-bracket \(\{\,_{\lambda}\,\}_{\epsilon}\) on the differential superalgebra \(\mathcal{V}_{I}^{N}\) such that \(L^{\epsilon}(\partial):=L(\partial)+\sum_{a\in I}e_{aa}\otimes\epsilon\) is a sATO. Note that for any element \(v\) in any PVsA, we have \(\{v_{\lambda}1\}=0\) by the Leibniz rule. Hence for all \(a,b,c,d\in I\),
\[\{L^{\epsilon}_{ab}(z)_{\lambda}L^{\epsilon}_{cd}(w)\}_{\epsilon}=\{L_{ab}(z)_{ \lambda}L_{cd}(w)\}_{\epsilon}. \tag{3.20}\]
It is clear that the contribution of \(\epsilon^{2}\) is trivial in the RHS of equation (3.11). Therefore, we can decompose the \(\lambda\)-bracket \(\{\,_{\lambda}\,\}_{\epsilon}\) as
\[\{\cdot\,_{\lambda}\,\cdot\}_{\epsilon}=\{\cdot\,_{\lambda}\,\cdot\}_{H}+ \epsilon\,\{\cdot\,_{\lambda}\,\cdot\}_{K}, \tag{3.21}\]
where \(\{\,_{\lambda}\,\}_{H}\) and \(\{\,_{\lambda}\,\}_{K}\) define two compatible PVA structures on \(\mathcal{V}_{I}^{N}\). The bracket \(\{\,_{\lambda}\,\}_{H}\) is the generic bracket on \(\mathcal{V}_{I}^{N}\) constructed in Section 3.3. As for the bracket \(\{\,_{\lambda}\,\}_{K}\), its values on the generators of the
differential superalgebra \(\mathcal{V}^{N}_{I}\) are given by
\[\begin{split}(-1)^{\tilde{a}\tilde{b}+\tilde{a}\tilde{c}+\tilde{b} \tilde{c}}\{L_{ab}(z)\,_{\lambda}\,L_{cd}(w)\}_{K}&=\delta_{ad}(L _{cb}(z)-L_{cb}(w+\lambda+\partial))\iota_{z}(z-w-\lambda-\partial)^{-1}\\ &+\delta_{cb}\iota_{z}(z-w-\lambda-\partial)^{-1}(L_{ad}(w)-(L_{ ad})^{*}(-z+\lambda)).\end{split} \tag{3.22}\]
Note that, in the case of the affine PVsA corresponding to \(N=1\), this \(K\)-bracket is trivial. It is a particular case of the following Lemma.
**Lemma 3.10**.: _For all \(a,b,c,d\in I\), we have_
1. \(\begin{split}(a)&\ \left\{u_{N-1,ab\ \lambda}\ L_{cd}(w)\right\}_{H}\ =(-1)^{\tilde{a}\tilde{b}+\tilde{a}\tilde{c}+\tilde{b}\tilde{c}}\,[\,\delta_{cb}L_{ad}(w)-\delta_{ad}L_{cb}(w+\lambda)\,],\\ (b)&\ \left\{L_{ab}(z)\ _{\lambda}\ u_{N-1,cd}\right\}_{H}\ =(-1)^{\tilde{b}\tilde{c}+\tilde{b}\tilde{d}+\tilde{c}\tilde{d}}\,[\,\delta_{cb}(L_{ad})^{*}(-z+\lambda)-\delta_{ad}L_{cb}(z)\,],\\ (c)&\ \left\{u_{N-1,ab\ \lambda}\ u_{N-1,cd}\right\}_{H}\ =(-1)^{\tilde{a}\tilde{b}+\tilde{a}\tilde{c}+\tilde{b}\tilde{c}}\,[\,\delta_{cb}u_{N-1,ad}-\delta_{ad}u_{N-1,cb}-\delta_{ad}\delta_{cb}N\lambda\,],\\ (d)&\ \left\{u_{N-1,ab\ \lambda}\ L_{cd}(w)\right\}_{K}\ =\left\{L_{ab}(z)\ _{\lambda}\ u_{N-1,cd}\right\}_{K}=0.\end{split}\)
Proof.: We first check (a).
\[\begin{split}(-1)^{\tilde{a}\tilde{b}+\tilde{a}\tilde{c}+\tilde{b} \tilde{c}}\left\{u_{N-1,ab\ \lambda}\ L_{cd}(w)\right\}_{H}=&(-1)^{\tilde{a}\tilde{b}+ \tilde{a}\tilde{c}+\tilde{b}\tilde{c}}\,\operatorname{Res}_{z}\left[\left\{L_ {ab}(z)\ \ _{\lambda}\ L_{cd}(w)\right\}_{H}z^{-N}\right]\\ =&\operatorname{Res}_{z}\bigl{[}L_{cb}(z)\iota_{z} (z-w-\lambda-\partial)^{-1}\!L_{ad}(w)z^{-N}\bigr{]}\\ -&\operatorname{Res}_{z}\bigl{[}L_{cb}(w+\lambda+ \partial)\iota_{z}(z-w-\lambda-\partial)^{-1}\!(L_{ad})^{*}(-z+\lambda)z^{-N} \bigr{]}\\ =&\delta_{cb}L_{ad}(w)-\delta_{ad}L_{cb}(w+\lambda+ \partial)(1)\\ =&\delta_{cb}L_{ad}(w)-\delta_{ad}L_{cb}(w+\lambda). \end{split}\]
Next, (b) follows from (a) by skew-symmetry:
\[\begin{split}\left\{L_{ab}(z)\ \ _{\lambda}\ u_{N-1,cd}\right\}_{H}& =-(-1)^{(\tilde{a}+\tilde{b})(\tilde{c}+\tilde{d})}\{u_{N-1,cd \ \ -\lambda-\partial}\ L_{ab}(z)\}_{H}\\ &=-(-1)^{(\tilde{a}+\tilde{b})(\tilde{c}+\tilde{d})}(-1)^{ \tilde{a}\tilde{c}+\tilde{a}\tilde{d}+\tilde{c}\tilde{d}}\,[\,\delta_{ad}L_{cb }(z)-\delta_{cb}L_{ad}(z+\lambda)\,]_{-\lambda-\partial}\\ &=(-1)^{\tilde{b}\tilde{c}+\tilde{b}\tilde{d}+\tilde{c}\tilde{d}} \,[\,\delta_{cb}(L_{ad})^{*}(-z+\lambda)-\delta_{ad}L_{cb}(z)\,]\,.\end{split}\]
The third statement (c) follows from taking the coefficient of \(w^{N-1}\) in both sides of (a). Finally, we prove (d) as follows
\[\begin{split}(-1)^{\tilde{a}\tilde{b}+\tilde{a}\tilde{c}+\tilde{b} \tilde{c}}\left\{u_{N-1,ab\ \lambda}\ L_{cd}(w)\right\}_{K}=&(-1)^{\tilde{a}\tilde{b}+\tilde{a} \tilde{c}+\tilde{b}\tilde{c}}\,\operatorname{Res}_{z}\left[\left\{L_{ab}(z) \ _{\lambda}\ L_{cd}(w)\right\}_{K}z^{-N}\right]\\ =&\operatorname{Res}_{z}\delta_{ad}(L_{cb}(z)-L_{cb}(w+ \lambda+\partial))\iota_{z}(z-w-\lambda-\partial)^{-1}z^{-N}\\ +&\operatorname{Res}_{z}\delta_{cb}\iota_{z}(z-w- \lambda-\partial)^{-1}(L_{ad}(w)-(L_{ad})^{*}(-z+\lambda))z^{-N}\\ =&\delta_{ad}\delta_{cb}-\delta_{cb}\delta_{ad}=0. \end{split}\]
Using the super master formula (2.2) and equation (3.22), one can show, following the lines of Section 3.3, that the linear Gelfand-Dickey Lie bracket on \(\mathcal{V}^{N}_{I}/\partial\mathcal{V}^{N}_{I}\) coincides with the Lie bracket induced by \(\{\cdot\,_{\lambda}\,\cdot\}_{K}\). Summarizing both results, we obtain the analogue of formula (1.4): for all \(f,g\in\mathcal{V}^{N}_{I}\) we have
\[\begin{split}\int\{f\,_{\lambda}\,g\}_{H}\Big{|}_{\lambda=0}&=\operatorname{Res}\,\operatorname{str}\int(L\star\frac{\delta f}{\delta L})_{+}\star L\star\frac{\delta g}{\delta L}-L\star(\frac{\delta f}{\delta L}\star L)_{+}\star\frac{\delta g}{\delta L},\\ \int\{f\,_{\lambda}\,g\}_{K}\Big{|}_{\lambda=0}&=\operatorname{Res}\,\operatorname{str}\int\Big{(}L\star\frac{\delta f}{\delta L}-\frac{\delta f}{\delta L}\star L\Big{)}_{+}\star\frac{\delta g}{\delta L}.\end{split} \tag{3.23}\]
## 4. \(\mathcal{W}\)-superalgebras associated with Adler-type operators
### Quasi-determinants of matrix sATOs
In this subsection, we construct more examples of sATOs by taking quasi-determinants of matrix sATOs. For a detailed introduction to quasi-determinants, we refer to [10], [11] and [12]. We recall that all matrix differential operators are assumed to be even.
Let \(I=I_{\bar{0}}\sqcup I_{\bar{1}}\) be a finite set with the parity map \(p:I\to\{0,1\}\), where \(I_{\bar{0}}:=p^{-1}(0)\) is the set of even indices and \(I_{\bar{1}}:=p^{-1}(1)\) is the set of odd indices. We assume in the sequel that these finite index sets are ordered. For a differential superalgebra \(\mathcal{V}\), we recall the \(\star\)-product (3.10) on the space of matrix pseudo-differential operators \(\mathfrak{gl}(I)\otimes\mathcal{V}(\!(\partial^{-1})\!)\). If the pseudo-differential operators \(A(\partial)=\sum_{i,j\in I}e_{ij}\otimes a_{ij}\) and \(B(\partial)=\sum_{i,j\in I}e_{ij}\otimes b_{ij}\) are even, i.e. belong to \((\mathfrak{gl}(I)\otimes\mathcal{V})_{\bar{0}}\), then
\[A(\partial)\star B(\partial):=\sum_{i,j\in I}\sum_{s\in I}e_{ij}\otimes a_{is }b_{sj}. \tag{4.1}\]
More generally, for finite index sets \(I,J,K\) with parity, we define the \(\star\)-product between even matrix pseudo-differential operators of different sizes as follows
\[(\operatorname{Mat}_{I\times J}\otimes\mathcal{V}(\!(\partial^{ -1})\!))_{\bar{0}}\times(\operatorname{Mat}_{J\times K}\otimes\mathcal{V}(\!( \partial^{-1})\!))_{\bar{0}} \to(\operatorname{Mat}_{I\times K}\otimes\mathcal{V}(\!( \partial^{-1})\!))_{\bar{0}} \tag{4.2}\] \[(A(\partial),B(\partial)) \mapsto(A\star B)(\partial)\]
in the same way as in (3.10), where \(\operatorname{Mat}_{I\times J}\) is the superspace spanned by the elements \(e_{ij}\) for \(i\in I\) and \(j\in J\) with the parity \(\bar{i}+\bar{j}\) (mod 2).
We say that a matrix pseudo-differential operator \(A(\partial)\in\mathfrak{gl}(I)\otimes\mathcal{V}(\!(\partial^{-1})\!)\) is _\(\star\)-invertible_ if there exists \(A^{\operatorname{inv}}(\partial)\in\mathfrak{gl}(I)\otimes\mathcal{V}(\!(\partial^{-1})\!)\) such that \(A(\partial)\star A^{\operatorname{inv}}(\partial)=A^{\operatorname{inv}}(\partial)\star A(\partial)=\mathbb{1}_{I}.\) Suppose that \(J\) and \(K\) are subsets of \(I\) such that
\[|J\cap I_{\bar{0}}|=|K\cap I_{\bar{0}}|\quad\text{ and }\quad|J\cap I_{\bar{1}} |=|K\cap I_{\bar{1}}|. \tag{4.3}\]
Denote by \(A(\partial)_{JK}\in\operatorname{Mat}_{J\times K}\otimes\mathcal{V}(\!( \partial^{-1})\!)\) the submatrix consisting of \(jk\)-entries of \(A(\partial)\) for \(j\in J\) and \(k\in K\). If there exists a matrix valued operator \(\big{(}A(\partial)_{JK}\big{)}^{\operatorname{inv}}\in\operatorname{Mat}_{K \times J}\otimes\mathcal{V}(\!(\partial^{-1})\!)\) satisfying
\[A(\partial)_{JK}\star\big{(}A(\partial)_{JK}\big{)}^{\operatorname{inv}}= \mathbb{1}_{J}\quad\text{ and }\quad\big{(}A(\partial)_{JK}\big{)}^{ \operatorname{inv}}\star A(\partial)_{JK}=\mathbb{1}_{K}, \tag{4.4}\]
then we call \(\big{(}A(\partial)_{JK}\big{)}^{\operatorname{inv}}\) the \(\star\)_-inverse_ of \(A(\partial)_{JK}\) and say that \(A(\partial)_{JK}\) is \(\star\)_-invertible_.
**Definition 4.1**.: _Let \(A(\partial)\in\mathfrak{gl}(I)\otimes\mathcal{V}(\!(\partial^{-1})\!)\) be a \(\star\)-invertible matrix pseudo-differential operator and let \(J,K\subset I\) be subsets satisfying (4.3). Suppose that \(A^{\operatorname{inv}}(\partial)_{KJ}\) is also \(\star\)-invertible. Then the \((J,K)\) quasi-determinant of \(A(\partial)\) is_
\[|A(\partial)|_{JK}:=\big{(}(A^{\operatorname{inv}}(\partial))_{KJ}\big{)}^{ \operatorname{inv}}. \tag{4.5}\]
For \(I\) and subsets \(J,K\subset I\) as in Definition 4.1, let \(J^{c}=I\setminus J\) and \(K^{c}=I\setminus K\). If the \((J,K)\) quasi-determinant of the matrix pseudo-differential operator \(A(\partial)\) is well-defined and the submatrix \(A_{J^{c}K^{c}}(\partial)\) is \(\star\)-invertible, then we have
\[|A(\partial)|_{JK}=A_{JK}(\partial)-A_{JK^{c}}(\partial)\star(A_{J^{c}K^{c}})^{ \operatorname{inv}}(\partial)\star A_{J^{c}K}(\partial). \tag{4.6}\]
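For instance, if \(I=\{1,2\}\), \(J=K=\{1\}\) and \(A_{22}(\partial)\) is \(\star\)-invertible, then (4.6) reads \(|A(\partial)|_{JK}=A_{11}(\partial)-A_{12}(\partial)\star\big{(}A_{22}(\partial)\big{)}^{\operatorname{inv}}\star A_{21}(\partial)\), which is the familiar \((1,1)\) quasi-determinant of a \(2\times 2\) matrix.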
See [11] for the proof of (4.6). We also have the following lemma which directly follows by (3.11).
**Lemma 4.2**.: _Let \(A(\partial)\in\mathfrak{gl}(I)\otimes\mathcal{V}(\!(\partial^{-1})\!)\) be a sATO. For two subsets \(J\) and \(K\) of \(I\) satisfying (4.3), we can identify \(\operatorname{Mat}_{J\times K}\) with \(\mathfrak{gl}(J)\). Moreover, under this identification \(\big{(}A(\partial)\big{)}_{JK}\in\operatorname{Mat}_{J\times K}\otimes \mathcal{V}(\!(\partial^{-1})\!)\cong\mathfrak{gl}(J)\otimes\mathcal{V}(\!( \partial^{-1})\!)\) is also a sATO._
By Definition 4.1 and Lemma 4.2, in order to show that the \((J,K)\) quasi-determinant of a sATO is still a sATO for \(J\) and \(K\) satisfying (4.3), it is enough to show that the \(\star\)-inverse of a sATO is again a sATO.
**Lemma 4.3**.: _Let \(A(\partial)\in\mathfrak{gl}(I)\otimes\mathcal{V}(\!(\partial^{-1})\!)\) be a monic matrix pseudo-differential operator of order \(N\). There exists a unique \(\star\)-inverse of order \(-N\)._
Proof.: Let \(A(\partial)=\mathbb{1}_{I}\partial^{N}+\sum_{M<N}U_{M}\partial^{M}\) and write \(X:=\sum_{M<N}U_{M}\partial^{M-N}\). Then we have \(A(\partial)^{\mathrm{inv}}=\partial^{-N}\big{(}\mathbb{1}_{I}-X+X^{\star 2}-X^{ \star 3}\cdots\big{)}\), where \(X^{\star n}\) is the \(n\)-th power of \(X\) with respect to the \(\star\)-product.
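For instance, in the scalar even case \(I=\{1\}\) with \(A(\partial)=\partial+q\) of order \(N=1\), we have \(X=q\partial^{-1}\) and
\[A(\partial)^{\mathrm{inv}}=\partial^{-1}\big{(}1-q\partial^{-1}+q\partial^{-1}q\partial^{-1}-\cdots\big{)}=\partial^{-1}-\partial^{-1}q\partial^{-1}+\partial^{-1}q\partial^{-1}q\partial^{-1}-\cdots,\]
where the \(\star\)-product is the usual composition of scalar pseudo-differential operators; one checks directly that \((\partial+q)\star A(\partial)^{\mathrm{inv}}=1\).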
**Lemma 4.4**.: _Suppose that a differential superalgebra \(\mathcal{V}\) is endowed with a \(\lambda\)-bracket \(\{\,{}_{\lambda}\,\}\) and let \(C(\partial)=(C_{ij}(\partial))_{i,j\in I}\in\mathfrak{gl}(I)\otimes\mathcal{V }(\!(\partial^{-1})\!)\) be a \(\star\)-invertible matrix pseudo-differential operator. For \(a\in\mathcal{V}\), we have_
\[\begin{split}&\{a\,{}_{\lambda}\,(C^{\mathrm{inv}})_{ij}(z)\}\\ &\quad=-\sum_{r,t\in I}\sum_{n\in\mathbb{Z}}(-1)^{\tilde{a}( \tilde{t}+\tilde{r})}(C^{\mathrm{inv}})_{ir}(\lambda+z+\partial)\,\{a\,{}_{ \lambda}\,C_{rt;n}\}\,(z+\partial)^{n}(C^{\mathrm{inv}})_{tj}(z)\end{split} \tag{4.7}\]
_and_
\[\begin{split}&\{(C^{\mathrm{inv}})_{ij}(\lambda)\,{}_{\lambda+z }\,a\}\\ &\quad=-\sum_{r,t\in I}\sum_{n\in\mathbb{Z}}(-1)^{\tilde{a}( \tilde{j}+\tilde{t})+(\tilde{i}+\tilde{r})(\tilde{a}+\tilde{r}+\tilde{j})}\, \{C_{rt;n\,\lambda+z+\partial}\,a\}\,(\lambda+\partial)^{n}(C^{\mathrm{inv}})_ {tj}(\lambda)(C^{\mathrm{inv}})_{ir}^{\ast}(z),\end{split} \tag{4.8}\]
_where \(C_{ij}(\partial)=\sum_{n\in\mathbb{Z}}C_{ij;n}\partial^{n}\) and negative powers of \((x+\partial)\) are expanded using the geometric series with nonnegative powers of \(\partial\)._
Proof.: This can be shown by direct calculations similar to the proof of Lemma 2.8 in [13].
**Proposition 4.5**.: _Let \(\mathcal{V}\) be a differential superalgebra with a \(\lambda\)-bracket \(\{\,{}_{\lambda}\,\}\) and \(A(\partial)\in\mathfrak{gl}(I)\otimes\mathcal{V}(\!(\partial^{-1})\!)\) be a sATO on \(\mathcal{V}\). If \(A(\partial)\) is \(\star\)-invertible, then the \(\star\)-inverse \(A(\partial)^{\mathrm{inv}}\) is also a sATO on \(\mathcal{V}\) with respect to the opposite bracket \(-\{\,{}_{\lambda}\,\}\). Hence a quasi-determinant of a sATO is a sATO._
Proof.: The proposition follows from the proof of Proposition 3.7 in [13], with additional sign considerations.
Using both statements in Lemma 4.4, we see that the \(\lambda\)-bracket between inverse operators is as follows
\[\begin{split}\big{\{}(A^{\mathrm{inv}})_{ij}(z)\,{}_{\lambda}\,( A^{\mathrm{inv}})_{hk}(w)\big{\}}&=\sum_{p,q,s,t\in I}(-1)^{( \tilde{t}+\tilde{r})(\tilde{t}+\tilde{s})}(-1)^{(\tilde{s}+\tilde{t})(\tilde{ q}+\tilde{r})+(\tilde{t}+\tilde{p})(\tilde{s}+\tilde{t}+\tilde{p}+\tilde{r})}\\ &\quad(A^{\mathrm{inv}})_{hs}(w+\lambda+\partial)\,\{A_{pq}(z+x_{ 1})\,{}_{\lambda+x_{1}+x_{3}}\,A_{st}(w+x_{2})\}\\ &\quad\big{(}\big{|}_{x_{1}=\partial}(A^{\mathrm{inv}})_{qj}(z) \big{)}\big{(}\big{|}_{x_{3}=\partial}\,\big{(}(A^{\mathrm{inv}})_{ip}\big{)} ^{\ast}\,(\lambda-z)\big{)}\big{(}\big{|}_{x_{2}=\partial}(A^{\mathrm{inv}})_{ tk}(w)\big{)}.\end{split} \tag{4.9}\]
In (4.9), the notation \(x_{1}^{k}x_{2}^{l}\big{(}\big{|}_{x_{1}=\partial}a\big{)}\big{(}\big{|}_{x_{2}= \partial}b\big{)}\) represents \(\partial^{k}(a)\partial^{l}(b)\). By the definition of sATO, we have
\[\eqref{eq:Aq}= \sum_{p,q,s,t\in I}(-1)^{(\tilde{t}+\tilde{r})(\tilde{t}+\tilde{s })+(\tilde{s}+\tilde{t})(\tilde{q}+\tilde{j})+(\tilde{t}+\tilde{p})(\tilde{s}+ \tilde{t}+\tilde{p}+\tilde{r})+\tilde{t}+(\tilde{t}+\tilde{k})(\tilde{p}+ \tilde{t})+\tilde{p}\tilde{q}+\tilde{p}\tilde{s}+\tilde{q}\tilde{s}}(A^{ \mathrm{inv}})_{hs}(w+\lambda+\partial)\] \[\quad\times\Big{(}A_{sq}(w+x_{1}+x_{2}+x_{3}+\lambda+\partial)_{tz }(z-w-x_{2}-x_{3}-\lambda-\partial)^{-1}(A_{pt})^{\ast}(\lambda+x_{3}-z)\] \[\quad\quad\times\Big{(}\Big{|}_{x_{1}=\partial}(A^{\mathrm{inv}})_ {qj}(z)\Big{)}\Big{(}\Big{|}_{x_{2}=\partial}(A^{\mathrm{inv}})_{tk}(w)\Big{)} \Big{(}\Big{|}_{x_{3}=\partial}\,\big{(}(A^{\mathrm{inv}})_{pi}\big{)}\,( \lambda-z)\Big{)}\] \[= \sum_{p,q,s,t\in I}(-1)^{\tilde{t}+\tilde{j}\tilde{t}+\tilde{t} \tilde{q}+\tilde{t}\tilde{j}+\tilde{j}+\tilde{t}\tilde{j}+\tilde{k}\tilde{p}+ \tilde{k}\tilde{t}+\tilde{p}\tilde{t}+\tilde{t}+(\tilde{t}+\tilde{p})(\tilde{t }+\tilde{t}+\tilde{q}+\tilde{r})}\] \[\quad\times(A^{\mathrm{inv}})_{hs}(w+\lambda+\partial)A_{sq}(w+x_{ 1}+x_{2}+x_{3}+\lambda+\partial)_{tz}(z-w-x_{2}-x_{3}-\lambda-\partial)^{-1}\] \[\quad\times(A^{\ast}_{tp})(\lambda+x_{3}-z)\Big{(}\Big{|}_{x_{3}= \partial}(A^{\mathrm{inv}})_{pi}(\lambda-z)\Big{)}\Big{(}\Big{|}_{x_{1}= \partial}(A^{\mathrm{inv}})_{qj}(z)\Big{)}\Big{(}\Big{|}_{x_{2}=\partial}(A^{ \mathrm{inv}})_{tk}(w)\Big{)}\] \[-\sum_{p,q,s,t\in I}(-1)^{\tilde{t}+\tilde{j}\tilde{h}+\tilde{t} \tilde{q}+\tilde{t}\tilde{j}+\tilde{t}\tilde{j}+\tilde{j}\tilde{p}+\tilde{k} \tilde{p}+\tilde{k}\tilde{t}+\tilde{t}\tilde{p}+\tilde{t}+(\tilde{t}+\tilde{p} )}\]
\[\times(A^{\rm inv})_{hs}(w+\lambda+\partial)A_{sq}(z+x_{1})\Big{(}\Big{|}_{x_{1}= \partial}(A^{\rm inv})_{qj}(z)\Big{)}t_{z}(z-w-\lambda-x_{2}-x_{3}-\partial)^{-1}\] \[\times A_{pt}(w+x_{2})\Big{(}\Big{|}_{x_{2}=\partial}(A^{\rm inv})_ {tk}(w)\Big{)}\Big{(}\Big{|}_{x_{3}=\partial}\left((A^{\rm*inv})_{pi}\right)^{ *}(\lambda-z)\Big{)}\] \[=\sum_{q,t\in I}(-1)^{\tilde{i}\tilde{h}+\tilde{j}\tilde{h}+ \tilde{t}\tilde{q}+\tilde{j}\tilde{t}+\tilde{t}+\tilde{i}\tilde{t}}\delta_{ hq}(A^{\rm inv})_{qj}(z)t_{z}(z-w-\lambda-\partial)^{-1}\delta_{ti}(A^{\rm inv})_{tk}(w)\] \[-\sum_{p,s\in I}(-1)^{\tilde{i}\tilde{h}+\tilde{j}\tilde{h}+ \tilde{i}\tilde{j}+\tilde{k}\tilde{p}+\tilde{k}\tilde{t}}(A^{\rm inv})_{hs}(w+ \lambda+\partial)\delta_{sj}t_{z}(z-w-\lambda-\partial)^{-1}\delta_{pk}(A^{ \rm*inv})_{pi}(\lambda-z)\] \[=(-1)^{\tilde{i}\tilde{j}+\tilde{t}\tilde{h}+\tilde{j}\tilde{h}} \big{(}(A^{\rm inv})_{hj}(z)t_{z}(z-w-\lambda-\partial)^{-1}(A^{\rm inv})_{ ik}(w)\] \[\qquad\qquad\qquad-(A^{\rm inv})_{hj}(w+\lambda+\partial)\iota_{ z}(z-w-\lambda-\partial)^{-1}\left((A^{\rm inv})_{ik}\right)^{*}(\lambda-z) \big{)}.\]
This completes the proof.
### Rectangular \(\mathcal{W}\)-superalgebras and sATOs
In this subsection we fix \(m,n,N\in\mathbb{Z}_{+}\) and \(N\geq 2\). We explain how to construct the rectangular \(\mathcal{W}\)-superalgebra \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\) within the theory of sATOs where \(f\) is the \(N\times(m|n)\) rectangular nilpotent in \(\mathfrak{gl}(Nm|Nn)\) as defined in Example 2.7.
Consider the index set \(\Pi=\{1,2,\cdots,N(m+n)\}\) with the parity map \(p:\Pi\to\{0,1\}\) defined by
\[\tilde{i}:=p(i)=\left\{\begin{aligned} & 0&\text{if}\ \ 0<i\leq m\,(\text{mod}\,m+n),\\ & 1&\text{if}\ \ m<i\leq m+n\,(\text{mod}\,m+n). \end{aligned}\right. \tag{4.10}\]
Then \(\mathfrak{gl}(\Pi)\cong\mathfrak{gl}(Nm|Nn)\) and an element in \(\mathfrak{gl}(\Pi)\) is presented by a matrix (2.14) in Example 2.7. In other words, \(\mathfrak{gl}(\Pi)=\mathfrak{gl}_{N}\otimes\mathfrak{gl}(m|n)\). Let us consider the sATO \(A(\partial)\in\mathfrak{gl}(\Pi)\otimes\mathcal{V}\big{(}\mathfrak{gl}(\Pi) \big{)}\big{(}\!(\partial^{-1})\!\big{)}\) of degree one as in Example 3.6, i.e., the \(ij\)-th entry for \(i,j\in\Pi\) is
\[A_{ij}(\partial)=\delta_{ij}\partial+(-1)^{\tilde{i}}q_{ij}. \tag{4.11}\]
As in Example 2.7, we can decompose \(A(\partial)\) into \(N^{2}\) submatrix-valued operators \(\big{(}A_{[uv]}(\partial)\big{)}_{1\leq u,v\leq N}\), each of which is isomorphic to a \(\mathcal{V}(\mathfrak{gl}(\Pi))\)-valued \((m|n)\!\times\!(m|n)\) matrix pseudo-differential operator. This can be described by the following picture:
\[A(\partial)=\left[\begin{array}{cccc}A_{[11]}(\partial)&A_{[12]}(\partial)& \cdots&A_{[1N]}(\partial)\\ A_{[21]}(\partial)&A_{[22]}(\partial)&\cdots&A_{[2N]}(\partial)\\ \vdots&\vdots&\ddots&\vdots\\ A_{[N1]}(\partial)&A_{[N2]}(\partial)&\cdots&A_{[NN]}(\partial)\end{array} \right]. \tag{4.12}\]
Denote \(q_{\frac{[uv]}{(ij)}}:=q_{(u-1)(m+n)+i,(v-1)(m+n)+j}\) so that
\[A_{[uv]}(\partial):=\bigg{(}A_{\frac{[uv]}{(ij)}}(\partial)\bigg{)}_{i,j\in\{1, \cdots,m+n\}}\ \text{for}\ \ \ \ A_{\frac{[uv]}{(ij)}}(\partial):=\delta_{uv}\delta_{ij}\partial+(-1)^{\tilde{i}}q_ {\frac{[uv]}{(ij)}} \tag{4.13}\]
and
\[\mathcal{V}(\mathfrak{gl}(\Pi))\cong\mathbb{C}[q_{\frac{[uv]}{(ab)}}^{(k)}|1 \leq u,v\leq N,\,1\leq a,b\leq m+n\ \text{and}\ k\in\mathbb{Z}_{+}]. \tag{4.14}\]
Let us fix index subsets \(I,J\) as follows
\[I=\{(N-1)(m+n)+1,(N-1)(m+n)+2,\cdots,N(m+n)\}\subset\Pi\ \ \text{and}\ \ J=\{1,2,\cdots,m+n\}\subset\Pi. \tag{4.15}\]
Let \(\mathfrak{g}=\mathfrak{gl}(\Pi)\) and recall the definitions of \(f\) in (2.15), \(\mathfrak{p}\) in (2.6) and the differential superalgebra homomorphism \(\rho:\mathcal{V}(\mathfrak{g})\to\mathcal{V}(\mathfrak{p})\) defined by \(\rho(a)=\pi_{\mathfrak{p}}(a)+(f|a)\) for \(a\in\mathfrak{g}.\) We also denote by \(\rho\) its extension to \(\mathfrak{g}\otimes\mathcal{V}(\mathfrak{g})(\!(\partial^{-1})\!)\) such that
\[\rho(a\otimes q(\partial)):=a\otimes\rho(q(\partial)) \tag{4.16}\]
for \(a\in\mathfrak{g}\) and \(q(\partial)\in\mathcal{V}(\mathfrak{g})(\!(\partial^{-1})\!).\) We are now ready to define our candidate matrix differential operator
\[L(\partial)=(-1)^{N-1}|\rho(A(\partial))|_{IJ}=(-1)^{N-1}|\mathbb{1}_{\,(m|n)} \partial+f^{\perp}+\sum_{q_{ij}\in\mathfrak{g}_{\leq 0}}(-1)^{\tilde{i}}e_{ij} \otimes q_{ij}|_{IJ} \tag{4.17}\]
whose coefficients will be generators of the rectangular \(\mathcal{W}\)-superalgebra \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f),\) where
\[f^{\perp}=\sum_{u=1}^{N-1}\left(e_{\frac{|u,u+1|}{(11)}}+e_{\frac{|u,u+1|}{(22 )}}+\cdots+e_{\frac{|u,u+1|}{(m+n,m+n)}}\right)=\left[\begin{array}{ccccc}0& \mathbb{1}_{\,(m|n)}&0&\cdots&0\\ 0&0&\mathbb{1}_{\,(m|n)}&\cdots&0\\ 0&0&0&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&0\end{array}\right]. \tag{4.18}\]
Remark that the diagonal element
\[h=\sum_{u=1}^{N}(N-2u+1)\left(e_{\frac{|u|}{(11)}}+e_{\frac{|u|}{(22)}}+ \cdots+e_{\frac{|u|}{(m+n,m+n)}}\right) \tag{4.19}\]
gives a \(\tfrac{1}{2}\mathbb{Z}\)-grading of \(\mathfrak{gl}(\Pi)\) which is needed to define the \(\mathcal{W}\)-superalgebra \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f).\)
Since both \(\rho(A(\partial))\) and \(\rho(A(\partial))_{I^{c}J^{c}}\) are \(\star\)-invertible, we can express the quasi-determinant formula (4.6) using the \(\star\)-product and block matrix differential operators \(A_{[uv]}\):
\[|\rho(A(\partial))|_{IJ}=\Big{[}A_{[N1]}\Big{]}-\Big{[}A_{[N2]}\ \cdots\ A_{[NN]}\Big{]}\star\left[\begin{array}{cccc}\mathbb{1}_{\,(m|n)}&0&\cdots&0\\ A_{[22]}&\mathbb{1}_{\,(m|n)}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ A_{[N-1,2]}&A_{[N-1,3]}&\cdots&\mathbb{1}_{\,(m|n)}\end{array}\right]^{\rm inv}\star\left[\begin{array}{c}A_{[11]}\\ A_{[21]}\\ \vdots\\ A_{[N-1,1]}\end{array}\right].\]
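For instance, when \(N=2\) the middle matrix is simply \(\mathbb{1}_{\,(m|n)}\) and the formula reduces to \(|\rho(A(\partial))|_{IJ}=A_{[21]}-A_{[22]}\star A_{[11]}\), so that \(L(\partial)=A_{[22]}\star A_{[11]}-A_{[21]}\).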
The matrix differential operator \(L(\partial)\) is monic of degree \(N\). Each component of the symbol \(L(z)\) can be written in terms of the symbols of the scalar differential operators \(A_{\frac{[uv]}{(st)}}(z)\):
\[L_{ij}(z)=(-1)^{N-1}\sum_{k=0}^{N-1}\sum_{\begin{subarray}{c}(i_{1},\,\cdots,\,i_{k})\,{\rm s.t.}\\ N>i_{1}>\cdots>i_{k}>0\end{subarray}}\sum_{s_{1},\,\cdots,\,s_{k}=1}^{m+n}(-1)^{k}A_{\frac{[N,i_{1}+1]}{(is_{1})}}A_{\frac{[i_{1},i_{2}+1]}{(s_{1}s_{2})}}\cdots A_{\frac{[i_{k},1]}{(s_{k}j)}}(z), \tag{4.20}\]
for \(i,j\in J=\{1,2,\cdots,m+n\}.\)
**Lemma 4.6**.: _Let \(u\in\{1,2,\cdots,N-1\}\) and \(a,b\in J\). Then the following equalities hold:_
1. _For_ \(\alpha>u+1\)_,_ \[\rho\Big{(}\big{\{}q_{\frac{|u,u+1|}{(ab)}}\,A_{\frac{|u,u+1|}{(ab)}}A_{\frac{|u |}{(ab)}}(z)-A_{\frac{|u|}{(\beta a)}}(z)\big{\}}\Big{)}=0.\]
2. _For_ \(\beta\in J\setminus\{b\}\)_,_ \[\rho\Big{(}\big{\{}q_{\frac{|u,u+1|}{(ab)}}\,A_{\frac{|u+1|+1|}{(ab)}}A_{\frac{|u |}{(ba)}}(z)-A_{\frac{|u+1|}{(\beta a)}}(z)\big{\}}\Big{)}=0.\]
3. _For_ \(\beta\in J\setminus\{a\}\)_,_ \[\rho\Big{(}\big{\{}q_{\frac{|u,u+1|}{(ab)}}\,A_{\frac{|u+1|+1|}{(ba)}}A_{\frac{|u |}{(ab)}}(z)-A_{\frac{|u+1|}{(b)}}(z)\big{\}}\Big{)}=0.\]
4. _For_ \(a,b\in J\) _with_ \(a\neq b\)_,_ \[\rho\Big{(}\big{\{}q_{\frac{|u,u+1|}{(ab)}}\,\lambda\,A_{\frac{|u+1|+1|}{(ab)}}A_ {\frac{|u|}{(ba)}}(z)+A_{\frac{|u+1|+1|}{(ba)}}A_{\frac{|u|}{(ab)}}(z)-A_{ \frac{|u+1|}{(ba)}}(z)\big{\}}\Big{)}=0.\]
5. _For_ \(\alpha<u\)_,_
\[\rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,A_{\frac{[u+1,u+1]}{(ab)}} \,A_{\frac{[u,u+1]}{(ab)}}(z)-A_{\frac{[u+1,u+1]}{(ab)}}(z)\big{\}}\Big{)}=0.\]
Proof.: Here, we give the full proof of (a), (d) and (e). (b) and (c) can be proved similarly.
The following computation shows (a):
\[\rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,A_{\frac{[u, u+1]}{(ab)}}\,A_{\frac{[u,u]}{(ab)}}(z)\big{\}}\Big{)}\] \[= \rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,q_{\frac{[u,u+1]}{( ab)}}(\delta_{ab}z+(-1)^{p(b)}q_{\frac{[u,u]}{(ab)}}\big{\}}\big{)}\Big{)}=\rho \Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,q_{\frac{[u,u+1]}{(ab)}}\, \big{\}}\big{)}\Big{)}\] \[= -\rho\Big{(}(-1)^{(a+b)(\beta+a)}q_{\frac{[u,u+1]}{(ab)}}\Big{)}= \rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,q_{\frac{[u,u]}{(ab)}} \big{\}}\Big{)}\Big{)}=\rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda \,A_{\frac{[u,u]}{(ab)}}(z)\big{\}}\Big{)},\]
where the third equality holds by the fact \(\rho\big{(}q_{\frac{[u,u+1]}{(ab)}}\big{)}=(-1)^{\tilde{a}}\delta_{ab}\).
(d) can be obtained in the following way:
\[\rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,A_{\frac{[u,u]}{(ab)}}\,A_{\frac{[u,u]}{(ab)}}\,(z)+A_{\frac{[u+1,u+1]}{(ab)}}A_{\frac{[u, u]}{(aa)}}(z)\big{\}}\Big{)}\] \[= \rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,(-1)^{b}(q_ {\frac{[u+1,u+1]}{(ba)}}+q_{\frac{[u,u]}{(ba)}})z+(-1)^{b}q_{\frac{[u,u]}{(ba)} }^{\prime}+q_{\frac{[u+1,u+1]}{(bb)}}q_{\frac{[u,u]}{(ba)}}+(-1)^{a+b}q_{\frac{ [u+1,u+1]}{(ba)}}q_{\frac{[u,u]}{(aa)}}\big{\}}\Big{)}\] \[= \rho\Big{(}\big{(}-(-1)^{a}q_{\frac{[u,u+1]}{(bb)}}+(-1)^{b}q_{ \frac{[u,u+1]}{(aa)}}\big{)}z-(\lambda+\partial)(-1)^{a}q_{\frac{[u,u+1]}{(bb)}}\] \[+\big{(}q_{\frac{[u,u+1]}{(ab)}}q_{\frac{[u,u]}{(ba)}}-(-1)^{a+b}q _{\frac{[u+1,u+1]}{(bb)}}q_{\frac{[u,u+1]}{(bb)}}\big{)}+(-1)^{a+b}\big{(}q_{ \frac{[u,u+1]}{(aa)}}q_{\frac{[u,u]}{(aa)}}-(-1)^{a+b}q_{\frac{[u+1,u+1]}{(ba)} }q_{\frac{[u,u+1]}{(ab)}}\big{)}\Big{)}\] \[= (-1)^{b}\Big{(}q_{\frac{[u,u]}{(aa)}}-(-1)^{a+b}q_{\frac{[u+1,u+1]} {(bb)}}-(-1)^{a}\lambda\Big{)}\] \[= \rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,A_{\frac{[u +1,u]}{(ba)}}\,\big{\}}\Big{)}.\]
(e) is proved as follows:
\[\rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,A_{\frac{[u +1,u+1]}{(ba)}}\,A_{\frac{[u,u]}{(ab)}}(z)\big{\}}\Big{)}\Big{)}\] \[=\rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,\delta_{ab} q_{\frac{[u,u]}{(ab)}}^{\prime}+\delta_{ab}q_{\frac{[u,u]}{(ab)}}\, \partial+(-1)^{a+b}q_{\frac{[u+1,u+1]}{(ba)}}q_{\frac{[u,u]}{(ab)}}\big{\}} \big{)}\Big{)}\] \[=\rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda(-1)^{a+b}q_{ \frac{[u+1,u+1]}{(ba)}}\,\big{\{}q_{\frac{[u,u]}{(ab)}}\,\big{\}}\big{)}\Big{)}\] \[=\rho\Big{(}(-1)^{a+b}q_{\frac{[u,u+1]}{(aa)}}q_{\frac{[u,u]}{(ab)} }\Big{)}=\rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,A_{\frac{[u+1,u]} {(ab)}}\,\big{\}}\Big{)}.\]
**Proposition 4.7**.: _The coefficients of \(L(\partial)\) in (4.17) are elements in the \(\mathcal{W}\)-superalgebra \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\) associated with the \(N\times(m|n)\) rectangular nilpotent element \(f\)._
Proof.: Recall that \(\mathfrak{gl}(\Pi)\cong\mathfrak{gl}(Nm|Nn)\) and hence we can consider \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\) as a subalgebra of \(\mathcal{V}(\mathfrak{gl}(\Pi))\).
First we check that the coefficients of \(L(z)\) lie in \(\mathcal{V}(\mathfrak{p})\). This is clear since \(q_{\frac{[uv]}{(ij)}}\in\mathfrak{gl}(\Pi)_{v-u}\). Next we should check that \(\rho\{\nu_{\lambda}L_{ij}(z)\}=0\) for all \(\nu\in\mathfrak{n}=\mathfrak{gl}(\Pi)_{\geq 1}\) and \(i,j\in J\). It is enough to show that
\[\rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\lambda\,L_{ij}(z)\big{\}}\Big{)}=0, \tag{4.21}\]
for \(\nu=q_{\frac{[u,u+1]}{(ab)}}\in\mathfrak{gl}(\Pi)_{1}\) and \(a,b\in J\) and \(u\in\{1,2,\cdots,N-1\}\) by the Jacobi identity and since \([\mathfrak{g}_{i},\ \mathfrak{g}_{j}]\subseteq\mathfrak{g}_{i+j}\). To show (4.21), first we fix some \(a,b\) and \(u\).
We can split the terms in (4.20) into five types which depend on \(a,b\) and \(u\):
(Type I) The terms in (4.20) containing \(A_{\frac{[u,u+1]}{(ab)}}A_{\frac{[u,u]}{(ba)}}\) for some \(\alpha\geq u+1\) and \(\beta\in J\),
(Type II) The terms in (4.20) containing \(A_{\frac{[u+1,u+1]}{(ab)}}A_{\frac{[ua]}{(ab)}}\) for some \(\alpha\leq u\) and \(\beta\in J\),
(Type III) The terms in (4.20) containing \(A_{\frac{[u]}{(ab)}}\) for some \(\alpha\geq u+1\) and \(\beta\in J\),
(Type IV) The terms in (4.20) containing \(A_{\frac{[u+1,\alpha]}{(b)}}\) for some \(\alpha\leq u\) and \(\beta\in J\),
(Type V) The other terms in (4.20).
Note that for a term \(Z\) of Type V, we have \(\rho\big{(}\{q_{\frac{[u,u+1]}{(ab)}}\lambda Z\}\big{)}=0\). Hence we focus on the terms of Type I - Type IV. Let us divide each of Type I - Type IV into three sub-types. For example,
\[\text{Type I}=\text{Type I}_{(\alpha>u+1)}\sqcup\text{Type I}_{(\alpha=u+1, \beta\neq b)}\sqcup\text{Type I}_{(\alpha=u+1,\beta=b)},\]
where Type I\({}_{(\alpha>u+1)}\) consists of the terms in Type I such that \(\alpha>u+1\). Type I\({}_{(\alpha=u+1,\beta\neq b)}\) and Type I\({}_{(\alpha=u+1,\beta=b)}\) can be defined similarly. Now we list the following facts:
1. Type I\({}_{(\alpha>u+1)}\sqcup\text{Type III}_{(\alpha>u+1)}\) : In \(L_{ij}(z)\), all terms with \(A_{\frac{[u,u+1]}{(ab)}}A_{\frac{[uu]}{(ab)}}\) (i.e. Type I\({}_{(\alpha>u+1)}\) terms) can be written as \(XA_{\frac{[u,u+1]}{(ab)}}A_{\frac{[uu]}{(ab)}}Y(z)\) for some \(X(\partial),Y(\partial)\in\mathcal{V}(\mathfrak{p})[\partial].\) Similarly, with the same \(X\) and \(Y\), the terms with \(A_{\frac{[u,u+1]}{(ab)}}\) (i.e. Type III\({}_{(\alpha>u+1)}\) terms) in \(L_{ij}(z)\) has the form \(-XA_{\frac{[u,u+1]}{(ab)}}Y(z)\). Note that \(\{q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,X(z)\}=0\) and \(\{q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,Y(z)\}=0\) and hence, by Lemma 4.6 (a), we have \[\rho\Big{(}\{q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,XA_{\frac{[u,u+1]}{(ab)}}A_{ \frac{[u,u]}{(ba)}}Y(z)-XA_{\frac{[u,u]}{(ab)}}Y(z)\}\Big{)}=0.\] (4.22)
2. Type I\({}_{(\alpha=u+1,\beta\neq b)}\sqcup\text{Type III}_{(\alpha=u+1,\beta\neq b)}\) : By the same reason as (i), the sum of Type I\({}_{(\alpha=u+1,\beta\neq b)}\) and Type III\({}_{(\alpha=u+1,\beta\neq b)}\) terms in \(L_{ij}(z)\) has the form of \(\big{(}XA_{\frac{[u+1,u+1]}{(ab)}}A_{\frac{[u,u]}{(ab)}}Y(z)-XA_{\frac{[u+1,u]} {(ba)}}Y(z)\big{)}\) for some \(X(\partial),Y(\partial)\in\mathcal{V}(\mathfrak{p})[\partial]\) such that \(\{q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,X(z)\}=\{q_{\frac{[u,u+1]}{(ab)}}\,\lambda \,Y(z)\}=0\). Moreover, by Lemma 4.6 (b), we have \[\rho\Big{(}\{q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,XA_{\frac{[u+1,u+1]}{(ab)}}A_ {\frac{[u,u]}{(ba)}}Y(z)-XA_{\frac{[u+1,u]}{(ba)}}Y(z)\}\Big{)}=0.\] (4.23)
3. Type II\({}_{(\alpha=u,\beta\neq a)}\sqcup\text{Type IV}_{(\alpha=u,\beta\neq a)}\) : The sum of all terms of the given types in \(L_{ij}(z)\) has the form \(\big{(}XA_{\frac{[u+1,u+1]}{(ba)}}A_{\frac{[u,u]}{(ab)}}Y(z)-XA_{\frac{[u+1,u] }{(ab)}}Y(z)\big{)}\) and, by Lemma 4.6 (c), we have \[\rho\Big{(}\{q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,XA_{\frac{[u+1,u+1]}{(ba)}}A_ {\frac{[u,u]}{(ab)}}Y(z)-XA_{\frac{[u+1,u]}{(ab)}}Y(z)\}\Big{)}=0.\] (4.24)
4. Type I\({}_{(\alpha=u+1,\beta=b)}\sqcup\text{Type III}_{(\alpha=u,\beta=a)}\sqcup \text{Type III}_{(\alpha=u+1,\beta=b)}\) with \(a\neq b\) : The sum of all terms of the given types in \(L_{ij}(z)\) has the form \(\big{(}XA_{\frac{[u+1,u+1]}{(ab)}}A_{\frac{[u,u]}{(ba)}}Y(z)+XA_{\frac{[u+1,u +1]}{(ba)}}A_{\frac{[u,u]}{(aa)}}Y(z)-XA_{\frac{[u+1,u]}{(ba)}}Y(z)\big{)}\) and, by Lemma 4.6 (d), we have \[\rho\Big{(}\{q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,XA_{\frac{[u+1,u+1]}{(ab)}}A_ {\frac{[u,u]}{(ab)}}Y(z)+XA_{\frac{[u+1,u+1]}{(ba)}}A_{\frac{[u,u]}{(ab)}}Y( z)-XA_{\frac{[u+1,u]}{(ba)}}Y(z)\}\Big{)}=0.\] (4.25)
5. Type II\({}_{(\alpha<u)}\sqcup\text{Type IV}_{(\alpha<u)}\) : The sum of all terms of the given types in \(L_{ij}(z)\) has the form \(\big{(}XA_{\frac{[u+1,u+1]}{(ba)}}A_{\frac{[u,u]}{(ab)}}Y(z)-XA_{\frac{[u+1,u ]}{(ab)}}Y(z)\big{)}\) and, by Lemma 4.6 (e), we have \[\rho\Big{(}\{q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,XA_{\frac{[u+1,u+1]}{(ba)}}A_ {\frac{[u,u]}{(ab)}}Y(z)-XA_{\frac{[u+1,\alpha]}{(b)}}Y(z)\}\Big{)}=0.\] (4.26)
By adding (4.22)-(4.26), we get \(\rho\Big{(}\big{\{}q_{\frac{[u,u+1]}{(ab)}}\,\lambda\,L_{ij}(z)\big{\}}\Big{)}=0.\)
Using Proposition 4.7, we obtain the following theorems which are the main results of this section.
**Theorem 4.8**.: _The coefficients of \(L(\partial)\) in (4.17) freely generate the \(\mathcal{W}\)-superalgebra \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\) associated with the \(N\times(m|n)\) rectangular nilpotent element \(f\)._
Proof.: Let us name the coefficients of (4.20) by
\[L_{ij}(\partial)=\sum_{k=0}^{N}w_{ij;k}\partial^{k}. \tag{4.27}\]
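Note that \(w_{ij;N}=\delta_{ij}\) since \(L(\partial)\) is monic of order \(N\), so the relevant coefficients are the \(w_{ij;k}\) with \(0\leq k\leq N-1\).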
By Proposition 4.7, each coefficient \(w_{ij;k}\) of \(L_{ij}(z)\) lies in \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\). Hence it is enough to show that \(\{w_{ij;k}\}_{1\leq i,j\leq m+n,\,0\leq k\leq N-1}\) freely generates \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\). To this end, we use Proposition 2.4. In other words, we aim to show that (i) each \(w_{ij;k}\) is homogeneous with respect to the conformal grading \(\Delta\) in (2.11) and that (ii) the linear parts of the \(w_{ij;k}\) span \(\mathfrak{g}^{f}\) modulo total derivatives.
(i) We can extend the conformal grading on \(\mathcal{V}(\mathfrak{gl}(\Pi))\) to \(\mathcal{V}(\mathfrak{gl}(\Pi))(\!(\partial^{-1})\!)\) by letting \(\Delta_{\partial}=1\). Then, from (4.13), we have
\[\Delta_{A_{\frac{[uv]}{(ij)}}(\partial)}=1+u-v, \tag{4.28}\]
for \(u,v\in\{1,2,\cdots,N\}\) with \(u\geq v\) and \(i,j\in\{1,2,\cdots,m+n\}\). Hence for \(k\in\{0,\cdots,N-1\}\),
\[\Delta_{A_{\frac{[N,i_{1}+1]}{(is_{1})}}A_{\frac{[i_{1},i_{2}+1]}{(s_{1}s_{2})}}\cdots A_{\frac{[i_{k},1]}{(s_{k}j)}}(\partial)}=N. \tag{4.29}\]
Comparing (4.20) and (4.29), we get \(\Delta_{L_{ij}(\partial)}=N\). In addition, we have \(\Delta_{w_{ij;k}}=N-k\) for \(w_{ij;k}\) in (4.27).
(ii) From (4.20) we get the explicit formula:
\[w_{ij;k}=\sum_{r=k}^{N-1}\sum_{i_{1},\cdots,i_{r}}\sum_{s_{1},\cdots,s_{r}=1}^{m+n}\mathrm{Res}\,A_{\frac{[N,i_{1}+1]}{(is_{1})}}A_{\frac{[i_{1},i_{2}+1]}{(s_{1}s_{2})}}\cdots A_{\frac{[i_{r},1]}{(s_{r}j)}}(\partial)\partial^{-k-1}\in S(\mathbb{C}[\partial]\otimes\mathfrak{p}). \tag{4.30}\]
Let us pick the summand in (4.30) which yields the linear part of \(w_{ij;k}\), that is
\[f_{ij;k}:=\sum_{i_{1},\cdots,i_{k}}\sum_{s_{1},\cdots,s_{k}=1}^{m+n}\mathrm{Res}\,A_{\frac{[N,i_{1}+1]}{(is_{1})}}A_{\frac{[i_{1},i_{2}+1]}{(s_{1}s_{2})}}\cdots A_{\frac{[i_{k},1]}{(s_{k}j)}}(\partial)\partial^{-k-1}=(-1)^{k}\sum_{h=0}^{k}q_{\frac{[N+h-k,h+1]}{(ij)}}\in\mathfrak{p}. \tag{4.31}\]
One can check that the elements \(f_{ij;k}\) for \(i,j\in\{1,2,\cdots,m+n\}\) and \(k\in\{0,1,\cdots,N-1\}\) form a basis of \(\mathfrak{g}^{f}\) and satisfy (2.10):
\[w_{ij;k}-f_{ij;k}\in\partial(\mathbb{C}[\partial]\otimes\mathfrak{p})\oplus \bigoplus_{m\geq 2}(\mathbb{C}[\partial]\otimes\mathfrak{p})^{\otimes m}. \tag{4.32}\]
Since we proved \((i)\) and \((ii)\), we conclude by Proposition 2.4 that \(\{w_{ij;k}\}_{1\leq i,j\leq m+n,\,0\leq k\leq N-1}\) freely generates \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\).
**Theorem 4.9**.: _The matrix differential operator \(L(\partial)\) in (4.17) is a sATO of order \(N\geq 2\) on \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\). In other words, \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\) is the Adler-type PVsA associated with \(L(\partial)\)._
Proof.: Note that the coefficients of \(L^{\mathrm{inv}}(z)\) are also in the \(\mathcal{W}\)-superalgebra \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\) since they are in the differential superalgebra generated by the coefficients of the monic operator \(L(\partial)\). Let us denote the \(\lambda\)-bracket (2.7) on \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\) by \(\{\ _{\lambda}\ \}_{\mathcal{W}}\) and the affine bracket (2.5) defined on \(\mathcal{V}(\mathfrak{gl}(\Pi))\) by \(\{\ _{\lambda}\ \}_{\mathrm{Aff}}\). We aim to show that \(L^{\mathrm{inv}}(\partial)\) is a sATO on \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\) with respect to the opposite bracket \(-\{\ _{\lambda}\ \}_{\mathcal{W}}\). Recall that the matrix operator \(A(\partial)\) defined in (4.12) is a sATO for the affine bracket. Hence, the following equalities hold:
\[-\{(L^{\mathrm{inv}})_{ij}(z)\,_{\lambda}\,(L^{\mathrm{inv}})_{hk}(w)\}_{\mathcal{W}}=-\rho\{(A^{\mathrm{inv}})_{\frac{[1N]}{(ij)}}(z)\,_{\lambda}\,(A^{\mathrm{inv}})_{\frac{[1N]}{(hk)}}(w)\}_{\mathrm{Aff}}\] \[=(-1)^{\bar{i}\bar{j}+\bar{i}\bar{h}+\bar{j}\bar{h}}\rho\Big{(}(A^{\mathrm{inv}})_{\frac{[1N]}{(hj)}}(w+\lambda+\partial)\iota_{z}(z-w-\lambda-\partial)^{-1}((A^{\mathrm{inv}})_{\frac{[1N]}{(ik)}})^{*}(\lambda-z)\] \[-(A^{\mathrm{inv}})_{\frac{[1N]}{(hj)}}(z)\iota_{z}(z-w-\lambda-\partial)^{-1}(A^{\mathrm{inv}})_{\frac{[1N]}{(ik)}}(w)\Big{)}\]
\[=(-1)^{\bar{i}\bar{j}+\bar{i}\bar{h}+\bar{j}\bar{h}}\Big{(}(L^{\rm inv})_{hj}(w+\lambda+\partial)\iota_{z}(z-w-\lambda-\partial)^{-1}((L^{\rm inv})_{ik})^{\ast}(\lambda-z)\] \[\qquad\qquad\qquad\qquad-(L^{\rm inv})_{hj}(z)\iota_{z}(z-w-\lambda-\partial)^{-1}(L^{\rm inv})_{ik}(w)\Big{)}.\]
Hence \(L^{\rm inv}(\partial)\) is a sATO with respect to \(-\{\ _{\lambda}\ \}_{\mathcal{W}}\) and, by Proposition 4.5, its \(\star\)-inverse \(L(\partial)\) is a sATO with respect to \(\{\ _{\lambda}\ \}_{\mathcal{W}}\).
**Example**.: _Let \(N=3\), \(m=2\) and \(n=1\), so that \(\mathfrak{gl}(\Pi)\cong\mathfrak{gl}(6|3)\). One can compute the coefficients of \(L(\partial)\) in (4.17) by direct computations. For example, the \((1,1)\) entry of \(L(\partial)\) is_
\[L_{11}(\partial)= q_{\frac{[331]}{(11)}}-q_{\frac{[32]}{(11)}}(\partial+q_{\frac{[11 1]}{(11)}})-q_{\frac{[32]}{(12)}}q_{\frac{[11]}{(12)}}-q_{\frac{[33]}{(13)}}(-q_ {\frac{[111]}{(31)}})-(\partial+q_{\frac{[331]}{(11)}})q_{\frac{[211]}{(11)}} \tag{4.38}\] \[-q_{\frac{[33]}{(12)}}q_{\frac{[211]}{(21)}}-q_{\frac{[33]}{(13)}} (-q_{\frac{[221]}{(31)}})+(\partial+q_{\frac{[331]}{(11)}})(\partial+q_{\frac{ [22]}{(11)}})(\partial+q_{\frac{[111]}{(11)}})\] \[+(\partial+q_{\frac{[33]}{(11)}})q_{\frac{[22]}{(12)}}q_{\frac{[1 1]}{(21)}}+(\partial+q_{\frac{[331]}{(11)}})q_{\frac{[222]}{(13)}}(-q_{\frac{ [11]}{(31)}})\] \[+q_{\frac{[33]}{(12)}}q_{\frac{[22]}{(13)}}(\partial+q_{\frac{[11 1]}{(1)}})+q_{\frac{[331]}{(12)}}(\partial+q_{\frac{[22]}{(22)}})q_{\frac{[11 ]}{(21)}}+q_{\frac{[33]}{(12)}}q_{\frac{[22]}{(23)}}(-q_{\frac{[11]}{(31)}})\] \[+q_{\frac{[33]}{(13)}}(-q_{\frac{[22]}{(31)}})(\partial+q_{\frac{ [11]}{(11)}})+q_{\frac{[33]}{(13)}}(-q_{\frac{[22]}{(32)}})q_{\frac{[11]}{(21 )}}+q_{\frac{[33]}{(13)}}(\partial-q_{\frac{[22]}{(33)}})(-q_{\frac{[11]}{(31 )}}),\]
_where \(q_{\frac{[uv]}{(ab)}}=q_{(u-1)(m+n)+a,(v-1)(m+n)+b}\) for \(1\leq u,v\leq 3\) and \(1\leq a,b\leq 3\). By taking the symbols of the terms in the RHS of (4.38), we get_
\[L_{11}(z)=z^{3}+w_{11;2}z^{2}+w_{11;1}z+w_{11;0}, \tag{4.39}\]
_where_
\[w_{11;2} =q_{11}+q_{44}+q_{77},\] \[w_{11;1} =-q_{74}-q_{41}+2q_{11}^{\prime}+q_{44}^{\prime}\] \[+q_{44}q_{11}+q_{77}q_{11}+q_{77}q_{44}+q_{45}q_{21}-q_{46}q_{31} +q_{78}q_{54}+q_{78}q_{21}-q_{79}q_{64}-q_{79}q_{31},\] \[w_{11;0} =q_{11}^{\prime\prime}+q_{71}-q_{41}^{\prime}+(q_{44}q_{11})^{ \prime}+q_{77}q_{11}^{\prime}-q_{74}q_{11}+q_{75}q_{21}+q_{76}q_{31}-q_{77}q_{ 41}+q_{78}q_{51}+q_{79}q_{61}\] \[+(q_{45}q_{21})^{\prime}-(q_{46}q_{31})^{\prime}+q_{78}q_{21}^{ \prime}-q_{79}q_{31}^{\prime}+q_{77}q_{44}q_{11}+q_{77}q_{45}q_{21}-q_{77}q_{ 46}q_{31}\] \[+q_{78}q_{54}q_{11}+q_{78}q_{55}q_{21}-q_{78}q_{56}q_{31}-q_{79}q _{64}q_{11}-q_{79}q_{65}q_{21}+q_{79}q_{66}q_{31}.\]
_The three elements \(w_{11;2}\), \(w_{11;1}\), \(w_{11;0}\) are in \(\mathcal{W}(\mathfrak{gl}(6|3),f)\)._
## 5. Integrable systems and rectangular \(\mathcal{W}\)-superalgebras
In this section, we construct integrable systems on the rectangular \(\mathcal{W}\)-superalgebras \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\). By Theorem 4.8, this amounts to defining an integrable system on a PVsA defined by a generic sATO (see Section 3). While the previous section made use of the \(\star\)-product, we will now need to use the first product \(\circ\) of (3.9).
### Fractional powers of sATO
Let \(\mathcal{V}\) be a PVsA endowed with a \(\lambda\)-bracket \(\{\,{}_{\lambda}\,\}\) and \(I\) be a finite index set with parity map.
**Lemma 5.1**.: _Let \(A(\partial)\in\mathfrak{gl}(I)\otimes\mathcal{V}(\!(\partial^{-1})\!)\) be a monic matrix pseudo-differential operator of order \(N\geq 1\)._
1. _There exists a unique matrix pseudo-differential operator_ \(A(\partial)^{-1}\) _such that_ \(A(\partial)^{-1}\circ A(\partial)=A(\partial)\circ A(\partial)^{-1}=\mathbb{1} _{I}\)_, which is monic of order_ \(-N\)_._
2. _There exists a unique matrix pseudo-differential operator_ \(A^{\frac{1}{N}}(\partial)\) _such that_ \((A^{\frac{1}{N}})^{\circ N}=A\)_, which is monic of order one._
_Here, we emphasize that matrix pseudo-differential operators \(A(\partial)^{-1}\) and \(A^{\frac{1}{N}}\) are defined with respect to the \(\circ\)-product (3.9), not the \(\star\)-product._
Proof.: (a) For the existence and uniqueness of the inverse, one can just apply the proof of Lemma 4.3.
(b) Since the leading term is the identity matrix, the proof reduces to the scalar setting in which case the statement is clear. The idea is as follows. The coefficient \(V_{0}\) in the operator
\[B(\partial)=\partial+\sum_{k\leq 0}V_{k}\partial^{k}\in\mathfrak{gl}(I)\otimes\mathcal{V}(\!(\partial^{-1})\!) \tag{5.1}\]
satisfying \(B(\partial)^{\circ N}=A(\partial)\) can be obtained by comparing the coefficients of \(\partial^{N-1}\) on both sides. Inductively, we get \(V_{k}\) for negative integers \(k\).
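For instance, in the scalar even case with \(A(\partial)=\partial^{2}+u\), this procedure yields
\[A^{\frac{1}{2}}(\partial)=\partial+\frac{1}{2}u\partial^{-1}-\frac{1}{4}u^{\prime}\partial^{-2}+\cdots,\]
whose square indeed equals \(\partial^{2}+u\) up to terms of order \(-2\) that are cancelled by the subsequent coefficients; this is the familiar square root appearing in the KdV hierarchy.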
For \(i,j\in I\), the \(ij\)-th entry of \(A\circ B(z)\) is \(\left(A\circ B\right)_{ij}(z)=\sum_{t\in I}(-1)^{(\bar{i}+\bar{t})(\bar{j}+\bar{t})}A_{it}(z+\partial)B_{tj}(z)\). Hence, for any \(v\in\mathcal{V}\), by the Leibniz rule, we have
\[\begin{split}\left\{\left(A\circ B\right)_{ij}(z)\ _{\lambda}\ v\right\}=& \sum_{t\in I}(-1)^{(\bar{t}+\bar{j})(\bar{v}+\bar{t}+\bar{t})}\left\{A_{it}(z+ \partial)\ _{\lambda+\partial}\ v\right\}_{\to}B_{tj}(z)\\ &+\sum_{t\in I}(-1)^{\bar{v}(\bar{t}+\bar{t})}\left\{B_{tj}(z)\ _{\lambda+\partial}\ v\right\}_{\to}(A^{\ast})_{ti}(-z+\lambda)\end{split} \tag{5.2}\]
where the negative powers of \(z+\partial\) are expanded for large \(|z|\).
**Lemma 5.2**.: _Let \(A(\partial)\) be a monic matrix pseudo-differential operator on \(\mathcal{V}\) of order \(N>0\). Then for all \(a,b\in I\) and for all \(k\in\mathbb{Z}_{+}\), we have_
\[\operatorname{Res}_{z}\!\left\{\operatorname{str}A^{\frac{k}{N}}(z)\,_{ \lambda}\,A_{ab}(w)\right\}\big{|}_{\lambda=0}=\frac{k}{N}\sum_{c,d\in I}(-1) ^{(\bar{a}+\bar{b})(\bar{c}+\bar{d})+\bar{d}}\operatorname{Res}_{z}\left\{A_ {cd}(z+\partial)\,_{\partial}\,A_{ab}(w)\right\}(A_{dc}^{\frac{k}{N}-1}(z)). \tag{5.3}\]
Proof.: By induction on \(k\), it can be checked using the sesquilinearity and Leibniz rule axioms that
\[\begin{split}\left\{\operatorname{str}A^{\frac{k}{N}}(z)\ _{\lambda}\ A_{ab}(w)\right\}=\sum_{i,s,t\in I}\sum_{l=1}^{k}(-1)^{\bar{t}}(-1) ^{(\bar{t}+\bar{j})(\bar{s}+\bar{t})}(-1)^{(\bar{t}+\bar{t})(1+\bar{a}+\bar{ b})}(-1)^{(\bar{s}+\bar{t})(\bar{a}+\bar{b})}\\ \times&\big{\{}A_{ts}^{\frac{1}{\lambda}}(z+x)\ _{\lambda+x+y}\ A_{ab}(w)\big{\}}\, \big{(}\big{|}_{x=\partial}A_{si}^{\frac{k-l}{N}}(z)\big{)}\big{(}\big{|}_{y= \partial}(A^{s\frac{l-1}{N}})_{ti}(-z+\lambda)\big{)}.\end{split} \tag{5.4}\]
We simplify the signs in (5.4) as follows
\[(-1)^{\bar{t}}(-1)^{(\bar{t}+\bar{s})(\bar{s}+\bar{t})}(-1)^{(\bar{t}+\bar{t}) (1+\bar{a}+\bar{b})}(-1)^{(\bar{s}+\bar{t})(\bar{a}+\bar{b})}=(-1)^{(\bar{s}+ \bar{t})\bar{t}+\bar{s}\bar{t}+\bar{t}+(\bar{s}+\bar{t})(\bar{a}+\bar{b})+\bar {s}}.\]
Next, we take the residue and evaluate at \(\lambda=0\) to get
\[\begin{split}&\operatorname{Res}_{z}\bigl{\{}\mathrm{str}A^{\frac{ 1}{N}}(z)\ _{\lambda}\ A_{ab}(w)\bigr{\}}\bigr{|}_{\lambda=0}\\ &=\operatorname{Res}_{z}\sum_{i,s,t\in I}\sum_{l=1}^{k}\big{\{}A^ {\frac{1}{N}}_{ts}(z\!+\!x\!+\!y\!+\!\lambda)\ _{\lambda+x+y}\ A_{ab}(w)\bigr{\}}\bigl{(}\bigr{|}_{x=\partial}A^{\frac{k-l}{ N}}_{si}(z\!+\!\lambda\!+\!y)\bigr{)}\\ &\qquad\qquad\qquad\times\bigl{(}\bigr{|}_{y=0}A^{\frac{l-1}{N}}_{ it}(z)\bigr{)}\bigr{|}_{\lambda=0}\times(-1)^{(\tilde{s}+\tilde{t})\tilde{t}+ \tilde{s}\tilde{t}+\tilde{\imath}}(-1)^{(\tilde{s}+\tilde{t})(\tilde{a}+\tilde {b})+\tilde{s}}\\ &=k\operatorname{Res}_{z}\sum_{s,t\in I}\big{\{}A^{\frac{1}{N}}_{ ts}(z+\partial)\ _{\partial}\ A_{ab}(w)\big{\}}(A^{\frac{k-1}{N}}_{s\tilde{t}}(z))\times(-1)^{( \tilde{s}+\tilde{t})(\tilde{a}+\tilde{b})+\tilde{s}}.\end{split} \tag{5.5}\]
Here, the first equality follows from (5.2) and
\[\operatorname{Res}_{z}B_{ij}(z)(C\,^{*})_{hk}(-z+\lambda)=\operatorname{Res}_ {z}B_{ij}(z+\lambda+\partial)C_{kh}(z). \tag{5.6}\]
Recall that \(A(z+x)\) is expanded using the geometric expansion for large \(|z|\). For any \(c,d\in I\), one gets
\[\bigl{\{}A^{\frac{N}{N}}_{cd}(z)\ _{\lambda}\ A_{ab}(w)\bigr{\}} =\sum_{s,t\in I}\sum_{l=1}^{N}(-1)^{(\tilde{a}+\tilde{b})(\tilde{ s}+\tilde{d})}(-1)^{(\tilde{c}+\tilde{t})(\tilde{t}+\tilde{s}+\tilde{a}+ \tilde{b}+\tilde{s}+\tilde{d})}(-1)^{(\tilde{c}+\tilde{t})(\tilde{t}+\tilde{d })+(\tilde{t}+\tilde{s})(\tilde{s}+\tilde{d})} \tag{5.7}\] \[\times\bigl{\{}A^{\frac{1}{N}}_{ts}(z+x)\ _{\lambda+x+y}\ A_{ab}(w) \bigr{\}}\,\bigl{(}\bigr{|}_{x=\partial}A^{\frac{N-l}{N}}_{sd}(z)\bigr{)} \bigl{(}\bigr{|}_{y=\partial}(A^{*\frac{l-1}{N}})_{t\epsilon}(-z+\lambda) \bigr{)}, \tag{5.8}\]
by the same process as we get (5.4). For the simplicity of notation, let us denote the sign in (5.7) by
\[\eta=(-1)^{(\tilde{a}+\tilde{b})(\tilde{s}+\tilde{d})}(-1)^{(\tilde{c}+\tilde{ t})(\tilde{t}+\tilde{s}+\tilde{a}+\tilde{b}+\tilde{s}+\tilde{d})}(-1)^{( \tilde{c}+\tilde{t})(\tilde{t}+\tilde{d})+(\tilde{t}+\tilde{s})(\tilde{s}+ \tilde{d})}. \tag{5.9}\]
After replacing \(z\) by \(z+\partial\) and \(\lambda\) by \(\lambda+\partial\), applying both sides of our equation to \(A^{\frac{k}{N}-1}_{dc}(z)\) and summing over all pairs of indices \((c,d)\in I\times I\), we obtain
\[\begin{split}&\sum_{c,d\in I}\big{\{}A_{cd}(z\!+\!x)\ _{\lambda+\partial}\ A_{ab}(w)\big{\}}\bigl{(}\bigr{|}_{x=\partial}A^{\frac{k}{ N}-1}_{dc}(z)\bigr{)}\\ &=\sum_{c,d\in I}\sum_{s,t\in I}\sum_{l=1}^{N}\big{\{}A^{\frac{1} {N}}_{ts}(z\!+\!x)\ _{\lambda+x+y}\ A_{ab}(w)\big{\}}\,\bigl{(}\bigr{|}_{x=\partial}A^{\frac{N-l}{ N}}_{sd}(z+x)\bigr{)}\bigl{(}\bigr{|}_{y=\partial}(A^{*\frac{l-1}{N}})_{t \epsilon}(-z+\lambda)\bigr{)}\\ &\qquad\qquad\qquad\qquad\times\bigl{(}\bigr{|}_{x=\partial}A^{ \frac{k}{N}-1}_{dc}(z)\bigr{)}\times\eta\\ &=\sum_{c,d\in I}\sum_{s,t\in I}\sum_{l=1}^{N}\big{\{}A^{\frac{1} {N}}_{ts}(z\!+\!x)\ _{\lambda+x+y}\ A_{ab}(w)\big{\}}\,\bigl{(}\bigr{|}_{x=\partial}A^{\frac{N-l}{ N}}_{sd}(z+\partial)A^{\frac{k}{N}-1}_{dc}(z)\bigr{)}\\ &\qquad\qquad\qquad\qquad\qquad\times\bigl{(}\bigr{|}_{y=\partial} (A^{*\frac{l-1}{N}})_{t\epsilon}(-z+\lambda)\bigr{)}\times\eta\cdot(-1)^{( \tilde{t}+\tilde{c})(\tilde{c}+\tilde{d})}\\ &=\sum_{c\in I}\sum_{s,t\in I}\sum_{l=1}^{N}\big{\{}A^{\frac{1}{N} }_{ts}(z\!+\!x)\ _{\lambda+x+y}\ A_{ab}(w)\big{\}}\,\bigl{(}\bigr{|}_{x=\partial}A^{\frac{k-l}{ N}}_{sc}(z)\bigr{)}\bigl{(}\bigr{|}_{y=\partial}(A^{*\frac{l-1}{N}})_{t\epsilon}(-z+ \lambda)\bigr{)}\\ &\qquad\qquad\qquad\qquad\times\eta\cdot(-1)^{(\tilde{t}+\tilde{c} )(\tilde{c}+\tilde{d})}(-1)^{(\tilde{s}+\tilde{d})(\tilde{d}+\tilde{c})}.\end{split} \tag{5.10}\]
The sign in the last equation of (5.10) is simplified as follows :
\[\eta\cdot(-1)^{(\tilde{t}+\tilde{c})(\tilde{c}+\tilde{d})}(-1)^{(\tilde{s}+ \tilde{d})(\tilde{d}+\tilde{c})}=(-1)^{(\tilde{s}+\tilde{t})(\tilde{a}+\tilde{b}) +\tilde{s}}(-1)^{(\tilde{a}+\tilde{b})(\tilde{c}+\tilde{d})+\tilde{d}}(-1)^{( \tilde{s}+\tilde{c})(\tilde{t}+\tilde{c})}.\]
Taking the residue and evaluating \(\lambda\) at \(0\), we get
\[\begin{split}&\sum_{c,d\in I}\operatorname{Res}_{z}\left\{A_{cd}(z+ \partial)\ _{\partial}\ A_{ab}w\right\}(A_{dc}^{\frac{k}{N}-1}(z))\times(-1)^{(\tilde{a}+ \tilde{b})(\tilde{c}+\tilde{d})+\tilde{d}}\\ &=\operatorname{Res}_{z}\sum_{c\in I}\sum_{s,t\in I}\sum_{l=1}^{N} \left\{A_{ts}^{\frac{1}{N}}(z+x)\ _{x+y}\ A_{ab}(w)\right\}\left(\big{|}_{x=\partial}A_{sc}^{\frac{k-l}{N}}(z) \right)\left(\big{|}_{y=\partial}(A^{s\frac{l-1}{N}})_{tc}(-z)\right)\\ &\qquad\qquad\qquad\qquad\qquad\times(-1)^{(\tilde{s}+\tilde{t}) (\tilde{a}+\tilde{b})+\tilde{s}}(-1)^{(\tilde{s}+\tilde{c})(\tilde{t}+\tilde{ c})}\\ &=\operatorname{Res}_{z}\sum_{c\in I}\sum_{s,t\in I}\sum_{l=1}^{N} \left\{A_{ts}^{\frac{1}{N}}(z+x)\ _{x+y}\ A_{ab}(w)\right\}\left(\big{|}_{x=\partial}A_{sc}^{\frac{k-l}{N}}(z+y )\right)\left(\big{|}_{y=\partial}(A^{\frac{l-1}{N}})_{ct}(z)\right)\\ &\qquad\qquad\qquad\qquad\qquad\times(-1)^{(\tilde{s}+\tilde{t}) (\tilde{a}+\tilde{b})+\tilde{s}}(-1)^{(\tilde{s}+\tilde{c})(\tilde{t}+\tilde{ c})}\\ &=N\operatorname{Res}_{z}\sum_{s,t\in I}\left\{A_{ts}^{\frac{1}{ N}}(z+\partial)\ _{\partial}\ A_{ab}(w)\right\}(A_{st}^{\frac{k-1}{N}}(z))\times(-1)^{(\tilde{s}+ \tilde{t})(\tilde{a}+\tilde{b})+\tilde{s}}.\end{split} \tag{5.11}\]
We conclude the proof by comparing the last equation with (5.5).
### Integrable hierarchies on a rectangular \(\mathcal{W}\)-superalgebra
While the lemmas in Section 5.1 are stated for an arbitrary monic matrix pseudo-differential operator \(A\), we now specialize to the differential superalgebra \(\mathcal{V}:=\mathcal{V}_{I}^{N}\) generated by the coefficients of the generic sATO
\[L(\partial):=\partial^{N}+\sum_{M=0}^{N-1}\sum_{a,b\in I}e_{ab}\otimes u_{M,ab }\partial^{M},\]
which is isomorphic to a rectangular \(\mathcal{W}\)-superalgebra by Theorem 4.8.
For every positive integer \(k\), let us define the differential polynomial
\[h_{k}:=\frac{N}{k}\operatorname{Res}\operatorname{str}L^{\frac{k}{N}}( \partial). \tag{5.12}\]
Then the variational derivative of \(h_{k}\) can be written in terms of the coefficients of \(L(\partial)\) as follows.
**Lemma 5.3**.: _Let \(i\in\mathbb{Z}_{+}\) such that \(i<N\). For \(a,b\in I\) and a positive integer \(k\), we have_
\[\frac{\delta h_{k}}{\delta u_{i,ab}}=\operatorname{Res}_{z}(z+\partial)^{-i-1}( L^{\frac{k}{N}-1})_{ba}(z). \tag{5.13}\]
Proof.: By Lemma 5.2,
\[\operatorname{Res}_{z}\bigl{\{}\operatorname{str}L^{\frac{k}{N}}(z)\ _{\lambda}\ L_{ab}(w)\bigr{\}}\big{|}_{\lambda=0}=\frac{k}{N}\sum_{c,d\in I}( -1)^{(\tilde{a}+\tilde{b})(\tilde{c}+\tilde{d})+\tilde{d}}\operatorname{Res}_ {z}\left\{L_{cd}(z+\partial)\ _{\partial}\ L_{ab}(w)\right\}(L^{\frac{k}{N}-1})_{dc}(z)). \tag{5.14}\]
Taking the coefficients of \(w^{-j-1}\) in both sides, we get
\[\left\{h_{k}\ _{\lambda}\ u_{j;ab}\right\}\Big{|}_{\lambda=0} =\sum_{c,d\in I}(-1)^{(\tilde{a}+\tilde{b})(\tilde{c}+\tilde{d})+ \tilde{d}}\operatorname{Res}_{z}\left\{L_{cd}(z+\partial)\ _{\partial}\ u_{j;ab}\right\}((L^{\frac{k}{N}-1})_{dc}(z)) \tag{5.15}\] \[=\sum_{c,d\in I}\sum_{i=0}^{N-1}(-1)^{(\tilde{a}+\tilde{b})(\tilde {c}+\tilde{d})}\left\{u_{i,cd\ \partial}\ u_{j,ab}\right\}\operatorname{Res}_{z}(z+\partial)^{-i-1}((L^{\frac{ k}{N}-1})_{dc}(z)). \tag{5.16}\]
On the other hand, using the master formula (2.2) and by definition of the variational derivative, we obtain
\[\left\{h_{k}\ _{\lambda}\ u_{j;ab}\right\}\big{|}_{\lambda=0}=\sum_{i=0}^{N-1} \sum_{c,d\in I}(-1)^{(\tilde{a}+\tilde{b})(\tilde{c}+\tilde{d})}\left\{u_{i, cd\ \lambda+\partial}\ u_{j,ab}\right\}(-\lambda-\partial)^{m}\frac{\partial h_{k}}{ \partial u_{i,cd}^{(m)}}\bigg{|}_{\lambda=0} \tag{5.17}\]
\[=\sum_{c\in I}\left\{(L^{\frac{k}{N}})_{ac}(w\!+\!\partial)_{\!+}\,L_{cb}(w)-L_{ ac}(w\!+\!\partial)\left(L^{\frac{k}{N}}\right)_{cb}(w)_{\!+}\right\}\times(-1)^{ \tilde{c}(\tilde{a}+\tilde{b}+1)+\tilde{a}\tilde{b}},\]
where \(\beta=(-1)^{\tilde{c}\tilde{a}+\tilde{c}\tilde{b}+\tilde{a}\tilde{b}}\cdot(-1)^ {(\tilde{a}+\tilde{b})(\tilde{c}+\tilde{d})+\tilde{d}}\). In detail, the sign simplification reads
\[\beta\cdot(-1)^{(d+c)(c+b)} =(-1)^{(\tilde{a}+\tilde{d})(\tilde{d}+\tilde{c})}\,\,(-1)^{ \tilde{a}\tilde{c}+\tilde{b}\tilde{c}+\tilde{c}+\tilde{a}\tilde{b}}\] \[\overset{\text{or}}{=}(-1)^{(\tilde{d}+\tilde{c})(\tilde{c}+ \tilde{b})}\,\,(-1)^{\tilde{a}\tilde{d}+\tilde{b}\tilde{d}+\tilde{d}+\tilde{a }\tilde{b}}.\]
(b) Recall that the second PVsA bracket is defined on the generators of \(\mathcal{V}\) by
\[\left\{L_{cd}(z)\,_{\lambda}\,L_{ab}(w)\right\}_{K} =(-1)^{\tilde{a}\tilde{c}+\tilde{b}\tilde{c}+\tilde{a}\tilde{b}} \delta_{bc}(L_{ad}(z)\!-\!L_{ad}(w\!+\!\lambda))_{\iota_{z}}(z\!-\!w\!-\! \lambda)^{-1}\] \[-(-1)^{\tilde{a}\tilde{c}+\tilde{b}\tilde{c}+\tilde{a}\tilde{b}} \delta_{ad}\iota_{z}(z\!-\!w\!-\!\lambda\!-\!\partial)^{-1}(L_{cb}(w)\!-\!(L_ {cb})^{*}(-z\!+\!\lambda)).\]
By Lemma 5.2, we have
\[\left\{h_{k}\ \ \lambda\ \ L_{ab}(w)\right\}_{K}\big{|}_{\lambda=0} =\sum_{c,d\in I}\zeta\ \text{Res}_{z}\left[\delta_{cb}t_{z}(z-w)^{-1}(L_{ad}(z+ \partial)-L_{ad}(w+\partial))\right](L^{\frac{k}{N}-1})_{dc}(z)\] \[\ \
**Proposition 5.7**.: _Let \(d/dt_{k}\) be the Hamiltonian derivation of \(\mathcal{V}\)_
\[\frac{dv}{dt_{k}}:=\left\{h_{k}\ \ \lambda\ \ v\right\}_{H}\big{|}_{\lambda=0},\ v\in \mathcal{V} \tag{5.21}\]
_associated with the Hamiltonian \(h_{k}\) in (5.12)._
1. _The equation (_5.21_) is bihamiltonian. More precisely, we have_ \[\frac{dv}{dt_{k}}:=\left\{h_{k}\ \ \lambda\ \ v\right\}_{H}\big{|}_{\lambda=0}= \left\{h_{k+N}\ \ \lambda\ \ v\right\}_{K}\big{|}_{\lambda=0}.\] (5.22) _Hence it is an integrable system by the Lenard-Magri scheme._
2. _The derivations_ \(d/dt_{k}\) _pairwise commute. In other words, we have_ \[[\,\int h_{k}\,,\,\int h_{k^{\prime}}]_{H}=0\] (5.23) _for all positive integers_ \(k\) _and_ \(k^{\prime}\)_. Hence the local functionals_ \(\int h_{k^{\prime}}\) _are all integrals of motion of (_5.21_)._
3. _The equation (_5.21_) is equivalent to the Lax equation:_ \[\frac{dL}{dt_{k}}=(L^{\frac{k}{N}})_{+}\circ L-L\circ(L^{\frac{k}{N}})_{+}\,.\] (5.24)
Proof.: (a) directly follows from Lemma 5.5. Recall that
\[[\,\int h_{k}\,\int h_{m}]_{H}:=\int\left\{\,h_{k}\,\lambda\,h_{m}\,\right\}_ {H}\big{|}_{\lambda=0}\]
hence (b) also follows from Lemmas 5.5 and 5.6. Finally, (c) follows from the following computations:
\[[(L^{\frac{k}{N}})_{+},L\,]_{ab}(w) =\left((L^{\frac{1}{N}})_{+}(\partial+w)\circ L(w)\right)_{ab}- \left(L(\partial+w)\circ(L^{\frac{1}{N}})_{+}(w)\right)_{ab}\] \[=\sum_{c\in I}(-1)^{(\bar{a}+\bar{c})(\bar{b}+\bar{c})}\big{(}(L^{ \frac{k}{N}})_{ac\,+}(\partial+w)L(w)_{cb}-L_{ac}(\partial+w)(L^{\frac{k}{N}} )_{cb\,+}(w)\big{)}\] \[=\left\{h_{k}\ \ \lambda\ \ L_{ab}(w)\right\}_{H}\big{|}_{ \lambda=0}.\]
**Theorem 5.8**.: _Let \(m,n,N\) be positive integers. For \(N\geq 2\), let \(f\) be the \(N\times(m|n)\) rectangular nilpotent element in \(\mathfrak{gl}(Nm|Nn).\) Then the Hamiltonian equation (5.21) is an integrable bihamiltonian system on the \(\mathcal{W}\)-superalgebra \(\mathcal{W}(\mathfrak{gl}(Nm|Nn),f)\). In addition, every local functional \(\int h_{k}\) for a positive integer \(k\) is an integral of motion of this system._
Proof.: It is a direct consequence of Theorem 4.10 and Proposition 5.7.
Note that by Lemma 3.10 (d) and Proposition 5.7, these systems of equations can be reduced to the sub PVsA of \(\mathcal{V}\) generated by the elements \(u_{k,ab}\) for \(k\leq N-2\). Indeed, the generators \(u_{N-1,ab}\) are all constants of the system. The first bracket \(\{\,\lambda\,\}_{H}\) can also be reduced to this subalgebra via the so-called _Dirac reductions_. This reduced system is the specialization of noncommutative KdV [11] to the algebra \((\mathfrak{gl}(I)\otimes\mathcal{V}_{I}^{N},\circ)\).
**Example 5.9**.: _We construct an integrable hierarchy on the \(\mathcal{W}\)-superalgebra \(\mathcal{W}(\mathfrak{gl}(2|2),f)\) associated with the rectangular nilpotent \(f\) which corresponds to the partition \(2\times(1|1)\). Consider the generic super Adler-type operator \(L(\partial)=\mathbb{1}_{\,(1|1)}\partial^{2}+V\partial+W\in\mathcal{V}_{(1|1)} ^{2}\), where_
\[V=\begin{bmatrix}v_{11}&v_{12}\\ v_{21}&v_{22}\end{bmatrix},\quad W=\begin{bmatrix}w_{11}&w_{12}\\ w_{21}&w_{22}\end{bmatrix}.\]
_The PVsA \(\mathcal{V}_{(1|1)}^{2}\) is isomorphic to the \(\mathcal{W}\)-superalgebra \(\mathcal{W}(\mathfrak{gl}(2|2),f)\) by Theorem 4.10._
_Proposition 5.7 provides an integrable system on this PVsA whose conserved densities are given by_
\[h_{k}=\frac{2}{k}\ {\rm Res}_{z}\ {\rm str}L^{\frac{k}{2}}(z), \tag{5.25}\]
_for \(k=1,3,5,\dots\). When \(k=1\), the equation (5.24) is_
\[\frac{dW}{dt_{1}}=W^{\prime}-\frac{1}{2}V^{\prime\prime}-\frac{1}{2}V\circ V^{ \prime}+\frac{1}{2}(V\circ W-W\circ V),\ \ \frac{dV}{dt_{1}}=0.\]
_As we observed in the remark preceding this example, the four generators in \(V\) are constant for all the derivations \(d/dt_{2s+1}\) where \(s\in\mathbb{Z}_{+}\). We can hence reduce the integrable system to the sub PVsA generated by \(W\) and obtain a specialization of the noncommutative KdV hierarchy. In particular the equation corresponding to \(k=3\) in this reduced hierarchy is_
\[\frac{dW}{dt_{3}}=\frac{1}{4}W^{\prime\prime\prime}+\frac{3}{4}W\circ W^{ \prime}+\frac{3}{4}W^{\prime}\circ W, \tag{5.26}\]
_which is equivalent to the following system of differential equations_
\[\left\{\begin{array}{l}\frac{dw_{11}}{dt_{3}}=\frac{1}{4}w_{11}^{\prime \prime\prime}+\frac{3}{4}(w_{11}w_{11}^{\prime}-w_{12}w_{21}^{\prime}),\\ \\ \frac{dw_{12}}{dt_{3}}=\frac{1}{4}w_{12}^{\prime\prime\prime}+\frac{3}{4}(w_{ 11}w_{12}^{\prime}+w_{12}w_{22}^{\prime}),\\ \\ \frac{dw_{21}}{dt_{3}}=\frac{1}{4}w_{21}^{\prime\prime\prime}+\frac{3}{4}(w_{ 21}w_{11}^{\prime}+w_{22}w_{21}^{\prime}),\\ \\ \frac{dw_{22}}{dt_{3}}=\frac{1}{4}w_{22}^{\prime\prime\prime}+\frac{3}{4}(-w_{ 21}w_{12}^{\prime}+w_{22}w_{22}^{\prime}).\end{array}\right. \tag{5.27}\]
|
2303.04759
|
RAF: Holistic Compilation for Deep Learning Model Training
|
As deep learning is pervasive in modern applications, many deep learning
frameworks have been developed for practitioners to develop and train DNN
models rapidly. Meanwhile, as training large deep learning models has become a
trend in recent years, training throughput and memory footprint are becoming
crucial. Accordingly, optimizing training workloads with compiler optimizations
is inevitable and is receiving more and more attention. However, existing deep
learning compilers (DLCs) mainly target inference and do not incorporate
holistic optimizations, such as automatic differentiation and automatic mixed
precision, in training workloads.
In this paper, we present RAF, a deep learning compiler for training. Unlike
existing DLCs, RAF accepts a forward model and generates a training graph
in-house. Accordingly, RAF is able to systematically consolidate graph
optimizations for performance, memory, and distributed training. In addition, to
match the state-of-the-art performance of hand-crafted kernel libraries
as well as tensor compilers, RAF proposes an operator dialect mechanism to
seamlessly integrate all possible kernel implementations. We demonstrate that
with in-house training graph generation and the operator dialect mechanism, we
are able to perform holistic optimizations and achieve either better training
throughput or larger batch sizes against PyTorch (eager and torchscript mode),
XLA, and DeepSpeed for popular transformer models on GPUs.
|
Cody Hao Yu, Haozheng Fan, Guangtai Huang, Zhen Jia, Yizhi Liu, Jie Wang, Zach Zheng, Yuan Zhou, Haichen Shen, Junru Shao, Mu Li, Yida Wang
|
2023-03-08T17:51:13Z
|
http://arxiv.org/abs/2303.04759v1
|
# RAF: Holistic Compilation for Deep Learning Model Training
###### Abstract
As deep learning is pervasive in modern applications, many deep learning frameworks have been developed for practitioners to develop and train DNN models rapidly. Meanwhile, as training large deep learning models has become a trend in recent years, training throughput and memory footprint are becoming crucial. Accordingly, optimizing training workloads with compiler optimizations is inevitable and is receiving more and more attention. However, existing deep learning compilers (DLCs) mainly target inference and do not incorporate holistic optimizations, such as automatic differentiation and automatic mixed precision, in training workloads.
In this paper, we present RAF, a deep learning compiler for training. Unlike existing DLCs, RAF accepts a forward model and generates a training graph in-house. Accordingly, RAF is able to systematically consolidate graph optimizations for performance, memory, and distributed training. In addition, to match the state-of-the-art performance of hand-crafted kernel libraries as well as tensor compilers, RAF proposes an operator dialect mechanism to seamlessly integrate all possible kernel implementations. We demonstrate that with in-house training graph generation and the operator dialect mechanism, we are able to perform holistic optimizations and achieve either better training throughput or larger batch sizes against PyTorch [34] (eager and torchscript mode), XLA [49], and DeepSpeed [14] for popular transformer models on GPUs.
## 1 Introduction
In recent years, deep learning has become pervasive in modern applications, ranging from computer vision [14, 21, 44, 45] and natural language processing [12, 37, 46] to speech recognition [7, 11, 50]. As deep learning models and their training datasets grow larger and larger, the efficiency of training deep learning models becomes more and more critical. Modern deep learning practitioners normally rely on deep learning frameworks, such as TensorFlow [1] and PyTorch [34], to describe and train models. However, deep learning frameworks are not designed for efficient model training in the first place. Instead, they aim for a friendly interactive experience to facilitate model design. When it comes to performance, frameworks normally invoke hand-crafted kernel libraries such as cuDNN [10] to execute the computationally intensive operators. This is constrained by whatever the kernel libraries can provide, and misses global optimization opportunities across operators (e.g., operator fusion and decomposition).
Moreover, the design of modern deep learning frameworks also limits the development of distributed training systems [40, 51, 24], which are usually built on top of these frameworks. Take DeepSpeed [40], a state-of-the-art (SOTA) distributed system built on top of PyTorch, as an example: although DeepSpeed implements a distribution engine with the ZeRO [39] memory optimization technique to enable gigantic model training, its per-device execution engine is the native PyTorch runtime without graph optimizations. As a result, DeepSpeed cannot globally optimize model execution by effectively hiding the latency of inter-device communication 1 during distributed training.
Footnote 1: PyTorch features “hooks” that allow developers to inject callback functions before and after each tensor or module execution, so it is possible to prefetch tensors using hooks to overlap communication latency. However, it is challenging to derive an optimal prefetch plan for various model architectures without analyzing an entire model graph.
Consequently, adopting deep learning compilers as the backend engine of deep learning frameworks becomes a promising solution to efficient model training, as they are capable of systematically optimizing the entire model graph and generating kernel code for the hardware platform. For example, PyTorch incorporates torchscript as its compiler backend for model inference, which was recently extended to support training workloads as well and enhanced by nvFuser [36]. However, torchscript itself does not cover distributed computation, but relies on other components in PyTorch to do so. XLA [49] is the most notable deep learning compiler that tackles model training, and some deep learning frameworks have adopted XLA as their compilation engine. For instance, JAX [6] uses XLA to just-in-time (JIT) compile pure-and-statically-composed (PSC) subroutines; PyTorch also sup
ports XLA for its traced models [43] to enable model training with compilation. However, most existing compilers like XLA were not designed in a _holistic_ way to facilitate full-stack optimization. For instance, XLA relies on the framework for part of the graph-level manipulations, such as auto-differentiation and automatic mixed precision (AMP). Furthermore, XLA does not guarantee that the best available kernel implementations can be used.
We argue that the best deep learning model training performance can be achieved by taking full control of the entire software stack from graph-level to operator-level. This paper proposes RAF as a compiler-based system that provides compilation for deep learning model training with holistic optimizations. We use RAF to demonstrate the following points.
**#1: Holistic optimization.** RAF traces vanilla deep learning models 2 from a framework such as PyTorch [34] to generate the required training graphs and compiles them all the way to executables, including graph manipulation and optimization, operator-level kernel code generation, and distributed parallelism implementation. We will illustrate in Section 5 that holistic optimization leads to better performance.
**#2: Three-phase graph optimizations.** RAF for the first time puts together three types of graph-level optimization into one compiler stack, including graph generation (e.g., backward graph by automatic differentiation), expression optimizations (e.g., constant folding), and execution order optimizations (e.g., memory planning). Accordingly, RAF systematically abstracts the graph-level optimizations into three phases, and chooses the most suitable intermediate representation (IR) for each phase to ease development. While existing DLCs work on either dataflow [8] or A-normal form (ANF) [49] IR for all optimization passes for simplicity, we will illustrate that 1) adopting the suitable IR form in each optimization phase improves development efficiency; and 2) it is possible to preserve the semantics when converting IRs between dataflow and ANF forms.
Footnote 2: A vanilla model is a user-written model without any framework specific manipulations, such as auto-diff and automatic mixed precision.
**#3: Extensible backend for kernel libraries and tensor compilation.** To make use of hand-crafted kernels from different backends, RAF introduces _an operator dialect mechanism_ to dispatch each operator, which could be a single operator or a fused computation subgraph, to either high-performance kernel libraries or a tensor program compiler. We demonstrate that by intelligently dispatching to high-quality hand-crafted kernels while evolving the tensor compiler, RAF can keep up with the latest SOTA performance at all times.
We summarize the contributions of this paper as follows:
* We design and implement the RAF training compiler, which performs holistic optimizations to transform a vanilla model all the way into an executable.
* We organize the graph-level optimizations into three phases - graph generation, expression optimization, and execution order optimization - and adopt the most suitable IRs in each phase for better development efficiency.
* We introduce an operator dialect mechanism to enable intelligent operator dispatching, including third-party kernel libraries and a tensor compiler, for multiple platforms.
* We evaluate RAF with popular transformer models and show that RAF either outperforms the SOTA performance or achieves larger batch sizes on a single and multiple GPUs, respectively.
The source code of RAF is open source at [https://github.com/awslabs/raf](https://github.com/awslabs/raf).
## 2 Background
In this section, we first analyze the difference between training and inference workloads in Section 2.1, followed by an introduction to deep learning compilers (DLCs) with the challenges of supporting training workloads in Section 2.2.
### Deep Learning Training Workloads
This paper focuses on compilation technologies that enable better throughput and memory footprint for deep learning model training. We illustrate the difference between inference and training workloads using Figure 1. An inference workload is composed of a forward graph defined by the deep learning model, as well as trained parameters. Meanwhile, a training workload is composed of 1) a forward graph (**A** and **B**), 2) loss computation, 3) a backward graph (**C** and **D**), 4) an optimizer (**E** and **F**), and 5) learnable parameters being trained. These differences make training workloads more challenging for a deep learning compiler to optimize.
First, in inference workloads, since parameter values are trained and frozen, they are treated as constants. Thus, the performance overhead of related optimizations such as data layout (e.g., row-major or column-major) and data type (e.g.,
Figure 1: Deep learning training workload.
full precision, half precision or quantization) transformation can be eliminated via constant folding. This, however, is not the case for parameters being trained in training workloads. Second, unlike inference workloads that usually use single or small batches and focus on latency optimization, training workloads focus on reducing convergence time [26], which can be achieved with large batch sizes and high throughput. Consequently, optimizing memory footprint [22, 20, 9, 13] to support larger batch sizes, as well as optimizing the distribution mechanism [53, 51, 40] to achieve higher throughput, are the keys for training workloads.
### Deep Learning Compiler
Unlike deep learning frameworks that directly map operators to hand-crafted kernel libraries for execution, deep learning compilers (DLCs), such as Apache TVM [8] and XLA [49], serve as a backend engine that converts the model graph from deep learning frameworks to a particular intermediate representation (IR), applies a series of graph- and operator-level optimizations, and generates an executable. However, it is challenging for DLCs to achieve the optimal performance or to be adopted in training workloads:
**Challenge 1: Optimization scope.** In addition to the forward inference, a training graph also includes backward propagation and weight updating. Existing DLCs optimize the complete training graph generated by deep learning frameworks. However, as we will illustrate in Section 3.1.1, framework-generated training graphs may not be friendly to the DLC, which results in sub-optimal performance.
**Challenge 2: Intermediate tensors.** Unlike inference graphs, which are mostly a straight dataflow so intermediate tensors can be released immediately, in training workloads many intermediate tensors generated by the forward pass have to be materialized and preserved for gradient calculations in the backward propagation. For example, the output tensors of nodes **A** and **B** in Figure 1 can be freed soon in inference, but they have to stay alive until nodes **C** and **D** in training. These long-lived intermediate tensors introduce two challenges: 1) _Memory capacity and optimization_ become crucial. 2) The operators that produce and consume such a tensor in the forward graph (e.g., nodes **A** and **B**) cannot be fused anymore, significantly _reducing fusion opportunities_.
**Challenge 3: Distribution.** While most inference workloads still target a single device, modern DLCs that target inference workloads lack distribution support and cannot be used for training workloads, which may need to be scaled out due to the size of the deep learning model as well as the training datasets.
## 3 System Design
Existing deep learning compilers (DLCs) focus on optimizing a given IR without changing its semantics to limit the problem scope, so that they include only expression and execution order optimizations. Accordingly, these DLCs can choose one IR format for all compiler passes to simplify the framework design. For example, all compiler passes of XLA [49] work on ANF IR. Meanwhile, Apache TVM [8] lets almost all its compiler passes work on dataflow IR3, and lacks execution order optimization.
Footnote 3: Although TVM also converts dataflow IR to ANF, it is only for virtual machine execution.
On the other hand, to include holistic optimization, RAF accepts the IR from a vanilla forward (i.e., inference) model, and generates user-specified graphs (e.g., backward and automatic mixed precision (AMP) [30] graphs). Consequently, to deal with the newly introduced complexity, we organize the compiler passes in RAF into three phases - graph generation, expression optimization, and execution order optimization. Figure 2 presents an overview of the RAF compilation flow. The first phase applies user-specified model manipulations to the graph in ANF IR. For instance, it may append backward propagation generated by auto-differentiation (Section 3.1.1), append an optimizer (e.g., SGD [5] and Adam [19]), perform auto-casting to enable automatic mixed precision (AMP) [30] training (Section 3.1.2), or apply auto-parallelism (Section 3.1.3) to enable ZeRO [39] data parallelism. Expressions in the ANF IR are bound to variables using let, and all let-statements form a sequence that implies the execution order. Therefore, it is easy to encode extra information in the IR associated with variables, such as the chain-rule information in automatic differentiation and parameter aliasing for in-place updating.
Next, in the optimization phase, we first convert the IR from ANF to dataflow form, and apply a series of IR transformation passes to optimize the expressions. This is because the enforcement of expression order in ANF makes it harder for developers to implement transformation passes that focus on optimizing expressions rather than execution order, such as operator fusion and expression simplification. In contrast, dataflow IR represents a model as a dataflow graph, and expressions are embedded as arguments of subsequent expressions. As a result, it is straightforward to implement pattern
Figure 2: RAF compilation flow.
and rule-based expression transformations in dataflow IR.
Finally, in the phase of execution order optimization, we again convert the IR back to ANF and apply a series of passes. For instance, distribution passes (Section 3.1.3), including collective communication operator optimization and computation-communication overlapping, are also used to empower gigantic model training. Meanwhile, memory optimization passes, including memory-footprint-aware execution order scheduling (Section 3.1.4) and automatic rematerialization (Section 3.1.5), are applied to reduce peak memory consumption to support larger batch sizes. Note that since dataflow IR does not bind expressions to variables, we encode the extra information carried in ANF as either edges or node attributes in dataflow IR, and recover it when converting back.
In addition to the graph-level optimization, RAF also incorporates operator-level optimizations. Specifically, to support extensible backends so that both kernel libraries (e.g., cuBLAS and CUTLASS [32]) and tensor compilation (e.g., TVM [8]) can be easily integrated, the dialect dispatching pass is used to determine which backend should be used for each operator. The details are illustrated in Section 3.2.1. Besides, fusion passes (detailed in Section 3.2.2) that fuse dialect operators together as closures are applied to reduce the kernel invocation and inter-operator communication overheads.
In the rest of this section, we present key system component designs in graph- and operator-level optimizations along with insights.
### Holistic Graph-Level Optimizations
#### 3.1.1 Automatic Differentiation
In RAF, we implement an automatic differentiation (Autodiff) pass that employs source code transformation with a closure-based method [35] to perform the differentiation. Autodiff transforms each operator in the forward graph by its corresponding adjoint function, which computes the derivatives with respect to the inputs given the derivatives with respect to the outputs. Specifically, each adjoint function returns the partial derivatives with respect to the inputs of the original function, as well as an ordered set of partial derivatives with respect to the outputs.
The advantage of self-contained Autodiff comes from flexible adjoint functions in terms of _data dependency_ and _operator composition_. For data dependency, the adjoint function usually takes both the primal operator's input \(x\) and output \(y\) as parameters to derive the gradient, which creates data dependencies on both tensors in the forward graph. For example, node \(\mathbf{D}\) in Figure 1 requires the output (\(dy\)) of node \(\mathbf{C}\), and the input (\(x\)) and output (\(y\)) of node \(\mathbf{A}\). However, for some operators, only \(x\) or \(y\) is sufficient to calculate the gradient. Taking \(y=tanh(x)\) as an example, the gradient can be calculated by either \(grad=1-tanh^{2}(x)\) or \(grad=1-y^{2}\). As a result, we only require either \(x\) or \(y\) to calculate the gradient but not both, which not only reduces peak memory, but also increases fusion opportunities for the corresponding forward operators.
For operator composition, intuitively, decomposing a backward operator into a series of small operators provides more room for operator fusion. For example, \(tanh\_dx(y)=1-y^{2}\) can be decomposed into a multiply followed by a subtract instead of an encapsulated tanh_dx operator. However, kernel libraries may outperform compilers for certain backward operators such as Conv2D_dx even with the fusion opportunity sacrificed. With flexible adjoint functions, RAF can select the best option for each case.
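As an illustration of this flexibility, the snippet below is a minimal, hypothetical sketch of a closure-style adjoint registration (the registry and function names are ours, not RAF's actual API); the tanh adjoint reads only the primal output \(y\), so the input \(x\) does not need to stay alive for the backward pass.

```python
import math

# Hypothetical adjoint registry, for illustration only (not RAF's real API).
ADJOINTS = {}

def register_adjoint(op_name):
    def wrap(fn):
        ADJOINTS[op_name] = fn
        return fn
    return wrap

@register_adjoint("tanh")
def tanh_adjoint(x, y, dy):
    # d tanh(x)/dx = 1 - tanh(x)^2 = 1 - y^2; only y is referenced, so the
    # forward input x can be released right after the forward operator runs.
    return (dy * (1.0 - y * y),)

x = 0.3
y = math.tanh(x)
(grad_x,) = ADJOINTS["tanh"](x, y, dy=1.0)  # equals 1 - y**2
```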
#### 3.1.2 Automatic Mixed Precision
Although it is common to use half precision to support a larger batch size when training a gigantic model, the time-to-train performance may still suffer due to accuracy loss. Thus, modern DL frameworks offer a compromise solution that performs most computations in half precision while keeping numerically sensitive computations in full precision. This is called automatic mixed precision (AMP) [30]. However, whether an operator can be computed in half precision depends not only on its arithmetic expression but also on its implementation. As a result, taking a framework-manipulated AMP graph may result in correctness issues for a DLC, as we will show in Section 5.
In RAF, we define our own list of numerically sensitive operators, and leverage our own AutoCast pass to insert cast operators into the graph for AMP training. In addition to guaranteeing numerical correctness, AutoCast also optimizes performance. Specifically, although minimizing the number of inserted cast operators is straightforward, we observe that this is not always the best strategy. Taking Figure 3 as an example, if AutoCast minimizes the number of cast operators as in Figure 3(b), then the shared cast cannot be fused into any of its consumers, as in Figure 3(c). In contrast, the exclusive cast operators in Figure 3(d) can be fused, as shown in Figure 3(e). Since the fused cast operation has almost no performance overhead, we can achieve better end-to-end performance accordingly. The AutoCast pass in RAF consid
Figure 3: Impact of inserting minimized or exclusive cast operators for AMP on fusion.
ers the operator fusion opportunities, and inserts exclusive cast operators to the consumer that is capable of fusing the element-wise predecessors and successors.
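A minimal sketch of the exclusive-cast idea is shown below (the tiny IR data structure is assumed for illustration and is unrelated to RAF's real IR): instead of letting several consumers share one cast, each consumer gets its own cast node, which can later be fused into that consumer.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str
    inputs: list = field(default_factory=list)

def insert_exclusive_casts(producer, consumers):
    """Give each consumer its own fp16 cast of `producer` so the cast can be
    fused into that consumer instead of remaining a shared, unfusible node."""
    for consumer in consumers:
        cast = Node(op="cast_fp16", inputs=[producer])
        consumer.inputs = [cast if i is producer else i for i in consumer.inputs]

x = Node("matmul")
c1, c2 = Node("add", [x]), Node("mul", [x])
insert_exclusive_casts(x, [c1, c2])  # c1 and c2 now each own a fusible cast
```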
#### 3.1.3 Distribution Optimization
RAF currently supports data parallelism4 to enable distributed training. The core idea is inspired by ZeRO [39], which partitions optimizer states and gradients to reduce the memory footprint on a single device when scaling out. However, unlike DeepSpeed [40], which manually implements ZeRO on top of PyTorch, RAF automatically applies several transformation passes that analyze an IR graph for a single device and insert the required collective operators to scale it out to multiple devices.
Footnote 4: Model and pipeline parallelisms are ongoing future work.
Compared to the manual implementation (i.e., DeepSpeed), our compiler-based solution has the following advantages. First, the implemented transformation passes are generally applicable to all models and optimizers, significantly reducing the engineering effort when a new model or optimizer emerges. Second, it is easy to incorporate more advanced optimizations. For example, when inserting collective operators for distributed training, we leverage CUDA streams to overlap computation and communication, so that inter-device communication overheads can be hidden. In addition, we also apply horizontal fusion to collective operators to increase bandwidth utilization and reduce communication overheads. As we will demonstrate in Section 5, by automatically applying these optimizations, RAF is able to outperform DeepSpeed by \(\sim 14\%\) on a custom encoder model with 1.5 billion parameters on 8 NVIDIA A100 GPUs.
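The partitioning idea itself is simple; the toy sketch below (our illustration, not RAF's actual pass) shows how ZeRO-style sharding assigns each rank a slice of the optimizer state so that per-device memory shrinks as the job scales out.

```python
def optimizer_state_shard(param_ids, rank, world_size):
    """Return the parameters whose optimizer state this rank owns; the real
    pass additionally inserts the collectives needed to gather gradients and
    broadcast updated weights."""
    return [p for i, p in enumerate(param_ids) if i % world_size == rank]

print(optimizer_state_shard(list(range(10)), rank=0, world_size=4))  # [0, 4, 8]
```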
#### 3.1.4 Operator Scheduling
When converting a dataflow IR to ANF, any valid topological order is a correct execution order. However, certain execution orders may cause a much higher peak memory consumption that exceeds the DRAM capacity. For instance, if we execute an operator that produces a large tensor too early, then the large tensor occupies memory for a long period of time.
To reduce the peak memory of an execution order, the operator scheduling pass analyzes the memory footprint of each operator to determine when to execute it. Formally, let the total size of tensors produced by an operator be \(p\) and the total size of tensors that can be released after this operator be \(c\); the operator scheduler moves this operator earlier when \(p-c<0\), or later when \(p-c>0\). Thus, an operator that decreases memory consumption is executed as early as possible, and vice versa.
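One simple way to realize this idea is a greedy topological ordering that, among all operators whose inputs are ready, always runs the one with the smallest memory delta \(p-c\) first; the sketch below is illustrative and not RAF's exact pass.

```python
def schedule(nodes, deps, produces, frees):
    """nodes: op ids; deps[n]: set of ops n depends on; produces[n]/frees[n]:
    bytes allocated by n / bytes releasable once n has run."""
    done, order, remaining = set(), [], set(nodes)
    while remaining:
        ready = [n for n in remaining if deps[n] <= done]
        # Prefer operators that free more memory than they allocate.
        nxt = min(ready, key=lambda n: produces[n] - frees[n])
        order.append(nxt)
        done.add(nxt)
        remaining.remove(nxt)
    return order

order = schedule(
    nodes=["a", "b", "c"],
    deps={"a": set(), "b": {"a"}, "c": {"a"}},
    produces={"a": 8, "b": 1, "c": 64},
    frees={"a": 0, "b": 8, "c": 0},
)
print(order)  # ['a', 'b', 'c']: 'b' runs before 'c' because it frees memory
```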
#### 3.1.5 Rematerialization
In the case that the peak memory exceeds the capacity after operator scheduling, another widely used optimization that is able to fit the model into the hardware is rematerialization [9], which releases some intermediate tensors before encountering out of memory, and regenerates them later when needed.
Deep learning frameworks such as PyTorch rely on model developers to determine the checkpoints (i.e., the intermediate tensors that will not be regenerated) in the forward pass, and replay every operator between two checkpoints in the backward pass. This approach, however, has two drawbacks. First, model developers have to manually identify checkpoints. Second, once checkpoints are enabled, even if the memory budget is sufficient, every operator between two checkpoints must be replayed, which results in a significant performance overhead.
On the other hand, RAF demonstrates that compilation-based rematerialization is more usable and flexible. Specifically, the rematerialization pass in RAF traverses an ANF IR and analyzes the memory footprint via liveness analysis. To minimize rematerialization overheads, when the estimated memory consumption at a certain execution point exceeds the given budget, the rematerialization pass applies a cost function, which considers 1) the latency of recomputation, 2) the remaining use count, and 3) the size, to the currently alive intermediate tensors. It then identifies one or more intermediate tensors, and breaks their liveness into two intervals, where the second interval will be initialized by replaying the corresponding operator. Consequently, this approach brings two benefits. First, since this process is fully automatic, developers do not need to insert checkpoints into the model. Second, RAF only rematerializes the tensors necessary to fit into the memory budget, so the performance overhead is moderate compared to manual checkpoints.
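The cost function can be sketched as follows (the field names are ours, purely for illustration): among the currently alive intermediate tensors, the pass evicts the one that is cheap to recompute, rarely needed again, and large.

```python
def pick_victim(live_tensors):
    """Each entry is assumed to be a dict with 'recompute_latency' (seconds),
    'remaining_uses' (int), and 'size' (bytes)."""
    def cost(t):
        return t["recompute_latency"] * t["remaining_uses"] / max(t["size"], 1)
    return min(live_tensors, key=cost)

victim = pick_victim([
    {"name": "act1", "recompute_latency": 0.002, "remaining_uses": 1, "size": 64 << 20},
    {"name": "act2", "recompute_latency": 0.050, "remaining_uses": 3, "size": 4 << 20},
])
print(victim["name"])  # 'act1': cheap to recompute, used once more, and large
```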
### Operator-Level Optimizations
#### 3.2.1 Operator Dialects
One platform may have multiple kernel implementations, but no single library or compiler outperforms the others on all operators. For example, according to our evaluations on an NVIDIA V100 GPU, although both cuDNN and TVM provide CUDA kernels for NVIDIA GPUs, cuDNN achieves a 1.8\(\times\) speedup over TVM on the Conv2D backward kernel, but TVM is able to achieve up to a 100\(\times\) speedup (with autotuning) on the softmax backward kernel. As a result, a flexible kernel selection mechanism is necessary to achieve optimal performance. Even worse, more and more backend implementations, such as cuDNN v8 [31] and CUTLASS [32], now introduce operator fusion capabilities while supporting different fusion patterns, making the problem even more challenging, as described in [18].
Motivated by MLIR [23] that introduces IR dialects, we introduce the operator dialects to address this issue. Similar
to the dialect language in MLIR, we register each backend implementation as a new dialect in the IR, e.g., "cudnn", "tvm", etc., and allow each dialect to have customized operator attributes. Different from MLIR, which views all backends as dialects, RAF comes with a set of pre-defined operators as base operators. This reduces the effort of adding a new dialect operator by sharing certain operator attributes, such as the type relation function, from the base operators. In addition, users do not need to write translation code between every dialect pair as MLIR requires. Nonetheless, it still provides the flexibility that allows dialect operators to have customized semantics via attribute tables, as shown in Figure 5, and more importantly, enables different fusion patterns for each dialect (detailed in Section 3.2.2).
In Figure 4, a dialect operator is registered to a base operator with a dispatching priority, which can be pre-defined by developers based on prior experiences or derived from profiling results. A dialect operator is also associated with a dialect. For example, tvm.conv2d that belongs to dialect tvm is registered to the base operator conv2d with priority 10. In addition, a dialect is allowed to be only enabled on certain devices. For instance, dialect tvm can be used on both CPU and GPU; while dialect cudnn and cutlass are only available for GPU. If a dialect is disabled in RAF, all dialect operators in this dialect will not be included in the IR or dispatched at runtime.
Every operator including both base and dialect operators has its own attribute table. The attribute table of a base operator contains common attributes that share across its dialect operators, such as argument schema, type relation functions, etc., as shown in Figure 5. Most attributes of dialect operators can inherit from base operators, so that it significantly reduces the developer effort to add new dialect operators (blue items in Figure 5). On the other hand, dialect operators can overwrite certain attribute value on top of the base operators and have its own specific attributes (red and green items respectively). For example, in Figure 5, cudnn.dropout has a different type relation function as the cuDNN implementation needs to reserve extra space for the states.
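Putting registration and dispatching together, the following small, self-contained sketch summarizes the mechanism; the registry structure and names are illustrative rather than RAF's real implementation, and the priorities simply mirror Table 1.

```python
REGISTRY = {}  # base op -> list of (priority, dialect, supported devices)

def register_dialect_op(base_op, dialect, priority, devices):
    REGISTRY.setdefault(base_op, []).append((priority, dialect, frozenset(devices)))

def dispatch(base_op, device, enabled_dialects):
    candidates = [(p, d) for p, d, devs in REGISTRY.get(base_op, [])
                  if device in devs and d in enabled_dialects]
    return max(candidates)[1] if candidates else None

register_dialect_op("conv2d", "cudnn", 10, {"gpu"})
register_dialect_op("conv2d", "tvm", 5, {"cpu", "gpu"})
print(dispatch("conv2d", "gpu", {"cudnn", "tvm"}))  # 'cudnn' wins on GPU
print(dispatch("conv2d", "cpu", {"cudnn", "tvm"}))  # only 'tvm' is available
```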
#### 3.2.2 Operator Fusion
Operator fusion combines multiple operators into a single kernel and can greatly reduce kernel execution time by avoiding the data transfer of intermediate results between cache and main memory. It is particularly effective for GPUs and specialized accelerators. As we mentioned before, multiple backends have the capabilities for operator fusion and their fusion patterns are different from each other. For example, a template based library CUTLASS [32] is capable of fusing a general matrix multiplication (GEMM) and its subsequent element-wise activation functions. On the other hand, as a deep learning compiler, Apache TVM fuses operators based on their fusion rules. Although TVM may not achieve better performance than the handcrafted kernels, in some cases, it is able to achieve better end-to-end performance by effectively fusing operators in a way that vendor libraries do not cover, as reported in [8]. In RAF, we aim to have a fusion plan that benefits from all backends. For example, in this particular case, we prefer to choose the CUTLASS one when the pattern is matched due to its handcrafted high performance; while the rest operators can be fused and optimized by TVM.
To accommodate different levels of fusion capabilities for all backends, RAF divides the operator fusion optimization into two categories: _pattern-based fusion_ and _rule-based fusion_. For example, fusion for CUTLASS and cuDNN backends belong to pattern-based category as they only support limited fusion patterns (e.g., general matrix multiplication (GEMM) followed by an element-wise activation). On the other hand, TVM belongs to rule-based fusion as it performs general-purpose operator fusion based on fusion rules [8].
For pattern-based fusion, RAF provides an interface for developers to register fusion patterns. Thanks to operator dialect described in Section 3.2.1, we can attribute each pattern to a specific dialect, to indicate which backend is used for code generation of the fused operators. Each pattern is
Figure 4: A dialect operator associated with one dialect is registered to a base operator with a dispatching priority. A dialect is only enabled on certain devices.
Figure 5: The attribute table for base and dialect operators. Attributes from base operators can be inherited (blue), overridden (red), and customized (green).
associated with a priority, which determines the order in which these fusion patterns are applied. We also set priorities for rule-based fusion, and the operator fusion pass applies these fusion patterns and fusion rules one at a time based on their priorities in descending order. The order of pattern matching leads to different fused graphs and can result in different performance. Although the priorities are currently set by developers based on heuristics, we plan to derive the priorities from profiling results and search over all possible graph partition solutions based on the fusion patterns and fusion rules to find optimal fusion plans.
We use an example in Figure 6 to illustrate how operator fusion takes action and applies the fusion patterns and rules. Table 1 gives an example of a few fusion patterns related to conv2d and TVM fusion rules with priorities. First, the CUTLASS fusion pattern for the float16 data type in the fusion table has the highest priority. The optimization pass finds a match in the program and then replaces the matched subgraph with a fused operator in the cutlass dialect and a function call. The IR is then transformed from Figure 6(a) to Figure 6(b). After that, no patterns other than the TVM fusion rules are applicable to the transformed program. Figure 6(c) shows the final program after the fusion optimization.
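To make the priority-driven application concrete, here is a deliberately simplified, self-contained sketch that applies patterns in descending priority to a flat sequence of ops; real pattern matching operates on the dataflow graph, and the pattern set below only echoes Table 1.

```python
PATTERNS = [
    (15, "cutlass", ["conv2d_fp16", "bias_add", "relu"]),
    (12, "cudnn",   ["conv2d", "bias_add", "relu"]),
    (5,  "tvm",     ["add", "relu"]),  # stand-in for TVM's rule-based fusion
]

def fuse(ops):
    for _priority, dialect, pat in sorted(PATTERNS, reverse=True):
        i = 0
        while i + len(pat) <= len(ops):
            if ops[i:i + len(pat)] == pat:
                # Outline the matched ops as a single fused closure.
                ops[i:i + len(pat)] = [f"{dialect}.fused({'+'.join(pat)})"]
            else:
                i += 1
    return ops

print(fuse(["conv2d", "bias_add", "relu", "add", "relu"]))
# ['cudnn.fused(conv2d+bias_add+relu)', 'tvm.fused(add+relu)']
```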
## 4 Implementations
In addition to the compiler design presented in the previous section, this section introduces the frontend as well as the runtime of RAF, making it an end-to-end deep learning system.
### RAF as PyTorch Extension
We observe that users tend to use the interactive interface provided by PyTorch [34] to write and tune deep learning models for training. As a result, we aim to integrate RAF with the PyTorch user interface and programming paradigm. However, this is challenging because PyTorch embraces an imperative-style programming paradigm to provide the best usability, dynamically interpreting and executing a model graph operator by operator. On the other hand, compilation-based frameworks such as RAF intend to compile an entire model graph at once to apply graph-level optimizations before execution for better performance. This programming paradigm deviates drastically from the one PyTorch adopts and makes it difficult for users to print intermediate results or make modifications to their models. Consequently, it would take a huge amount of work to run existing models or training scripts with RAF.
To minimize the user efforts of enabling RAF in their training jobs, RAF incorporates Lazy Tensor Core (LTC) [43] as its imperative frontend. The frontend is designed as an extension to PyTorch and can be easily plugged in officially released PyTorch packages without changing upstream PyTorch codebase. The core idea of LTC is to register a new backend in PyTorch. Unlike operators in other backends that generate output tensor when executed, operators in LTC generate a _lazy tensor_, which is only a symbol that connects to its input lazy tensors. In this way, when users execute a PyTorch model imperatively, no execution is actually performed but only the operators are traced and recorded, so that the complete model graph is obtained. The actual compilation and execution happen when the LTC synchronization API is invoked.
Figure 7 shows an example of enabling RAF via LTC for a PyTorch model by changing 2-3 lines of code. Specifically, at L4, a user moves the model from CPU to the _lazy_ device, whose backend is registered to be RAF when importing RAF. Then, a RAF-specific API at L7 has to be invoked to wrap the forward model. This helps us capture the complete model structure, including control flow. With the above changes, the model is now in lazy mode. It means that during L9-12,
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Pattern / Rules & Dialect & Priority \\ \hline conv2d + bias + relu (fp16) & cutlass & 15 \\ conv2d + bias + relu & cudnn (\textgreater{}=8) & 12 \\ conv2d & cudnn & 10 \\ conv2d + bias + relu & cutlass & 8 \\ TVM fusion rules & tvm & 5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Fusion patterns and rules with priorities.
Figure 6: An example IR for fusion illustration.
the operators are _not_ executed at the time they are visited, but just staged as symbols in the RAF IR to form a graph. The collected IR graph will then be optimized, compiled, and executed at the synchronization point the user creates at L13.
### Runtime Virtual Machine
After the model graph IR has been optimized, we compile the IR to virtual machine (VM) bytecode that can be executed by the RAF VM runtime. We choose a virtual machine instead of a graph interpreter for the following reasons. First, it can be extended to support non-DAG executions (e.g., models with control flow). Second, it is able to dynamically manage memory for dynamic shapes.
The VM execution flow is described as follows. Before kicking off the training loop, RAF initializes a VM instance to load the bytecode, initialize registers and the memory pool, etc. For each training iteration, the VM interprets the bytecode to manipulate registers, manage the memory pool, and invoke the corresponding kernels. In particular, when an operator or a closure (i.e., a fused operator) is visited for the first time, the VM invokes a particular backend, which is determined by the dialect dispatching pass during IR transformation, to perform just-in-time (JIT) compilation and cache the generated executable binary. We will demonstrate in Section 5 that since JIT compilation is a one-time overhead, it does not hurt the user experience, as training a model usually takes thousands to millions of iterations.
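The compile-once-then-cache behavior can be illustrated with the following toy VM (illustrative data structures only, not RAF's bytecode interpreter):

```python
class TinyVM:
    def __init__(self, jit_compile):
        self.jit_compile = jit_compile  # backend-specific JIT compiler
        self.kernel_cache = {}

    def invoke(self, op_key, *args):
        if op_key not in self.kernel_cache:       # first visit: compile and cache
            self.kernel_cache[op_key] = self.jit_compile(op_key)
        return self.kernel_cache[op_key](*args)   # later visits: cache hit

vm = TinyVM(jit_compile=lambda key: (lambda *a: sum(a)))  # dummy "kernel"
print(vm.invoke("fused_add", 1, 2))  # 3, compiled on first use
print(vm.invoke("fused_add", 3, 4))  # 7, no recompilation
```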
## 5 Evaluation
In this section, we evaluate RAF with popular transformer models - BERT [12], RoBERTa [25] and GPT-2 [37] - on Amazon EC2 GPU instances. The models are pulled from Huggingface transformer library [48] without modification.
The baselines we choose for single-device training are eager mode and torchscript in PyTorch 1.12.1 with CUDA 11.3. We also choose DeepSpeed 0.5.10 for the distributed training evaluation. In addition, we evaluate RAF against PyTorch/XLA 1.12+8badab (released on 05/09/2022), because it also leverages Lazy Tensor Core (LTC) [43] to lower PyTorch models to XLA for compilation. Note that since JAX [6] does not support automatic mixed precision (AMP), and its underlying compile engine is also XLA, we bypass JAX in our evaluation and use the results of PyTorch/XLA for evaluation and analysis. These experiments illustrate that 1) compilation techniques help improve training efficiency, and 2) self-contained backward graphs as well as extensible backends are key to achieving better performance.
### Throughput Evaluation
We first evaluate the throughputs of processing a mini-batch to answer the following questions:
1. What speedup can RAF achieve over PyTorch and PyTorch/XLA?
2. How does holistic compilation optimization help achieve higher throughput?
We perform this evaluation on a single NVIDIA V100 GPU with 16 GB on-device DRAM. We use WikiText-2 [27] as the training dataset. The models and corresponding tokenizers are from Huggingface [48] with sequence length set to 512. We use Adam optimizer [19] implemented in PyTorch with learning rate 1e-5 and eps 1e-6.
Figure 8 depicts the throughputs5 over batch sizes for all models. For each model, we evaluate the throughput of training with full precision (left) and float16 automatic mixed precision (AMP)6 (right). As can be seen, RAF achieves higher batch sizes and throughput than PyTorch eager mode. We do not enable user-defined gradient checkpointing in PyTorch models, because it consistently introduces \(\sim 2\times\) latency overhead even when the memory is sufficient.
Footnote 5: The one-time compilation overhead is not included, which will be discussed in Section 5.2.
Footnote 6: AMP is able to achieve better convergence than half precision, so it is more preferable to train a model with AMP when the model can be fit into a single device.
Meanwhile, we observe that RAF supports larger batch sizes than PyTorch in both eager and torchscript mode when user-defined gradient checkpointing is disabled. This is because RAF analyzes the memory footprint of the model, and automatically invokes rematerialization (Section 3.1.5) to release some intermediate tensors in the forward pass and recompute them in backward propagation. Since RAF only recomputes a minimal number of tensors, the latency overhead is moderate in most cases, so we still observe throughput improvements, e.g., for BERT-large, RoBERTa-base and RoBERTa-large. On the other hand, we also observe that
Figure 7: Code snippet of RAF frontend programming model.
for a few models such as BERT-base AMP and GPT-2, the throughput drops with the maximum batch size. This is because the peak memory of these models is substantially larger than 16 GB, the GPU DRAM size, under large batch size. To satisfy the memory constraint, RAF has to re-compute many more tensors, which moderates the throughput improvement as the batch size grows.
Besides, we can see that RAF outperforms PyTorch/XLA on large models (BERT-large with 340 million parameters and RoBERTa-large with 355 million parameters). This again illustrates that controlling the entire procedure of generating and executing the training graph of a model enables better performance. On the other hand, we omit the results of PyTorch/XLA on GPT-2 in Figure 8 because it failed to generate correct results, which will be verified in the next subsection.
To better understand how operator fusion and memory optimization contribute to the training throughput, we conduct an ablation study with the BERT-large model in Figure 9. All experiments were done with the batch size leading to the best throughput. As can be seen, RAF-origin, which has no optimizations enabled, performs even worse than PyTorch due to various overheads (e.g., kernel launching and memory allocation). By eliminating these overheads with operator fusion, RAF achieves a 30% speedup. Furthermore, increasing the batch size from 4 to 6 triggers rematerialization with minimal overhead (2.5% more latency to recompute 181 operators), which brings another 10% throughput improvement due to better GPU utilization.
### Time-to-Train Performance Evaluation
In this evaluation, we attempt to answer two questions:
1. Does RAF deliver numerically correct results and the same convergence capability as the state of the art (i.e., PyTorch)?
2. How is the one-time compilation overhead amortized when training large models?
We design the following experiments to answer these questions. Although MLPerf [26] suggests performing a complete
Figure 8: Throughput (y-axis) vs. batch size (x-axis). Missing points indicate out-of-memory. Results of GPT-2 with PyTorch/XLA are omitted due to incorrect results.
Figure 9: Ablation study of RAF with BERT-large-uncased compared to PyTorch.
pre-training from scratch and measuring the time to achieve the target metric as the time-to-train performance, this takes a long time on multi-device, multi-node platforms and is not suitable for our evaluation. For the sake of time, we perform _continued pre-training_ based on pretrained models from the Huggingface transformers library. Specifically, we first train the model using native PyTorch for 10 epochs, and use the final loss from PyTorch at the 10th epoch as the _target loss_ for training RAF and PyTorch/XLA. In other words, we train RAF and PyTorch/XLA for as many epochs as required until they reach the loss PyTorch achieved at the 10th epoch. Note that we use the same random seed and hyper-parameters for all training tasks for an apples-to-apples comparison. This is reasonable because both RAF and PyTorch/XLA aim to seamlessly support PyTorch models and the PyTorch programming paradigm.
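A minimal sketch of this evaluation protocol is shown below; `train_one_epoch` is a hypothetical per-framework helper that runs one epoch and returns its average loss, and is not part of any of the evaluated frameworks.

```python
# Sketch of the continued pre-training protocol (illustrative helper names).

def train_to_target(train_one_epoch, target_loss=None, max_epochs=100):
    """Run epochs until the loss reaches target_loss; return (epochs_used, losses)."""
    losses = []
    for epoch in range(1, max_epochs + 1):
        losses.append(train_one_epoch(epoch))
        if target_loss is not None and losses[-1] <= target_loss:
            break
    return len(losses), losses

# 1) Run the PyTorch baseline for 10 epochs to obtain the target loss.
# _, pt_losses = train_to_target(pytorch_epoch, target_loss=None, max_epochs=10)
# target = pt_losses[-1]
# 2) Train RAF (or PyTorch/XLA) until it reaches the same loss.
# raf_epochs, _ = train_to_target(raf_epoch, target_loss=target)
```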
The loss trends of all three frameworks are depicted in Figure 10. Note that since RAF performs holistic optimizations, the IRs with backward, optimizer, and AMP are not exactly the same between PyTorch and RAF, which results in different convergence trends. However, as can be seen, RAF takes the same number of epochs as PyTorch to achieve the target loss (1.128 for BERT-large and 1.458 for GPT-2), demonstrating that RAF preserves numerical correctness.
On the other hand, we can see from Figure 10 that although PyTorch/XLA achieves the same target loss as PyTorch on BERT-large, it fails on GPT-2, where the loss fluctuates over time. Deep learning practitioners usually adjust hyper-parameters such as the learning rate and momentum to avoid this issue, but since we perform continued pre-training from trained parameters, and both PyTorch and RAF deliver smooth loss trends, we believe PyTorch/XLA should be capable of training the model with the same hyper-parameters. The failure is therefore likely due to PyTorch/XLA not preserving numerical correctness when optimizing GPT-2.
Meanwhile, in Table 2, we report the time-to-train along with the model setup time (i.e., one-time compilation) in minutes. Although RAF has a one-time compilation overhead compared to PyTorch, it still achieves the target loss in a shorter time. Specifically, given the elapsed time of the first epoch \(t_{0}\), which includes compilation overheads, and the average elapsed time of the remaining epochs \(t_{a}\), we can estimate the total training time after \(N\) epochs as \(T=t_{0}+(N-1)\times t_{a}\). Given that the average elapsed times for training an epoch of BERT-large and GPT-2 on PyTorch and RAF are 178 vs. 130 and 75 vs. 57 seconds, respectively, RAF achieves better time-to-train performance than PyTorch when \(N\) is larger than 8 and 9 for BERT-large and GPT-2, respectively. Consequently, RAF is capable of delivering better time-to-train performance than PyTorch for large language models, as their pre-training needs many more epochs to converge.
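The break-even analysis above can be reproduced with a small helper such as the sketch below; the inputs are the estimated first-epoch and per-epoch times of two frameworks, and any concrete values plugged in are illustrative rather than measured here.

```python
# Sketch of the time-to-train estimate T = t0 + (N - 1) * ta and its break-even point.

def time_to_train(t_first_epoch, t_avg_epoch, n_epochs):
    """Estimated total training time (seconds) after n_epochs."""
    return t_first_epoch + (n_epochs - 1) * t_avg_epoch

def break_even_epoch(t0_a, ta_a, t0_b, ta_b, max_epochs=10_000):
    """Smallest N at which framework A's estimate drops below framework B's, if any."""
    for n in range(1, max_epochs + 1):
        if time_to_train(t0_a, ta_a, n) < time_to_train(t0_b, ta_b, n):
            return n
    return None
```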
### Data Parallelism Evaluation
We use Figure 11 to demonstrate large-model training with data parallelism. The baseline is DeepSpeed [40], the ZeRO [39] implementation on top of PyTorch. In this experiment, we evaluate the throughput of training a proprietary custom transformer model with 1.5B parameters on an Amazon EC2 p4d instance with 8 NVIDIA A100 GPUs and 40 GB DRAM each. As can be seen, RAF achieves \(\sim 14\%\) throughput improvement due to the optimizations presented in Section 3.1.3.
Footnote 7: Data parallelism support in PyTorch/XLA is not stable yet, so we skip it in this evaluation.
To further illustrate the optimizations mentioned in Section 3.1.3, we conduct an ablation study of the above experiment, as shown in Figure 12. In addition, Figure 13 depicts a detailed breakdown of per-iteration execution. As can be seen, by overlapping computation and communication using CUDA streams, RAF is able to match DeepSpeed's manually optimized throughput. Moreover, we can see from Figure 13 that the communication in _Overlapping_ is composed of a huge number of collective operators, which results in long latency and synchronization overheads. Accordingly, after
Table 2: Time-to-train for continued pre-training. Numbers are time-to-train (model setup time) in minutes.

| Framework | BERT-Large | GPT-2 |
| --- | --- | --- |
| RAF | 27.9 (6.2) | 11.9 (4.4) |
| PyTorch | 29.8 (\(\sim\)0) | 12.6 (\(\sim\)0) |
| PyTorch/XLA | 40 (5.3) | N/A (2.3) |
Figure 11: Throughput comparison when training a custom language model with 1.5B parameters on multiple GPUs.
Figure 10: Loss (y-axis) vs. epochs (x-axis) when continuing pre-training. **Left:** BERT-large. **Right:** GPT-2 with AMP.
enabling horizontal fusion that aggregates collective operators such as all-gathers and reduce-scatters, the profiling of _Fusion_ shows that the communication latency is significantly reduced and can be completely hidden. This demonstrates that an automatic compiler solution is capable of achieving the same or better performance than manual optimization at the framework level.
## 6 Related Works
**Deep Learning Compilers** XLA [49] is the most notable deep learning compiler in recent years. Deep learning frameworks such as TensorFlow [1] and JAX [6] adopt XLA as their compiler backend. Meanwhile, a recent work, LazyTensor [43], proposes the lazy tensor core that bridges the gap between PyTorch and XLA. Although the focus of [43] is Google TPUs, its open-source framework, PyTorch/XLA, also supports the GPU backend. Compared to RAF, XLA relies on the training graph generated by the framework and hence sacrifices graph-level flexibility. XLA adopts A-Normal Form (ANF) IRs for all optimizations, and dispatches operators to either kernel libraries (e.g., cuDNN [10]) or code generation. Accordingly, it is tedious to implement certain optimizations or to plug in newly emerging hand-crafted kernels.
In addition, there are several existing compilers that can be used to optimize deep learning workloads. Halide [38] proposes a domain-specific language and compiler originally for image processing pipelines; it has since been applied to deep learning workloads [2]. TVM [8], an open-source deep learning compiler, shares many similarities with Halide but focuses more on deep learning and adopts a low-level representation that explicitly expresses the choice of memory layout, parallelization pattern, locality, hardware primitives, etc. Tiramisu [3] is a polyhedral compiler for dense and sparse kernels that performs loop optimizations and data layout transformations with a scheduling language. Astra [42] focuses on computation-intensive kernels like matrix multiplication, while leaving other work such as element-wise operator fusion to the framework.
**Distributed Training** There are several parallelism strategies for training a large model in a distributed fashion - data parallelism and model parallelism. For data parallelism, ZeRO [39] is the state-of-the-art approach that reduces the replication of optimizer states, gradients, and parameters. With ZeRO memory optimization, DeepSpeed [40] successfully trains a transformer model with 100 billion parameters. MiCS [51] further improves DeepSpeed by optimizing data communication for the limited bandwidth on public clouds. On the other hand, all of the above systems are built around transformer models and PyTorch, so they cannot be easily extended and generalized. Meanwhile, ZeRO has been implemented as a compiler pass in RAF, and our evaluation results have illustrated its efficiency against DeepSpeed.
For model parallelism, Megatron-LM [41] manually partitions transformer models for distributed training. GPipe [15] is a pipeline parallelism library that partitions a network consisting of a sequence of layers across multiple devices for training. On the other hand, there are only a few existing works targeting compilation for distributed training. Alpa [53] is built on JAX [6] and XLA to optimize inter- and intra-operator parallelism. Unlike RAF, which targets holistic optimizations, Alpa focuses on graph-level scheduling on distributed platforms, and its ideas could be integrated into RAF in the future.
**Compiler Optimizations for Deep Learning** While there are many compiler optimizations for deep learning workloads, we mainly discuss fusion and rematerialization due to their importance to training workloads. Methods for _operator fusion_ can be classified into three categories: pattern-based, rule-based, and dependency-based fusion. Fixed pattern-based fusion restricts the combinations of operators that can be fused. It is adopted by NVIDIA CUTLASS [32], cuDNN [31], TensorRT [33], and Intel oneDNN [16]. However, patterns need to be defined manually ahead of time and fail to cope with new operators. In comparison, TVM [8], XLA [49], DNNFusion [29], and FusionStitching [54] embrace rule-based fusion, which defines a set of rules over operators, helping expose more fusion opportunities. However, one drawback of rule-based fusion is its limited flexibility in supporting multiple backends. Lastly, dependency-based fusion leverages generic dependency analysis (e.g., polyhedral analysis [4, 28, 47, 52]) to achieve the best generality, but faces scalability issues due to NP-hard algorithms. In RAF, our fusion mechanism with operator dialects combines pattern-based and rule-based fusion, taking advantage of both methods.
_Rematerialization_ reduces the peak memory of training a model by recomputing intermediate tensors in the backward pass. Chen et al. [9] propose a heuristic to train an \(n\)-layer
Figure 12: Ablation study of RAF to DeepSpeed on multiple GPUs.
Figure 13: Operator-level profiling for RAF with custom model with 1.5 billion parameters on multiple GPUs.
linear network with \(O(\sqrt{n})\) memory cost by partitioning the network into segments and only storing the outputs of each segment in the forward pass. Gruslys et al. [13] present a dynamic programming approach to select checkpoints for recurrent neural networks. Kumar et al. [22] formulate rematerialization as an optimization problem and solve it with tree decomposition, while Checkmate [17] leverages integer linear programming (ILP). DTR [20] is a runtime solution that maintains tensor metadata and uses carefully designed cost functions to dynamically evict and rematerialize tensors during model execution. However, DTR adds memory-analysis overhead to the runtime and may perform worse than static approaches on models without dynamism. The approach RAF adopts is similar to DTR but operates at compile time. In this way, we are able to conduct a complete liveness analysis and achieve smaller overheads than DTR.
## 7 Conclusion
This paper presents RAF, a deep learning compiler for training. RAF accepts vanilla models and performs in-house training graph generation, including automatic differentiation and mixed precision, to enable holistic optimizations for performance, memory, and distributed training. RAF has an operator dialect mechanism that is capable of integrating third-party kernel libraries as well as a tensor compiler, ensuring that it keeps up with state-of-the-art kernel implementations. The evaluation results show that RAF achieves either better training throughput or larger batch sizes for popular transformer models on GPUs while preserving numerical correctness. The source code of RAF is available at [https://github.com/awslabs/raf](https://github.com/awslabs/raf).
|
2301.12742
|
Circular Coordinates for Density-Robust Analysis
|
Dimensionality reduction is a crucial technique in data analysis, as it
allows for the efficient visualization and understanding of high-dimensional
datasets. The circular coordinate is one of the topological data analysis
techniques associated with dimensionality reduction but can be sensitive to
variations in density. To address this issue, we propose new circular
coordinates to extract robust and density-independent features. Our new methods
generate a new coordinate system that depends on a shape of an underlying
manifold preserving topological structures. We demonstrate the effectiveness of
our methods through extensive experiments on synthetic and real-world datasets.
|
Taejin Paik, Jaemin Park
|
2023-01-30T09:17:30Z
|
http://arxiv.org/abs/2301.12742v1
|
# Circular coordinates for density-robust analysis
###### Abstract.
Dimensionality reduction is a crucial technique in data analysis, as it allows for the efficient visualization and understanding of high-dimensional datasets. The circular coordinate is one of the topological data analysis techniques associated with dimensionality reduction but can be sensitive to variations in density. To address this issue, we propose new circular coordinates to extract robust and density-independent features. Our new methods generate a new coordinate system that depends on the shape of an underlying manifold while preserving topological structures. We demonstrate the effectiveness of our methods through extensive experiments on synthetic and real-world datasets.
## 1. Introduction
Dimensionality reduction allows us to understand high-dimensional data and gives us intuitive information about a dataset. One of the key challenges in this area is preserving the intrinsic topological structure. Different dimensionality reduction strategies try to handle this problem in different ways.
Principal component analysis (PCA) [1, 2] is one of the most basic techniques for linear dimensionality reduction. Given a dataset, PCA aims to find a projection to a low-dimensional vector space that maximizes variance. As a non-linear dimensionality reduction method, t-Distributed Stochastic Neighbor Embedding (t-SNE) [3] is commonly used to visualize high-dimensional data. It is a stochastic approach that aims to keep nearby points close in the low-dimensional embedding. Dimensionality reduction methods, including the above techniques, however, often fail to maintain the original topological structure when data points are sampled from an underlying manifold \(M\) with complex topology.
The circular coordinate, which was introduced in [4], deals in part with this problem by capturing 1-dimensional holes; if there are 1-dimensional holes in the underlying manifold \(M\), the coordinates give circle-valued maps \(\{\theta:X\to\mathbb{R}/\mathbb{Z}\}\) to identify the holes. This approach is motivated by the bijection
\[\langle\mathcal{K},K(\mathbb{Z},1)\rangle\cong H^{1}(\mathcal{K};\mathbb{Z})\]
for every CW-complex \(\mathcal{K}\), where \(\langle\mathcal{K},K(\mathbb{Z},1)\rangle\) is the set of basepoint-preserving homotopy classes of maps from \(\mathcal{K}\) to the Eilenberg-MacLane space \(K(\mathbb{Z},1)\), which is the circle \(S^{1}\). That is, for each cocycle in \(H^{1}(X;\mathbb{Z})\), we can get a map \(f:X\to S^{1}\). Practically, for each cocycle \(\alpha\), the circular coordinate is obtained by finding the \(L^{2}\)-norm minimizer among cocycles cohomologous to \(\alpha\). We show how circular coordinates help to reveal topological structures hidden in low-dimensional embeddings in Section 4.6.
It should be noted that the circular coordinates depend not only on the manifold's shape but also on the probability density function on the manifold. For instance, as shown in Figure 1, though we sample data points on the same circle, the circular coordinates are different depending on the probability density functions; in the low-density region, the circular coordinate changes quickly, whereas in the high-density region, it changes extremely slowly.
If we want to explore the shape of manifolds embedded in Euclidean space, results that vary with density may be difficult to analyze.
In this research, we propose new circular coordinates that are dependent on the shape of the underlying Riemannian submanifold, rather than the probability density function on the manifold. We propose two distinct approaches for achieving this goal:
1. Obtaining a circular coordinate as a solution to a Dirichlet problem using a Laplacian matrix, which is obtained by approximating the Laplace-Beltrami operator of the underlying manifold.
2. Utilizing the \(L^{p}\)-norm with \(p>2\) instead of the \(L^{2}\)-norm in the optimization process to obtain a new circular coordinate.
In Section 3, we provide some justification for why each approach generates a new circular coordinate that is robust to changes in the probability density function. In Section 4, we demonstrate the robustness of these methods through evaluations on synthetic datasets and a real-world dataset. Our source code is available at [https://github.com/TJPaik/CircularCoordinates](https://github.com/TJPaik/CircularCoordinates).
## 2. Preliminaries
In this section, we briefly review three well-known theories: Hodge theory [5], persistent cohomology [4], and the circular coordinate.
### Cohomology and Hodge theory
Let \(X\) be a finite simplicial complex. The space of \(i\)-cochains \(\mathcal{C}^{i}(X;\mathbb{R})\) is defined as the vector space dual to the vector space of \(i\)-chains \(\mathcal{C}_{i}(X;\mathbb{R})\), and the coboundary map \(d_{i}\) is dual to the boundary map from \(\mathcal{C}_{i+1}(X;\mathbb{R})\) to \(\mathcal{C}_{i}(X;\mathbb{R})\). If the complex \(X\) is clear from the context, we abbreviate \(\mathcal{C}^{i}(X;\mathbb{R})\) by \(\mathcal{C}^{i}\). Specifically, the coboundary maps \(d_{i}:\mathcal{C}^{i}\to\mathcal{C}^{i+1}\) for \(i=0,1\) are
\[(d_{0}f)(xy) =f(y)-f(x),\text{ and }\] \[(d_{1}\alpha)(xyz) =\alpha(xy)-\alpha(xz)+\alpha(yz).\]
For \(\alpha\in\mathcal{C}^{1}\), we call \(\alpha\) a _cocycle_ if \(d_{1}\alpha=0\), i.e. \(\alpha\in\ker d_{1}\). We call \(\alpha\) a _coboundary_ if \(\alpha=d_{0}f\) for \(f\in\mathcal{C}^{0}\), i.e. \(\alpha\in\operatorname{im}d_{0}\). Since \(d_{1}d_{0}f=0\) for all \(f\in\mathcal{C}^{0}\), we have \(\operatorname{im}d_{0}\subset\ker d_{1}\). We now define _1-cohomology_ of \(X\) by
\[H^{1}(X;\mathbb{R})=\ker d_{1}/\operatorname{im}d_{0}.\]
We say \(\alpha,\beta\in\mathcal{C}^{1}\) are _cohomologous_ if \([\alpha]=[\beta]\in H^{1}(X;\mathbb{R})\).
Figure 1. Changes in circular coordinates with probability density on \(S^{1}\).
In this paper, we use the standard basis on \(\mathcal{C}^{i}\) and the dual basis of the standard basis on \((\mathcal{C}^{i})^{*}\). We assign an inner product on \(\mathcal{C}^{i}\) for each \(i\). Whenever an inner product is not specified, it will be assumed to be the standard inner product. It follows from linear algebra that \(\mathcal{C}^{1}\) can be decomposed (called the _Fredholm alternative_) as
\[\mathcal{C}^{1}\cong\ker d_{1}\oplus\operatorname{im}d_{1}^{*}, \tag{1}\]
where \(d_{i}^{*}\) is the adjoint operator of \(d_{i}\) for \(i=0,1\). Note that the adjoint operator depends on the inner products. Moreover, there is the _Hodge decomposition_
\[\mathcal{C}^{1}\cong\operatorname{im}d_{1}^{*}\oplus\ker(d_{1}^{*}d_{1}+d_{0}d _{0}^{*})\oplus\operatorname{im}d_{0}. \tag{2}\]
Here, we call \(d_{1}^{*}d_{1}+d_{0}d_{0}^{*}\) the _(1-dimensional) Hodge Laplacian_ and denote it by \(\Delta_{1}\). Combining two decompositions (1) and (2), we have
\[\ker(d_{1}^{*}d_{1}+d_{0}d_{0}^{*})\oplus\operatorname{im}d_{0}=\ker d_{1}.\]
Thus, we have
\[H^{1}(X;\mathbb{R})=\ker d_{1}/\operatorname{im}d_{0}\cong\ker(d_{1}^{*}d_{1} +d_{0}d_{0}^{*}).\]
We note that each \(\alpha\in\ker\Delta_{1}\) is the representative of the corresponding equivalent class in \(\ker d_{1}/\operatorname{im}d_{0}\) with the minimal 2-norm. That is, for every \([\alpha]\in\ker d_{1}/\operatorname{im}d_{0}\), the representative
\[\alpha_{H}=\operatorname*{argmin}_{\overline{\alpha}}\left\{\|\overline{ \alpha}\|_{2}\mid\exists f\in\mathcal{C}^{0},\overline{\alpha}=\alpha+d_{0}f\right\}\]
is contained in \(\ker\Delta_{1}\) (see [5]) since \(\alpha_{H}\perp\operatorname{im}d_{0}\). We call \(\alpha_{H}\) the _harmonic cocycle_.
### Persistent cohomology
Now we consider a dataset \(X\) in Euclidean space and a 1-parameter family of Vietoris-Rips complexes \(\{X^{\epsilon}\}\) built on the dataset, where \(\epsilon\) is a scale parameter. Let \(\epsilon_{1},\dots,\epsilon_{m}\) be the critical values where the homotopy type of \(X^{\epsilon}\) changes. We can write this situation as follows:
\[X^{\epsilon_{1}}\xrightarrow{i_{1}}X^{\epsilon_{2}}\xrightarrow{i_{2}} \cdots\xrightarrow{i_{m-1}}X^{\epsilon_{m}}\]
where \(\to\) denotes the inclusion maps. The inclusion maps between \(X^{\epsilon}\)'s induce the homomorphisms between \(H^{1}(X^{\epsilon};\mathbb{R})\):
\[H^{1}(X^{\epsilon_{1}};\mathbb{R})\xleftarrow{i_{1}^{*}}H^{1}(X^{\epsilon_{2} };\mathbb{R})\xleftarrow{i_{2}^{*}}\cdots\xleftarrow{i_{m-1}^{*}}H^{1}(X^{ \epsilon_{m}};\mathbb{R})\]
For a nonzero cocycle class \([\alpha]\in H^{1}(X^{\epsilon_{k}};\mathbb{R})\), let
\[b_{\alpha} =\inf\left\{\epsilon\in\{\epsilon_{1},\dots,\epsilon_{k}\}:\exists [\beta]\neq 0\in H^{1}(X^{\epsilon};\mathbb{R})\text{ such that }i^{*}[\beta]=[\alpha]\right\}\] \[d_{\alpha} =\sup\left\{\epsilon\in\{\epsilon_{k},\dots,\epsilon_{m}\}:i^{*}[ \alpha]\neq 0\in H^{1}(X^{\epsilon};\mathbb{R})\right\},\]
where \(i^{*}\) is the homomorphism induced by an inclusion map. We call \(b_{\alpha}\) (\(d_{\alpha}\), respectively) a _birth_ (_death_, respectively) and the value \(d_{\alpha}-b_{\alpha}\) the _lifetime_. If we collect \(b_{\alpha}\) and \(d_{\alpha}\) for all nonzero cocycle classes and plot the points \(\{(b_{\alpha},d_{\alpha})\}_{\alpha}\) on a coordinate plane, the result is called the _persistence diagram_.
### Circular coordinate
The circular coordinate is introduced in [4]. Given a dataset \(X\), its circular coordinate \(\theta:X\to\mathbb{R}/\mathbb{Z}\cong S^{1}\) is defined using a nonzero cocycle class \([\alpha]\in H^{1}(X^{\epsilon};\mathbb{Z}_{\mathbf{p}})\) with a fixed \(\epsilon\) and a prime \(\mathbf{p}\) if \([\alpha]\) lies in the image of the coefficient homomorphism \(H^{1}(X^{\epsilon};\mathbb{Z})\to H^{1}(X^{\epsilon};\mathbb{Z}_{\mathbf{p}})\). Practically, we take a large prime \(\mathbf{p}\). The brief algorithm is as follows. First, we find the harmonic cocycle \(\alpha_{H}\) which is cohomologous to \(\alpha\), i.e.
\[\alpha_{H}=\operatorname*{argmin}_{\overline{\alpha}}\left\{\|\overline{ \alpha}\|_{2}\mid\exists f\in\mathcal{C}^{0}(X^{\epsilon};\mathbb{R}), \overline{\alpha}=\alpha+d_{0}f\right\}. \tag{3}\]
Then we fix a vertex \(x\) and assign \(\theta(x)=0\). For a vertex \(y\), we assign \(\theta(y)=\sum_{i=1}^{n}\alpha_{H}(e_{i})\), where \(e_{1}\cdots e_{n}\) is an edge path starting from \(x\) to \(y\). The circular coordinate \(\theta\) is well-defined since \(\alpha_{H}\) is cohomologous to a cocycle with integer coefficients. We note that we can use \(f\) in (3) as the circular coordinate.
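As a concrete illustration, a minimal sketch of this computation is given below, assuming the coboundary matrix of the 1-skeleton is assembled from an edge list; the sparse least-squares solve corresponds to (3), and all variable names are illustrative.

```python
# Minimal sketch: L2 circular coordinate from a cocycle via sparse least squares.
# `edges` is a list of (i, j) vertex pairs and `alpha` the cocycle value on each edge.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.linalg import lsqr

def circular_coordinate(n_vertices, edges, alpha):
    rows, cols, vals = [], [], []
    for e, (i, j) in enumerate(edges):
        rows += [e, e]
        cols += [i, j]
        vals += [-1.0, 1.0]                    # (d0 f)(ij) = f(j) - f(i)
    d0 = coo_matrix((vals, (rows, cols)), shape=(len(edges), n_vertices)).tocsr()
    # Minimize ||alpha + d0 f||_2 over f, i.e. solve d0 f = -alpha in the least-squares sense.
    f = lsqr(d0, -np.asarray(alpha, dtype=float))[0]
    return np.mod(f, 1.0)                      # circular coordinate with values in R/Z
```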
Now, we interpret the circular coordinate in terms of Laplacian. Since \(\alpha_{H}\in\operatorname{im}d_{0}^{\perp}=\ker d_{0}^{*}\), we have
\[0=d_{0}^{*}\alpha_{H}=d_{0}^{*}\alpha+d_{0}^{*}d_{0}f.\]
Here, we call \(d_{0}^{*}d_{0}\) the _(0-dimensional) graph Laplacian_ and denote it by \(\Delta_{0}\). Thus the circular coordinate \(f\) is a solution of the Dirichlet problem
\[\Delta_{0}f=-d_{0}^{*}\alpha. \tag{4}\]
The solution of (4) exists and is unique. More formally,
**Theorem 2.1**.: _Let \(\alpha\in\mathcal{C}^{1}\) be a cocycle. Then, a solution to the Dirichlet problem on the graph (the 1-skeleton of the Vietoris-Rips complex)_
\[\Delta_{0}f=d_{0}^{*}\alpha\]
_exists, and if the graph is connected, then the solution is unique up to scalar addition._
See Appendix A for the proof.
## 3. Methods
### Weighted circular coordinate
A _weighted circular coordinate_ is a generalization of the circular coordinate that can take into account the density of a dataset. By assigning weights to each edge in a Vietoris-Rips complex, we can get a new weighted circular coordinate that accounts for the density of a point cloud in the following way.
Let us assume that we have a simplicial complex \(X\) and positive weights \(w(e)\) on each edge \(e\) of \(X\). We define a weight matrix \(W\) by a diagonal matrix whose diagonal entries are the weights. Formally, we write
\[W=\operatorname{diag}\left\{w(e)\;:\;e\text{ is an edge of }X\right\}.\]
We consider a _weighted cochain complex_
\[0\xrightarrow{0}\mathcal{C}^{0}\xrightarrow{Wd_{0}}\mathcal{C}^{1} \xrightarrow{d_{1}W^{-1}}\mathcal{C}^{2} \tag{5}\]
where \(Wd_{0}\) and \(d_{1}W^{-1}\) are defined as follows:
\[(Wd_{0})f(xy) =w(xy)(f(y)-f(x)), \forall f\in\mathcal{C}^{0};\] \[(d_{1}W^{-1})\alpha(xyz) =\frac{1}{w(xy)}\alpha(xy)-\frac{1}{w(xz)}\alpha(xz)+\frac{1}{w( yz)}\alpha(yz), \forall\alpha\in\mathcal{C}^{1}.\]
The _weighted Hodge Laplacian_\(\Delta_{W,1}\) is defined by
\[\Delta_{W,1}=(d_{1}W^{-1})^{*}(d_{1}W^{-1})+(Wd_{0})(Wd_{0})^{*}.\]
Note that the weighted harmonic cocycle \(\alpha_{W,H}\in\ker\Delta_{W,1}\) is the minimal 2-norm cocycle among its cohomology class. With this observation, we introduce a weighted circular coordinate.
Since \(W\alpha\in\ker(d_{1}W^{-1})\) for every \(\alpha\in\ker d_{1}\), we can think of the harmonic element for \(W\alpha\) on this chain complex and \(f^{\prime}\in\mathcal{C}^{0}\) satisfying
\[\|W\alpha+Wd_{0}f^{\prime}\|_{2}=\|W(\alpha+d_{0}f^{\prime})\|_{2}\leq\|W( \alpha+d_{0}g)\|_{2}\]
for every \(g\in\mathcal{C}^{0}\) as we do above.
We can understand this in terms of an inner product in the following way. Assume that we have a different inner product \(\langle\cdot,\cdot\rangle_{q}\), instead of the standard inner product on \(\mathcal{C}^{1}\). Then, we can represent the inner product in matrix form as
\[\langle v,w\rangle_{q}=v^{t}Qw\]
where \(Q\) is a real symmetric positive definite matrix. From the Cholesky decomposition, we have \(Q=W_{0}^{t}W_{0}\) for a real upper triangular matrix \(W_{0}\). Therefore, we have
\[\langle\alpha+d_{0}f^{\prime\prime},\alpha+d_{0}f^{\prime\prime} \rangle_{q} =(\alpha+d_{0}f^{\prime\prime})^{t}W_{0}^{t}W_{0}(\alpha+d_{0}f^{ \prime\prime})\] \[=(W_{0}\alpha+W_{0}d_{0}f^{\prime\prime})^{t}(W_{0}\alpha+W_{0}d _{0}f^{\prime\prime})\] \[=\|W_{0}\alpha+W_{0}d_{0}f^{\prime\prime}\|_{2}^{2}\]
for \(f^{\prime\prime}\in\mathcal{C}^{0}\), and the corresponding harmonic element using the inner product \(\langle\cdot,\cdot\rangle_{q}\) is the same as the harmonic element on the chain complex (5) if \(W=W_{0}\).
For a dataset, we seek to obtain a weighted circular coordinate that is robust to the density of the dataset. To achieve our goal of obtaining a circular coordinate that depends only on the shape of the dataset, we adjust the weights of the edges in the graph constructed from a Vietoris-Rips complex. Now, we introduce how to set weights for each edge by understanding the graph Laplacian as an approximation of a Laplace-Beltrami operator \(\Delta_{M}\) of a manifold \(M\).
Data points in Euclidean space can be thought of as samples from a probability density function on a manifold. In other words, the data points are drawn independently and identically from a probability distribution defined on a submanifold of Euclidean space. Vietoris-Rips complex is a topological space constructed by connecting nearby data points with edges and higher simplices. It can be used to approximate the topology of the underlying manifold.
The key idea is that a weighted graph Laplacian that is defined on the 1-skeleton of the Vietoris-Rips complex approximates the Laplace-Beltrami operator of the underlying manifold. If we approximate the Laplace-Beltrami operator and find a harmonic solution on the underlying manifold, it will be a solution that depends only on the shape of the manifold regardless of the density of the data. Therefore, we need to approximate the Laplace-Beltrami operator with the given data. Before proceeding, we introduce some notations and give some explanations.
Assume that we have \(n\) points \(x_{1},\ldots,x_{n}\in M\) where \(M\) is a \(k\)-dimensional Riemannian submanifold of \(\mathbb{R}^{m}\) and a probability density function \(P:M\to\mathbb{R}\) with \(\inf_{x\in M}P(x)>0\). We take a Vietoris-Rips complex constructed from the \(n\) points with a scale parameter \(\epsilon>0\) and the 1-skeleton of the Vietoris-Rips complex which is a graph. In our setting, we consider
each edge of the graph to have a direction so that we can give a different weight for each direction.
Given the \(n\) points and \(t>0\), we construct a weighted directed graph taking the weight of the edge connecting \(x_{i}\) and \(x_{j}\) to be \(w_{ij}=\frac{1}{P(x_{j})}g_{ij}\) where \(g_{ij}=\frac{1}{(4\pi t)^{k/2}}e^{-\frac{\|x_{i}-x_{j}\|^{2}}{4t}}\). We set \(w_{ij}=0\) if \(x_{i}\) and \(x_{j}\) are not connected. Then the corresponding _weighted directed graph Laplacian matrix_ is defined as
\[\left(L_{n}^{t}\right)_{ij}=\begin{cases}-w_{ij}&\text{ if }i\neq j\\ \sum_{k\neq i}w_{ik}&\text{ if }i=j.\end{cases}\]
Note that the matrix \(L_{n}^{t}\) can be understood as an operator on functions defined on the vertices:
\[L_{n}^{t}f\left(x_{i}\right)=\sum_{j}w_{ij}(f(x_{i})-f(x_{j})),\]
and naturally, on functions defined on the manifold \(M\):
\[\mathbf{L}_{n}^{t}f\left(x\right)=\sum_{j}w_{t}(x,x_{j})(f(x)-f(x_{j}))\]
where \(w_{t}(x,y)=\frac{1}{P(y)}g_{t}(x,y)\) and \(g_{t}(x,y)=\frac{1}{(4\pi t)^{k/2}}e^{-\frac{\|x-y\|^{2}}{4t}}\) if \(\|x-y\|<\epsilon\) and \(g_{t}(x,y)=0\) for \(\|x-y\|\geq\epsilon\) defined on \(M\times M\). We can easily see that \(L_{n}^{t}f(x_{i})=\mathbf{L}_{n}^{t}f(x_{i})\) for every \(i=1,\ldots,n\).
Another note is that the matrix \(L_{n}^{t}\) can be decomposed as \(L_{n}^{t}=P(P^{-1}D-P^{-1}GP^{-1})\) where \(P\) is a diagonal matrix with \(P_{ii}=P(x_{i})\), \(D\) is a diagonal matrix with \((D)_{ii}=\sum_{j}g_{ij}/P(x_{j})\), and \((G)_{ij}=g_{ij}\). We denote the matrix \(P^{-1}D-P^{-1}GP^{-1}\) by \(L_{n,W}^{t}\). Now, we can approximate the Laplace-Beltrami operator of the underlying manifold:
**Theorem 3.1**.: _Let \(M\) be a \(k\)-dimensional compact Riemannian submanifold in \(\mathbb{R}^{n}\) and \(P:M\to\mathbb{R}\) be a smooth probability distribution function on \(M\) with \(\inf_{x\in M}P(x)>0\). If data points \(x_{1},\ldots,x_{n}\) are independent and identically distributed samples drawn from the distribution \(P\) and \(f:M\to\mathbb{R}\) is a smooth function, then for \(x\in M\) and \(t_{n}=n^{-\frac{1}{k+2+\alpha}}\), where \(\alpha>0\), we have_
\[\frac{1}{nt_{n}}\mathbf{L}_{n}^{t_{n}}f(x)\xrightarrow{p}\Delta_{M}f(x)\]
_as \(n\) goes to infinity._
For the proof, see Appendix A. From the above theorem, we can think of \(\frac{1}{nt_{n}}L_{n}^{t}\) as a discrete approximation of \(\Delta_{M}\) for an appropriate \(t\). Since \(\Delta_{M}\) is the Laplace-Beltrami operator of the underlying Riemannian manifold regardless of the density of the data, if we use \(L_{n}^{t}\) to solve a Dirichlet problem for a cocycle, we find a circular coordinate that relies more on the shape of the manifold and is robust to the density distribution.
We provide an interpretation of the matrix \(L_{n}^{t}\) in the theorem that follows. Before that, we define a diagonal matrix \(Q_{1}\) where \((Q_{1})_{ii}=(L_{n,W}^{t})_{jk}\) where \(i\) is the index of the edge connecting \(x_{j}\) and \(x_{k}\).
**Theorem 3.2**.: _Let us assign an inner product on \(\mathcal{C}^{0}\) by \(\langle v,w\rangle=v^{t}P^{-1}w\), and on \(\mathcal{C}^{1}\) by \(\langle v,w\rangle=v^{t}Q_{1}w\). Then the graph Laplacian \(d_{0}^{*}d_{0}\) is \(L_{n}^{t}\)._
See Appendix A for the proof. Since \(L_{n}^{t}\) can be understood as a graph Laplacian, we can make and solve a Dirichlet problem from Theorem 2.1 using a cocycle as follows:
**Corollary 3.3**.: _Let \(\alpha\) be a cocycle in \(\mathcal{C}^{1}\) and take the inner products defined on Theorem 3.2. A solution to the Dirichlet problem on the graph_
\[L_{n}^{t}f=d_{0}^{*}\alpha\]
_exists, and if the graph is connected, then the solution is unique up to scalar addition._
Practically, we approximate \(P(x_{i})\) by \(\frac{1}{n}\sum_{j}g_{ij}\). Indeed, \(\frac{1}{n}\sum_{j}g_{ij}\) converges to
\[d_{t}(x_{i}):=\int_{M}g_{t}(x_{i},x)P(x)\;dV(x)\]
in probability as \(n\to\infty\) where \(dV\) is the volume form of \(M\) induced from the ambient Euclidean space satisfying \(\int_{M}P(x)\;dV(x)=1\), and \(d_{t}(x_{i})\) converges to \(P(x_{i})\) as \(t\) goes to \(0\). Specifically, we refer to the weighted circular coordinate obtained from the weighted directed graph Laplacian matrix as the _WDGL-circular coordinate_. For each cocycle, we can obtain the WDGL-circular coordinate equivalently by finding the harmonic element corresponding to the cocycle using the inner product \(\langle v,w\rangle=v^{t}Q_{1}w\) on \(\mathcal{C}^{1}\), from the relation between the Dirichlet problem and circular coordinates.
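A minimal sketch of constructing this weighted directed graph Laplacian from a point cloud is given below; the intrinsic dimension `k`, bandwidth `t`, and scale `eps` are user-chosen hyper-parameters, the density estimate follows the \(\frac{1}{n}\sum_{j}g_{ij}\) approximation above, and a dense matrix is used for simplicity.

```python
# Sketch of the weighted directed graph Laplacian L_n^t (illustrative, dense NumPy version).
import numpy as np

def wdgl_laplacian(X, eps, t, k):
    # Assumes every point has at least one neighbor within distance eps.
    n = X.shape[0]
    dist2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    g = np.exp(-dist2 / (4.0 * t)) / (4.0 * np.pi * t) ** (k / 2.0)   # Gaussian kernel g_ij
    g[dist2 >= eps ** 2] = 0.0                  # keep only edges of the Rips 1-skeleton
    np.fill_diagonal(g, 0.0)
    p_hat = g.sum(axis=1) / n                   # density estimate P(x_j) ~ (1/n) sum_i g_ij
    w = g / p_hat[None, :]                      # w_ij = g_ij / P(x_j)
    return np.diag(w.sum(axis=1)) - w           # (L)_ii = sum_k w_ik, (L)_ij = -w_ij for i != j
```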
We also experiment with several other weight candidates. The weights on edges can be chosen based on various criteria, such as the distance between the points connected by the edge or the number of points in the region surrounding the edge. In our experiments, we show that weighted circular coordinates give a density-robust result when we use the following weights:
1. The weight of an edge is \(\frac{1}{D_{0}+D_{1}}\), where \(D_{0}\) and \(D_{1}\) are the degrees of the vertices attached to the edge.
2. The weight of an edge is \(\frac{1}{\sqrt{D_{0}D_{1}}}\), where \(D_{0}\) and \(D_{1}\) are the degrees of the vertices attached to the edge.
We present results using the above two weights in the figures in Appendix C, and use the notations "1/(D0 + D1)" and "1/sqrt(D0 D1)" to denote these two weights, respectively, in the figures.
### \(L^{p}\)-circular coordinate
A circular coordinate can be considered in terms of a harmonic cocycle representing a cohomology class [4]. Let us denote \(L^{p}\)-norm by \(\|\cdot\|_{p}\). Given a simplicial complex \(X\) and a cochain \(\alpha\in\mathcal{C}^{1}\), the corresponding harmonic cocycle is
\[\operatorname*{argmin}_{\overline{\alpha}}\left\{\|\overline{\alpha}\|_{2}\; |\;\exists f\in\mathcal{C}^{0},\overline{\alpha}=\alpha+d_{0}f\right\},\]
the minimal cocycle, with respect to the \(L^{2}\)-norm, among all cocycles cohomologous to \(\alpha\). The function \(f\in\mathcal{C}^{0}\) that attains the minimum is the circular coordinate. One variation we can consider is a minimizer with respect to the \(L^{p}\)-norm for a different \(p\), the \(L^{p}\)-_circular coordinate_. That is, we can look at
\[\operatorname*{argmin}_{\overline{\alpha}}\left\{\|\overline{\alpha}\|_{p}\;| \;\exists f\in\mathcal{C}^{0},\overline{\alpha}=\alpha+d_{0}f\right\}. \tag{6}\]
Previous research [6] shows that a circular coordinate with the \(L^{1}\)-norm or a mixed norm becomes more "locally constant". In this work, we study the behavior of the circular coordinate for different values of \(p\). Our experiments show that the higher \(p\) is, the more robust the circular coordinate is to the density.
To get the corresponding element \(f\in\mathcal{C}^{0}\) for \(p=1\), we can consider the problem as a linear programming problem and solve it in linear time [7, 6]. For \(p=2\), the problem (6)
is the well-known least-squares problem, and LSQR [8] can be used, as observed in [4]. In the case of a general \(p>2\), we can use the gradient descent algorithm. We specify our computation in Algorithm 1.
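A minimal PyTorch sketch of this gradient-based computation is shown below; it uses the Adam optimizer rather than plain gradient descent for convenience, treats the coboundary matrix `d0` and cocycle `alpha` as dense tensors, and the hyper-parameter values are illustrative rather than those of Algorithm 1.

```python
# Sketch of computing an L^p-circular coordinate (p > 2) by gradient-based optimization.
import torch

def lp_circular_coordinate(d0, alpha, p=10, lr=1e-2, n_steps=5000, tau=1e-9, init=None):
    f = init.clone().detach() if init is not None else torch.zeros(d0.shape[1])
    f.requires_grad_(True)
    opt = torch.optim.Adam([f], lr=lr)
    prev = float("inf")
    for _ in range(n_steps):
        loss = torch.norm(alpha + d0 @ f, p=p)   # ||alpha + d0 f||_p
        opt.zero_grad()
        loss.backward()
        opt.step()
        if abs(prev - loss.item()) < tau:        # convergence threshold tau
            break
        prev = loss.item()
    return f.detach()                            # the circular coordinate is f mod 1
```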
Now we introduce an iterative way to get an \(L^{\infty}\)-circular coordinate. Note that \(\|\overline{\alpha}\|_{\infty}=\max\limits_{e}\{|\overline{\alpha}(e)|\}\). We have the following theorem:
**Theorem 3.4**.: _Suppose \(f\) is a complex measurable function on \(X\) and \(\mu\) is a positive measure on \(X\). Assume that \(\|f\|_{r}<\infty\) for some \(r<\infty\). Then_
\[\|f\|_{p}\to\|f\|_{\infty}\quad\text{as}\ \ p\to\infty.\]
For the proof, see Appendix A. From the above theorem, an \(L^{p}\)-circular coordinate with large \(p\) can serve as an approximation of the \(L^{\infty}\)-circular coordinate. From this observation, we propose an iterative way to get an \(L^{\infty}\)-circular coordinate: we sequentially optimize the \(L^{p}\)-circular coordinate while increasing \(p\) over some range, and finally we optimize the function \(f\) directly to get the \(L^{\infty}\)-circular coordinate. We write down our algorithm in Algorithm 2. In practice, we increase \(p\) only up to about \(50\) when estimating the loss, because of underflow or overflow issues. We confirm that for many hyperparameter settings, this algorithm gives faster convergence to the \(L^{\infty}\)-circular coordinate.
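The schedule can be sketched as a thin wrapper around the single-\(p\) optimizer above; the particular sequence of \(p\) values is illustrative, and each stage warm-starts from the previous solution.

```python
# Sketch of the increasing-p schedule leading to an (approximate) L^infinity coordinate.
def linf_circular_coordinate(d0, alpha, p_schedule=(2, 4, 8, 16, 32, 50)):
    f = None
    for p in p_schedule:
        f = lp_circular_coordinate(d0, alpha, p=p, init=f)   # warm start from previous stage
    # A final stage may minimize max_e |alpha(e) + (d0 f)(e)| directly, starting from f.
    return f
```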
As a similar methodology, we can use the softmax function with temperatures:
**Theorem 3.5**.: _Let \(h:\mathbb{R}^{n}\to\mathbb{R}^{n}\) be a function satisfying \((h(x))_{i}=|x_{i}|\), and \(s\) be the softmax function. Then for \(v\in\mathbb{R}^{n}\), \((s\circ h)(tv)\cdot h(v)\) converges to \(\|v\|_{\infty}\) as \(t\) goes to infinity._
See Appendix A for the proof. Similarly, after initializing a vector \(f\in\mathbb{R}^{n}\), we can optimize \((s\circ h)(tf)\cdot h(f)\) while increasing the parameter \(t\). We specify our method in Algorithm 3. In our experiments, we do not benefit much from using the softmax function; the convergence is usually not faster than with the other methods.
In our algorithms, we use a hyperparameter \(\tau\) to determine whether the function \(f\) has converged. If the difference between the previous loss and the current loss is smaller than \(\tau\), we declare convergence.
#### 3.2.1. \(p\)-harmonic on manifolds
In order to gain a deeper understanding of the concept of the \(L^{p}\)-circular coordinate, let us examine an analogous example on manifolds. We consider an \(n\)-dimensional compact Riemannian manifold \((M,g)\), so we can use the usual smooth Hodge theory. Fixing a cohomology class \([\alpha]\in H^{k}_{dR}(M)\) and a representative \(\alpha\), we want to choose a representative \(\alpha_{H}\) such that its \(p\)-norm with \(p>1\), rather than its \(2\)-norm, is minimal:
\[\alpha_{H}:=\operatorname*{argmin}\limits_{\overline{\alpha}}\left\{\int_{M} \|\overline{\alpha}\|^{p}\ dV\ |\ \exists f\in\Omega^{k-1}(M),\overline{\alpha}=\alpha+df\right\}\]
where \(dV\) is a volume form. Therefore, we have
\[\frac{d}{dt}\int_{M}\|\alpha_{H}+tdf\|^{p}\ dV\ \bigg{|}_{t=0}=0.\]
for every \(f\in\Omega^{k-1}(M)\), and from the Euler-Lagrange equation, we have
\[d\alpha_{H}=0\quad\text{and}\quad\delta(\|\alpha_{H}\|^{p-2}\alpha_{H})=0 \tag{7}\]
since
\[\int_{M}\langle\alpha_{H},df\rangle\|\alpha_{H}\|^{p-2}\ dV=\int_{M}\langle \delta(\|\alpha_{H}\|^{p-2}\alpha_{H}),f\rangle\ dV\]
where \(\delta\) is the codifferential operator and \(\|\alpha\|^{2}\) is defined as \(\sum_{i,j}g^{ij}\alpha_{i}\alpha_{j}\) for a 1-form \(\alpha=\sum_{i=1}^{n}\alpha_{i}\ dx^{i}\). An \(k\)-form \(\alpha_{H}\) satisfying (7) is called \(p\)_-harmonic \(k\)-form_. We note that this definition is reduced to the \(p\)-laplacian in the case of Euclidean space. We will not investigate the regularity of the above equation and only look at a simple toy example to see what the effect of \(p\) on a minimizer is. As a manifold, we take \(M=S^{1}\times D^{n-1}\), and we take \(x^{1}\) to be the coordinate on the circle. For \(\rho>0\) on \(M\), define the metric
\[g=\rho^{2/n}\sum_{j=1}^{n}dx^{j}\otimes dx^{j},\]
which is simply a conformal rescaling of the standard Euclidean metric. We understand the change in metric as the change in probability density function on the manifold. The resulting density is \(\sqrt{\det g}=\rho\). For simplicity, we assume that the density only depends on \(x^{1}\).
Now let's see what \(p\)-harmonic 1-form \(\alpha\) generating \(H^{1}_{dR}(M)\) looks like. We have \(\alpha_{H}=\sum_{i}\alpha_{i}dx^{i}\). To write out the \(p\)-harmonic equations, we need the Hodge star \(\star\) to write out \(\delta=(-1)^{n(k-1)+1}\star d\star\) for \(k\)-forms. Since we have
\[\star dx^{i}=\sum_{j=1}^{n}(-1)^{j+1}g^{ij}\sqrt{\det g}\ dx^{1}\wedge\ldots \wedge\widehat{dx^{j}}\wedge\ldots\wedge dx^{n},\]
the equation \(\delta(\|\alpha_{H}\|^{p-2}\alpha_{H})=0\) is equivalent to
\[d\left(\|\alpha_{H}\|^{p-2}\sum_{i,j=1}^{n}(-1)^{j+1}\alpha_{i}g^{ij}\sqrt{ \det g}\ dx^{1}\wedge\ldots\wedge\widehat{dx^{j}}\wedge\ldots\wedge dx^{n} \right)=0.\]
If we use the form of our metric, this simplifies to
\[d\left(\rho^{\frac{n-p}{n}}\sum_{i=1}^{n}|\alpha_{i}|^{p-2}\sum_{j=1}^{n}(-1)^ {j+1}\alpha_{j}\ dx^{1}\wedge\ldots\wedge\widehat{dx^{j}}\wedge\ldots\wedge dx ^{n}\right)=0.\]
We find that if we set
\[\alpha_{H}=\rho^{\frac{p-n}{n(p-1)}}(x^{1})dx^{1},\]
then it satisfies (7) since
\[\rho^{\frac{n-p}{n}}\left(\sum_{i=1}^{n}|\alpha_{i}|^{p-2}\right)\left(\sum_{ j=1}^{n}(-1)^{j+1}\alpha_{j}\right)=\rho^{\frac{n-p}{n}}(\rho^{\frac{p-n}{n(p-1) }})^{p-2}\rho^{\frac{p-n}{n(p-1)}}=1.\]
For \(n=1\), the \(p\)-harmonic representative is just \(\rho(x^{1})dx^{1}\), so independent of \(p\), and has linear dependence on the density induced by the Riemannian metric. For fixed large \(n\), the \(p\)-harmonic representative has density dependence of the form \(\rho^{1/n}\), which means that it is almost independent of the density for large \(n\). On the other hand, the usual harmonic representative has density dependence of the form \(\rho^{2/n-1}\), which is very different. We don't know how well this toy example generalizes, but the toy example suggests that the \(p\)-harmonic representative of a cohomology class has less density dependence than the usual harmonic representative for large \(p\), provided the dimension \(n\) is also large. In the discrete setting, we may think of the Vietoris-Rips complex as a high-dimensional manifold if the scale parameter is sufficiently large.
## 4. Experiments
In this section, we test our methods on several synthetic datasets, including a noisy circle, a noisy trefoil knot, two conjoined circles, and a torus dataset. For real data analysis, we present experimental results on the COIL-100 dataset [9] with low-dimensional embeddings.
For the synthetic datasets, we use correlation scatter plots to show the results. A correlation scatter plot is a graphical representation of the relationship between two circular coordinates: one circular coordinate is plotted on the \(x\)-axis and the other on the \(y\)-axis, so the plot can be used to compare them. In our experimental results, most scatter plots compare the ground-truth circular coordinate with the circular coordinate inferred by our algorithms. Each circular coordinate is also visualized on the original dataset using a color map; in our figures, we use the cyclic HSV color wheel.
### Experimental details
We have the option to use various software packages to implement parts of our algorithms; specifically, Ripser [10] is used to obtain cocycles and SciPy [11] for the LSQR algorithm. For optimizing the \(L^{p}\)-norm, we use PyTorch [12]. To visualize our experimental results, we use the t-SNE implementation of Scikit-learn [13] and the matplotlib [14] library.
For a non-zero cocycle \(\alpha\), we choose \(\epsilon=\frac{b_{\alpha}+d_{\alpha}}{2}\) where \(b_{\alpha}\) and \(d_{\alpha}\) are the birth and death of \(\alpha\) respectively. To get a persistence diagram, we use a prime of 47. If we use the WDGL method, we heuristically set \(t\) to 0.2 times the average Euclidean distance between connected vertices.
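For illustration, a minimal sketch of this cocycle-extraction step using the ripser.py package is shown below; the point cloud `X` is a placeholder, and choosing the most persistent class is only one possible selection rule.

```python
# Sketch of extracting a representative H^1 cocycle and the scale parameter epsilon.
import numpy as np
from ripser import ripser

X = np.random.default_rng(0).random((300, 2))      # placeholder point cloud
result = ripser(X, coeff=47, do_cocycles=True)     # Z/47 coefficients, as in the text
dgm1 = result["dgms"][1]                           # H^1 persistence diagram (birth, death)
idx = int(np.argmax(dgm1[:, 1] - dgm1[:, 0]))      # pick the most persistent class
birth, death = dgm1[idx]
eps = (birth + death) / 2.0                        # scale used to build the Rips 1-skeleton
cocycle = result["cocycles"][1][idx]               # rows of the form (vertex i, vertex j, value)
```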
### Noisy circle
We begin with a noisy circle with uneven density. With the circle, we test our density-robust circular coordinate algorithms. The circle is parametrized by
\[\left\{\begin{array}{l}x=\sin(t)\\ y=\cos(t),\end{array}\right.\]
where \(t\) is sampled from \(\mathcal{N}\left(\pi,\left(0.4\pi\right)^{2}\right)\), and we add Gaussian noise with a mean of 0 and a standard deviation of 0.07. In this experiment, we sample 300 points.
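A minimal sketch of generating this dataset is shown below; the random seed is arbitrary.

```python
# Sketch of the noisy circle with uneven angular density described above.
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(np.pi, 0.4 * np.pi, size=300)        # uneven density along the circle
X = np.stack([np.sin(t), np.cos(t)], axis=1)        # (x, y) = (sin t, cos t)
X += rng.normal(0.0, 0.07, size=X.shape)            # additive Gaussian noise, std 0.07
```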
Figure 2. Results for noisy circle dataset; original circular coordinate (left), correlation scatter plots when we use original method, WDGL, and \(L^{\infty}\)-norm optimization (middle), WDGL-circular coordinate (right).
We present the original circular coordinate on the dataset, correlation scatter plots for various methods, and the WDGL-circular coordinate in Figure 2. In this experiment, the results of \(L^{\infty}\)-norm optimization and the WDGL method are similar, as shown in the correlation scatter plots, and both look linear.
We also experiment with different weights to get different weighted circular coordinates. As explained in the last part of Section 3.1, we try two additional weights, and the results are similar to those obtained with the WDGL method.
For the \(L^{p}\)-norm variation, we try \(p=2,4,6,10,20\), and \(\infty\). Note that \(L^{2}\)-norm optimization produces the same result as the original circular coordinate. From the experiment on this dataset, we confirm that the higher the \(p\) value, the more linear the correlation scatter plot appears.
To provide more experimental results, we collect the correlation scatter plots for weighted circular coordinates and \(L^{p}\) norm variation, and present the plots in Figure 12.
Now, we study the convergence speed for an \(L^{\infty}\)-circular coordinate. A simple approach to get an \(L^{\infty}\)-circular coordinate is just to use Algorithm 1. In the algorithm, the \(L^{\infty}\) loss is \(\|\alpha+d_{0}f\|_{\infty}=\max\limits_{e}\{|\alpha(e)+(d_{0}f)(e)|\}\), which we minimize over the function \(f\).
Instead, we can optimize the function \(f\) using Algorithm 2. We experiment with several settings of the hyperparameters \(\Delta\) and \(\eta\), and we observe that, among all the results, the schedule that increases \(p\) from \(2\) to \(50\) appears to be the fastest. For the experimental details, see Figure 10.
We can also get an \(L^{\infty}\)-circular coordinate using Algorithm 3. See Figure 11 for details. From the experimental results, we find that the algorithm using softmax with temperature sometimes converges faster than Algorithm 1, but in most cases it exhibits similar or slower convergence than Algorithm 1.
From the experimental results, we conclude that we can speed up convergence by initializing the function \(f\) with the original circular coordinate, i.e., the \(L^{2}\)-circular coordinate, and then using Algorithm 2.
Figure 3. Results for noisy trefoil knot dataset; original circular coordinate (left), correlation scatter plots when we use original method, WDGL, and \(L^{\infty}\)-norm optimization (middle), WDGL-circular coordinate (right).
### Noisy trefoil knot
Next, we test our algorithms on a noisy trefoil knot. We use parametrization
\[\left\{\begin{array}{l}x=\cos(t)+2\cos(2t)\\ y=\sin(t)-2\sin(2t)\\ z=2\sin(3t),\end{array}\right.\]
where \(t\) is sampled from \(\mathcal{N}\left(\pi,(0.4\pi)^{2}\right)\), and we add Gaussian noise with a mean of \(0\) and a standard deviation of \(0.04\). In this experiment, we sample \(900\) points.
We show the original circular coordinate on the dataset, correlation scatter plots for different methods, and the WDGL-circular coordinate in Figure 3. As with the previous experimental results on the noisy circle dataset, the results of \(L^{\infty}\)-norm optimization and the WDGL-circular coordinate are similar, and both look linear.
We carry out two further experiments on the weighted circular coordinate and an experiment on the \(L^{p}\)-norm variation technique using the same values of \(p\), both following the experimental setup of the noisy circle dataset. We present the results in Figure 14, and the conclusion is also similar to that for the noisy circle dataset: the three weighted circular coordinates look similar to each other, and a higher \(p\) value leads to a more linear correlation scatter plot.
### Two conjoined circles
Next, we test our algorithms on the two conjoined circles dataset. The dataset contains two clear circles, which can be observed in the persistence diagram induced from the dataset. Therefore, we run the algorithms for each of the two cocycles. To generate the dataset, we make 2 circles as in the noisy circle example, apply a random rotation to each circle, and attach the two circles to each other. For each circle, we have coordinate information, which is used to make the correlation scatter plots.
We present the original circular coordinates, corresponding circular coordinates, and WDGL-circular coordinates for each cocycle in Figure 4.
Figure 4. Results for two conjoined circles dataset; original circular coordinates (left column), corresponding circular coordinates (middle column), WDGL-circular coordinates (right column).
On this data, the \(L^{\infty}\)-circular coordinate does not look smooth, but it tends to be similar to the WDGL-circular coordinate overall.
We conduct two supplementary experiments on the weighted circular coordinate and an experiment with the \(L^{p}\)-norm variation approach, and we present the results in Figure 13.
### Torus
We use torus data as the final example of synthetic data to illustrate our methodology. This dataset has two cocycles, as in the previous dataset; the first Betti number of the torus is two, and this can be seen in the persistence diagram. The torus is parametrized by
\[\left\{\begin{array}{l}x=(4+2\cos(s))\cos(t)\\ y=(4+2\cos(s))\sin(t)\\ z=2\sin(s),\end{array}\right.\]
and we use a Gaussian mixture model; we sample \((s,t)\) from the probability density function \(p(x)=\frac{1}{2}\left(\mathcal{N}(x\mid(\pi,0),\Sigma)+\mathcal{N}(x\mid(0,\pi ),\Sigma)\right)\) where \(\Sigma\) is \(\mathrm{diag}(0.4\pi,0.4\pi)^{2}\). In this experiment, we sample 800 points.
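A minimal sketch of generating this torus sample is shown below; the random seed is arbitrary, and the two mixture components are chosen with equal probability.

```python
# Sketch of the torus dataset with the Gaussian-mixture angular density described above.
import numpy as np

rng = np.random.default_rng(0)
n = 800
means = np.array([[np.pi, 0.0], [0.0, np.pi]])
comp = rng.integers(0, 2, size=n)                    # mixture component, probability 1/2 each
st = rng.normal(means[comp], 0.4 * np.pi)            # (s, t) angles with isotropic std 0.4*pi
s, t = st[:, 0], st[:, 1]
X = np.stack([(4 + 2 * np.cos(s)) * np.cos(t),
              (4 + 2 * np.cos(s)) * np.sin(t),
              2 * np.sin(s)], axis=1)
```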
We show the original circular coordinates, corresponding circular coordinates, and the WDGL-circular coordinate for each cocycle in Figure 5. There are 2 cocycles corresponding to the meridian direction and the longitude direction. Note that when using the WDGL method, the result of the meridian directional cocycle is not very linear in the correlation scatter plot. The reason is that we are not dealing with a flat torus; the uniform probability distribution on
Figure 5. Results for torus dataset; original circular coordinates (left column), corresponding circular coordinates (middle column), WDGL-circular coordinates (right column).
the torus is not induced by the uniform probability distribution on \([0,2\pi]\times[0,2\pi]\). However, using \(L^{\infty}\)-norm optimization, we obtain a linear result in the correlation scatter plot for the meridian directional cocycle.
As in the previous dataset, we perform two further experiments on the weighted circular coordinate and \(L^{p}\)-circular coordinates. The results are shown in Figure 15.
### Coil-100
In this experiment, we use the COIL-100 [9] dataset as a real dataset to test our method. Each object has 72 images, taken every 5 degrees as the object is rotated through 360 degrees. Therefore, it is natural to consider that a discretized \(S^{1}\) for each object is embedded in the space of images. The dataset was created by the Columbia Object Image Library (COIL) at Columbia University and is commonly used for object recognition and classification tasks in machine learning and computer vision research.
We obtain circular coordinates using the Euclidean distance between the images. To show the results, we use t-SNE and PCA, popular dimensionality reduction techniques, to obtain scatter plots of the images and visualize the high-dimensional data.
Using the circular coordinates obtained earlier, we express them as colors on the t-SNE or PCA result for several cocycles. This allows us to identify topological structures hidden in the result of the dimensionality reduction.
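A minimal sketch of this visualization is shown below; the arrays standing in for the flattened COIL-100 images and the circular coordinate are placeholders, and the t-SNE settings are illustrative.

```python
# Sketch of coloring a 2-D embedding by a circular coordinate with the cyclic HSV map.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

images = np.random.rand(200, 4096)          # placeholder for flattened images
theta = np.random.rand(200)                 # placeholder circular coordinate in [0, 1)

emb = TSNE(n_components=2, metric="euclidean", init="pca").fit_transform(images)
plt.scatter(emb[:, 0], emb[:, 1], c=theta, cmap="hsv", s=5)   # cyclic HSV color wheel
plt.colorbar(label="circular coordinate")
plt.show()
```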
Furthermore, we select several example cocycles for which the \(L^{\infty}\)-circular coordinate reveals hidden topological structures better than the original circular coordinate method.
First, we present two circular coordinates on t-SNE in Figure 6. Many cycles are apparent in the t-SNE result as an "\(S^{1}\) shape". However, some cycles of some objects appear cut off even in t-SNE. There are several potential causes for this: points initialized far apart in the embedding are difficult to merge, and trying to embed many objects at once can cause them to conflict with each other. To identify the circular structures in the low-dimensional embeddings, we evaluate the circular coordinates for each cocycle in the persistence diagram. We show the persistence diagram in Figure 7.
We obtain circular coordinates by using the entries in the persistence diagram. Then, we choose some of these circular coordinates to help us to see the topological structure in the embeddings.
Figure 6.
After picking one of the disconnected objects in the t-SNE result, we conduct an experiment and present the result in Figure 8. The cycle of this object looks entangled in PCA, but the circular coordinate identifies the cycle's location. In the t-SNE result, although the cycle appears to be broken in the embedding, the circular coordinate reveals that the two portions actually come from the same cycle.
The following example in Figure 9 illustrates a situation in which the \(L^{\infty}\)-circular coordinate is useful. In this instance, one cycle is split into two in the t-SNE result. Unfortunately, with the original circular coordinate, the value on the left part is similar to the value on the non-cycle part, making it challenging to recognize that the left part is also part of the cycle.
However, in the case of the \(L^{\infty}\)-circular coordinate, the circular coordinate on the left and right portions is smoothly and evenly distributed, suggesting that the two parts constitute one cycle.
Figure 8. The outcome of the circular coordinate is represented by the two figures on the left with PCA and t-SNE ((A), (C)). We magnify where the points of the corresponding object are gathered and show them by adjusting the point size respectively ((B), (D)).
Figure 7. Persistence diagram on COIL-100 data using Euclidean distance.
## 5. Discussion
In our study, we conducted a thorough investigation into the circular coordinate to analyze data. As part of this investigation, we looked at how these approaches handle data that has uneven density, or where the distribution of data points is not consistent throughout the dataset.
Through our observations, we were able to identify areas where the results produced by these methods could be improved. In order to address this issue, we developed two new approaches that are more robust to different densities of data. These new approaches are designed to produce more accurate and reliable results even when the data has uneven density.
In the weighted circular coordinate method, the circular coordinate changes sensitively depending on how we choose the weights, so choosing the weight for each edge is important. In this sense, we present a circular coordinate using WDGL, and it works as expected on synthetic datasets. However, the variable \(t_{n}\) in Theorem 3.1 remains a hyper-parameter; as \(n\) goes to infinity, we know that our method will work well if we use \(t_{n}=n^{-\frac{1}{k+2+\alpha}}\) for any \(\alpha>0\), but for a relatively small, limited amount of data we need to determine \(t_{n}\) manually.
For the \(L^{p}\)-circular coordinate, optimization time is a major obstacle. We know, both experimentally and theoretically, that the \(L^{p}\)-circular coordinate becomes more robust to changes in the density function as \(p\) increases, but the optimization takes quite a long time. For this reason, we studied how to speed up the optimization and achieved a substantial speed-up, yet obtaining the \(L^{\infty}\)-circular coordinate still takes a long time.
|
2303.03036
|
Deep Clustering with a Constraint for Topological Invariance based on
Symmetric InfoNCE
|
We consider the scenario of deep clustering, in which the available prior
knowledge is limited. In this scenario, few existing state-of-the-art deep
clustering methods can perform well for both non-complex topology and complex
topology datasets. To address the problem, we propose a constraint utilizing
symmetric InfoNCE, which helps an objective of deep clustering method in the
scenario train the model so as to be efficient for not only non-complex
topology but also complex topology datasets. Additionally, we provide several
theoretical explanations of the reason why the constraint can enhance
performance of deep clustering methods. To confirm the effectiveness of the
proposed constraint, we introduce a deep clustering method named MIST, which is
a combination of an existing deep clustering method and our constraint. Our
numerical experiments via MIST demonstrate that the constraint is effective. In
addition, MIST outperforms other state-of-the-art deep clustering methods for
most of the commonly used ten benchmark datasets.
|
Yuhui Zhang, Yuichiro Wada, Hiroki Waida, Kaito Goto, Yusaku Hino, Takafumi Kanamori
|
2023-03-06T11:05:21Z
|
http://arxiv.org/abs/2303.03036v1
|
# Deep Clustering with a Constraint for Topological Invariance based on Symmetric InfoNCE
Yuhui Zhang\({}^{1,*}\), Yuichiro Wada\({}^{2,3,*}\), Hiroki Waida\({}^{1,*}\)
**Kaito Goto\({}^{1,\dagger}\), Yusaku Hino\({}^{1,\dagger}\), Takafumi Kanamori\({}^{1,3}\)**
\({}^{1}\)Tokyo Institute of Technology, Tokyo, Japan
\({}^{2}\)Fujitsu Limited, Kanagawa, Japan
\({}^{3}\)RIKEN AIP, Tokyo, Japan
**Keywords:** Clustering, Deep Learning, Mutual Information, Contrastive Learning, Metric Learning
Footnote: \({}^{*}\)Equal contribution. \({}^{\dagger}\)Part of this work was done while K. Goto and Y. Hino were master course students at Tokyo Institute of Technology. Corresponding author.
**Abstract**
We consider the scenario of deep clustering, in which the available prior knowledge is limited. In this scenario, few existing state-of-the-art deep clustering methods can perform well for both non-complex topology and complex topology datasets. To address the problem, we propose a constraint utilizing symmetric InfoNCE, which helps an objective of deep clustering method in the scenario train the model so as to be efficient for not only non-complex topology but also complex topology datasets. Additionally, we provide several theoretical explanations of the reason why the constraint can enhance performance of deep clustering methods. To confirm the effectiveness of the proposed constraint, we introduce a deep clustering method named MIST, which is a combination of an existing deep clustering method and our constraint. Our numerical experiments via MIST demonstrate that the constraint is effective. In addition, MIST outperforms other state-of-the-art deep clustering methods for most of the commonly used ten benchmark datasets.
## 1 Introduction
### Background
Clustering is one of the most popular and oldest research fields of machine learning. Given unlabeled data points, the goal of clustering is to group them according to some
criterion. In addition, in most cases clustering is performed with unlabeled datasets. To date, many clustering algorithms have been proposed (MacQueen, 1967; Day, 1969; Girolami, 2002; Wang et al., 2003; Ng et al., 2002; Ester et al., 1996; Sibson, 1973). Given an unlabeled dataset, the number of clusters, and a distance metric, K-means (MacQueen, 1967) aims to partition the dataset into the given number of clusters. Especially when the squared Euclidean distance is employed as the metric, it returns convex-shaped clusters within a short running time. GMMC (Gaussian Mixture Model Clustering) (Day, 1969) assigns labels to estimated clusters after fitting a GMM to an unlabeled dataset, where the number of clusters can be automatically determined by the Bayesian Information Criterion (Schwarz, 1978). Kernel K-means (Girolami, 2002), kernel GMMC (Wang et al., 2003), and SC (Spectral Clustering) (Ng et al., 2002) can deal with more complicated cluster shapes than K-means and GMMC. On the other hand, DBSCAN (Ester et al., 1996) and Hierarchical Clustering (Sibson, 1973) are examples of clustering methods that do not require the number of clusters.
Although the above-mentioned classical clustering methods are useful for low-dimensional small datasets, they often fail to handle large and high-dimensional datasets (e.g., image/text datasets). Owing to the development of deep learning techniques for DNNs (Deep Neural Networks), we can now handle large, high-dimensional datasets (Lecun et al., 2015). Note that a clustering method using DNNs is referred to as a Deep Clustering method.
### Our Scenario with Deep Clustering
The most popular scenario for deep clustering is the domain-specific scenario (Scenario1) (Mukherjee et al., 2019; Ji et al., 2019; Asano et al., 2019; Van Gansbeke et al., 2020; Caron et al., 2020; Yang et al., 2020; Monnier et al., 2020; Li et al., 2021; Dang et al., 2021). In this scenario, an unlabeled dataset of a specific domain and its number of clusters are given, while specific rich knowledge in the domain can be available. The dataset is often represented by raw data. An example of the specific knowledge in the image domain is efficient domain-specific data-augmentation techniques (Ji et al., 2019). It is also known that CNN (Convolutional Neural Network) (LeCun et al., 1989) can be an efficient DNN to extract useful features from raw image data. In this scenario, most of the authors have proposed _end-to-end_ methods, while some have done _sequential_ methods. In the category of the end-to-end methods, a model is defined by an efficient DNN for a specific domain, where the input and output of the DNN are (raw) data and its predicted cluster label, respectively. The model is trained under a particular criterion as utilizing the domain-specific knowledge. In the category of the sequential methods, the clustering DNN is often constructed in the following three steps: 1) Create a DNN model that extracts features from data in a specific domain, followed by an MLP (Multi-Layer Perceptron) predicting cluster labels. 2) Train the feature-extracting DNN using an unlabeled dataset and domain-specific knowledge. Then, freeze the set of trainable parameters in the feature extracting DNN. 3) Train the MLP with the features obtained from the feature extracting DNN, and domain-specific knowledge.
The secondly important scenario is called the non-domain-specific scenario (Scenario2) (Springenberg, 2015; Xie et al., 2016; Jiang et al., 2017; Guo et al., 2017; Hu et al.,
2017; Shaham et al., 2018; Nazi et al., 2019; Gupta et al., 2020; McConville et al., 2020). In this scenario, an unlabeled dataset and the number of clusters are given, while a few generic assumptions for a dataset can be available, such as 1) the cluster assumption, 2) the manifold assumption, and 3) the smoothness assumption; see Chapelle et al. (2006) for details. In addition, unlabeled data is often represented by a feature vector. As well as Scenario1, many authors have proposed end-to-end methods where an MLP model is trained by utilizing some generic assumption, while some have done sequential methods.
A scenario apart from Scenario1 and Scenario2 is reviewed in Appendix A.1.
In this study, we focus on Scenario2 for the following two reasons. First, we do not always encounter Scenario1 in practice. Second, if we prepare an efficient deep clustering algorithm for Scenario2, the algorithm can be incorporated into the third step of the sequential methods of Scenario1.
### Motivation behind and Goal
To understand the pros and cons of the recent state-of-the-art methods under Scenario2, we conduct preliminary experiments with eight deep clustering methods shown in Table 1. Among the eight methods, SCAN, IIC, CatGAN, and SELA are originally proposed in Scenario1. Therefore, we redefined the four methods to keep fairness in comparison; see Appendix E.3 for details of the alternative definitions that fit to our setting. For the datasets, we employed ten datasets as shown in the first row of Table 2. Here, Two-Moons and Two-Rings are synthetic, and the remaining eight are real-world; see more details in Section 4.2 and Appendix E.1. Throughout our experiments, we have found that the real-world datasets can be clustered well by performing K-means with the Euclidean distance, while the synthetic datasets cannot. We therefore regard the synthetic (resp. real-world) datasets as having _complex topology_ (resp. _non-complex topology_). Intuitively, a dataset is topologically non-complex if it is clustered well by K-means with the Euclidean distance. Otherwise, the dataset is thought to have a complex topology. For more details of complex and non-complex topologies, see Appendix E.2.
The experimental results are shown in Table 2. As shown in the table, for the complex topology datasets, all eight deep clustering methods of Table 1 except for SpectralNet fail to cluster the data points well compared to SC, a classical method. Note that, among the eight compared deep clustering methods, ones that incorporate a K-NN
\begin{table}
\begin{tabular}{c c c c} & Type & Explanation & Example(s) \\ \hline \multirow{2}{*}{Seq.} & \(\mathfrak{T}_{1}\) & Plain embedding based & DEC (Xie et al., 2016), SCAN (Van Gansbeke et al., 2020) \\ & \(\mathfrak{T}_{2}\) & Spectral embedding based & SpectralNet (Shaham et al., 2018) \\ \hline \multirow{4}{*}{End-to-end} & \(\mathfrak{T}_{3}\) & Variational bound based & VaDE (Jiang et al., 2017) \\ & \(\mathfrak{T}_{4}\) & Mutual information based & IMSAT (Hu et al., 2017), IIC (Ji et al., 2019) \\ & \(\mathfrak{T}_{5}\) & Generative adversarial net based & CatGAN (Springenberg, 2015) \\ & \(\mathfrak{T}_{6}\) & Optimal transport based & SELA (Asano et al., 2019) \\ \hline \end{tabular}
\end{table}
Table 1: Representative deep clustering methods (either sequential or end-to-end). They are categorized into six types, \(\mathfrak{T}_{1}\) to \(\mathfrak{T}_{6}\); example methods are shown for each type. Here, Seq. is an abbreviation of the word "Sequential".
(K-Nearest Neighbor) graph tend to perform better for the complex topology datasets than ones that do not. Here, the methods incorporating the graph are SpectralNet, IMSAT, IIC, and SCAN. On the other hand, for the non-complex topology datasets, only IMSAT sufficiently outperforms the three classical methods on average. In addition, the average clustering performance of IMSAT over the ten datasets is the best among the eight methods. As Table 2 suggests, to the best of our knowledge, almost none of the previous deep clustering methods perform sufficiently well for both non-complex and complex topology datasets. These results motivate our study.
Our aim is to propose a constraint that helps the objective of a deep clustering method in Scenario2 train the model so as to be effective for not only non-complex topology but also complex topology datasets. Such a versatile deep clustering objective can be helpful for users.
### Contributions
To achieve our goal, we propose the constraint for topological invariance. For two data points close to each other, the corresponding class probabilities computed by an MLP should be close. For example, in b) of Figure 1, any pair of two red points is close, while any pair of a red and a blue point are apart from each other. This constraint is introduced as a regularization based on the maximization of _symmetric InfoNCE_ between the two probability vectors; see Section 3.3. In order to define the two probability vectors, we introduce two kinds of _paired data_. One is used for a non-complex topology dataset, which is based on a K-NN graph with the Euclidean metric; see Definition 3. The other is used for a complex topology dataset, and it is based on a K-NN graph with the geodesic metric on the graph; see Definition 4. Both graphs are defined only with an unlabeled dataset. The geodesic metric is defined by the graph-shortest-path distance on the K-NN graph constructed with the Euclidean distance.
\begin{table}
\begin{tabular}{l l c c c c c c c c c} \hline & & Two-Moons & Two-Rings & MNIST & STL & CIFAR10 & CIFAR100 & Omniglot & 20news & SVHN & Reuters10K \\ \hline \multirow{4}{*}{\begin{tabular}{c} \end{tabular} } & K-means & 75.1 & 50.0 & 53.2 & 85.6 & 34.4 & 21.5 & 12.0 & 28.5 & 17.9 & 54.1 \\ & SC & **100.0** & **100.0** & 63.7 & 83.1 & 36.6 & - & - & - & 27.0 & 43.5 \\ & GMMC & 85.9 & 50.3 & 37.7 & 83.5 & 36.7 & 22.5 & 7.6 & **39.0** & 14.2 & 67.7 \\ \hline \multirow{4}{*}{\begin{tabular}{c} \end{tabular} } & DEC & 70.3(7.1) & 50.7(0.3) & \(\uparrow\)84.3 & \(\uparrow\)78.1(0.1) & \(\uparrow\)46.9(0.9) & \(\uparrow\)14.3(0.6) & \(\uparrow\)15.7(0.3) & 30.1(2.8) & \(\uparrow\)11.9(0.4) & \(\uparrow\)67.3(0.2) \\ & SpectralNet & **100(0)** & 99.9(0.0) & \(\uparrow\)82.6(3.0) & 90.4(2.1) & 44.3(0.6) & 22.7(0.3) & 2.5(0.1) & 6.3(0.1) & 10.4(0.1) & \(\uparrow\)66.1(1.7) \\ & VaDE & 50.0(0.0) & 50.0(0.0) & 83.0(2.6) & 68.8(12.7) & 39.5(0.7) & 12.13(0.2) & 1.0(0.0) & 12.7(5.1) & 32.9(3.2) & 70.5(2.5) \\ & IMSAT & 86.3(14.8) & 71.3(20.4) & 98.4(0.4) & 93.8(0.5) & 45.0(0.5) & 27.2(0.4) & **24.6(0.7)** & 37.4(1.4) & 54.8(5.1) & 72.7(4.6) \\ & IICS & 77.2(18.4) & 66.2(21.5) & 45.4(8.3) & 39.0(8.7) & 23.9(4.8) & 4.4(0.9) & 2.3(0.4) & 14.9(5.3) & 17.1(1.1) & 58.3(2.2) \\ & CatGAN\(\lx@sectionsign\) & 81.6(5.3) & 53.7(2.5) & 15.2(3.5) & 32.9(3.0) & 15.1(2.4) & 5.1(0.5) & 3.3(0.2) & 19.5(6.5) & 20.4(0.9) & 43.6(7.3) \\ & SELA\(\lx@sectionsign\) & 62.7(9.5) & 52.6(0.1) & 46.1(3.4) & 68.6(0.4) & 29.7(0.1) & 18.8(0.3) & 11.3(0.2) & 20.2(0.1) & 19.3(0.3) & 49.1(1.7) \\ & SCAN\(\lx@sectionsign\) & 85.7(22.6) & 75.1(23.1) & 82.1(3.7) & 92.8(0.5) & 43.3(0.6) & 24.6(0.1) & 17.6(0.4) & 38.4(1.1) & 23.2(1.6) & 63.4(4.2) \\ \hline \multirow{4}{*}{
\begin{tabular}{c} \end{tabular} } & MIST via \(\hat{I}_{\text{one}}\) & **100(0)** & 95.2(2.0) & 98.0(1.0) & 94.2(0.4) & 48.9(0.7) & **27.8(0.5)** & 24.0(1.0) & 38.3(2.8) & 58.7(3.5) & 72.7(3.3) \\ & **MIST** (ours) & **100(0)** & 93.3(16.3) & **98.6(0.1)** & **94.5(0.1)** & **49.8(2.1)** & **27.8(0.5)** & **24.6(1.1)** & 38.8(2.3) & **60.4(4.2)** & **73.4(4.6)** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of classical and deep clustering methods in terms of clustering accuracy (%). One trial and seven trials are conducted for the classical methods (top three rows) and the deep clustering methods, respectively. The mean and standard deviation of the accuracy are reported. The symbol "-" means that no result was returned by the clustering algorithm within one hour of running. Numbers with \(\dagger\) are copied from the corresponding studies. The symbol \(\lx@sectionsign\) means that an original method is redefined for Scenario2.
We emphasize that under Scenario2, it is impossible to incorporate powerful domain-specific knowledge into a deep clustering method. In addition, the maximization of the symmetric InfoNCE has not been studied yet in the context of deep clustering. Moreover, we support the legitimacy of our topological invariant constraint with several theoretical findings from mainly two perspectives: 1) in Section 3.1, the motivations and the potential of the proposed constraint are clarified from the standpoint of statistical dependency measured by MI (Mutual Information), and 2) in Section 3.2, an extended result of the theory on contrastive representation learning derived from Wang et al. (2022) is presented to discuss advantages of the symmetric InfoNCE over InfoNCE for deep clustering. Note that InfoNCE (van den Oord et al., 2018) was initially used for domain-specific representation learning such as vision tasks, NLP, and reinforcement learning. Here, the purpose of representation learning is to extract useful features that can be applied to a wide range of machine learning methods (Bengio et al., 2013). In contrast, the purpose of clustering is to assign cluster labels to unlabeled data points.
The main contributions are summarized as follows:
1. We propose a topological invariant constraint via the symmetric InfoNCE for the purpose of deep clustering in Scenario2, and then show the advantage by
Figure 1: Two-dimensional visualizations of clustering results. In the first row except for a), three visualizations are obtained via the following procedure: using the trained MLP, compute the output for each feature in MNIST, where dimension of the feature is 784. Then, the outputs are transformed into two-dimensional vectors by UMAP. Thereafter, true labels are assigned to those vectors. As for a), the original features are directly transformed into two-dimensional vectors by UMAP, and then the labels are assigned to the transformed vectors. For the second row, the true labels (resp. predicted cluster labels by the trained MLP) are assigned to the original features for obtaining b) (resp. d), f), and h)). Note that the original features of Two-Rings belong to \(\mathbb{R}^{2}\).
providing analysis from several theoretical aspects.
2. To evaluate the proposed constraint in numerical experiments, by applying the constraint to IMSAT, we define a deep clustering method named _MIST (Mutual Information maximization with local Smoothness and Topologically invariant constraints)_. In the experiments, we confirm that the proposed constraint enhances the accuracy of a deep clustering method. Furthermore, to the best of our knowledge, MIST achieves state-of-the-art clustering accuracy in Scenario2 for not only non-complex topology datasets but also complex topology datasets.
In Figure 1, a positive impact of the topological invariant constraint on IMSAT is visualized via UMAP (McInnes et al., 2018); compare d) and h) in the figure. See further details of Figure 1 and more two-dimensional visualizations of MIST for other datasets in Appendix E.4.
This paper is organized as follows. In Section 2, we overview related works. In Section 3, we explain details of the topological invariant constraint, and then show the theoretical properties. In numerical experiments of Section 4, we define MIST. Then, we evaluate the proposed constraint via MIST using two synthetic datasets and eight real-world datasets. In the same section, some case studies are also provided. In Section 4.7, we conclude this paper.
## 2 Related Works
In Section 2.1, we briefly explain the representative deep clustering methods shown as examples in Table 1, since they are the compared methods in the numerical experiments of Section 4. Then, the details of InfoNCE, which is closely related to our topological invariant constraint, are introduced in Section 2.2.
### Representative Deep Clustering Methods
Let us start from sequential methods of \(\mathfrak{T}_{1}\) and \(\mathfrak{T}_{2}\) in Table 1. In DEC (Xie et al., 2016) of \(\mathfrak{T}_{1}\), at first, a stacked denoising Auto-Encoder (AE) is trained with a set of unlabeled data points to extract the feature. Using the trained encoder, we can have the feature vectors. Then, K-means is used on the vectors in order to obtain the set of centroids. After that, being assisted by the centroids, the encoder is refined for the clustering. In SCAN (Van Gansbeke et al., 2020) of \(\mathfrak{T}_{1}\), a ResNet (He et al., 2016) is trained using augmented raw image datasets under SimCLR (Chen et al., 2020) criterion to extract the features. Then, the clustering MLP added to the trained ResNet is tuned by maximizing Shannon-entropy of the cluster label while two data points in a nearest neighbor relationship are forced to have same cluster label. Then, in SpectralNet (Shaham et al., 2018) of \(\mathfrak{T}_{2}\), at first, a Siamese network is trained by the predefined similarity scores on the K-NN graph. Then, being assisted by the trained Siamese network, a clustering DNN is trained. Note that the two networks (i.e., Siamese net and clustering net) are categorized into this method.
With regard to end-to-end methods of \(\mathfrak{T}_{3}\) to \(\mathfrak{T}_{6}\), in VaDE (Jiang et al., 2017) of \(\mathfrak{T}_{3}\), a variational AE is trained so that the latent representation of unlabeled data points has the
Gaussian mixture distribution. Here, the number of mixture components is equal to the number of clusters. For IMSAT and IIC of \(\mathfrak{T}_{4}\), in IMSAT (Hu et al., 2017), the clustering model is trained via maximization of the MI between a data point and the cluster label, while regularizing the model to be locally smooth; see Appendix A.3. Likewise, IIC (Ji et al., 2019) returns the estimated cluster labels using the trained model for clustering. The training criterion is based on maximization of the MI between the cluster label of a raw image and the cluster label of the transformed raw image; see Appendix A.2. IIC employs a CNN-based clustering model to take advantages of image-specific prior knowledge. Furthermore, in CatGAN (Springenberg, 2015) of \(\mathfrak{T}_{5}\), the neural network for clustering is trained to be robust against noisy data. Here, the noisy data is defined as a set of fake data points obtained from the generator that is trained to mimic the distribution of original data. Lastly, in SELA (Asano et al., 2019) of \(\mathfrak{T}_{6}\), a ResNet is trained for clustering using an augmented unlabeled dataset with pseudo labels under the cross-entropy minimization criterion. The pseudo labels are updated at the end of every epoch by solving an optimal transporting problem.
### Info Noise Contrastive Estimation
In representation learning, InfoNCE (Info Noise Contrastive Estimation) based on NCE has recently become a popular objective. The (\(q\)-)InfoNCE of the random variables \(Z\) and \(Z^{\prime}\) is defined by
\[I_{\mathrm{nce},q}(Z;Z^{\prime})=\mathbb{E}[q(Z,Z^{\prime})]-\mathbb{E}_{Z} \big{[}\log\mathbb{E}_{Z^{\prime}}[e^{q(Z,Z^{\prime})}]\big{]}, \tag{1}\]
where \(q\) is called _critic function_ that quantifies the dependency between \(Z\) and \(Z^{\prime}\). For any critic, \(q\)-InfoNCE provides a lower bound of an MI. Furthermore, we can see that the maximum value of \(I_{\mathrm{nce},q}\) is the MI, which is attained by \(q(z,z^{\prime})=\log p(z|z^{\prime})+[\text{any function of }z]\); see Poole et al. (2019); Belghazi et al. (2018) and Eq.(16) for details. Here, \(p(z|z^{\prime})\) is the conditional probability of \(z\) given \(z^{\prime}\). When it comes to the image processing (Chen et al., 2020; Grill et al., 2020), the observations, \(z\) and \(z^{\prime}\), are often given as different views or augmentations of an image. For example, \(z\) and \(z^{\prime}\) are observed by rotating, cropping, or saturating the same source image. Such a pair of images are regarded as positive samples (pair). A pair of transformed images coming from different source images are negative samples (pair).
Suppose we have samples \(z_{1},\cdots,z_{m}\) and \(z^{\prime}_{1},\cdots,z^{\prime}_{m}\), such that \(z_{i}\) and \(z^{\prime}_{i}\) are all positive samples for \(i=1,\cdots,m\) and \(z_{i}\) and \(z^{\prime}_{j}\) for \(i\neq j\) are negative samples. Then, InfoNCE is empirically approximated by
\[\hat{I}_{\mathrm{nce},q}=\frac{1}{m}\sum_{i=1}^{m}\log\frac{e^{q\big{(}z_{i},z^ {\prime}_{i}\big{)}}}{\frac{1}{m}\sum_{j=1}^{m}e^{q\big{(}z_{i},z^{\prime}_{j }\big{)}}}. \tag{2}\]
In order to approximate the MI by InfoNCE, one can use a parameterized model with a critic function \(q\). In the original work on InfoNCE (van den Oord et al., 2018), the critic \(q_{W}(z,z^{\prime})=z^{T}Wz^{\prime}\) with a weight matrix \(W\) is employed. Then, the maximum value of \(\hat{I}_{\mathrm{nce},q_{W}}\) w.r.t. \(W\) is computed to estimate the MI. As pointed out by van den Oord et al. (2018), Poole et al. (2019), and Tschannen et al. (2019), the empirical InfoNCE is bounded above by \(\log m\), so the estimate becomes loose when \(m\) is small or the MI \(I(Z;Z^{\prime})\) is large.
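As a concrete illustration, the following is a minimal NumPy sketch of the empirical InfoNCE of Eq.(2) with the bilinear critic \(q_{W}(z,z^{\prime})=z^{T}Wz^{\prime}\); the function name and the use of a dense score matrix are our own choices for illustration, not something taken from a released implementation.

```python
import numpy as np
from scipy.special import logsumexp

def empirical_infonce(Z, Zp, W):
    """Empirical InfoNCE of Eq.(2) with the bilinear critic q_W(z, z') = z^T W z'.

    Z, Zp : (m, d) arrays of positive pairs (z_i, z'_i); pairs (z_i, z'_j) with i != j act as negatives.
    W     : (d, d) weight matrix of the critic.
    """
    scores = Z @ W @ Zp.T                                    # scores[i, j] = q_W(z_i, z'_j)
    m = Z.shape[0]
    # log( e^{q(z_i, z'_i)} / ((1/m) * sum_j e^{q(z_i, z'_j)}) ), averaged over i
    log_ratio = np.diag(scores) - (logsumexp(scores, axis=1) - np.log(m))
    return log_ratio.mean()                                  # always bounded above by log(m)
```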
## 3 Proposed Constraint and its Theoretical Analysis
### Notations
In the following, \(\mathcal{D}=\{x_{i}\}_{i=1}^{n}\) (\(\forall i;x_{i}\in\mathbb{R}^{d}\)) is a set of unlabeled data, where \(n\) is the number of data points and \(d\) is the dimension of a data point. The number of clusters is denoted by \(C\). Here, let \(y_{i}\) denote the true label of \(x_{i}\). Let us define \(p\left(y|x\right)\) as the conditional discrete probability of a cluster label \(y\in\{1,2,...,C\}\) for a data point \(x\in\mathbb{R}^{d}\). The random variable corresponding to \(x\) (resp. \(y\)) is denoted by \(X\) (resp. \(Y\)). Let \(\Delta^{C-1}=\{z\in\mathbb{R}^{C}\mid z\geq 0,z^{\top}\mathbf{1}=1\}\) be the \((C-1)\)-dimensional probability simplex, where \(\mathbf{1}\) is the \(C\)-dimensional vector \((1,1,...,1)^{\top}\).
**Definition 1** (MLP model \(g_{\theta}\)): _Consider a DNN model \(g_{\theta}(x):\mathbb{R}^{d}\rightarrow\Delta^{C-1}\) with trainable set of parameters \(\theta\), where the activation for the last layer is defined by the \(C\)-dimensional softmax function. The \(y\)-th element of \(g_{\theta}(x)\) is denoted by \(g_{\theta}^{y}(x)\). Let \(\theta^{*}\) denote the trained set of parameters via a clustering objective, using an unlabeled dataset \(\mathcal{D}\). The predicted cluster label of \(x_{i}\in\mathcal{D}\) is defined by \(\hat{y}_{i}=\operatorname*{argmax}_{y\in\{1,\cdots,C\}}g_{\theta^{*}}^{y}(x_{i})\)._
### Preliminary
Consider Scenario2 of Section 1.2, where a set of unlabeled data \(\mathcal{D}=\{x_{i}\}_{i=1}^{n},x_{i}\in\mathbb{R}^{d}\) and the number of clusters \(C\) are given, while a few generic assumptions for the dataset can be available. We firstly in Section 3.3 introduce the topological invariant constraint based on symmetric InfoNCE and an MLP model \(g_{\theta}\). Then, in Section 3.4, some relations between the symmetric InfoNCE and the corresponding MI are theoretically analyzed. Thereafter, based on the analysis, we explain theoretical advantages of the symmetric InfoNCE over existing popular constraints such as IIC and InfoNCE in terms of deep clustering.
Before stating the mathematical definitions and properties, we briefly explain why the symmetric InfoNCE can enhance a deep clustering method as a topological invariant constraint. As mentioned in Section 1.4, the topological invariant constraint is expected to regularize \(g_{\theta}\) so that \(g_{\theta}(X)\approx g_{\theta}(X^{\prime})\in\Delta^{C-1}\) for any two geodesically close data points \(X,X^{\prime}\in\mathcal{D}\) in the original space \(\mathbb{R}^{d}\). In other words, the predicted cluster labels of \(X\) and \(X^{\prime}\) are enforced to be the same. For this regularization, InfoNCE and its variants are potentially useful. The reason is that, in representation learning, InfoNCE is empirically successful at making the following two feature vectors close to each other: 1) a feature vector returned by a DNN with a raw data point as input, and 2) a feature vector returned by the same DNN with an augmentation of that raw data point as input (van den Oord et al., 2018; Chen et al., 2020). Note that the feature vectors described in 1) and 2) are usually not in \(\Delta^{C-1}\) but in a high-dimensional Euclidean space. In this study, the symmetric InfoNCE between \(X\) and \(X^{\prime}\) is proposed as a constraint for topological invariance. The pair \((X,X^{\prime})\) is given by \((X,T(X))\), where \(T(X)\) is a transformation of \(X\); some practical transformations are introduced in Definitions 3 and 4 of Section 3.3.
### Topological Invariant Constraint
We aim to design a constraint for topological invariance that should satisfy the following condition; if clusters of \(\mathcal{D}\) have a non-complex (resp. complex) topology, the constraint assists a model \(g_{\theta}\) to predict the same cluster labels for \(x\in\mathcal{D}\) and \(x^{\prime}\in\mathcal{D}\) whenever \(x\) and \(x^{\prime}\) are close to each other in terms of the Euclidean (resp. geodesic) distance. In the sequel, we define the constraint via symmetric InfoNCE. Then, we investigate its theoretical properties.
Firstly, let us define a function \(q:\Delta^{C-1}\times\Delta^{C-1}\rightarrow\mathbb{R}\) as follows:
\[q(z,z^{\prime})=\log\left(\exp_{\alpha}\left(\tau(z^{\top}z^{\prime}-1)\right) \right), \tag{3}\]
where \(\alpha\in\mathbb{R}\) and \(0\leq\tau\leq|1-\alpha|^{-1}\). In addition, for \(u\in\mathbb{R}\), \(\exp_{\alpha}(u)\) is defined by \([1+(1-\alpha)u]_{+}^{1/(1-\alpha)}\) for \(\alpha\neq 1\) and \(e^{u}\) for \(\alpha=1\), where \([\ \cdot\ ]_{+}=\max\{\cdot,0\}\). The function \(q\) of Eq.(3) w.r.t. \(z\) and \(z^{\prime}\) is maximized if and only if \(z\) and \(z^{\prime}\) are the same one-hot vector. On the other hand, it is minimized if and only if \(z^{\top}z^{\prime}=0\) (i.e., \(z\) and \(z^{\prime}\) are orthogonal to each other).
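For concreteness, a minimal PyTorch sketch of this critic might look as follows; the helper names (`log_exp_alpha`, `critic_q`) are ours, and the clamping constant is a numerical-safety detail we add for illustration rather than something specified in the paper.

```python
import torch

def log_exp_alpha(u, alpha):
    """log exp_alpha(u): exp_alpha(u) = [1 + (1 - alpha) u]_+^{1/(1 - alpha)} for alpha != 1, and e^u for alpha = 1."""
    if alpha == 1.0:
        return u
    return torch.log(torch.clamp(1.0 + (1.0 - alpha) * u, min=1e-12)) / (1.0 - alpha)

def critic_q(z, zp, alpha=1.0, tau=1.0):
    """Critic q of Eq.(3) on the simplex: q(z, z') = log exp_alpha(tau (z^T z' - 1)).

    z, zp : (B, C) soft cluster assignments (each row in Delta^{C-1}).
    Returns the (B, B) matrix with entries q(z_i, z'_j).
    """
    inner = z @ zp.T                          # z_i^T z'_j, in [0, 1] for rows on the simplex
    return log_exp_alpha(tau * (inner - 1.0), alpha)
```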
We then define the transformation \(T\) for constructing a pair \((X,T(X))\) of geodesically close data points from \(X\) as follows.
**Definition 2** (Transformation \(T\)): _Let \(X\) be a \(d\)-dimensional random variable. Then, \(T:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) denote the transformation of \(X\), and it is also a random variable. The realization is denoted by \(t:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\). Given a data point \(x\), the function \(t\) is sampled from the conditional probability \(p(t|x)\)._
The probability \(p(t|x)\) is defined through a generative process. In this study, two processes, \(\mathcal{T}_{\mathfrak{e}}\) and \(\mathcal{T}_{\mathfrak{g}}\), are considered. The first (resp. second) one, \(\mathcal{T}_{\mathfrak{e}}\) (resp. \(\mathcal{T}_{\mathfrak{g}}\)), is defined using the K-NN graph with the Euclidean (resp. geodesic) distance, and is employed for non-complex (resp. complex) topology datasets.
**Definition 3** (Generative process \(\mathcal{T}_{\mathfrak{e}}\)): _Given an unlabeled dataset \(\mathcal{D}=\{x_{i}\}_{i=1}^{n}\), a natural number \(K_{0}\), and \(\beta\in[0,1)\) as inputs, then the generative process of a transformation is defined as follows. 1) At the beginning, build a K-NN graph with \(K=K_{0}\) on \(\mathcal{D}\) based on the Euclidean distance. 2) For all \(k=1+\lfloor\beta K_{0}\rfloor,\cdots,K_{0}\), define a function \(t^{(i\to k)}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) by \(x_{i}^{(k)}=t^{(i\to k)}(x_{i})\), where \(x_{i}^{(k)}\) is a \(k\)-th nearest neighbor data points of \(x_{i}\) on the graph. 3) Define the conditional distribution \(p(t|x_{i})\) as the uniform distribution on \(t^{(i\to k)},k=1+\lfloor\beta K_{0}\rfloor,\cdots,K_{0}\)._
**Definition 4** (Generative process \(\mathcal{T}_{\mathfrak{g}}\)): _Given an unlabeled dataset \(\mathcal{D}=\{x_{i}\}_{i=1}^{n}\), a natural number \(K_{0}\), and \(\beta\in[0,1)\) as inputs, then firstly build a K-NN graph based on the Euclidean distance with \(K=K_{0}\) on \(\mathcal{D}\). Then, in order to approximate the geodesic distance between \(x_{i}\) and \(x_{j}\), compute the graph-shortest-path distance. Let \(\mathfrak{g}_{ij}\) be the approximated geodesic distance between \(x_{i}\) and \(x_{j}\), and \(\mathfrak{G}\) be an \(n\times n\) matrix \((\mathfrak{g}_{ij})_{i,j=1,\cdots,n}\). For each \(i\), let \(\mathcal{M}_{i}=\{j\ |\ 0<\mathfrak{g}_{ij}<\infty\}\) be the set of indices where each \(x_{j}\) is a neighbor of \(x_{i}\) under the geodesic distance. For each \(i\), the generative process of the transformation \(t\) is given as follows. 1) For all \(k=|\mathcal{M}_{i}|-\lfloor\beta K_{\mathfrak{g}}\rfloor+1,\cdots,|\mathcal{M}_{i}|\), define a function \(t^{(i\to k)}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) by \(x_{i}^{(k)}=t^{(i\to k)}(x_{i})\)
_where \(x_{i}^{(k)}\) is a \(k\)-th geodesically nearest neighbor data points from \(x_{i}\) on \(\mathfrak{G}\) except in the case of \(\mathfrak{g}_{ij}=\infty\) and \(\mathfrak{g}_{ij}=0\). 2) Define the conditional distribution \(p(t|x_{i})\) as the uniform distribution on \(t^{(i\to k)},k=|\mathcal{M}_{i}|-\lfloor\beta K_{\mathfrak{g}}\rfloor+1, \cdots,|\mathcal{M}_{i}|\)._
The time and memory complexities of \(\mathcal{T}_{\mathfrak{e}}\) and \(\mathcal{T}_{\mathfrak{g}}\) are provided in Appendix D.2. Intuitively, when \(\beta=0\), \(\mathcal{T}_{\mathfrak{e}}\) picks a random neighbor of \(x\) as \(T(x)\) in the \(K_{0}\)-nearest neighbor graph, while \(\mathcal{T}_{\mathfrak{g}}\) picks a random neighbor with respect to the geodesic metric induced by the \(K_{0}\)-nearest neighbor graph. A larger \(\beta\) prevents \(\mathcal{T}_{\mathfrak{e}}\) and \(\mathcal{T}_{\mathfrak{g}}\) from picking the closest neighbors.
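To make the two generative processes concrete, the following is a minimal scikit-learn/SciPy sketch of one reading of the \(\beta=0\) case described above: \(\mathcal{T}_{\mathfrak{e}}\) samples a uniform neighbor on the Euclidean \(K_{0}\)-NN graph, and \(\mathcal{T}_{\mathfrak{g}}\) samples a uniform element of \(\mathcal{M}_{i}\), the set of points reachable from \(x_{i}\) under the graph-shortest-path (approximate geodesic) distance. The function names are ours, and the exact rank window for general \(\beta\) should follow Definitions 3 and 4.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def sample_transform_euclidean(X, K0, rng):
    """T_e with beta = 0: t(x_i) is a uniformly random point among the K0 Euclidean nearest neighbors of x_i."""
    G = kneighbors_graph(X, n_neighbors=K0, mode='connectivity')    # sparse CSR adjacency, self excluded
    picks = np.array([rng.choice(G.indices[G.indptr[i]:G.indptr[i + 1]]) for i in range(X.shape[0])])
    return X[picks]

def sample_transform_geodesic(X, K0, rng):
    """T_g with beta = 0: neighbors are ranked by the graph-shortest-path distance on the Euclidean K0-NN graph."""
    G = kneighbors_graph(X, n_neighbors=K0, mode='distance')
    D = shortest_path(G, directed=False)                            # D[i, j] = approximate geodesic distance (inf if unreachable)
    picks = np.empty(X.shape[0], dtype=int)
    for i in range(X.shape[0]):
        M_i = np.where(np.isfinite(D[i]) & (D[i] > 0))[0]           # the neighborhood set M_i of Definition 4
        picks[i] = rng.choice(M_i)
    return X[picks]
```

Here `rng` is a NumPy random `Generator`, e.g. `np.random.default_rng(0)`.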
Using the function \(q\) of Eq.(3), let us define \(I_{\mathrm{nce}}\equiv I_{\mathrm{nce},q}(g_{\theta}(X);g_{\theta}(T(X)))\) and \(I^{\prime}_{\mathrm{nce}}\equiv I_{\mathrm{nce},q}(g_{\theta}(T(X));g_{\theta }(X))\); see \(I_{\mathrm{nce},q}\) in Eq.(1). We then define the symmetric InfoNCE by \(\left(I_{\mathrm{nce}}+I^{\prime}_{\mathrm{nce}}\right)/2\). Then, the topological invariant constraint is defined as follows:
\[-M\leq-\frac{I_{\mathrm{nce}}+I^{\prime}_{\mathrm{nce}}}{2}\leq-M+\delta, \tag{4}\]
where \(\delta\) is a small fixed positive value, and \(M=\sup_{\theta}\left(I_{\mathrm{nce}}+I^{\prime}_{\mathrm{nce}}\right)/2\).
In practice, given a mini-batch \(\mathcal{B}\subseteq\mathcal{D}\), we can approximate \(-\left(I_{\mathrm{nce}}+I^{\prime}_{\mathrm{nce}}\right)/2\) by \(-(\hat{I}_{\mathrm{nce}}+\hat{I}^{\prime}_{\mathrm{nce}})/2\) (recall Eq.(2) for \(\hat{I}_{\mathrm{nce}}\)), where \(-\hat{I}_{\mathrm{nce}}\) is given by
\[-\hat{I}_{\mathrm{nce}}=-\frac{1}{|\mathcal{B}|}\sum_{x_{i}\in\mathcal{B}} \log\frac{e^{q(g_{\theta}(x_{i}),g_{\theta}(t_{i}(x_{i})))}}{\frac{1}{| \mathcal{B}|}\sum_{x_{j}\in\mathcal{B}}e^{q(g_{\theta}(x_{i}),g_{\theta}(t_{j }(x_{j})))}} \tag{5}\]
with the sampled transformation function \(t_{i}\) from \(p(t|x_{i})\) and \(-\hat{I}^{\prime}_{\mathrm{nce}}\) is given by switching two inputs in the function \(q\) of Eq.(5). Here, \(|\mathcal{B}|\) denotes the cardinality of \(\mathcal{B}\).
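A minimal PyTorch sketch of this mini-batch quantity, reusing the `critic_q` sketch given after Eq.(3), might look as follows; the function name is ours, and the \(\log|\mathcal{B}|\) term only shifts the value by a constant.

```python
import math
import torch

def neg_symmetric_infonce(p, p_t, alpha=1.0, tau=1.0):
    """-(I_nce + I'_nce)/2 estimated on a mini-batch, following Eq.(5) and its argument-switched version.

    p   : (B, C) outputs g_theta(x_i) on the simplex.
    p_t : (B, C) outputs g_theta(t_i(x_i)) for the transformed points.
    """
    B = p.shape[0]
    Q = critic_q(p, p_t, alpha, tau)     # Q[i, j] = q(g(x_i), g(t_j(x_j))); q is symmetric in its two arguments
    logB = math.log(B)
    i_nce = (torch.diagonal(Q) - (torch.logsumexp(Q, dim=1) - logB)).mean()        # anchors x_i, negatives t_j(x_j)
    i_nce_prime = (torch.diagonal(Q) - (torch.logsumexp(Q, dim=0) - logB)).mean()  # anchors t_i(x_i), negatives x_j
    return -0.5 * (i_nce + i_nce_prime)
```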
Figure 2: Illustration of the effect by minimizing point-wise positive loss \(\ell_{\mathrm{ps}}(x_{i})\) and point-wise negative loss \(\ell_{\mathrm{ng}}(x_{i})\). In both i) and ii), the colors (blue, magenta, and yellow) mean different labels, and the light-colored manifolds express true clusters. For i) (resp. ii)), the set of clusters builds Three-Blobs (resp. Two-Rings), and it is an example of non-complex (resp. complex) topology datasets. A pair of the small circle and triangle symbols with the same color means a pair of a data point \(x\) and the transformed data point \(t(x)\), where such pair is constructed by \(\mathcal{T}_{\mathfrak{e}}\) of Definition 3 (resp. \(\mathcal{T}_{\mathfrak{g}}\) of Definition 4) in i) (resp. ii)). The two data points connected by the red dash line (resp. blue straight or curved line) are enforced to be distant (resp. close) to each other by minimizing \(\ell_{\mathrm{ng}}(x_{i})\) (resp. \(\ell_{\mathrm{ps}}(x_{i})\)).
To understand the above empirical symmetric InfoNCE more, we decompose \(-(\hat{I}_{\text{nce}}+\hat{I}_{\text{nce}}^{\prime})/2\) into the following three terms:
\[-\log|\mathcal{B}|\underbrace{-\frac{1}{|\mathcal{B}|}\sum_{x_{i}\in\mathcal{B} }q\left(g_{\theta}(x_{i}),g_{\theta}\left(t_{i}(x_{i})\right)\right)}_{L_{\text {ps}}:\text{ positive loss}}+\underbrace{\frac{1}{|\mathcal{B}|}\sum_{x_{i}\in \mathcal{B}}\frac{1}{2}\log\left(\sum_{x_{j}\in\mathcal{B}}\sum_{x_{i^{\prime} }\in\mathcal{B}}e^{a(i,i^{\prime},j)}\right)}_{L_{\text{ng}}:\text{ negative loss}}, \tag{6}\]
where \(a\left(i,i^{\prime},j\right)=q\left(g_{\theta}(x_{i}),g_{\theta}\left(t_{j}(x_ {j})\right)\right)+q\left(g_{\theta}(x_{i^{\prime}}),g_{\theta}\left(t_{i}(x_ {i})\right)\right)\). Note that the decomposition is based on the fact that \(q(z,z^{\prime})=q(z^{\prime},z)\) for all \(z,z^{\prime}\in\Delta^{C-1}\). In Eq.(6), we call the second and the third term _positive loss_ and _negative loss_, respectively. These names are natural in the sense of metric learning (Sohn, 2016). Indeed, making \(L_{\text{ps}}\) smaller w.r.t. \(\theta\) leads to \(g_{\theta}(x_{i})\approx g_{\theta}\left(t_{i}(x_{i})\right)\) for all \(i\) due to the definition of \(q\). Thus, since \(t_{i}(x_{i})\) is a neighbor data point of \(x_{i}\), via minimization of \(L_{\text{ps}}\), the model predicts the same cluster labels for \(x_{i}\) and \(t_{i}(x_{i})\). Here, the neighborhood is defined with the Euclidean (resp. the geodesical) neighborhood on K-NN graph of \(\mathcal{D}\) through \(\mathcal{T}_{\mathfrak{c}}\) (resp. \(\mathcal{T}_{\mathfrak{g}}\)). On the other hand, making \(L_{\text{ng}}\) smaller leads to \(g_{\theta}(x_{i})\neq g_{\theta}\left(t_{j}(x_{j})\right)\) for all \(i,j\) and \(g_{\theta}(x_{i^{\prime}})\neq g_{\theta}\left(t_{i}(x_{i})\right)\) for all \(i^{\prime}\) and \(i\) due to the definition of \(q\). Thus, via minimization of \(L_{\text{ng}}\), the model can return non-degenerate clusters (i.e., not all the predicted cluster labels are the same).
In Figure 2, for simplicity, we illustrate effects brought by minimizing point-wise positive loss \(\ell_{\text{ps}}(x_{i})\) and point-wise negative loss \(\ell_{\text{ng}}(x_{i})\), which are defined in Eq.(6) as follows; \(\ell_{\text{ps}}(x_{i})=-q\left(g_{\theta}(x_{i}),g_{\theta}\left(t_{i}(x_{i} )\right)\right),\ \ell_{\text{ng}}(x_{i})=\frac{1}{2}\log\left(\sum_{x_{j}\in\mathcal{B}}\sum_{x _{i^{\prime}}\in\mathcal{B}}e^{a(i,i^{\prime},j)}\right).\)
### Theoretical Analysis
In this section, we investigate theoretical properties of the symmetric InfoNCE loss. In Section 3.1, we study the relationship between MI and symmetric InfoNCE. In Section 3.2, we show a theoretical difference between InfoNCE and symmetric InfoNCE.
### Relationship between Symmetric InfoNCE and MI
First we make clear the reason for selecting Eq.(3) as a critic function. Our explanation begins by deriving the optimal critic of the symmetric InfoNCE loss.
**Proposition 1**: _Let \(Z\) and \(Z^{\prime}\) denote two random variables having the joint probability density \(p\). Let \(I_{\text{nce},q}(Z;Z^{\prime})\) be the InfoNCE loss defined in Eq.(1). Let us define \(I_{\text{nce},q}(Z^{\prime};Z)\) by switching \(Z\) and \(Z^{\prime}\) of \(I_{\text{nce},q}(Z;Z^{\prime})\). Then, the following MI, \(I(Z;Z^{\prime}):=\mathbb{E}_{p(Z,Z^{\prime})}\left[\log\frac{p(Z,Z^{\prime})}{p(Z)p(Z^{\prime})}\right],\) is an upper bound of the symmetric InfoNCE, \(\frac{I_{\text{nce},q}(Z;Z^{\prime})+I_{\text{nce},q}(Z^{\prime};Z)}{2}\). Moreover, if the function \(q\) satisfies_
\[q(z,z^{\prime})=\log\frac{p(z,z^{\prime})}{p(z)p(z^{\prime})}+c,\,c\in\mathbb{ R}, \tag{7}\]
_then the equality \(I(Z;Z^{\prime})=\frac{I_{\text{nce},q}(Z;Z^{\prime})+I_{\text{nce},q}(Z^{ \prime};Z)}{2}\) holds. In other words, \(q\) satisfying Eq.(7) is the optimal critic._
The proof is shown in Appendix B.1.
Consider \(g_{\theta}(X)\) and \(g_{\theta}(T(X))\) as \(Z\) and \(Z^{\prime}\) of Proposition 1, respectively. Then, the symmetric InfoNCE \(\left(I_{\rm nce}+I^{\prime}_{\rm nce}\right)/2\) of Eq.(4) can be upper-bounded by
\[I(g_{\theta}(X);g_{\theta}(T(X))). \tag{8}\]
Thus, maximization of the symmetric InfoNCE (i.e., the constraint of Eq.(4)) is a reasonable approach to maximize the MI. Note that the computation of \(I(g_{\theta}(X);g_{\theta}(T(X)))\) is difficult, since density-estimation on \(\Delta^{C-1}\) is required.
It is interesting that the optimal critic of the symmetric InfoNCE loss is the pointwise MI of \(g_{\theta}(X)\) and \(g_{\theta}(T(X))\) up to an additive constant. Moreover, we remark that the function \(q\) of Eq.(3) is in fact designed based on the optimal critic, Eq.(7), of the symmetric InfoNCE. As shown in the equation, the optimal critic of the symmetric InfoNCE is \(q^{*}(z,z^{\prime})=\log\frac{p(z,z^{\prime})}{p(z)p(z^{\prime})}+c,c\in\mathbb{R}\). Thus, the joint probability density \(p(z,z^{\prime})\) is expressed by \(p(z,z^{\prime})\propto p(z)p(z^{\prime})e^{q^{*}(z,z^{\prime})}\). Hence, the critic function adjusts the statistical dependency between \(z\) and \(z^{\prime}\). In our study, we suppose that \(z,z^{\prime}\in\Delta^{C-1}\), and the critic \(q(z,z^{\prime})\) is expressed as an increasing function of \(z^{\top}z^{\prime}\). When \(z\) and \(z^{\prime}\) are both the same one-hot vector in \(\Delta^{C-1}\), \(p(z,z^{\prime})\) is assumed to be large. On the other hand, if \(z^{\top}z^{\prime}=0\), \(p(z,z^{\prime})\) is assumed to take a small value. We also introduce a one-dimensional parameter \(\alpha\) for the critic \(q_{\alpha}\) to tune the intensity of the dependency. Although there are many choices of critic functions, we here employ the \(\alpha\)-exponential function, because \(\exp_{\alpha}\) can express a wide range of common probabilities in statistics with only one parameter; see details of the \(\alpha\)-exponential function in Naudts (2009); Amari and Ohara (2011); Matsuzoe and Ohara (2012). Eventually, the model of the critic is given by \(p_{\alpha}(z,z^{\prime})\propto p(z)p(z^{\prime})\exp_{\alpha}\left(\tau(z^{\top}z^{\prime}-1)\right)\), where \(\alpha\in\mathbb{R}\) and \(0\leq\tau\leq|1-\alpha|^{-1}\). Note that the normalization constant of \(p_{\alpha}\) is not needed when we compute the symmetric InfoNCE. In our experiments, we consider both \(\alpha\) and \(\tau\) as hyper-parameters.
**Remark 1**: _The cosine-similarity function \(s(z,z^{\prime})=z^{\top}z^{\prime}/\|z\|_{2}\|z^{\prime}\|_{2},z,z^{\prime}\in \Delta^{C-1}\) is commonly used in the context of representation learning (Chen et al., 2020; Bai et al., 2021). However, we do not use the cosine-similarity function as the critic function \(q\) in Eq.(3). This is because in our problem the cosine-similarity function is not relevant to estimate the one-hot vector by the model \(g_{\theta}(x)\). Indeed, for \(q(z,z^{\prime})=s(z,z^{\prime})\) and \(C=2\), the pair \((z,z^{\prime})\) satisfying \(z=z^{\prime}=(1/2,1/2)^{\top}\in\Delta^{C-1}\) is a maximizer of \(q(z,z^{\prime})\), i.e., there exists a pair of non-one-hot vectors \(z\) and \(z^{\prime}\) that minimizes \(L_{\rm ps}\) in Eq.(6)._
Next, we investigate a few more properties of the symmetric InfoNCE loss from the perspective of MI. First we present a theoretical comparison between the symmetric InfoNCE loss and IIC (see IIC in Section 2.1 and Appendix A.2).
**Proposition 2**: _Consider a feature \(X\) and its transformation function \(T\) of Definition 2. Let \(Y\) (resp. \(Y^{\prime}\)) denote a cluster label of \(X\) (resp. \(T(X)\)). Then, the following inequality holds:_
\[I(Y;Y^{\prime})\leq I(g_{\theta}(X);g_{\theta}(T(X))), \tag{9}\]
_where \(g_{\theta}\) is the same model as introduced in Definition 1._
The proof is shown in Appendix B.2.
The above data processing inequality guarantees that \(I(g_{\theta}(X);g_{\theta}(T(X)))\) brings richer information than \(I(Y;Y^{\prime})\) used in IIC. Since our constraint is related to \(I(g_{\theta}(X);g_{\theta}(T(X)))\), Eq.(9) indicates the advantage of ours over IIC. To discuss the consequence of Proposition 2 in more detail, we provide a statistical analysis on the gap between the following two quantities:
1. The maximum value \(I(g_{\theta}(X);g_{\theta}(T(X)))\) w.r.t. \(\theta\),
2. The mutual information evaluated at \(\widehat{\theta}\), where \(\widehat{\theta}\) is the parameter maximizing the empirical symmetric InfoNCE.
To the best of our knowledge, such statistical analysis is not provided in previous theoretical studies related to InfoNCE.
**Theorem 1** (Informal version): _Consider the empirical symmetric InfoNCE of Section 3.3 with a critic \(q\in\mathcal{Q}\) for a dataset \(\mathcal{D}=\{x_{i}\}_{i=1}^{n}\). Here, \(\mathcal{Q}\) is a set of critics defined as follows: \(\mathcal{Q}=\{\phi_{(\alpha,\tau)}\,:\,(\alpha,\tau)\in\Xi\}\), where \(\phi_{(\alpha,\tau)}(z^{\top}z^{\prime})=\log\left(\exp_{\alpha}\left(\tau(z^{ \top}z^{\prime}-1)\right)\right)\) (see Eq.(3)), and \(\Xi\) is a set of all possible \((\alpha,\tau)\) pairs. Let \(\widehat{I}_{\mathrm{sym,nce},q}(\theta)\) denote the empirical symmetric InfoNCE, where \(\theta\) is a set of parameters in \(g_{\theta}\) of Definition 1. Let us define \(\widehat{\theta}\) by \(\widehat{\theta}=\arg\max_{\theta}\sup_{q\in\mathcal{Q}}\widehat{I}_{\mathrm{ sym,nce},q}(\theta).\) We define \(\theta^{*}\) as the maximizer of \(I(g_{\theta}(X);g_{\theta}(T(X)))\) w.r.t \(\theta\). Suppose that \(0\leq\delta\) is a constant. Then, with the probability greater than \(1-\delta\), the gap between \(I(g_{\theta^{*}}(X);g_{\theta^{*}}(T(X)))\) and \(I(g_{\widehat{\theta}}(X);g_{\widehat{\theta}}(T(X)))\) is given by_
\[\begin{split}& I(g_{\theta^{*}}(X);g_{\theta^{*}}(T(X)))-I(g_{ \widehat{\theta}}(X);g_{\widehat{\theta}}(T(X)))\\ &\leq(\mathrm{Approx.\ Err.})+(\mathrm{Gen.\ Err.})+c\,\sqrt{ \frac{\log(1/\delta)}{n}},\end{split} \tag{10}\]
_where \(c>0\) is a constant, and Approx. Err. (resp. Gen. Err.) is short for Approximation Error (resp. Generalization Error). Note that the generalization error term \((\mathrm{Gen.\ Err.})\) consists of Rademacher complexities with a set of neural network models._
See Appendix B.3 for the proof of the formal version.
From Theorem 1, the gap indeed gets close if the following A1) and A2) hold:
1. \((\mathrm{Approx.\ Err.})\) of Eq.(10) is small (i.e., the set \(\mathcal{Q}\) contains a rich quantity of critic functions).
2. \((\mathrm{Gen.\ Err.})\) and \(\sqrt{\frac{\log(1/\delta)}{n}}\) of Eq.(10) are small.
It is known that the Rademacher complexity of a kind of neural network models is \(O(n^{-1/2})\); see Bartlett and Mendelson (2002). Thus, the condition A2) can be satisfied if the sample size \(n\) is large enough. Moreover, by combining Proposition 2 with Theorem 1, we obtain the following implication: if \(n\) is sufficiently large, then the gap between the MI, \(I(g_{\theta^{*}}(X);g_{\theta^{*}}(T(X)))\), and the plug-in estimator with the optimal estimator \(\widehat{\theta}\) of the empirical symmetric InfoNCE is reduced. On the other hand, from Proposition 2, the MI of the pair \(Y\) and \(Y^{\prime}\) is always less than or equal to that of the
pair \(g_{\theta}(X)\) and \(g_{\theta}(T(X))\). Since IIC is an empirical estimator of the MI, \(I(Y,Y^{\prime})\), the statistical dependency via MI of the probability vectors \(g_{\theta}(X)\) and \(g_{\theta}(T(X))\) obtained by optimizing the symmetric InfoNCE can be greater than that of \(Y\) and \(Y^{\prime}\) learned through the optimization of IIC. Therefore, the symmetric InfoNCE has a more potential to work as a topologically invariant constraint for deep clustering than other MIs such as IIC.
Note that in almost the same way as Theorem 1, it is possible to derive a similar result for the gap between the following two quantities: 1) \(I(g_{\theta^{*}}(X);g_{\theta^{*}}(T(X)))\) and 2) \(\max_{\theta}\sup_{q\in\mathcal{Q}}\widehat{I}_{\mathrm{sym,nce},q}(\theta)\). This fact indicates that if the upper bound derived in a similar way to Eq.(10) is small enough, then the empirical symmetric InfoNCE has the potential to strengthen the dependency between \(g_{\theta}(X)\) and \(g_{\theta}(T(X))\).
### Further Motivations behind the Symmetric InfoNCE Loss
We also leverage the theoretical result on contrastive representation learning from Wang et al. (2022), in order to explain the difference between InfoNCE and symmetric InfoNCE.
**Theorem 2**: _Let us define \(X\in\mathbb{R}^{d}\) and \(Y\in\{1,\cdots,C\}\) as described in Section 3.1. Let \(Z=g_{\theta}(X)\) and \(Z^{\prime}=g_{\theta}(T(X))\), where \(g_{\theta}:\mathbb{R}^{d}\rightarrow\Delta^{C-1}\) and \(T:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) are given by Definition 1 and 2, respectively. The symmetric InfoNCE \((I_{\mathrm{nce}}+I^{\prime}_{\mathrm{nce}})/2\) of Eq.(4) is supposed to set \(\alpha=1\) and a fixed \(\tau\) for the critic function of Eq.(3). Assume that \(p(Y)\) is a uniform distribution. Let \(\mathcal{L}^{\widetilde{\mu}}_{\mathrm{CE,Raw}}(g_{\theta})\) denote the mean supervised loss, which is given by \(\mathcal{L}^{\widetilde{\mu}}_{\mathrm{CE,Raw}}(g_{\theta})=-\mathbb{E}_{p(Z,Y)}\left[\log\frac{\exp(Z^{\top}\widetilde{\mu}_{Y})}{\sum_{k=1}^{C}\exp(Z^{ \top}\widetilde{\mu}_{k})}\right],\) where \(\widetilde{\mu}_{k}=\tau\cdot\mathbb{E}_{p(Z^{\prime}|Y=k)}[Z^{\prime}],k\in \{1,\cdots,C\}\). In other words, \(\mathcal{L}^{\widetilde{\mu}}_{\mathrm{CE,Raw}}(g_{\theta})\) is the cross-entropy loss via a linear evaluation layer, whose parameters are \(\widetilde{\mu}=\left(\widetilde{\mu}_{1},\cdots,\widetilde{\mu}_{C}\right) \in\mathbb{R}^{C\times C}\). Similarly we define \(\mathcal{L}^{\widetilde{\mu}}_{\mathrm{CE,Aug}}(g_{\theta})\) by \(\mathcal{L}^{\mu}_{\mathrm{CE,Aug}}(g_{\theta})=-\mathbb{E}_{p(Z^{\prime},Y )}\left[\log\frac{\exp(Z^{\prime\top}\mu_{Y})}{\sum_{k=1}^{C}\exp(Z^{\prime \top}\mu_{k})}\right],\) where \(\mu_{k}=\tau\cdot\mathbb{E}_{p(Z|Y=k)}[Z],k\in\{1,\cdots,C\}\), and \(\mu=(\mu_{1},\cdots,\mu_{C})\). Let us introduce the symmetric mean supervised loss as \(\mathcal{L}^{\mu,\widetilde{\mu}}_{\mathrm{SCE}}(g_{\theta})=(\mathcal{L}^{ \widetilde{\mu}}_{\mathrm{CE,Raw}}(g_{\theta})+\mathcal{L}^{\mu}_{\mathrm{CE,Aug }}(g_{\theta}))/2\). Then, we have_
\[-\frac{I_{\mathrm{nce}}+I^{\prime}_{\mathrm{nce}}}{2}-\frac{1}{2} \left(\sqrt{\mathrm{Var}(Z|Y)}+\sqrt{\mathrm{Var}(Z^{\prime}|Y)}\right)-\frac {1}{2e}\mathrm{Var}(\exp(\tau Z^{\top}Z^{\prime}))\] \[\leq\mathcal{L}^{\mu,\widetilde{\mu}}_{\mathrm{SCE}}(g_{\theta}) -\log C\] \[\leq-\frac{I_{\mathrm{nce}}+I^{\prime}_{\mathrm{nce}}}{2}+\frac{1} {2}\left(\sqrt{\mathrm{Var}(Z|Y)}+\sqrt{\mathrm{Var}(Z^{\prime}|Y)}\right),\]
_where_
\[\mathrm{Var}(Z|Y) =\mathbb{E}_{p(Y)}[\mathbb{E}_{p(Z|Y)}[\|\tau Z-\mu_{Y}\|_{ \infty}^{2}]],\] \[\mathrm{Var}(Z^{\prime}|Y) =\mathbb{E}_{p(Y)}[\mathbb{E}_{p(Z^{\prime}|Y)}[\|\tau Z^{\prime }-\mu_{Y}\|_{\infty}^{2}]],\] \[\mathrm{Var}(\exp(\tau Z^{\top}Z^{\prime})) =\mathbb{E}_{p(Z)p(Z^{\prime})}[(\exp(\tau Z^{\top}Z^{\prime})- \mathbb{E}_{p(Z)p(Z^{\prime})}[\exp(\tau Z^{\top}Z^{\prime})])^{2}].\]
The proof is shown in Appendix B.4.
**Remark 2**: _In Theorem 2, the critic function with \(\alpha=1\) is considered for the sake of simplicity. We can derive almost the same upper and lower bounds for the symmetric InfoNCE using the critic of Eq.(3) with \(\alpha\) such that \(1-1/\tau<\alpha<1\). The proof is the same as that of Theorem 2. We use the concavity of the function \(u\mapsto\log\exp_{\alpha}(u)\) and the inequality \(\log\exp_{\alpha}(x+y)\leq\log\exp_{\alpha}(x)+|y|/(1-(1-\alpha)\tau)\) for \(x,x+y\in[-\tau,0]\)._
Our result includes four technical differences and modifications from Wang et al. (2022) as follows: 1) Theorem 2 is intended for the symmetric InfoNCE loss. 2) We do not assume that any positive pair \((Z,Z^{\prime})\sim p(Z,Z^{\prime})\) has the identical label distribution given the representation (i.e., we do not rely on the assumption \(p(Y|Z)=p(Y|Z^{\prime})\)). Note that the assumption of \(p(Y|Z)=p(Y|Z^{\prime})\) will not hold in practical settings. For instance, suppose that we have an image \(X\). If \(X\) is cropped, then the cropped image \(T(X)\) may have lost some information included in \(X\), which would result in the case where the distribution of \(X\) and that of \(T(X)\) do not agree. 3) In the proof of Theorem 2 (see Proposition 4 in Appendix B.4), we use the sharpened Jensen's inequality (Liao and Berg, 2019) in order to make our proof simpler. On the other hand, Theorem 4.2 of Wang et al. (2022) is obtained by utilizing Corollary 3.5 of Budimir et al. (2000). 4) We consider the case in which the distribution of a random variable representing unlabeled data and one of its augmentation data are the same. In our setup, if \(p(Z,Y)=p(Z^{\prime},Y)\) holds, then we have \(\mathcal{L}_{\mathrm{CE,Raw}}^{\tilde{\mu}}(g_{\theta})=\mathcal{L}_{\mathrm{ CE,Aug}}^{\mu}(g_{\theta})\). In general, however, the probability distribution of \(Z\) and \(Z^{\prime}\) are not necessarily the same. More precisely, let \((\Omega,\mathcal{F},P)\) be a probability space and \(X\) be a random variable on \(\Omega\). Then let us consider the push-forward distribution \(Z_{\#}P\) and \(Z^{\prime}_{\#}P\). Since the transformation map \(T\) is also a random variable, generally these distributions are distinct from each other. We avoid this issue by starting from the general setting.
Furthermore, our result gives the following novel insight into the theoretical understanding of the symmetric InfoNCE: the symmetric InfoNCE reduces both \(\mathcal{L}_{\mathrm{CE,Raw}}^{\tilde{\mu}}(g_{\theta})\) and \(\mathcal{L}_{\mathrm{CE,Aug}}^{\mu}(g_{\theta})\) at the same time. This property could explain why the symmetric InfoNCE performs more stably in practice than InfoNCE as a constraint for deep clustering methods; see also Table 2, which shows the comparison of InfoNCE (MIST via \(\hat{I}_{\mathrm{nce}}\)) and symmetric InfoNCE (MIST).
For further comparison between symmetric InfoNCE, InfoNCE, and SimCLR (Chen et al., 2020), see Appendix C.
## 4 Numerical Experiments
Throughout this section, we aim to evaluate the efficiency of the symmetric InfoNCE as topological invariant constraint for a deep clustering method. To this end, at first in Section 4.1, we define a deep clustering method of Scenario2 named MIST by applying the symmetric InfoNCE to IMSAT (Hu et al., 2017). The reason why we employ IMSAT is that it performs the best on average among deep clustering methods in Table 1. Then, in Section 4.5, we compare MIST and IMSAT in terms of clustering accuracy to observe the benefits of the symmetric InfoNCE, while comparing MIST with the other representative methods as well. Thereafter in Section 4.6, we conduct ablation studies on MIST objective to understand the effect of each term in Eq.(11). At
last in Section 4.7, using MIST, we examine robustness of important hyper-parameters in the symmetric InfoNCE.
### MIST: Application of Symmetric InfoNCE to IMSAT
Given a mini-batch \(\mathcal{B}\subseteq\mathcal{D}\), by applying our empirical symmetric InfoNCE of Eq.(6) to the objective of IMSAT (see the objective in Eq.(15) of Appendix 1.2), we define the following objective of MIST:
\[\theta^{*}=\operatorname*{argmin}_{\theta}\Biggl{[}\underbrace{R_{\text{vat}}\left(\mathcal{B};\theta\right)}_{\text{\textcircled{A}}}-\mu\Biggl{\{}\eta\underbrace{H(Y)}_{\text{\textcircled{B}}}-\underbrace{H(Y|X)}_{\text{\textcircled{C}}}-\gamma\underbrace{\left(L_{\text{ps}}+L_{\text{ng}}\right)}_{\text{\textcircled{D}}}\Biggr{\}}\Biggr{]}, \tag{11}\]
where \(\mu\), \(\eta\) and \(\gamma\) are positive hyper-parameters. The symbol \(\textcircled{A}\) expresses the VAT (Virtual Adversarial Training) loss (Miyato et al., 2019); see Eq.(13) of Appendix 1.1. In addition, \(\textcircled{B}\) and \(\textcircled{C}\) denote the Shannon entropy (Cover, 1999) of a cluster label \(Y\) and the conditional entropy of \(Y\) given a feature \(X\), respectively. Moreover, minimization of the term \(\textcircled{D}\) is equivalent to maximization of the empirical symmetric InfoNCE. Note that the major difference between MIST and IMSAT is the introduction of term \(\textcircled{D}\). The minimization problem of Eq.(11) is solved via SGD (Stochastic Gradient Descent) (Shalev-Shwartz and Ben-David, 2013) in our numerical experiments. See Appendix D.1 for further details of the MIST objective, the pseudo algorithm (Algorithm 1), and the diagram (Table 5).
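As an illustration of how the terms other than the VAT regularizer combine, a simplified PyTorch sketch of one mini-batch of Eq.(11) might look as follows. The VAT term \(\textcircled{A}\) is omitted here because it requires the adversarial-perturbation procedure of Miyato et al. (2019); the entropies are estimated from the batch, the `neg_symmetric_infonce` sketch given after Eq.(5) is reused for term \(\textcircled{D}\) (which equals \(L_{\mathrm{ps}}+L_{\mathrm{ng}}\) up to the constant \(-\log|\mathcal{B}|\)), and all names are ours rather than the released implementation's.

```python
import torch

def mist_partial_loss(model, x, x_t, mu, eta, gamma, alpha=1.0, tau=1.0, eps=1e-12):
    """Mini-batch version of Eq.(11) without the VAT term (A): -mu * { eta*H(Y) - H(Y|X) - gamma*(L_ps + L_ng) }."""
    p = model(x)                                        # (B, C) rows on the simplex, g_theta(x_i)
    p_t = model(x_t)                                    # g_theta(t_i(x_i)), with t_i sampled from T_e or T_g
    p_marg = p.mean(dim=0)                              # empirical marginal of the cluster label over the batch
    H_Y = -(p_marg * torch.log(p_marg + eps)).sum()                 # term (B)
    H_Y_given_X = -(p * torch.log(p + eps)).sum(dim=1).mean()       # term (C)
    sym_nce = neg_symmetric_infonce(p, p_t, alpha, tau)             # term (D) up to an additive constant
    return -mu * (eta * H_Y - H_Y_given_X - gamma * sym_nce)
```

In training, the VAT loss would be added to this quantity and the sum minimized with the optimizer described in Section 4.3.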
### Dataset Description and Evaluation Metric
We use two synthetic datasets and eight real-world benchmark datasets in our experiments. All the ten datasets are given as feature vectors. For the synthetic datasets, we employ Two-Moons and Two-Rings of scikit-learn (Geron, 2019). The real-world datasets are MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011), STL (Coates et al., 2011), CIFAR10 (Torralba et al., 2008), CIFAR100 (Torralba et al., 2008), Omniglot (Lake et al., 2011), 20news (Lang, 1995) and Reuters10K (Lewis et al., 2004). The first six real-world datasets originally belong to the image domain and the last two originally belong to the text domain. As for the characteristic of each dataset, Two-Moons and Two-Rings are low-dimensional datasets with complex topology. MNIST, STL, and CIFAR10 are balanced datasets with the small number of clusters. CIFAR100, Omniglot, and 20news are balanced datasets with the large number of clusters. SVHN and Reuters10K are imbalanced datasets. For further details of the above ten datasets, see Appendix E.1.
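For reference, the two synthetic datasets can be generated with scikit-learn as in the following sketch; the sample sizes and noise levels shown here are illustrative choices, not the settings used in the experiments.

```python
from sklearn.datasets import make_moons, make_circles

X_moons, y_moons = make_moons(n_samples=2000, noise=0.05, random_state=0)                  # Two-Moons
X_rings, y_rings = make_circles(n_samples=2000, noise=0.05, factor=0.5, random_state=0)    # Two-Rings
```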
In the unsupervised learning scenario, we adopt the standard metric for evaluating clustering performance, which measures how close the estimated cluster labels are to the ground truth under a permutation. For an unlabeled dataset \(\left\{x_{i}\right\}_{i=1}^{n}\), let \(\left\{y_{i}\right\}_{i=1}^{n}\) and \(\left\{\hat{y}_{i}\right\}_{i=1}^{n}\) be its true cluster label set and estimated cluster label set, respectively. Suppose that both the true labels \(y_{i}\) and the estimated labels \(\hat{y}_{i}\) take values in the same range \(\left\{1,\cdots,C\right\}\). The _clustering accuracy_ ACC is defined by \(\operatorname{ACC}\left(\%\right)=100\times\max_{\sigma}\frac{\sum_{i=1}^{n}\mathbb{I}[y_{i}=\sigma(\hat{y}_{i})]}{n}\), where \(\sigma\) ranges over all permutations of cluster labels, and \(\mathbb{I}[\ \cdot\ ]\) is the indicator function. The optimal assignment of \(\sigma\) can be computed using the Kuhn-Munkres algorithm (Kuhn, 1955).
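For reference, ACC under the optimal permutation can be computed with the Hungarian (Kuhn-Munkres) solver in SciPy; the sketch below is only illustrative and assumes 0-based integer labels.

```
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """ACC (%) under the best one-to-one matching of cluster labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    C = int(max(y_true.max(), y_pred.max())) + 1
    # count[p, t] = number of points with estimated label p and true label t
    count = np.zeros((C, C), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        count[p, t] += 1
    # Maximizing the matched counts is a linear assignment problem
    rows, cols = linear_sum_assignment(-count)
    return 100.0 * count[rows, cols].sum() / y_true.size
```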
### Statistical Model and Optimization
Throughout all our experiments, we fix our clustering neural network model \(g_{\theta}(x)\in\Delta^{C-1}\) to the following simple and commonly used MLP architecture with a softmax output (Hinton et al., 2012): \(d-1200-1200-C\), where \(d\) is the dimension of the feature vector. We apply the ReLU activation function (Nair and Hinton, 2010) and BatchNorm (Ioffe and Szegedy, 2015) to all hidden layers. In addition, \(\theta\) is initialized by He initialization (He et al., 2015). For optimizing the model, we employ the Adam optimizer (Kingma and Ba, 2015) with a learning rate of \(0.002\).
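The following PyTorch snippet sketches this model and optimizer setup; the class name is illustrative, and the explicit He initialization mirrors the description above.

```
import torch
import torch.nn as nn

class ClusteringMLP(nn.Module):
    """d-1200-1200-C MLP with ReLU, BatchNorm, and a softmax output (g_theta)."""
    def __init__(self, d, C):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, 1200), nn.BatchNorm1d(1200), nn.ReLU(),
            nn.Linear(1200, 1200), nn.BatchNorm1d(1200), nn.ReLU(),
            nn.Linear(1200, C),
        )
        for m in self.net:
            if isinstance(m, nn.Linear):
                nn.init.kaiming_normal_(m.weight)   # He initialization
                nn.init.zeros_(m.bias)

    def forward(self, x):
        return self.net(x).softmax(dim=1)           # point on the simplex Delta^{C-1}

model = ClusteringMLP(d=784, C=10)                  # e.g., MNIST
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)
```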
We implemented MIST1 in Python with the PyTorch library (Ketkar and Moolayil, 2017). All experiments are run on an NVIDIA TITAN RTX GPU with 24 GiB of GDDR6 memory.
Footnote 1: [https://github.com/betairylia/MIST](https://github.com/betairylia/MIST) [Last accessed 23-July-2022]
### Compared Methods
As baseline methods, we employ the following three classical clustering methods: K-means (MacQueen, 1967), SC (Ng et al., 2002) and GMMC (Day, 1969). For deep clustering methods, we employ representative deep clustering methods from \(\mathfrak{T}_{1}\) to \(\mathfrak{T}_{6}\) of Table 1, MIST via \(\hat{I}_{\rm nce}\), and MIST (our method) of Eq.(11). Here, MIST via \(\hat{I}_{\rm nce}\) is defined by replacing \(-(L_{\rm ps}+L_{\rm ng})\) in Eq.(11) with \(-\hat{I}_{\rm nce}\) of Eq.(5). We include MIST via \(\hat{I}_{\rm nce}\) to check how much more efficiently the symmetric InfoNCE can enhance a deep clustering method compared with the original InfoNCE. In both MIST and MIST via \(\hat{I}_{\rm nce}\), \(\mathcal{T}_{\mathfrak{g}}\) of Definition 4 (resp. \(\mathcal{T}_{\mathfrak{e}}\) of Definition 3) is employed for the synthetic datasets (resp. real-world datasets). For further details of hyper-parameter tuning with MIST and MIST via \(\hat{I}_{\rm nce}\), see Appendix E.5. Moreover, from \(\mathfrak{T}_{2}\), \(\mathfrak{T}_{3}\), \(\mathfrak{T}_{5}\), and \(\mathfrak{T}_{6}\), SpectralNet (Shaham et al., 2018), VaDE (Jiang et al., 2017), CatGAN (Springenberg, 2015) and SELA (Asano et al., 2019) are respectively examined. From \(\mathfrak{T}_{1}\), DEC (Xie et al., 2016) and SCAN (Van Gansbeke et al., 2020) are examined. From \(\mathfrak{T}_{4}\), IMSAT (Hu et al., 2017) and IIC (Ji et al., 2019) are examined. Note that SCAN, IIC, CatGAN, and SELA were originally proposed for Scenario1 of Section 1.2. Therefore, we redefine those methods to fit Scenario2 in our experiments. The redefinitions and implementation details of all the existing methods are described in Appendix E.3.
### Analysis from Table 2
As briefly explained in Section 1.3, the average clustering accuracy and its standard deviation on each dataset for each clustering method are reported in Table 2. First, since MIST clearly outperforms IMSAT on almost all the datasets, we can observe the benefit of the symmetric InfoNCE. Especially for Two-Moons and Two-Rings (the two complex topology datasets), it should be emphasized that the symmetric InfoNCE with \(\mathcal{T}_{\mathfrak{g}}\) of Definition 4 brings a significant enhancement to IMSAT. In addition, for CIFAR10 and SVHN, it brings a noticeable gain to IMSAT.
Comparing MIST and SpectralNet, MIST does not perform as stably as SpectralNet on the Two-Rings dataset. However, MIST with a single DNN requires a smaller memory footprint than SpectralNet with two DNNs. Moreover, the average performance of MIST on the eight real-world datasets is much better than that of SpectralNet.
Furthermore, by comparing MIST and MIST via \(\hat{I}_{\mathrm{nce}}\), we observe that the symmetric InfoNCE enhances IMSAT more than the original InfoNCE does on average. This observation matches Theorem 2.
### Ablation Study for MIST Objective
Recall terms \(\textcircled{1}\) to \(\textcircled{4}\) in Eq.(11). Here, we examine six variants of the MIST objective of Eq.(11), which are shown in the first column of Table 3. For example, (\(\textcircled{3}\), \(\textcircled{4}\)) means that only \(\textcircled{3}\) and \(\textcircled{4}\) are used to define a variant of the MIST objective, where the two terms are linearly combined using a coefficient hyper-parameter. The details of hyper-parameter tuning for each combination are described in Appendix E.5. For this study, Two-Rings, MNIST, CIFAR10, 20news, and SVHN are employed.
Firstly, by comparing each variant in Table 3 with its counterpart that adds \(\textcircled{4}\), together with the comparison between IMSAT (i.e., \(\textcircled{1}\), \(\textcircled{2}\), \(\textcircled{3}\)) and the MIST results in Table 2, we see a positive effect of the symmetric InfoNCE across the five datasets on average. Especially for the complex topology dataset (i.e., Two-Rings), the effect is very positive. Secondly, the comparison between the variants with and without \(\textcircled{1}\) indicates that VAT (Miyato et al., 2019) works positively for clustering tasks. Thirdly, the comparison between the variants with and without \(\textcircled{2}\) shows that the effect of maximizing \(H(Y)\) is positive. For further analysis of \(\textcircled{1}\) to \(\textcircled{4}\), see Appendix D.
To sum up, although the combination of \(\textcircled{1}\), \(\textcircled{2}\), and \(\textcircled{3}\), i.e., IMSAT, provides competitive clustering performance on datasets with non-complex topology, the symmetric InfoNCE brings additional benefit to this combination not only on the non-complex topology datasets but also on the complex topology dataset.
\begin{table}
\begin{tabular}{c c c c c c} \hline & Two-Rings & MNIST & CIFAR10 & 20news & SVHN \\ \hline (3, 4) & 76.4(16.7) & 72.7(4.8) & 40.7(2.9) & 21.9(3.2) & 23.3(0.2) \\ (3, 4) & 58.7(9.6) & 58.5(3.5) & 40.3(3.5) & 25.1(2.8) & 26.8(3.2) \\ (3, 4) & 83.4(23.5) & 81.9(4.3) & 44.1(0.5) & 40.1(1.1) & 24.9(0.2) \\ (3, 4) & 100(0) & 70.6(2.9) & 35.8(4.9) & 35.7(1.7) & 44.8(4.8) \\ (4, 4) & 69.0(21.9) & 98.7(0.0) & 44.9(0.6) & 35.8(1.9) & 54.8(2.8) \\ (5, 4) & 83.4(23.4) & 75.0(4.3) & 45.1(1.8) & 31.6(0.4) & 21.0(2.5) \\ \hline \end{tabular}
\end{table}
Table 3: Results of the ablation study for the MIST objective. The number outside (resp. inside) the brackets expresses the clustering accuracy (resp. standard deviation) over three trials. In the first column, six combinations of terms \(\textcircled{1}\) to \(\textcircled{4}\) in Eq.(11) are shown, and each of the six defines a variant of the MIST objective of Eq.(11).
### Robustness for \(K_{0}\), \(\alpha\), and \(\gamma\)
Let us consider the influence of the hyper-parameters \(K_{0}\), \(\alpha\), and \(\gamma\) in the MIST objective of Eq.(11) on the clustering performance. We evaluate how these hyper-parameters affect the clustering accuracy when the other hyper-parameters are kept fixed. In this study, several candidate values of the three hyper-parameters are examined on Two-Rings, MNIST, CIFAR10, 20news, and SVHN.
1. The number of neighbors, \(K_{0}\), is used in both \(\mathcal{T}_{\epsilon}\) of Definition 3 and \(\mathcal{T}_{\mathfrak{g}}\) of Definition 4. For Two-Rings, MNIST, CIFAR10, and SVHN (resp. 20news), the candidates, \(K_{0}=5,10,15,50\) (resp. \(K_{0}=10,50,100,150\)), are examined. The results are shown in Table 4.
2. The hyper-parameter \(\alpha\) is used in the critic function of Eq.(3). The candidates, \(\alpha=0,1,2\), are examined. The results are shown in Table 5.
3. The importance weight, \(\gamma\), is used for the symmetric InfoNCE in MIST objective Eq.(11). The candidates for real-world datasets (resp. synthetic dataset) are \(\gamma=0.1,0.5,1.0\) (resp. \(\gamma=1,5,10\)). The results are shown in Table 6.
\begin{table}
\begin{tabular}{c c c c c c} \hline \(\gamma\) & Two-Rings & MNIST & CIFAR10 & 20news & SVHN \\ \hline \(0.1(1)\) & 83.8(22.9) & 93.7(7.0) & 49.0(1.5) & 40.7(1.1) & 52.5(4.6) \\ \(0.5(5)\) & 94.4(8.0) & 98.0(1.0) & 48.7(1.0) & 35.0(2.3) & 59.6(3.6) \\ \(1.0(10)\) & 100(0) & 97.9(1.1) & 46.5(0.8) & 37.4(0.7) & 59.8(3.5) \\ \hline \end{tabular}
\end{table}
Table 6: Results of robustness study for \(\gamma\) in MIST objective of Eq.(11). The average clustering accuracy and std over three trials are shown. In the first column, number outside (resp. inside) of brackets means value of \(\gamma\) used for real-world datasets (resp. synthetic dataset).
\begin{table}
\begin{tabular}{c c c c c c} \hline \(K_{0}\) & Two-Rings & MNIST & CIFAR10 & 20news & SVHN \\ \hline \(5(10)\) & 83.5(23.4) & 98.2(0.4) & 48.0(0.9) & 34.2(1.5) & 55.1(2.0) \\ \(10(50)\) & 83.5(23.3) & 96.6(5.7) & 47.5(0.9) & 36.5(1.0) & 55.9(1.7) \\ \(15(100)\) & 100(0) & 98.4(0.0) & 47.8(1.4) & 38.8(0.9) & 56.3(3.2) \\ \(50(150)\) & 50.7(0.5) & 93.6(7.2) & 48.6(1.8) & 36.9(2.2) & 63.3(1.2) \\ \hline \end{tabular}
\end{table}
Table 4: Results of robustness study for number of neighbors \(K_{0}\) in Definition 3 and 4. The average clustering accuracy and std over three trials are shown. In the first column, hyper-parameter value outside (resp. inside) of brackets is used for Two-Rings, MNIST, CIFAR10 and SVHN (resp. 20news).
\begin{table}
\begin{tabular}{c c c c c c} \hline \(\alpha\) & Two-Rings & MNIST & CIFAR10 & 20news & SVHN \\ \hline \(0\) & 67.2(23.2) & 98.7(0.0) & 49.5(0.3) & 38.1(1.8) & 61.4(2.1) \\ \(1\) & 100(0) & 97.6(1.1) & 48.4(0.4) & 39.9(3.3) & 57.0(1.5) \\ \(2\) & 66.7(23.5) & 97.8(1.3) & 46.6(0.4) & 39.5(2.5) & 57.6(2.4) \\ \hline \end{tabular}
\end{table}
Table 5: Results of robustness study for \(\alpha\) in Eq.(3). The average clustering accuracy and std over three trials are shown.
Other hyper-parameters are the same as those used in MIST of Table 2. Details are shown in Table 10 of Appendix E.5.
Firstly, as we can see for most datasets in Table 4, MIST is robust to changes in \(K_{0}\). The exception is Two-Rings: the clustering accuracy of MIST with \(K_{0}=50\) is much lower than that with \(K_{0}=15\). A possible reason is that a K-NN graph with a large \(K_{0}\) has edges connecting two data points belonging to different rings; therefore, maximization of the symmetric InfoNCE based on such a K-NN graph can negatively affect the clustering performance. Secondly, Table 5 indicates that for all real-world datasets, MIST is robust to changes in \(\alpha\), which controls the intensity of the correlation. For Two-Rings, however, the performance of MIST is sensitive to \(\alpha\). Finally, Table 6 shows that for all the datasets, MIST is stable under changes of \(\gamma\).
## 5 Conclusion
In this study, to achieve the goal described at the end of Section 1.3, we proposed a topological invariant constraint based on the symmetric InfoNCE in Section 3.3. Its theoretical advantages were then discussed in detail from a deep clustering point of view in Section 3.4. In the numerical experiments of Section 4, the efficiency of the topological invariant constraint was confirmed using MIST, which is defined by combining the constraint with IMSAT.
Future work will refine the symmetric InfoNCE to have fewer hyper-parameters for better and more robust generalization across datasets. It is also worthwhile to investigate more advanced transformation functions to deal with high-dimensional datasets with complex topology. Furthermore, developing an efficient way of incorporating information beyond the MI would enhance the reliability and prediction performance of deep clustering methods.
## Acknowledgments
This work was supported by the Japan Society for the Promotion of Science under KAKENHI Grant Numbers 17H00764, 19H04071, and 20H00576.
## Appendix A Review of Related Works
### Deep Clustering Methods without Number of Clusters
In contrast to Scenario1 and Scenario2, where the number of clusters is given, some authors assume that the number of clusters is not given (Chen, 2015; Yang et al., 2016; Caron et al., 2018; Mautz et al., 2019; Avgerinos et al., 2020). For example, in DLNC (Chen, 2015), for a given unlabeled dataset, features are extracted by a deep belief network. Then, the obtained feature vectors are clustered by NMMC (Nonparametric Maximum Margin Clustering) with an estimated number of clusters. In DeepCluster (Caron et al., 2018), the appropriate number of clusters is estimated by starting from an excessive number of clusters.
### Invariant Information Clustering
Given image data \(\mathcal{D}=\{x_{i}\}_{i=1}^{n}\) and the number of clusters \(C\), IIC (Invariant Information Clustering) (Ji et al., 2019) returns the estimated cluster labels \(\{\hat{y}_{i}\}_{i=1}^{n}\) using the trained clustering model. The training criterion is based on the maximization of the MI between the cluster label of a raw image and the cluster label of the transformed raw image. IIC employs the clustering model \(g_{\theta}(x)\) (see Definition 1), where a CNN is used so as to take advantage of image-specific prior knowledge.
To be more precise, to learn the parameter \(\theta\) of the model, IIC maximizes the MI, \(I(Y;Y^{\prime})\), between random variables \(Y\) and \(Y^{\prime}\) that take an element in \(\{1,\cdots,C\}\). Here, \(Y\) denotes the random variable of the cluster label with raw image \(X\in\mathcal{X}\). Let \(T:\mathcal{X}\rightarrow\mathcal{X}\) be an image-specific transformation function, and then \(Y^{\prime}\) denotes the random variable of the cluster label for the transformed raw image; \(T(X)\). In IIC, the conditional probability \(p(y|x)\) is modeled by \(g_{\theta}(x)\). During the SGD-based optimization stage, given a mini-batch \(\mathcal{B}\subseteq\mathcal{D}\), \(I(Y;Y^{\prime})\) is computed as follows:
1. Define \(p\left(y,y^{\prime}|x,T(x)\right)=g_{\theta}^{y}(x)g_{\theta}^{y^{\prime}}(T(x))\), where \(y\) and \(y^{\prime}\) are the cluster labels of \(x\) and \(T(x)\), respectively.
2. Compute \(p(y,y^{\prime})=\frac{1}{|\mathcal{B}|}\sum_{x_{i}\in\mathcal{B}}g_{\theta}^{y }(x_{i})g_{\theta}^{y^{\prime}}\left(T(x_{i})\right)\).
3. Define \(\bar{p}(y,y^{\prime})\) as the symmetrized probability \((p(y,y^{\prime})+p(y^{\prime},y))/2\).
4. Compute the MI \(I(Y;Y^{\prime})\) from \(\bar{p}(y,y^{\prime})\).
Then, the parameter \(\theta\) of the model is found by maximizing \(I(Y;Y^{\prime})\) w.r.t. \(\theta\). Note that an appropriate transformation \(T\) is obtained using image-specific knowledge, such as scaling, skewing, rotation, flipping, etc.
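A short PyTorch sketch of steps 1-4 above is given below; the function name is illustrative, and a mini-batch of soft assignments for the raw and transformed images is assumed as input.

```
import torch

def iic_mutual_information(p, p_t, eps=1e-8):
    """I(Y; Y') computed from the symmetrized joint distribution of cluster labels.

    p, p_t: (m, C) soft assignments g_theta(x_i) and g_theta(T(x_i)).
    """
    joint = p.T @ p_t / p.shape[0]                 # step 2: p(y, y')
    joint = (joint + joint.T) / 2.0                # step 3: symmetrized joint
    p_y = joint.sum(dim=1, keepdim=True)           # marginal p(y)
    p_yp = joint.sum(dim=0, keepdim=True)          # marginal p(y')
    # step 4: I(Y; Y') = sum_{y, y'} p(y, y') log( p(y, y') / (p(y) p(y')) )
    return (joint * (torch.log(joint + eps)
                     - torch.log(p_y + eps)
                     - torch.log(p_yp + eps))).sum()
```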
### Information Maximization for Self-Augmented Training
In this section, we introduce IMSAT (Information Maximization for Self-Augmented Training) (Hu et al., 2017). To do so, we first introduce VAT (Miyato et al., 2019), which is an essential regularizer for IMSAT, in Appendix A.3.1. Then, we explain the objective of IMSAT in Appendix A.3.2.
### Virtual Adversarial Training
Virtual Adversarial Training is a regularizer that enforces smoothness of a given model in the following sense:
\[x_{i}\approx x_{j}\Rightarrow\forall y\in\{1,\cdots,C\};\ g_{\theta}^{y}(x_{i })\approx g_{\theta}^{y}(x_{j}), \tag{12}\]
where \(g_{\theta}\) is defined by Definition 1. It should be emphasized that we can train with VAT without labels. Let \(D_{KL}\left(p_{1}\|p_{2}\right)\) denote the KL (Kullback-Leibler) divergence (Cover,
1999) between two probability vectors \(p_{1}\in\Delta^{C-1}\) and \(p_{2}\in\Delta^{C-1}\). During the SGD based optimization stage, given a mini-batch \(\mathcal{B}\subseteq\mathcal{D}\), the VAT loss, \(R_{\rm{vat}}(\mathcal{B};\theta)\), is defined as,
\[R_{\rm{vat}}(\mathcal{B};\theta)=\frac{1}{|\mathcal{B}|}\sum_{x_{i}\in \mathcal{B}}D_{KL}\left(g_{\theta_{l}}(x_{i})\|g_{\theta}(x_{i}+r_{i}^{\rm{adv }})\right), \tag{13}\]
where \(r_{i}^{\rm{adv}}=\arg\max_{\|r\|_{2}\leq\epsilon_{i}}D_{KL}\left(g_{\theta_{l }}(x_{i})\|g_{\theta_{l}}(x_{i}+r)\right)\), and \(\theta_{l}\) is the parameter obtained at the \(l\)-th update. The radius \(\epsilon_{i}\) depends on \(x_{i}\), and in practice it is estimated via K-NN graph on \(\mathcal{D}\); see Hu et al. (2017) for details.
The approximated \(r_{i}^{\rm{adv}}\) can be computed by the following three steps;
1. Generate a random unit-vector \(u\in\mathbb{R}^{d}\),
2. Compute \(v_{i}=\nabla_{r}D_{KL}\left(g_{\theta_{l}}(x_{i})\|g_{\theta_{l}}(x_{i}+r) \right)|_{r=\xi u}\) using the back-propagation,
3. \(r_{i}^{\rm{adv}}=\epsilon_{i}v_{i}/\|v_{i}\|_{2}\),
where \(\xi>0\) is a small positive value.
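The three steps can be written compactly in PyTorch, for example as below; this is a hedged sketch for flattened feature vectors, and the function name, the per-example radius tensor eps, and the default \(\xi\) are assumptions rather than part of the original implementation.

```
import torch
import torch.nn.functional as F

def vat_loss(model, x, eps, xi=10.0):
    """One-step approximation of the VAT loss R_vat for a mini-batch x (shape (m, d)).

    eps: per-example radii epsilon_i, shape (m, 1); xi: the small scalar from step 2.
    """
    with torch.no_grad():
        p = model(x)                                   # g_{theta_l}(x_i), treated as a constant
    u = torch.randn_like(x)
    u = u / u.norm(dim=1, keepdim=True)                # step 1: random unit vector
    u.requires_grad_(True)
    p_pert = model(x + xi * u)
    # KL( g_{theta_l}(x) || g_{theta_l}(x + xi * u) ), averaged over the batch
    kl = F.kl_div(torch.log(p_pert + 1e-8), p, reduction="batchmean")
    v = torch.autograd.grad(kl, u)[0]                  # step 2: gradient w.r.t. r at r = xi * u
    r_adv = eps * v / (v.norm(dim=1, keepdim=True) + 1e-8)   # step 3: rescale to radius eps_i
    p_adv = model(x + r_adv.detach())
    return F.kl_div(torch.log(p_adv + 1e-8), p, reduction="batchmean")
```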
### Objective of IMSAT
Given \(\mathcal{D}=\{x_{i}\}_{i=1}^{n}\) and the number of clusters \(C\), IMSAT provides estimated cluster labels, \(\{\hat{y}_{i}\}_{i=1}^{n}\), \(\hat{y}_{i}\in\{1,\cdots,C\}\), using \(g_{\theta}(x)\) of Definition 1 (statistical model for clustering). In IMSAT, \(g_{\theta}(x)\) is the simple MLP with the structure \(d-1200-1200-C\). Using the trained model \(g_{\theta^{*}}(x)\), we have \(\hat{y}_{i}=\operatorname*{argmax}_{y\in\{1,\cdots,C\}}g_{\theta^{*}}^{y}(x_ {i})\).
As the training criterion for the parameter \(\theta\), IMSAT maximizes the MI, \(I(X;Y)\), with the VAT regularization. In order to compute \(I(X;Y)\), we make the following two assumptions: 1) the conditional probability \(p(y|x)\) is modeled by \(g_{\theta}(x)\), and 2) the marginal probability \(p(x)\) is approximated by the uniform distribution on \(\mathcal{D}\). Then, \(I(X;Y)\) is decomposed as \(I(X;Y)=H(Y)-H(Y|X)\), where \(H(Y)\) is the Shannon entropy and \(H(Y|X)\) is the conditional entropy (Cover, 1999). During the SGD-based optimization, given a mini-batch \(\mathcal{B}\subseteq\mathcal{D}\), \(H(Y)\) and \(H(Y|X)\) are respectively computed as follows:
\[-\sum_{y=1}^{C}p_{\theta}(y)\log p_{\theta}(y)\ {\rm{and}}\ -\frac{1}{| \mathcal{B}|}\sum_{x_{i}\in\mathcal{B}}\sum_{y=1}^{C}g_{\theta}^{y}(x_{i})\log g _{\theta}^{y}(x_{i}), \tag{14}\]
where \(p_{\theta}(y)\) is the approximate marginal probability, \(\frac{1}{|\mathcal{B}|}\sum_{x_{i}\in\mathcal{B}}g_{\theta}^{y}(x_{i})\). The parameter \(\theta\) of the model is found by solving the following minimization problem,
\[\min_{\theta}\left\{R_{\rm{vat}}(\mathcal{B};\theta)-\mu\left(\eta H(Y)-H(Y|X )\right)\right\}, \tag{15}\]
where \(\mu\) and \(\eta\) are positive hyper-parameters.
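In code, the two mini-batch estimates in Eq.(14) can be sketched as follows (PyTorch); the input p is the \((m, C)\) matrix of soft assignments \(g_{\theta}(x_i)\), and the small constant is added only for numerical stability.

```
import torch

def marginal_entropy(p):
    """H(Y) with p_theta(y) approximated by the mini-batch average of g_theta(x_i)."""
    p_y = p.mean(dim=0)                                  # approximate marginal, shape (C,)
    return -(p_y * torch.log(p_y + 1e-8)).sum()

def conditional_entropy(p):
    """H(Y|X) estimated on the mini-batch, as in Eq.(14)."""
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()
```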
## Appendix B Proofs for Section 3
### Proof of Proposition 1
From the definition of the MI, \(I(Z;Z^{\prime})=I(Z^{\prime};Z)\) holds. In addition, we have \(I(Z;Z^{\prime})\geq I_{\mathrm{nce},q}(Z;Z^{\prime})\) and \(I(Z^{\prime};Z)\geq I_{\mathrm{nce},q}(Z^{\prime};Z)\) for any function \(q\). Therefore, the following inequality holds:
\[I(Z;Z^{\prime})=\frac{I(Z;Z^{\prime})+I(Z^{\prime};Z)}{2}\] \[\geq\frac{I_{\mathrm{nce},q}(Z;Z^{\prime})+I_{\mathrm{nce},q}(Z^{ \prime};Z)}{2}.\]
Next, we check the optimality. To do so, let us review the following inequality (Poole et al., 2019):
\[I_{\mathrm{nce},q}(Z;Z^{\prime}) =\mathbb{E}\left[\log\frac{p(Z^{\prime})e^{q(Z,Z^{\prime})}}{ \mathbb{E}_{Z^{\prime}}[e^{q(Z,Z^{\prime})}]}-\log p(Z^{\prime})\right]\] \[=\mathbb{E}\left[\log\frac{p(Z^{\prime})e^{q(Z,Z^{\prime})}}{ \mathbb{E}_{Z^{\prime}}[e^{q(Z,Z^{\prime})}]}\right]+H(Z^{\prime})\] \[\leq\mathbb{E}\left[\log p(Z^{\prime}|Z)\right]+H(Z^{\prime})\] \[=\mathbb{E}_{p(Z,Z^{\prime})}\left[\log\frac{p(Z,Z^{\prime})}{p(Z )p(Z^{\prime})}\right].\]
The last inequality comes from the non-negativity of the KL-divergence. Therefore, for any \(q\), InfoNCE provides a lower bound of \(I(Z;Z^{\prime})\). The equality holds if
\[q(z,z^{\prime})=\log p(z|z^{\prime})+[\text{function of }z]. \tag{16}\]
Thus, if \(q\) satisfies \(q(z,z^{\prime})=q(z^{\prime},z)\), i.e., \(\log p(z|z^{\prime})+h_{0}(z)=\log p(z^{\prime}|z)+h_{1}(z^{\prime})\) for some function \(h_{0}(z)\) and \(h_{1}(z^{\prime})\), then the equality between the symmetric InfoNCE and \(I(Z;Z^{\prime})\) holds. As a result, the critic \(q\), which is defined as \(q(z,z^{\prime})=\log\frac{p(z,z^{\prime})}{p(z)p(z^{\prime})}+c,\,c\in\mathbb{R}\), is the optimal critic.
### Proof of Proposition 2
Let us introduce the data processing inequality. Suppose that the random variables \(X\) and \(Z\) are conditionally independent given \(Y\). This situation is expressed by
\[X\leftrightarrow Y\leftrightarrow Z.\]
Under the above assumption, the data processing inequality \(I(X;Y)\geq I(X;Z)\) holds for the MI. In our formulation, the pair of random variables, \(X\) and \(X^{\prime}\), is transformed to the conditional probabilities, \(p(\cdot|X)=g_{\theta}(X)\) and \(p(\cdot|X^{\prime})=g_{\theta}(X^{\prime})\), on the \(C\)-dimensional simplex \(\Delta^{C-1}\). Then, the cluster label \(Y\) (resp. \(Y^{\prime}\)) is assumed to be generated from \(p(\cdot|X)\) (resp. \(p(\cdot|X^{\prime})\)). This data generation process satisfies the following relationship:
\[Y\leftrightarrow p(\cdot\mid X)\leftrightarrow(X,X^{\prime}) \leftrightarrow p(\cdot\mid X^{\prime})\leftrightarrow Y^{\prime}.\]
Therefore, for \(X^{\prime}=T(X)\), the data processing inequality leads to
\[I(Y;Y^{\prime})\leq I(p(\cdot\mid X);p(\cdot\mid X^{\prime}))=I(g_{\theta}(X);g_{ \theta}(T(X))).\]
### Estimation Error of the Symmetric InfoNCE
The symmetric InfoNCE provides an approximation of MI. Here, let us theoretically investigate the estimation error rate of the symmetric InfoNCE with a learnable critic.
Suppose we have training data \(x_{1},\ldots,x_{n}\) and their perturbation, \(x^{\prime}_{i}:=t_{i}(x_{i}),\,i\in[n]:=\{1,\ldots,n\}\), where \(t_{i}\) is a randomly generated map. We assume that \(t_{1},\ldots,t_{n}\) are i.i.d. Recall that the empirical approximation of the InfoNCE loss \(I_{\mathrm{nce},q}\) is given by
\[\widehat{I}_{\mathrm{nce},q}(\theta)=\frac{1}{n}\sum_{i}q(g_{\theta}(x_{i}),g_ {\theta}(x^{\prime}_{i}))-\frac{1}{n}\sum_{i}\log\bigg{(}\frac{1}{n}\sum_{j} e^{q(g_{\theta}(x_{i}),g_{\theta}(x^{\prime}_{j}))}\bigg{)}.\]
The symmetric InfoNCE is defined as \((I_{\mathrm{nce},q}+I^{\prime}_{\mathrm{nce},q})/2\) and its empirical approximation is
\[\frac{\widehat{I}_{\mathrm{nce},q}(\theta)+\widehat{I}^{\prime}_ {\mathrm{nce},q}(\theta)}{2}\\ =\frac{1}{n}\sum_{i}q(g_{\theta}(x_{i}),g_{\theta}(x^{\prime}_{i }))-\frac{1}{2n}\Bigg{(}\sum_{i}\log\bigg{(}\frac{1}{n}\sum_{j}e^{q(g_{\theta} (x_{i}),g_{\theta}(x^{\prime}_{j}))}\bigg{)}\\ +\sum_{i}\log\bigg{(}\frac{1}{n}\sum_{j}e^{q(g_{\theta}(x^{\prime }_{i}),g_{\theta}(x_{j}))}\bigg{)}\Bigg{)}.\]
Let \(I_{\mathrm{sym},\mathrm{nce},q}\) and \(\widehat{I}_{\mathrm{sym},\mathrm{nce},q}\) denote the symmetric InfoNCE and the empirical approximation, respectively. Let \(\mathcal{Q}\) be a set of critics. The MI is approximated by
\[I_{\mathcal{Q}}(\theta)=\sup_{q\in\mathcal{Q}}I_{\mathrm{sym},\mathrm{nce},q} (\theta).\]
The empirical approximation of \(I_{\mathcal{Q}}(\theta)\) is given by \(\widehat{I}_{\mathcal{Q}}(\theta)=\sup_{q\in\mathcal{Q}}\widehat{I}_{\mathrm{ sym},\mathrm{nce},q}(\theta)\). Then, the parameter \(\widehat{\theta}\) of the model is given by the maximizer of \(\widehat{I}_{\mathcal{Q}}(\theta)\), i.e.,
\[\max_{\theta\in\Theta}\widehat{I}_{\mathcal{Q}}(\theta)\ \longrightarrow\ \widehat{\theta}.\]
Let \(I(\theta)\) be the mutual information between \(g_{\theta}(X)\) and \(g_{\theta}(X^{\prime})\). The maximizer of \(I(\theta)\) (resp. \(I_{\mathcal{Q}}(\theta)\)) is denoted by \(\theta^{*}\) (resp. \(\theta_{\mathcal{Q}}\in\Theta\)).
We evaluate the mutual information at \(\widehat{\theta}\), i.e., \(I(\widehat{\theta})\). From the definition, we have
\[0\leq I(\theta^{*})-I(\widehat{\theta})\leq I(\theta^{*})-I_{\mathcal{Q}}( \widehat{\theta})\leq\underbrace{I(\theta^{*})-I_{\mathcal{Q}}(\theta_{ \mathcal{Q}})}_{\text{approximation error}\geq 0}+\underbrace{I_{\mathcal{Q}}( \theta_{\mathcal{Q}})-I_{\mathcal{Q}}(\widehat{\theta})}_{\text{estimation error}\geq 0}. \tag{17}\]
We consider the estimation error bound. The optimality of \(\widehat{\theta}\) leads to
\[0\leq I_{\mathcal{Q}}(\theta_{\mathcal{Q}})-I_{\mathcal{Q}}(\widehat{\theta })\leq I_{\mathcal{Q}}(\theta_{\mathcal{Q}})-\widehat{I}_{\mathcal{Q}}(\theta _{\mathcal{Q}})+\widehat{I}_{\mathcal{Q}}(\theta_{\mathcal{Q}})-\widehat{I}_ {\mathcal{Q}}(\widehat{\theta})+\widehat{I}_{\mathcal{Q}}(\widehat{\theta})-I _{\mathcal{Q}}(\widehat{\theta})\]
\[\leq I_{\mathcal{Q}}(\theta_{\mathcal{Q}})-\widehat{I}_{\mathcal{Q}}( \theta_{\mathcal{Q}})+\widehat{I}_{\mathcal{Q}}(\widehat{\theta})-I_{\mathcal{Q }}(\widehat{\theta})\leq 2\sup_{\theta\in\Theta}|I_{\mathcal{Q}}(\theta)- \widehat{I}_{\mathcal{Q}}(\theta)|. \tag{18}\]
Let us evaluate the worst-case gap between \(I_{\mathcal{Q}}(\theta)\) and \(\widehat{I}_{\mathcal{Q}}(\theta)\):
\[I_{\mathcal{Q}}(\theta)-\widehat{I}_{\mathcal{Q}}(\theta) =\sup_{q}\inf_{q^{\prime}}I_{\text{sym,nce},q}(\theta)-\widehat{I }_{\text{sym,nce},q^{\prime}}(\theta)\] \[\leq\sup_{q}I_{\text{sym,nce},q}(\theta)-\widehat{I}_{\text{sym,nce},q}(\theta).\]
Likewise, we have \(\widehat{I}_{\mathcal{Q}}(\theta)-I_{\mathcal{Q}}(\theta)\leq\sup_{q} \widehat{I}_{\text{sym,nce},q}(\theta)-I_{\text{sym,nce},q}(\theta)\). Therefore,
\[\sup_{\theta\in\Theta}|I_{\mathcal{Q}}(\theta)-\widehat{I}_{\mathcal{Q}}( \theta)|\leq\sup_{\theta\in\Theta,q\in\mathcal{Q}}|I_{\text{sym,nce},q}( \theta)-\widehat{I}_{\text{sym,nce},q}(\theta)|.\]
To derive the convergence rate, we apply the Uniform Law of Large Numbers (ULLN) (Mohri et al., 2018) to the following function classes,
\[\mathcal{G} =\{(x,t)\mapsto q(g_{\theta}(x),g_{\theta}(t(x)))\,:\,\theta\in \Theta,\;q\in\mathcal{Q}\},\] \[\exp\circ\mathcal{G} =\{(x,t)\mapsto\exp\{q(r,g_{\theta}(t(x)))\}\,:\,\theta\in\Theta, \;q\in\mathcal{Q},\,r\in\Delta^{C-1}\}.\]
Suppose that the model \((g_{\theta}^{y})_{y\in[C]}\) with any permutation of the cluster labels is realized by another parameter \(\theta^{\prime}\). For instance, when \(C=2\), for any \(\theta\) there exists \(\theta^{\prime}\) such that \((g_{\theta}^{2},g_{\theta}^{1})=(g_{\theta^{\prime}}^{1},g_{\theta^{\prime}}^{2})\) holds. Then, let us define the following function class \(\mathcal{N}\) by
\[\mathcal{N}=\{x\mapsto g_{\theta,1}(x)\,:\,\theta\in\Theta\},\]
where \(g_{\theta,1}\) is the first element of \(g_{\theta}\). We evaluate the estimation error bound in terms of the Rademacher complexity of \(\mathcal{N}\). See Bartlett and Mendelson (2002); Mohri et al. (2018) for details of Rademacher complexity.
We assume the following conditions:
* Any \(q(r,r^{\prime})\) in \(\mathcal{Q}\) is expressed as \(q(r,r^{\prime})=\phi_{q}(r^{\top}r^{\prime})\) for \(r,r^{\prime}\in\Delta^{C-1}\), where \(\phi_{q}:[0,1]\to[a,b]\). We assume that the range of \(\phi_{q}\) is uniformly bounded in the interval \([a,b]\).
* The Lipschitz constant \(\|\phi_{q}\|_{\text{Lip}}\) of \(\phi_{q}\) is uniformly bounded, i.e., \[\sup_{q\in\mathcal{Q}}\|\phi_{q}\|_{\text{Lip}}\leq L<\infty.\]
We consider the Rademacher complexity of \(\mathcal{G}\) and \(\exp\circ\mathcal{G}\). Let \(\sigma_{i},i\in[n]\) be i.i.d. Rademacher random variables. Given \(D=\{(x_{i},x_{i}^{\prime}),\,i\in[n]\}\), the empirical Rademacher complexity is
\[\widehat{\mathfrak{R}}_{D}(\mathcal{G}) =\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta\in\Theta,q\in\mathcal{Q }}\frac{1}{n}\sum_{i}\sigma_{i}q(g_{\theta}(x_{i}),g_{\theta}(x_{i}))\bigg{]}\] \[\leq\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta\in\Theta,q\in \mathcal{Q},r\in\Delta^{C-1}}\frac{1}{n}\sum_{i}\sigma_{i}q(r,g_{\theta}(x_{i} ))\bigg{]},\]
\[\widehat{\mathfrak{R}}_{D}(\exp\circ\mathcal{G}) =\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta\in\Theta,q\in\mathcal{Q},r \in\Delta^{C-1}}\frac{1}{n}\sum_{i}\sigma_{i}\exp\{q(r,g_{\theta}(x_{i}))\} \bigg{]}\] \[\leq e^{b}\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta\in\Theta,q\in \mathcal{Q},r\in\Delta^{C-1}}\frac{1}{n}\sum_{i}\sigma_{i}q(r,g_{\theta}(x_{i} ))\bigg{]}.\]
The inequality in the second line is obtained by Talagrand's lemma (Mohri et al., 2018). Due to the assumption on the function \(q(r,r^{\prime})\), for \(g_{\theta}=(g_{\theta,1},\ldots,g_{\theta,C})\in\Delta^{C-1}\) we have
\[\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta,q,r}\frac{1}{n}\sum_{i} \sigma_{i}q(r,g_{\theta}(x_{i}))\bigg{]} =\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta,q,r}\frac{1}{n}\sum_{i} \sigma_{i}\phi_{q}(r^{\top}g_{\theta}(x_{i}))\bigg{]}\] \[\leq L\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta,r}\frac{1}{n}\sum_{i }\sigma_{i}r^{\top}g_{\theta}(x_{i})\bigg{]}\] \[=L\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta}\max_{c\in[C]}\frac{1} {n}\sum_{i}\sigma_{i}g_{\theta,c}(x_{i})\bigg{]}\] \[=L\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta}\frac{1}{n}\sum_{i} \sigma_{i}g_{\theta,1}(x_{i})\bigg{]}.\]
In the last inequality, again Talagrand's lemma is used. Note that since we deal with a general case in which the probability distribution of \(x_{i}\) and \(x^{\prime}_{i}\) may not be equal to each other, it is worth considering a counterpart w.r.t. the probability distribution of \(x_{i}\), i.e., we have
\[\widehat{\mathfrak{R}}_{D}(\mathcal{G}) =\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta\in\Theta,q\in\mathcal{Q} }\frac{1}{n}\sum_{i}\sigma_{i}q(g_{\theta}(x_{i}),g_{\theta}(x_{i}^{\prime})) \bigg{]}\] \[\leq\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta\in\Theta,q\in \mathcal{Q},r\in\Delta^{C-1}}\frac{1}{n}\sum_{i}\sigma_{i}q(r,g_{\theta}(x_{i} ^{\prime}))\bigg{]},\] \[\widehat{\mathfrak{R}}^{\prime}_{D}(\exp\circ\mathcal{G}) =\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta\in\Theta,q\in\mathcal{Q},r\in\Delta^{C-1}}\frac{1}{n}\sum_{i}\sigma_{i}\exp\{q(r,g_{\theta}(x_{i}^{ \prime}))\}\bigg{]}\] \[\leq e^{b}\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta\in\Theta,q\in \mathcal{Q},r\in\Delta^{C-1}}\frac{1}{n}\sum_{i}\sigma_{i}q(r,g_{\theta}(x_{i} ^{\prime}))\bigg{]},\]
and,
\[\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta,q,r}\frac{1}{n}\sum_{i} \sigma_{i}q(r,g_{\theta}(x_{i}^{\prime}))\bigg{]} =\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta,q,r}\frac{1}{n}\sum_{i} \sigma_{i}\phi_{q}(r^{\top}g_{\theta}(x_{i}^{\prime}))\bigg{]}\] \[\leq L\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta,r}\frac{1}{n}\sum_{i }\sigma_{i}r^{\top}g_{\theta}(x_{i}^{\prime})\bigg{]}\] \[=L\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta}\max_{c\in[C]}\frac{1} {n}\sum_{i}\sigma_{i}g_{\theta,c}(x_{i}^{\prime})\bigg{]}\] \[=L\mathbb{E}_{\sigma}\bigg{[}\sup_{\theta}\frac{1}{n}\sum_{i} \sigma_{i}g_{\theta,1}(x_{i}^{\prime})\bigg{]}.\]
For our purpose it is sufficient to find the Rademacher complexity \(\mathfrak{R}_{n}(\mathcal{N})\) (resp. \(\mathfrak{R}^{\prime}_{n}(\mathcal{N})\)) of \(\mathcal{N}\) w.r.t. the probability distribution of \(x_{i}\) (resp. \(x^{\prime}_{i}\)). For the standard neural network models, both the Rademacher complexities \(\mathfrak{R}_{n}(\mathcal{N})\) and \(\mathfrak{R}^{\prime}_{n}(\mathcal{N})\) are of the order of \(n^{-1/2}\) and the coefficient depends on the maximum norm of the weight (Shalev-Shwartz and Ben-David, 2013). From the above calculation, we have
\[\mathfrak{R}_{n}(\mathcal{G})\leq c\,\mathfrak{R}_{n}(\mathcal{N}),\quad \mathfrak{R}_{n}(\exp\circ\mathcal{G})\leq c\,\mathfrak{R}_{n}(\mathcal{N}),\]
where \(c\) is a positive constant depending on \(b\) and \(L\). Note that the same argument holds for \(\mathfrak{R}^{\prime}_{n}(\mathcal{N})\). In the below, \(c\) is a positive constant that can be different line by line. Furthermore, let us evaluate the Rademacher complexity of the function set
\[x\;\longmapsto\;\log\mathbb{E}_{X^{\prime}}\big{[}e^{q(g_{\theta}(x)^{\top}g_ {\theta}(X^{\prime}))}\big{]}.\]
We use the upper bound of \(\mathfrak{R}_{n}(\exp\circ\mathcal{G})\). The logarithmic function is Lipschitz continuous on the bounded interval \([e^{a},e^{b}]\) and Lipschitz constant is bounded above by \(e^{-a}\) on the interval. The empirical Rademacher complexity is given by
\[\mathbb{E}_{\sigma}\bigg[\sup_{\theta,q}\frac{1}{n}\sum_{i=1}^{n}\sigma_{i}\log\mathbb{E}_{X^{\prime}}\bigg[e^{q(g_{\theta}(x_{i})^{\top}g_{\theta}(X^{\prime}))}\bigg]\bigg] \leq e^{-a}\mathbb{E}_{\sigma}\bigg[\sup_{\theta,q,r}\frac{1}{n}\sum_{i=1}^{n}\sigma_{i}e^{q(g_{\theta}(x_{i})^{\top}r)}\bigg] \leq Le^{b-a}\,\mathbb{E}_{\sigma}\bigg[\sup_{\theta}\frac{1}{n}\sum_{i=1}^{n}\sigma_{i}g_{\theta,1}(x_{i})\bigg]. \tag{19}\]
From the above calculation, the following theorem holds.
**Theorem 3**: _Assume the condition (A) and (B). Let us define \(\varepsilon_{\delta,n}^{\mathcal{N}}\) as_
\[\varepsilon_{\delta,n}^{\mathcal{N}}=\frac{1}{2}\left(\mathfrak{R}_{n}( \mathcal{N})+\mathfrak{R}^{\prime}_{n}(\mathcal{N})\right)+\sqrt{\frac{\log(1 /\delta)}{n}},\]
_where \(\mathfrak{R}_{n}(\mathcal{N})\) (resp. \(\mathfrak{R}^{\prime}_{n}(\mathcal{N})\)) is the Rademacher complexity of \(\mathcal{N}\) for \(n\) samples following the probability distribution of \(x_{i}\) (resp. the probability distribution of \(x^{\prime}_{i}\)). Then, with the probability greater than \(1-\delta\), we have_
\[I_{\mathcal{Q}}(\theta_{\mathcal{Q}})-I_{\mathcal{Q}}(\widehat{\theta})\leq c \,\varepsilon_{\delta,n}^{\mathcal{N}},\]
_where \(c\) is a positive constant depending on \(a,b\) and \(L\)._
_Proof._ The proof of Theorem 3 is the following. From the definition of the symmetric InfoNCE, we have
\[\sup_{\theta}|I_{\mathcal{Q}}(\theta)-\widehat{I}_{\mathcal{Q}}(\theta)|\]
\[\leq e^{-a}\sup_{r\in\Delta^{\mathcal{C}-1}}\left|\frac{1}{n}\sum_{j=1}^{n}e^{q(r,g_{\theta}(x_{j}^{\prime}))}-\mathbb{E}_{X^{\prime}}\big{[}e^{q(r,g_{\theta}(X ^{\prime}))}\big{]}\right|+c\,\varepsilon_{n,\delta}^{\mathcal{N}}\] \[\leq c\,\varepsilon_{n,\delta}^{\mathcal{N}}.\]
Similarly, with the probability greater than \(1-\delta/2\) we have
\[\left|\frac{1}{n}\sum_{i=1}^{n}\log\frac{1}{n}\sum_{j=1}^{n}e^{q(g_{\theta}(x _{i}^{\prime}),g_{\theta}(x_{j}))}-\mathbb{E}_{X^{\prime}}\log\mathbb{E}_{X} \big{[}e^{q(g_{\theta}(X^{\prime})^{\top}g_{\theta}(X))}\big{]}\right|\leq c\, \varepsilon_{n,\delta}^{\mathcal{N}}.\]
Eventually, the worst-case error \(\sup_{\theta}|I_{\mathcal{Q}}(\theta)-\widehat{I}_{\mathcal{Q}}(\theta)|\) is bounded above by \(\varepsilon_{n,\delta}^{\mathcal{N}}\) up to a positive factor. The above bound with inequalities in Eq.(17) and (18) lead to the conclusion.
We now show that the critic functions defined in Eq.(3) satisfy both conditions (A) and (B). Recall the definition of the critic functions:
\[q(z,z^{\prime})=\log\left(\exp_{\alpha}\left(\tau(z^{\top}z^{\prime}-1)\right) \right),\quad\mathrm{where}\quad\alpha\in\mathbb{R},\quad\tau\geq 0.\]
**Lemma 1**: _Given real values \(l\geq 0\), \(w>0\), \(0<d<1\) and \(s<0\), define \(\Xi=\{(\alpha,\tau)\in\mathbb{R}\times\mathbb{R}_{\geq 0}\,:\,\tau\leq l,\,w\leq| \alpha-1|,\,s\leq(1-\alpha)\tau\leq 1-d\}\cup\{(1,\tau)\,:\,0\leq\tau\leq l\}\), \(\phi_{(\alpha,\tau)}=\log\left(\exp_{\alpha}\left(\tau(z^{\top}z^{\prime}-1) \right)\right)\) and \(\mathcal{Q}=\{q:\Delta^{C-1}\times\Delta^{C-1}\rightarrow\mathbb{R}\,:\, \exists(\alpha,\tau)\in\Xi\,\,\mathrm{s.t.}\,q=\phi_{(\alpha,\tau)}\}\). Then every \(q\in\mathcal{Q}\) satisfies both the condition (A) and (B)._
_Proof._ Let \(q\in\mathcal{Q}\). From the definition of \(\mathcal{Q}\), there exists some \((\alpha,\tau)\in\Xi\) such that \(q\) is expressed as \(q(r,r^{\prime})=\phi_{(\alpha,\tau)}(r^{\top}r^{\prime})\) for any \(r,r^{\prime}\in\Delta^{C-1}\). Moreover, \(\phi_{(\alpha,\tau)}\) is uniformly bounded in the following closed interval
\[[\min\{\log d^{1/(1-\alpha)},\log{(1-s)}^{1/(1-\alpha)}\},\max\{\log d^{1/(1- \alpha)},\log{(1-s)}^{1/(1-\alpha)}\}],\]
when \(\alpha\neq 1\), and in \([-l,0]\) when \(\alpha=1\). Therefore, \(q\) satisfies the condition (A). Let us show every \(q=\phi_{(\alpha,\tau)}\in\mathcal{Q}\) also satisfies the condition (B). When \(\alpha=1\), from the definition of the function \(\exp_{\alpha}\), we have \(\phi_{(\alpha,\tau)}(x)=\tau(x-1)\) on \(x\in[0,1]\). Hence, \(\|\phi_{(\alpha,\tau)}\|_{\mathrm{Lip}}\leq\tau\leq l<\infty\). When \(\alpha\neq 1\), since \(\phi_{(\alpha,\tau)}(x)=(1-\alpha)^{-1}\log(1+(1-\alpha)\tau(x-1))\) on \(x\in[0,1]\) we have:
* When \(0\leq(1-\alpha)\tau\leq 1-d\), we have \(\|\phi_{(\alpha,\tau)}\|_{\mathrm{Lip}}\leq\tau/(1-(1-\alpha)\tau)\leq\tau/d<\infty\).
* When \(s\leq(1-\alpha)\tau<0\), then we have \(\|\phi_{(\alpha,\tau)}\|_{\mathrm{Lip}}\leq\tau<\infty\).
Therefore, there exists a non-negative constant \(L\) such that
\[\sup_{\phi_{(\alpha,\tau)}\in\mathcal{Q}}\|\phi_{(\alpha,\tau)}\|_{\mathrm{ Lip}}\leq L<\infty.\]
This implies that \(q=\phi_{(\alpha,\tau)}\) satisfies the condition (B). \(\Box\)
Now we are ready to show the main result on the statistical analysis in this section.
**Theorem 4** (Formal version of Theorem 1): _Let \(\mathcal{Q}\) be the set defined in the setting of Lemma 1. Then, with the probability greater than \(1-\delta\), we have_
\[I_{\mathcal{Q}}(\theta_{\mathcal{Q}})-I_{\mathcal{Q}}(\widehat{\theta})\leq c \,\varepsilon_{\delta,n}^{\mathcal{N}},\]
_where \(c\) is a positive constant depending on \(a,b\) and \(L\). As a result, with probability at least \(1-\delta\) the gap between \(I(\theta^{*})\) and \(I(\widehat{\theta})\) is given by_
\[0\leq I(\theta^{*})-I(\widehat{\theta})\leq I(\theta^{*})-I_{\mathcal{Q}}( \theta_{\mathcal{Q}})+c\,\varepsilon_{n,\delta}^{\mathcal{N}}.\]
_Proof._ From Theorem 3 and Lemma 1, we obtain the claim. \(\Box\)
### Proof of Theorem 2
We show Theorem 2 based on the results by Wang et al. (2022). Recall the definition of the critic function Eq.(3); when \(\alpha=1\), the critic function \(q(g_{\theta}(X),g_{\theta}(T(X)))\) is just \(q(g_{\theta}(X),g_{\theta}(T(X)))=\tau(g_{\theta}(X)^{\top}g_{\theta}(T(X))-1)\). In this case, the symmetric InfoNCE loss, \(-\frac{1}{2}(I_{\mathrm{nce}}+I^{\prime}_{\mathrm{nce}})\), is written as
\[-\mathbb{E}_{p(Z,Z^{\prime})}[\tau(Z^{\top}Z^{\prime}-1)]+\frac{1 }{2}(\mathbb{E}_{p(Z)}[\log\mathbb{E}_{p(Z^{\prime})}[\exp(\tau(Z^{\top}Z^{ \prime}-1))]\\ +\mathbb{E}_{p(Z^{\prime})}[\log\mathbb{E}_{p(Z)}[\exp(\tau(Z^{ \top}Z^{\prime}-1))]]]).\]
The following Proposition 3 provides an upper bound of the symmetric mean supervised loss involving the symmetric InfoNCE loss.
**Proposition 3**: _We have,_
\[\mathcal{L}_{\mathrm{SCE}}^{\mu,\widetilde{\mu}}(g_{\theta})\leq-\frac{1}{2} \left(I_{\mathrm{nce}}+I^{\prime}_{\mathrm{nce}}\right)+\frac{1}{2}\left( \sqrt{\mathrm{Var}(Z|Y)}+\sqrt{\mathrm{Var}(Z^{\prime}|Y)}+2\log C\right),\]
_where \(\mathrm{Var}(Z|Y)=\mathbb{E}_{p(Y)}[\mathbb{E}_{p(Z|Y)}[\|\tau Z-\mu_{Y}\|_{ \infty}^{2}]]\), \(\mathrm{Var}(Z^{\prime}|Y)=\mathbb{E}_{p(Y)}[\mathbb{E}_{p(Z^{\prime}|Y)}[\|\tau Z ^{\prime}-\mu_{Y}\|_{\infty}^{2}]]\)._
_Proof._ The proof of Proposition 3 is mainly due to Theorem A.3 of Wang et al. (2022), but slightly different because we now focus on the symmetric InfoNCE with the critic function of Eq.(3). We show the detail of our proof based on Wang et al. (2022) for the sake of completeness.
\[-\frac{1}{2}\left(I_{\mathrm{nce}}+I^{\prime}_{\mathrm{nce}}\right)\] \[=-\mathbb{E}_{p(Z,Z^{\prime})}[\tau Z^{\top}Z^{\prime}-\tau]+ \frac{1}{2}(\mathbb{E}_{p(Z)}[\log\mathbb{E}_{p(Z^{\prime})}[\exp(\tau Z^{ \top}Z^{\prime}-\tau)]\] \[+\mathbb{E}_{p(Z^{\prime})}[\log\mathbb{E}_{p(Z)}[\exp(\tau Z^{ \top}Z^{\prime}-\tau)]]])\] \[=-\mathbb{E}_{p(Z,Z^{\prime})}[\tau Z^{\top}Z^{\prime}]+\frac{1}{ 2}(\mathbb{E}_{p(Z)}[\log\mathbb{E}_{p(Z^{\prime})}[\exp(\tau Z^{\top}Z^{ \prime})]\] \[+\mathbb{E}_{p(Z^{\prime})}[\log\mathbb{E}_{p(Z)}[\exp(\tau Z^{ \top}Z^{\prime})]]])\] \[=-\mathbb{E}_{p(Z,Z^{\prime})}[\tau Z^{\top}Z^{\prime}]+\frac{1}{ 2}(\mathbb{E}_{p(Z)}[\log\mathbb{E}_{p(Y)}[\mathbb{E}_{p(Z^{\prime}|Y)}[\exp( \tau Z^{\top}Z^{\prime})]]])\] \[+\mathbb{E}_{p(Z^{\prime})}[\log\mathbb{E}_{p(Y)}[\mathbb{E}_{p( Z|Y)}[\exp(\tau Z^{\top}Z^{\prime})]]])\] \[\geq-\mathbb{E}_{p(Z,Z^{\prime})}[\tau Z^{\top}Z^{\prime}]+\frac{1 }{2}(\mathbb{E}_{p(Z)}[\log\mathbb{E}_{p(Y)}[\exp(\mathbb{E}_{p(Z^{\prime}|Y)}[ \tau Z^{\top}Z^{\prime}])]]\] \[+\mathbb{E}_{p(Z^{\prime})}[\log\mathbb{E}_{p(Y)}[\exp(\mathbb{E} _{p(Z|Y)}[\tau Z^{\top}Z^{\prime}]]]))\] \[=-\frac{1}{2}\Big{(}\mathbb{E}_{p(Z,Z^{\prime},Y)}[Z^{\top} \widetilde{\mu}_{Y}+Z^{\top}(\tau Z^{\prime}-\widetilde{\mu}_{Y})]\] \[+\mathbb{E}_{p(Z,Z^{\prime},Y)}[Z^{\prime}{}^{\top}\mu_{Y}+Z^{ \prime}{}^{\top}(\tau Z-\mu_{Y})]\Big{)}\] \[+\frac{1}{2}\left(\mathbb{E}_{p(Z)}[\log\mathbb{E}_{p(Y)}[\exp( Z^{\top}\widetilde{\mu}_{Y})]]+\mathbb{E}_{p(Z^{\prime})}[\log\mathbb{E}_{p(Y)}[ \exp(Z^{\prime}{}^{\top}\mu_{Y})]]\right)\] \[=\frac{1}{2}\left(-\mathbb{E}_{p(Z,Z^{\prime},Y)}[Z^{\top} \widetilde{\mu}_{Y}+Z^{\top}(\tau Z^{\prime}-\widetilde{\mu}_{Y})]+\mathbb{E} _{p(Z)}[\log\mathbb{E}_{p(Y)}[\exp(Z^{\top}\widetilde{\mu}_{Y})]]\right)\]
\[\geq\frac{1}{2}\left(\mathcal{L}_{\mathrm{CE,Raw}}^{\vec{\mu}}(g_{ \theta})+\mathcal{L}_{\mathrm{CE,Aug}}^{\mu}(g_{\theta})-\sqrt{\mathrm{Var}(Z^{ \prime}|Y)}-\sqrt{\mathrm{Var}(Z|Y)}\right)-\log C\] \[=\mathcal{L}_{\mathrm{SCE}}^{\mu,\vec{\mu}}(g_{\theta})-\frac{1}{ 2}\left(\sqrt{\mathrm{Var}(Z^{\prime}|Y)}+\sqrt{\mathrm{Var}(Z|Y)}+2\log C \right).\]
Here, in the first and the third inequalities we use Jensen's inequality, and in the second inequality we use Hölder's inequality. \(\Box\)
We next present a lower bound of the symmetric mean supervised loss.
**Proposition 4**: _We have,_
\[\mathcal{L}_{\mathrm{SCE}}^{\mu,\vec{\mu}}(g_{\theta})-\log C\] \[\geq-\frac{1}{2}\left(I_{\mathrm{nce}}+I_{\mathrm{nce}}^{\prime} \right)-\frac{1}{2}\left(\sqrt{\mathrm{Var}(Z|Y)}+\sqrt{\mathrm{Var}(Z^{ \prime}|Y)}\right)-\frac{1}{2e}\mathrm{Var}(\exp(\tau Z^{\top}Z^{\prime})),\]
_where \(\mathrm{Var}(\exp(\tau Z^{\top}Z^{\prime}))=\mathbb{E}_{p(Z)p(Z^{\prime})}[( \exp(\tau Z^{\top}Z^{\prime})-\mathbb{E}_{p(Z)p(Z^{\prime})}[\exp(\tau Z^{\top }Z^{\prime})])^{2}]\)._
_Proof._ The proof of Proposition 4 is mainly due to Theorem A.5 of Wang et al. (2022), but is also slightly different. Here, we show the details of our proof based on Wang et al. (2022) for the sake of completeness.
\[\frac{1}{2}\mathcal{L}_{\mathrm{SCE}}^{\mu,\vec{\mu}}(g_{\theta})\] \[=\frac{1}{2}\left(\mathcal{L}_{\mathrm{CE,Raw}}^{\tilde{\mu}}(g_ {\theta})+\mathcal{L}_{\mathrm{CE,Aug}}^{\mu}(g_{\theta})\right)\] \[=\frac{1}{2}\mathbb{E}_{p(Z^{\prime},Y)}[Z^{\prime}{}^{\top}\mu_ {Y}]-\frac{1}{2}\mathbb{E}_{p(Z,Y)}[Z^{\top}\widetilde{\mu}_{Y}]\] \[\quad+\frac{1}{2}\left(\mathbb{E}_{p(Z)}[\log\mathbb{E}_{p(Y)}[ \exp(Z^{\top}\widetilde{\mu}_{Y})]]+\mathbb{E}_{p(Z^{\prime})}[\log\mathbb{E}_ {p(Y)}[\exp(Z^{\prime}{}^{\top}\mu_{Y})]]\right)+\log C\] \[=-\frac{1}{2}\mathbb{E}_{p(Z,Z^{\prime},Y)}[\tau Z^{\prime}{}^{ \top}Z+Z^{\prime}{}^{\top}(\mu_{Y}-\tau Z)]-\frac{1}{2}\mathbb{E}_{p(Z,Z^{ \prime},Y)}[\tau Z^{\top}Z^{\prime}+Z^{\top}(\widetilde{\mu}_{Y}-\tau Z^{ \prime})]\] \[\quad+\frac{1}{2}\left(\mathbb{E}_{p(Z)}[\log\mathbb{E}_{p(Y)}[ \exp(Z^{\top}\widetilde{\mu}_{Y})]]+\mathbb{E}_{p(Z^{\prime})}[\log\mathbb{E}_ {p(Y)}[\exp(Z^{\prime}{}^{\top}\mu_{Y})]]\right)+\log C\] \[\geq-\frac{1}{2}\mathbb{E}_{p(Z,Z^{\prime},Y)}[\tau Z^{\prime}{}^ {\top}Z+\|\mu_{Y}-\tau Z\|_{\infty}]-\frac{1}{2}\mathbb{E}_{p(Z,Z^{\prime},Y) }[\tau Z^{\top}Z^{\prime}+\|\widetilde{\mu}_{Y}-\tau Z^{\prime}\|_{\infty}]\]
\[\geq-\mathbb{E}_{p(Z,Z^{\prime})}[\tau{Z^{\prime}}^{\top}Z]-\frac{1}{2} \sqrt{\mathrm{Var}(Z|Y)}-\frac{1}{2}\sqrt{\mathrm{Var}(Z^{\prime}|Y)}-\frac{1}{ 2}\sqrt{\mathrm{Var}(Z^{\prime}|Y)}-\frac{1}{2e}\mathrm{Var}(\exp(\tau{Z^{ \top}}Z^{\prime}))+\log C.\]
Here, in the first inequality we use Hölder's inequality, and in the second and the third inequalities we apply Jensen's inequality. In the last inequality, we utilize the sharpened Jensen's inequality (Liao and Berg, 2019). \(\Box\)
As a direct result of Proposition 3 and Proposition 4, we obtain the claim of Theorem 2.
## Appendix C Further Comparison between Symmetric InfoNCE, InfoNCE, and SimCLR
Let us see additional differences between the symmetric InfoNCE and the original InfoNCE. To do so, let us recall the following property: due to the symmetrization, the degree of freedom of optimal critics is greatly reduced. Thus, from comparison between \(-(\hat{I}_{\text{nce}}+\hat{I}^{\prime}_{\text{nce}})/2\) and \(-\hat{I}_{\text{nce},q}\) of Eq.(2), the symmetrization is expected to stabilize the parameter learning; see Figure 3.
For the comparison, we decompose Eq.(5) into three terms like Eq.(6). This decomposition can be expressed as a variant of Eq.(6), where the last term of Eq.(6) is replaced by \(\frac{1}{|\mathcal{B}|}\sum_{x_{i}\in\mathcal{B}}\log\left(\sum_{x_{j}\in \mathcal{B}}e^{q(g_{\theta}(x_{i}),g_{\theta}(t_{j}(x_{j})))}\right)\). In the decomposition, following notations of Eq.(6), the corresponding positive and negative losses in Eq.(5) are denoted by \(L_{\mathrm{ps}}^{\prime}\) and \(L_{\mathrm{ng}}^{\prime}\), respectively. Moreover, let us re-write \(L_{\mathrm{ps}}+L_{\mathrm{ng}}\) and \(L_{\mathrm{ps}}^{\prime}+L_{\mathrm{ng}}^{\prime}\) as \(\frac{1}{|\mathcal{B}|}\sum_{x_{i}\in\mathcal{B}}\ell_{\mathrm{ps}}(x_{i})+ \ell_{\mathrm{ng}}(x_{i})\) and \(\frac{1}{|\mathcal{B}|}\sum_{x_{i}\in\mathcal{B}}\ell_{\mathrm{ps}}^{\prime} (x_{i})+\ell_{\mathrm{ng}}^{\prime}(x_{i})\), respectively. Here, we name both \(\ell_{\mathrm{ps}}(x_{i})+\ell_{\mathrm{ng}}(x_{i})\) and \(\ell_{\mathrm{ps}}^{\prime}(x_{i})+\ell_{\mathrm{ng}}^{\prime}(x_{i})\), _point-wise contrastive loss_. The four terms: \(\ell_{\mathrm{ps}}\), \(\ell_{\mathrm{ng}}\), \(\ell_{\mathrm{ps}}^{\prime}\), and \(\ell_{\mathrm{ng}}^{\prime}\) are defined as follows:
\[\underbrace{-q\left(g_{\theta}(x_{i}),g_{\theta}\left(t_{i}(x_{i})\right) \right)}_{\ell_{\mathrm{ps}}(x_{i})\text{: point-wise positive loss}}+\underbrace{\frac{1}{2}\log\left(\sum_{x_{j}\in\mathcal{B}}\sum_{x_{i^{ \prime}}\in\mathcal{B}}e^{a(i,i^{\prime},j)}\right)}_{\ell_{\mathrm{ng}}(x_{ i})\text{: point-wise negative loss}}, \tag{20}\]
\[\underbrace{-q\left(g_{\theta}(x_{i}),g_{\theta}\left(t_{i}(x_{i})\right) \right)}_{\ell_{\mathrm{ps}}^{\prime}(x_{i})\text{: point-wise positive loss}}+\underbrace{\log\left(\sum_{x_{j}\in\mathcal{B}}e^{q(g_{\theta}(x_{i}),g_{ \theta}(t_{j}(x_{j})))}\right)}_{\ell_{\mathrm{ng}}^{\prime}(x_{i})\text{: point-wise negative loss}}. \tag{21}\]
Suppose that values of the point-wise contrastive losses are small enough. In this case, we can see the difference on stability between \(-(\hat{I}_{\mathrm{nce}}+\hat{I}_{\mathrm{nce}}^{\prime})/2\) and \(-\hat{I}_{\mathrm{nce},q}\) by comparing
Figure 3: Illustration of positive / negative pairs of the proposed and baseline methods. From a) to f), the colors (blue, magenta, and yellow) mean different cluster labels. The light-colored manifolds in a) and d) express true clusters. In a) (reps. d)), the set of clusters composes Three-Blobs (resp. Two-Rings). A pair of the small circle and triangle symbols with the same color means a pair of a data point \(x\) and the transformed data point \(t(x)\) (i.e., so-called positive pair). The two data points connected by the red dash line (resp. blue straight or curved line) are enforced to be distant (resp. close) to each other. Especially in a) and d), the effect expressed by the red dash lines (resp. blue straight or curved line) are brought by making \(\ell_{\mathrm{ng}}(x_{i})\) (resp. \(\ell_{\mathrm{ps}}(x_{i})\)) of Eq.(20) to be small. In b) and e), the effect expressed by the red dash lines (resp. blue straight or curved line) are brought by making \(\ell_{\mathrm{ng}}^{\prime}(x_{i})\) (resp. \(\ell_{\mathrm{ps}}^{\prime}(x_{i})\)) of Eq.(21) to be small. In c) and f), the effect expressed by the red dash lines (resp. blue straight or curved line) are brought by making \(\ell_{\mathrm{ng}}^{\prime\prime}(x_{i})\) (resp. \(\ell_{\mathrm{ps}}^{\prime\prime}(x_{i})\)) of Eq.(22) to be small.
a) vs. b) and d) vs. e) in Figure 3. In this figure, it is observed that the empirical symmetric InfoNCE produces more stable contrastive effects than the empirical InfoNCE.
We also compare \(-(\hat{I}_{\text{nce}}+\hat{I}_{\text{nce}}^{\prime})/2\) and the loss of SimCLR introduced in Chen et al. (2020). To do so, consider the loss of SimCLR defined by a mini-batch \(\mathcal{B}\). Let \(L_{\text{SimCLR}}^{\mathcal{B}}\) denote the loss of SimCLR with the mini-batch \(\mathcal{B}\). Then, let \(\ell_{\text{ps}}^{\prime\prime}(x_{i})\) and \(\ell_{\text{ng}}^{\prime\prime}(x_{i})\) (\(x_{i}\in\mathcal{B}\)) denote the point-wise positive loss and point-wise negative loss by,
\[L_{\text{SimCLR}}^{\mathcal{B}}=\frac{1}{|\mathcal{B}|}\sum_{x_{i}\in\mathcal{ B}}\ell_{\text{ps}}^{\prime\prime}(x_{i})+\ell_{\text{ng}}^{\prime\prime}(x_{i}). \tag{22}\]
In this case, we can see the difference via a) vs. c) and d) vs. f) in Figure 3. Since the symmetric InfoNCE produces contrastive effects similar to those of SimCLR, it can be interpreted as a simplified variant of SimCLR. We note, however, that unlike our symmetric InfoNCE, SimCLR is not easy to analyze theoretically, since \(L_{\text{SimCLR}}^{\mathcal{B}}\) is designed based on heuristics.
## Appendix D Details of MIST
### Details of MIST Objective
To understand Eq.(11), let us see the effect brought by minimization of each term (\(R_{\text{vat}}\), \(-H(Y)\), \(H(Y|X)\), \(L_{\text{ps}}\), and \(L_{\text{ng}}\)) via Figure 4. In this figure, the left pictures a) and c) show the true clusters defined by the set of data points in the original space \(\mathbb{R}^{d}\). Each color expresses a distinct true label. The pictures b) and d) show the effect brought by minimization of each term in the representation space \(\mathbb{R}^{C}\). We here suppose that appropriate hyper-parameters are used for Eq.(11). In both b) and d), minimization of \(R_{\text{vat}}\) makes the model \(g_{\theta}\) acquire local smoothness; see Eq.(12). In addition, minimization of \(L_{\text{ps}}\) makes the model predict the same cluster labels for two topologically close data points. Note that, while minimization of \(L_{\text{ps}}\) defined by \(\mathcal{T}_{\text{e}}\) in Definition 3 brings an effect similar to that of \(R_{\text{vat}}\) (see b) in Figure 4), minimization of \(L_{\text{ps}}\) defined by \(\mathcal{T}_{\text{g}}\) in Definition 4 brings a clearly different effect from \(R_{\text{vat}}\). To understand this difference, observe that \(x_{i}\) and \(t_{i}(x_{i})\) in d) are forced to be close via minimization of \(L_{\text{ps}}\). Minimization of \(-H(Y)\) (i.e., forcing \(p_{\theta}(y)\in\Delta^{C-1}\) to be uniform) makes the model return a non-degenerate clustering result. Moreover, minimization of \(H(Y|X)\) makes the model return a one-hot vector, which helps keep the clusters distant from each other. Lastly, as discussed in Section 3.3, minimization of \(L_{\text{ng}}\) also makes the model return a non-degenerate clustering result.
Recently proposed methods similar to MIST include Van Gansbeke et al. (2020); Li et al. (2021); Dang et al. (2021). These three methods focus on the image domain (i.e., Scenario1 of Section 1.2), and all of them employ either InfoNCE or SimCLR to enhance the clustering performance. Hence, the scenario these related works focus on differs from Scenario2. Furthermore, none of the three studies provides a theoretical analysis of its proposed method.
**Input** Unlabeled dataset: \(\mathcal{D}=\{x_{i}\}_{i=1}^{n}\). Model of \(p(y|x)\): \(g_{\theta}(x)\). Hyperparameters: \(\mu,\gamma,\eta>0\). Generative process: \(\mathcal{T}\) with a conditional probability \(p(t|x)\), where \(x\in\mathbb{R}^{d}\) and \(t:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\). If \(\mathcal{D}\) is a complex topology dataset, employ \(\mathcal{T}_{\mathfrak{g}}\) of Definition 4; otherwise, employ \(\mathcal{T}_{\mathfrak{e}}\) of Definition 3. Mini-batch size \(m\). Number of epochs: \(n_{\mathrm{ep}}\).
**Output** Set of estimated cluster labels with \(\mathcal{D}\): \(\{\hat{y}_{i}\}_{i=1}^{n}\).
```
1:Initialize the trainable parameter \(\theta\).
2:for\(epoch=1,\cdots,n_{\mathrm{ep}}\)do
3:for\(l=0,1,\cdots,\lfloor\frac{n}{m}\rfloor\)do
4: Randomly pick up \(x_{i_{1}},\cdots,x_{i_{m}}\in\mathcal{D}\).
5: Compute \(x_{i_{k}}^{\prime}=t_{i_{k}}(x_{i_{k}}),\ k=1,\cdots,m\), where \(t_{i_{k}}\sim p(t|x_{i_{k}})\), and \(p(t|x_{i_{k}})\) is defined via either \(\mathcal{T}_{\mathfrak{e}}\) or \(\mathcal{T}_{\mathfrak{g}}\).
6: Update the parameter \(\theta\) by the SGD for the loss function in Eq.(11) computed using the mini-batch \(\mathcal{B}=\{x_{i_{k}}\}_{k=1}^{m}\) and \(\{x_{i_{k}}^{\prime}\}_{k=1}^{m}\).
7: Let \(\theta^{*}\) be the estimated parameter.
8:\(\hat{y}_{i}=\arg\max_{y\in\{1,\cdots,C\}}g_{\theta^{*}}^{y}(x_{i})\) for \(i=1,\cdots,n\).
```
**Algorithm 1**MIST
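As a rough illustration of Algorithm 1, the sketch below wires the pieces together in PyTorch. It assumes the helper sketches given elsewhere in this document (symmetric_infonce, vat_loss, marginal_entropy, conditional_entropy), a data loader that also yields dataset indices, a user-supplied sample_transform implementing \(\mathcal{T}_{\mathfrak{e}}\) or \(\mathcal{T}_{\mathfrak{g}}\), and a precomputed \((n, 1)\) tensor eps of VAT radii; all of these names are illustrative rather than the authors' implementation.

```
import torch

def train_mist(model, loader, sample_transform, eps, mu, eta, gamma, n_epochs, lr=0.002):
    """Sketch of Algorithm 1: Adam-based SGD on the MIST objective of Eq.(11)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_epochs):
        for x, idx in loader:                        # mini-batch B and its dataset indices
            x_aug = sample_transform(x, idx)         # x'_i = t_i(x_i) with t_i ~ p(t | x_i)
            p, p_aug = model(x), model(x_aug)
            # eta * H(Y) - H(Y|X) + gamma * (empirical symmetric InfoNCE)
            info = (eta * marginal_entropy(p) - conditional_entropy(p)
                    + gamma * symmetric_infonce(p, p_aug))
            loss = vat_loss(model, x, eps[idx]) - mu * info   # objective of Eq.(11)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

# After training, cluster labels are obtained as in line 8 of Algorithm 1:
# y_hat = model(X).argmax(dim=1)
```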
Figure 4: Intuitive illustration of MIST. Effect of each term in Eq.(11) is displayed for Three-Blobs and Two-Rings. In a) and c), the true clusters in the original space are shown, where colors mean cluster labels. In b) and d), effect of each term in Eq.(11) in the representation space is shown.
Figure 5: MIST architecture.
### Time and Memory Complexities with \(\mathcal{T}_{\mathfrak{e}}\) and \(\mathcal{T}_{\mathfrak{g}}\)
Suppose that we construct the non-approximated K-NN graph on \(\mathcal{D}\) using the Euclidean distance. Then, the time complexity with \(\mathcal{T}_{\mathfrak{e}}\) is \(O(dn^{2})\), where \(d\) is the dimension of a feature vector. The memory complexity is \(O(K_{0}n)\). As for \(\mathcal{T}_{\mathfrak{g}}\), the time and memory complexities are \(O\left((K_{0}+\log n)n^{2}\right)\) and \(O(n^{2})\), respectively (Moscovich et al., 2017). Note that if we construct an approximated K-NN graph on \(\mathcal{D}\) with the Euclidean distance, the time complexity with \(\mathcal{T}_{\mathfrak{e}}\) is reduced to \(O(dn\log n)\) (Wang et al., 2013; Zhang et al., 2013).
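For instance, the exact K-NN graph used by \(\mathcal{T}_{\mathfrak{e}}\) could be built with scikit-learn as sketched below; the random matrix stands in for the dataset \(\mathcal{D}\), and \(K_0=10\) is only an example value.

```
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.random.rand(1000, 50)   # placeholder for the (n, d) dataset D
K_0 = 10
# Exact K-NN graph with the Euclidean distance: O(d n^2) time; the stored
# sparse adjacency needs O(K_0 n) memory.
A = kneighbors_graph(X, n_neighbors=K_0, mode="connectivity",
                     metric="euclidean", include_self=False)
```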
## Appendix E Experiment Details
### Details of Datasets
We used Two-Moons2 and Two-Rings3 in scikit-learn. For the former dataset, we set \(0.05\) as the noise parameter. For the latter dataset we set \(0.01\) and \(0.35\) as noise and factor parameters respectively. For SVHN, STL, CIFAR10, CIFAR100, Omniglot and Reuters10K, we used the datasets on GitHub4. As for MNIST and 20news, Keras (Geron, 2019) was used. The summary of all the datasets is shown in Table 7. In the following, we review how features of the eight real-world datasets are obtained.
Footnote 2: [https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_moons.html](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_moons.html) [Last accessed 23-July-2022]
Footnote 3: [https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_circles.html](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_circles.html) [Last accessed 23-July-2022]
Footnote 4: [https://github.com/weihua916/imsat](https://github.com/weihua916/imsat) [Last accessed 23-July-2022]
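Before turning to the real-world datasets, note that the two synthetic datasets with the parameters stated above can be generated as follows; the sample size follows Table 7, and random_state is an illustrative choice.

```
from sklearn.datasets import make_moons, make_circles

# Two-Moons: noise = 0.05; Two-Rings: noise = 0.01, factor = 0.35 (n = 5000, cf. Table 7)
X_moons, y_moons = make_moons(n_samples=5000, noise=0.05, random_state=0)
X_rings, y_rings = make_circles(n_samples=5000, noise=0.01, factor=0.35, random_state=0)
```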
* MNIST: It is a hand-written digits classification dataset with \(28\times 28\) single-channel images. The value of each pixel is linearly normalized into \([0,1]\) and then flattened to a \(784\) dimensional feature vector.
* STL: It is a labelled subset of ImageNet (Jia Deng et al., 2009) with \(96\times 96\) colored images. We adopted features from Hu et al. (2017), which is extracted by pre-trained 50-layer ResNets.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & \#Points & \#Cluster & Dimension & \%Largest Cluster & \%Smallest Cluster \\ \hline Two-Moons & 5000 & 2 & 2 & 50\% & 50\% \\ Two-Rings & 5000 & 2 & 2 & 50\% & 50\% \\ \hline MNIST & 70000 & 10 & 784 & 11\% & 9\% \\ SVHN & 99289 & 10 & 960 & 19\% & 6\% \\ STL & 13000 & 10 & 2048 & 10\% & 10\% \\ CIFAR10 & 60000 & 10 & 2048 & 10\% & 10\% \\ CIFAR100 & 60000 & 100 & 2048 & 1\% & 1\% \\ Omniglot & 40000 & 100 & 441 & 1\% & 1\% \\
20news & 18040 & 20 & 2000 & 5\% & 3\% \\ Reuters10K & 10000 & 4 & 2000 & 43\% & 8\% \\ \hline \hline \end{tabular}
\end{table}
Table 7: Summary of the ten datasets used in our experiments.
* CIFAR10: It is a dataset with ten clusters, \(32\times 32\) colored images. We adopted features from Hu et al. (2017), which is extracted by pre-trained 50-layer ResNets.
* CIFAR100: It is a dataset with one hundred clusters, \(32\times 32\) colored images. We adopted features from Hu et al. (2017), which is extracted by pretrained 50-layer ResNets.
* Omniglot: It is a hand-written character recognition dataset. We adopted the processing results from Hu et al. (2017), which is an one hundred clusters dataset with twenty unique data points per class. Twenty times affine augmentations were applied as in Hu et al. (2017), so there are \(100\times 20\times 20=40000\) images available. Images were sized \(21\times 21\) single-channel, linearly normalized into \([0,1]\) and flattened into feature vectors.
* 20news: It is a dataset of news documents across twenty newsgroups. We adopted the processing code from Hu et al. (2017), which loads the data from the python package scikit-learn (Geron, 2019) and processes it into tf-idf features with 'english' stopwords.
* SVHN: It is a dataset of street-view house numbers. Following Hu et al. (2017), we used the \(960\)-dimensional GIST features (Oliva and Torralba, 2001) they extracted.
* Reuters10K: It is a dataset of English news stories. We adopted the processed data from Hu et al. (2017). It contains four categories as labels: corporate/industrial, government/social, markets, and economics. Ten thousand documents were randomly sampled and processed without stop words; tf-idf features were used as in Hu et al. (2017).
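The following is a minimal sketch of the two kinds of preprocessing described in this list, namely pixel normalization and flattening for MNIST and tf-idf features with English stop words for 20news; it does not reproduce the exact scripts of Hu et al. (2017), the 2000-feature cap is taken from the dimension in Table 7, and the document count it yields differs slightly from the 18040 reported there, so additional filtering was presumably applied in the original pipeline.

```python
import numpy as np
from tensorflow.keras.datasets import mnist
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer

# MNIST: normalize pixels into [0, 1] and flatten to 784-dimensional vectors.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_all = np.concatenate([x_train, x_test]).astype("float32") / 255.0
x_all = x_all.reshape(len(x_all), 28 * 28)          # (70000, 784)

# 20news: tf-idf features with English stop words, capped at 2000 dimensions.
# Note: this yields 18846 documents; the 18040 points in Table 7 suggest that
# additional filtering (e.g., dropping empty documents) was applied originally.
news = fetch_20newsgroups(subset="all")
tfidf = TfidfVectorizer(stop_words="english", max_features=2000)
x_news = tfidf.fit_transform(news.data)             # sparse matrix, 2000 columns

print(x_all.shape, x_news.shape)
```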
### Complex and Non-Complex Topology Datasets
In order to characterize each dataset from a geometric point of view, we performed experiments with the K-means algorithm on these ten datasets; see the top row of Table 2. Here, the K-means algorithm uses the Euclidean distance to measure how far two points are from each other; see Chapter 22 of Shalev-Shwartz and Ben-David (2013) for a general objective function of the K-means algorithm. Hence, if the Top-1 accuracy with the K-means algorithm is low, the dataset can have a complex structure on which the K-means algorithm fails to group the data points into meaningful clusters. Utilizing the results with the K-means algorithm (see the second row of Table 2), we define the (non-)complex topology of a dataset as follows: 1) we say a dataset has non-complex topology if the Top-1 accuracy (%) of the K-means algorithm on it is high, and 2) we say it has complex topology otherwise. According to these definitions, we classify the ten datasets into two categories; the two synthetic datasets, Two-Moons and Two-Rings, are of complex topology, and the others are of non-complex topology.
Note that, strictly speaking, it is difficult to provide a rigorous definition of (non-)complex topology for a real-world dataset. Instead, we state a definition inspired by our empirical observations with the K-means algorithm for the ten different datasets.
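A minimal sketch of the Top-1 clustering-accuracy computation used here to characterize the datasets: run K-means with the Euclidean distance and match predicted clusters to true labels with the Hungarian algorithm. The helper below is an illustrative assumption, not the evaluation code used for Table 2.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans
from sklearn.datasets import make_moons

def top1_accuracy(y_true, y_pred):
    """Best one-to-one matching between predicted clusters and true labels."""
    n_cls = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n_cls, n_cls), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[t, p] += 1
    rows, cols = linear_sum_assignment(-cost)  # maximize the matched counts
    return cost[rows, cols].sum() / len(y_true)

# Two-Moons with the noise value from Appendix E.1; K-means with Euclidean distance.
X, y = make_moons(n_samples=5000, noise=0.05, random_state=0)
pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(f"K-means Top-1 accuracy: {top1_accuracy(y, pred):.3f}")
```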
### Implementation Details with Compared Methods
* K-means: sklearn.cluster.KMeans from scikit-learn.
* Spectral clustering: sklearn.cluster.SpectralClustering from scikit-learn with a 'nearest neighbors' affinity graph and the 'amg' eigen solver.
* GMMC: sklearn.mixture.GaussianMixture from scikit-learn with diagonal covariance matrices.
* DEC5: The Keras implementation of Xie et al. (2016) is used.
* SpectralNet6: We used the version at commit _ce99307_ with tensorflow 1.15, keras 2.1.6, and Ubuntu 18.04, since we found that this is the only configuration that reproduces the paper's results in our environment. For real-world datasets, we used the 10-dimensional VaDE representation obtained in this work (see the implementation details of VaDE) as input to SpectralNet. Ten neighbors were used with approximated nearest-neighbor search. For the toy datasets, we used the raw 2-dimensional input with the official hyper-parameter setup for the "CC" dataset in SpectralNet. Footnote 5: [https://github.com/XifengGuo/DEC-keras](https://github.com/XifengGuo/DEC-keras) [Last accessed 23-July-2022]
* VaDE7: We added the constraint that the Gaussian-mixture component weights satisfy \(\pi>0\) to avoid numerical instabilities. We did not use the provided pretrained weights since we could not reproduce the pretraining process for all datasets. Footnote 6: [https://github.com/KlugerLab/SpectralNet](https://github.com/KlugerLab/SpectralNet) [Last accessed 23-July-2022]
* IMSAT8: Given an unlabeled dataset \(\mathcal{D}\) and the number of clusters, we train a clustering model of IMSAT by using \(\mathcal{D}\) via Eq.(15). In addition, we define the adaptive radius \(\epsilon_{i}\) in VAT in the same way as in MIST; see also Appendix E.5. Moreover, for the synthetic datasets, we set \((\lambda_{1},\lambda_{2})\) of Eq.(15) to \((0.1,0.5)\) and \(\xi\) in VAT to \(0.1\). For the real-world datasets, we set \((\lambda_{1},\lambda_{2})\) of Eq.(15) to \((0.1,4)\) and \(\xi\) in VAT to \(10\). Footnote 7: [https://github.com/GuHongyang/VaDE-pytorch](https://github.com/GuHongyang/VaDE-pytorch) [Last accessed 23-July-2022]
* IIC9: Since we consider Scenario 2, we cannot define the transformation function via domain-specific knowledge. Therefore, we define it via \(\mathcal{T}_{\epsilon}\) of Definition 3 for all ten datasets as follows. For the synthetic datasets and the image datasets, \(K_{0}=10\) is used. For the text datasets, \(K_{0}=100\) is used. These values of \(K_{0}\) were selected by hyper-parameter tuning. Footnote 8: [https://github.com/betairylia/IMSAT_torch](https://github.com/betairylia/IMSAT_torch) [Last accessed 23-July-2022]
* CatGAN: We adopted the implementation from here10 and moved it to GPU. Since the original CatGAN experiments used CNNs and cannot be applied to general-purpose datasets, we substituted the CNNs in both the generator and the discriminator with 4-layer MLPs. Footnote 10: [https://github.com/xinario/catgan_pytorch](https://github.com/xinario/catgan_pytorch) [Last accessed 23-July-2022]
* SELA: We used the official implementation11 with a single head and known cluster numbers. We replaced the convolutional network in the original work with a simple MLP identical to our MIST implementation, as we focus on general-purpose unsupervised learning instead of images. We also disabled the data-augmentation steps presented in the original work of SELA. Footnote 11: [https://github.com/yukimasano/self-label](https://github.com/yukimasano/self-label) [Last accessed 23-July-2022]
* SCAN: We adopted the loss-computation part from the official implementation12 and used MIST's framework to implement SCAN. Since we focus on generic datasets without specific domain knowledge, data augmentations are removed and SCAN learns solely on nearest neighbors. The same input data (and feature-extraction steps) as MIST are used for our SCAN implementation. Footnote 12: [https://github.com/wvangansbeke/Unsupervised-Classification](https://github.com/wvangansbeke/Unsupervised-Classification) [Last accessed 19-July-2022]
### Two-Dimensional Visualization
Panels a)\(\sim\)h) in Figure 1 were obtained by the following procedure. For the two-dimensional visualization of the real-world datasets, we employ UMAP (McInnes et al., 2018), using the public implementation13, where we set "n_neighbors" to ten and "n_components" to two. These two UMAP parameters are fixed for all visualizations of the real-world datasets.
Footnote 13: [https://pypi.org/project/umap-learn/](https://pypi.org/project/umap-learn/) [Last accessed 23-July-2022]
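A minimal sketch of the UMAP embedding step described above, using the umap-learn package with the stated parameter values; the MNIST loading follows Appendix E.1, and the variable names are illustrative.

```python
import numpy as np
import umap
from tensorflow.keras.datasets import mnist

# Flattened MNIST features, normalized into [0, 1] as described in Appendix E.1.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
X = np.concatenate([x_train, x_test]).reshape(-1, 784).astype("float32") / 255.0
y = np.concatenate([y_train, y_test])

# UMAP with the parameters stated above: ten neighbors, two output dimensions.
reducer = umap.UMAP(n_neighbors=10, n_components=2, random_state=0)
embedding = reducer.fit_transform(X)  # shape (70000, 2); the true labels y are
                                      # then used only to color the points.
print(embedding.shape)
```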
* a): Input the MNIST dataset \(\mathcal{D}_{\mathrm{mnist}}=\{x_{i}\}_{i=1}^{n}\), where \(x_{i}\in\mathbb{R}^{784}\) and \(n=70000\), to UMAP to obtain the two-dimensional vectors of \(\mathcal{D}_{\mathrm{mnist}}\). Then, assign the true labels to the vectors.
* c): First, using \(\mathcal{D}_{\mathrm{mnist}}\), train IMSAT via Eq.(15) with \((\lambda_{1},\lambda_{2})=(0.1,4)\). For VAT in IMSAT, we set \(\xi\) to ten and define the adaptive radius \(\epsilon_{i}\) in the same way as in MIST; see also Appendix E.5. Input \(\mathcal{D}_{\mathrm{mnist}}\) to the trained clustering model whose last layer (a softmax function) is removed, and obtain the output, whose dimension is \(C=10\). Then feed the output to UMAP to obtain the two-dimensional vectors, and assign the true labels to them.
* e): First, using \(\mathcal{D}_{\mathrm{mnist}}\), train a clustering MLP of SpectralNet with SpectralNet's official hyper-parameter setup. Input \(\mathcal{D}_{\mathrm{mnist}}\) to the trained clustering model whose last layer (a softmax function) is removed, and obtain the output, whose dimension is \(C=10\). Then feed the output to UMAP to obtain the two-dimensional vectors, and assign the true labels to them.
* g): First, using \(\mathcal{D}_{\mathrm{mnist}}\), train the clustering neural network with MIST, whose hyper-parameters are defined in Appendix E.5. Input \(\mathcal{D}_{\mathrm{mnist}}\) to the trained clustering model whose last layer (the softmax) is removed, and obtain the output. Then feed the output to UMAP to obtain the two-dimensional vectors, and assign the true labels to them.
* b), d), f), h): Since a data point in the Two-Rings dataset \(\mathcal{D}_{\text{two\_rings}}\) already lies in two-dimensional space, we simply visualize the data-point locations with their label information in panel b). For details of \(\mathcal{D}_{\text{two\_rings}}\), see Appendix E.1. For panels d), f), and h), we first predict the cluster labels with the corresponding clustering method and then visualize the data-point locations with their predicted cluster labels.
Figure 6: Two-dimensional visualizations of the original datasets and of their representations by MIST. The visualization of an original dataset is obtained in the same manner as panel a) of Figure 1, while that of the MIST representation is obtained in the same manner as panel g) of Figure 1.
In Figure 6, we additionally show the two-dimensional visualization results for all eight real-world datasets. In this figure, the visualizations in the first row were obtained in the same manner as panel a) of Figure 1, and the visualizations (by MIST) in the second row were obtained in the same manner as panel g) of Figure 1.
### Hyper-Parameter Tuning
Table 8 shows all hyper-parameters related to the MIST algorithm. Throughout the numerical experiments of Section 4, we set \(m=250\) and \(n_{\text{ep}}=50\). In addition, following Hu et al. (2017), we fix \(\epsilon_{i}\) of VAT to \(\epsilon_{i}=0.25\times\left\|x_{i}-x_{i}^{(10)}\right\|_{2}\), where \(x_{i}\in\mathcal{D}\) and \(x_{i}^{(10)}\) is the tenth nearest neighbor of \(x_{i}\) in \(\mathcal{D}\) under the Euclidean metric. Note that for the synthetic datasets (resp. real-world datasets), the generative process \(\mathcal{T}_{\mathfrak{g}}\) of Definition 4 (resp. \(\mathcal{T}_{\epsilon}\) of Definition 3) is employed.
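A minimal sketch of the adaptive VAT radius defined above, i.e., \(\epsilon_{i}=0.25\times\|x_{i}-x_{i}^{(10)}\|_{2}\) with the tenth nearest neighbor taken under the Euclidean metric; the use of scikit-learn's NearestNeighbors and the toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def adaptive_vat_radius(X, k=10, scale=0.25):
    """epsilon_i = scale * ||x_i - x_i^(k)||_2, with x_i^(k) the k-th nearest neighbor."""
    # k + 1 neighbors are queried because the closest neighbor of x_i is x_i itself.
    nn = NearestNeighbors(n_neighbors=k + 1, metric="euclidean").fit(X)
    dists, _ = nn.kneighbors(X)
    return scale * dists[:, k]

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 784))   # stand-in for a flattened dataset D
eps = adaptive_vat_radius(X)       # one radius per data point
print(eps.shape, float(eps.mean()))
```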
In the numerical experiments of Table 2, the other hyper-parameters are tuned within the corresponding candidates shown in Table 9. These candidates were decided by the following procedure:
* \((\mu,\eta,\gamma)\): Since MIST is based on IMSAT, following Hu et al. (2017), for the real-world datasets we manually searched for efficient candidates inside the region containing \(\mu\eta=0.4\) and \(\mu=0.1\), i.e., candidates which work well for both MIST and MIST with \(\hat{I}_{\mathrm{nce}}\) of Table 2. Note that, in the IMSAT
\begin{table}
\begin{tabular}{c c} \hline \hline Hyper-Parameters & Reference \\ \hline \((\mu,\eta,\gamma)\) & MIST objective of Eq.(11) \\ \((\alpha,\tau)\) & Critic function of Eq.(3) \\ \((K_{0},\beta)\) & Definition 3 and 4 \\ \((\xi,\epsilon_{i})\) & VAT of Eq.(13) \\ \hline \((m,n_{\text{ep}})\) & Algorithm 1 \\ \hline \hline \end{tabular}
\end{table}
Table 8: All hyper-parameters related to MIST of Algorithm 1. In the first (resp. second) column, the hyper-parameters (resp. reference) are shown.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Hyper-Parameters & Real-world \& Image & Real-world \& Text & Synthetic \\ \hline \((\mu,\eta,\gamma)\) & \(\{(0.045,5,1.5),(0.045,6,1.5),(0.05,5,1.5),(0.04,6,1.5)\}\) & \(\leftarrow\) & \(\{(0.1,15,10),(0.1,15.5,10),(0.1,16,10)\}\) \\ \(\alpha\) & \(\{0,1,2\}\) & \(\leftarrow\) & \(\leftarrow\) \\ \(\tau\) & \(\{0.01,0.05,0.1,1,10\}\) & \(\leftarrow\) & \(\leftarrow\) \\ \((K_{0},\beta)\) & \(\{(5,0),(7,0),(10,0),(15,0)\}\) & \(\{(50j,2/3),(50j,4/5)\mid j\in\{1,..,4\}\}\) & \(\{(15,j/10)\mid j\in\{0,..,10\}\}\) \\ \(\xi\) & \(\{0.1,1,10,100\}\) & \(\leftarrow\) & \(\leftarrow\) \\ \hline \hline \end{tabular}
\end{table}
Table 9: Candidates for the hyper-parameters of MIST and MIST with \(\hat{I}_{\mathrm{nce}}\) in Table 2. The symbol \(\leftarrow\) indicates the same value as the cell to its left. Real-world \& Image means MNIST, SVHN, STL, CIFAR10, CIFAR100, and Omniglot. Real-world \& Text means 20news and Reuters10K. Synthetic means both Two-Moons and Two-Rings.
objective of Eq.(15), the authors set \(\mu=0.1\) and \(\eta=4\) in their official code. For the synthetic datasets, the candidates were decided via a purely manual search.
* \((\alpha,\tau)\) and \((K_{0},\beta)\): We essentially conducted a manual search for candidates that can be efficient for both MIST and MIST with \(\hat{I}_{\rm nce}\). When selecting the candidates of \(K_{0}\), we follow the same strategy as Shaham et al. (2018).
* \(\xi\): We chose values around ten, since \(\xi\) is set to ten in the official IMSAT code.
As the criterion for the hyper-parameter tuning of MIST and MIST with \(\hat{I}_{\rm nce}\), we employed the following: for each dataset type (either real-world or synthetic), the most efficient \((\mu,\eta,\gamma,\alpha,\tau,\xi)\) should be found, while \((K_{0},\beta)\) can be adapted to each of the ten datasets. To find the most efficient setting, we used the tuning method described in Appendix G of Hu et al. (2017), where the set of hyper-parameters that attains the highest average clustering accuracy over several datasets is selected. The tuning result of MIST is shown in Table 10.
Moreover, for all combinations except (\(\circled8\), \(\circled8\)) in Table 3, we first manually selected the candidates of the hyper-parameters. Then, for each combination, we conducted hyper-parameter tuning with the same criterion as the one employed for tuning the hyper-parameters of MIST in Table 2. The tuning results are shown in Table 11.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & MNIST, SVHN, STL, CIFAR100, Omniglot & CIFAR10 & 20news & Reuters10K & Two-Moons & Two-Rings \\ \hline \((\mu,\eta,\gamma)\) & \((0.045,5,1.5)\) & \(\leftarrow\) & \(\leftarrow\) & \(\leftarrow\) & \((0.1,15.5,10)\) & \(\leftarrow\) \\ \((\alpha,\tau)\) & \((1,0.05)\) & \(\leftarrow\) & \(\leftarrow\) & \(\leftarrow\) & \(\leftarrow\) & \(\leftarrow\) \\ \((K_{0},\beta)\) & \((7,0)\) & \((15,0)\) & \((200,4/5)\) & \((50,4/5)\) & \((15,0)\) & \((15,0.6)\) \\ \(\xi\) & 10 & \(\leftarrow\) & \(\leftarrow\) & \(\leftarrow\) & 0.1 & \(\leftarrow\) \\ \hline \hline \end{tabular}
\end{table}
Table 10: Selected hyper-parameters with MIST of Table 2. The symbol of \(\leftarrow\) indicates the same value to the cell in the left.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & (\(\circled8\)) & (\(\circled8\)), \(\circled8\)) & (\(\circled8\)) & (\(\circled8\)), \(\circled8)) & (\(\circled8\)), \(\circled8)) & Two-Rings \\ \hline \(\mu\) & \(-\) & \(\leftarrow\) & 0.045 & \(-\) & 0.1 & 0.1 \\ \(\eta\) & \(-\) & 1 & \(-\) & 1 & 4 & 15.5 \\ \(\gamma\) & \(-\) & 10 & 1.5 & 10 & \(-\) & 10 \\ \(\tau\) & 1.0 & \(\leftarrow\) & \(\leftarrow\) & 0.1 & \(-\) & 1.0 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Selected value for each hyper-parameter with experiments related to Table 3. From the second to sixth columns, hyper-parameter values selected for real-world datasets are shown. The symbol of ”\(-\)” means that the hyper-parameter is not needed. The symbol of \(\leftarrow\) indicates the same value to the cell in the left. In the last column, hyper-parameter values to define six combinations are shown. As for \((\alpha,K_{0},\beta,\xi)\), the same values shown in Table 10 are employed.
|
2308.01553
|
Uncertainty analysis for accurate ground truth trajectories with robotic
total stations
|
In the context of robotics, accurate ground truth positioning is essential
for the development of Simultaneous Localization and Mapping (SLAM) and control
algorithms. Robotic Total Stations (RTSs) provide accurate and precise
reference positions in different types of outdoor environments, especially when
compared to the limited accuracy of Global Navigation Satellite System (GNSS)
in cluttered areas. Three RTSs give the possibility to obtain the six-Degrees
Of Freedom (DOF) reference pose of a robotic platform. However, the uncertainty
of every pose is rarely computed for trajectory evaluation. As evaluation
algorithms are getting increasingly precise, it becomes crucial to take into
account this uncertainty. We propose a method to compute this six-DOF
uncertainty from the fusion of three RTSs based on Monte Carlo (MC) methods.
This solution relies on point-to-point minimization to propagate the noise of
RTSs on the pose of the robotic platform. Five main noise sources are
identified to model this uncertainty: noise inherent to the instrument, tilt
noise, atmospheric factors, time synchronization noise, and extrinsic
calibration noise. Based on extensive experimental work, we compare the impact
of each noise source on the prism uncertainty and the final estimated pose.
Tested on more than 50 km of trajectories, our comparison highlighted the
importance of the calibration noise and the measurement distance, which should
be ideally under 75 m. Moreover, it has been noted that the uncertainty on the
pose of the robot is not prominently affected by one particular noise source,
compared to the others.
|
Maxime Vaidis, William Dubois, Effie Daum, Damien LaRocque, François Pomerleau
|
2023-08-03T06:24:58Z
|
http://arxiv.org/abs/2308.01553v1
|
# Uncertainty analysis for accurate ground truth trajectories with robotic total stations
###### Abstract
In the context of robotics, accurate ground truth positioning is essential for the development of Simultaneous Localization and Mapping (SLAM) and control algorithms. Robotic Total Stations (RTSs) provide accurate and precise reference positions in different types of outdoor environments, especially when compared to the limited accuracy of Global Navigation Satellite System (GNSS) in cluttered areas. Three RTSs give the possibility to obtain the six-Degrees Of Freedom (DOF) reference pose of a robotic platform. However, the uncertainty of every pose is rarely computed for trajectory evaluation. As evaluation algorithms are getting increasingly precise, it becomes crucial to take into account this uncertainty. We propose a method to compute this six-DOF uncertainty from the fusion of three RTSs based on Monte Carlo (MC) methods. This solution relies on point-to-point minimization to propagate the noise of RTSs on the pose of the robotic platform. Five main noise sources are identified to model this uncertainty: noise inherent to the instrument, tilt noise, atmospheric factors, time synchronization noise, and extrinsic calibration noise. Based on extensive experimental work, we compare the impact of each noise source on the prism uncertainty and the final estimated pose. Tested on more than 50 km of trajectories, our comparison highlighted the importance of the calibration noise and the measurement distance, which should ideally be under 75 m. Moreover, it has been noted that the uncertainty on the pose of the robot is not prominently affected by one particular noise source, compared to the others.
## I Introduction
In mobile robotics, the current development of mapping and control algorithms heavily relies on datasets [1]. The performance of these algorithms is evaluated by comparing the different poses with a reference trajectory. In outdoor environments, Robotic Total Stations (RTSs) provide the highest accuracy by measuring reference trajectories with uncertainty on the position in the range of millimeters [2]. Coming from the field of surveying, a _total station_ is an optical-based measurement instrument that can be precisely aimed at a given prismatic retro-reflector (i.e., simply called _prism_ in the remainder of this article). A total station is _robotic_ when it can automatically track a prism, while this prism is in motion. The position of the prism is computed in the local coordinate system of the RTS, according to the horizontal and vertical angles, along with the range between the RTS and the measured prism. With three prisms or more attached to a robotic platform, it is possible to compute its six-Degrees Of Freedom (DOF) pose through manual static measurement [3] or through the use of multiple RTSs continuously tracking three _active prisms_ rigidly mounted on the same platform [4]. Active prisms are recently available off the shelf and provide a unique light signature for automatic target identification by RTSs. Each prism is tracked by its own assigned RTS, as shown in Figure 1. The distance between a RTS and its prism is determined by Electronic optical Distance Measurement (EDM), which is greatly impacted by weather conditions [5].
Yet, to be useful in autonomous navigation research, evaluation protocols need to be used in a variety of conditions and environments, such as snowfalls [6], which can increase the uncertainty of ground truth trajectories. In addition to measurement noise inherent to a single RTS, using multiple RTSs involves time synchronization and extrinsic calibration to fuse the data of all RTSs in a common frame [7]. Both this synchronization and calibration carry noises and uncertainties that must be studied. Uncertainty analysis is not usually part of the Simultaneous Localization and Mapping (SLAM) algorithms evaluation pipelines, as the most common metric for comparing trajectories is the Euclidean distance. However, as current algorithms are developed with the intention to be accurate, an evaluation that does not consider uncertainties will lead to biased results. Moreover, uncertainty estimations for ground truth trajectories are missing in state-of-the-art outdoor SLAM datasets. Two factors could explain this: **1)** Since ground truth trajectory noises are considered negligible, uncertainty is not computed. **2)** It can be complex to model the uncertainty of a reference system, such as Global Navigation Satellite Systems (GNSSs). Uncertainty models were developed for RTSs [8], but they were never
Fig. 1: Setup used to record a reference trajectory during a snowstorm. Three RTSs are each tracking a specific active prism, all mounted on a Clearpath Warthog robotic platform.
used for trajectory evaluations on mobile robots driven in outdoor environments.
This work is based on our previous research to develop an RTS setup for trajectory evaluations [4, 7]. In this paper, we propose a method to model RTS uncertainties with the objective to better compare six-DOF trajectories. For that purpose, we carried out a Monte Carlo (MC) method that includes five different sources of uncertainty relative to multiple RTS measurements in outdoor environments. These uncertainties are then interpolated over time by a Gaussian Process (GP) and propagated to the reference pose of a robot by using another MC method. A detailed qualitative analysis of the different noise sources is presented, as well as their impact on the final resulting six-DOF trajectories. The experimental data used to compute the results was gathered during a whole year of deployments, with over 50 km of recorded trajectories in different weather conditions and environments. Both our source code and our dataset are freely available in our RTS_Project repository.1
Footnote 1: [https://github.com/norlab-ulaval/RTS_project](https://github.com/norlab-ulaval/RTS_project)
## II Related work
We first describe the current uses of RTSs to obtain reference trajectories for mobile robotics. Then, we present different studies of RTS-related uncertainties and we expose different methods that are used in the state of the art to model and propagate uncertainties. Finally, we address the use of these methods in mobile robotics and we discuss their properties.
RTS-based positioning systems are quite common in mobile robotics. The number of RTSs in an experimental setup is determined by the number of prisms that can be handled by the robotic platform, as well as the number of DOFs in the desired resulting trajectory. A single RTS was used to acquire the three-DOF position of a prism mounted on different robotic platforms, such as a planetary rover [9], a tracked robot [10], an unmanned surface vessel [11], a skid steered robot [12], and a Unmanned Aerial Vehicle (UAV) [13]. It is possible to reduce the uncertainty of the reference position by adding a second RTS to track the same prism. Reitbauer _et al._[14] have used a second RTS to follow two different prisms on a compost turner, enabling the measurement of four DOFs on the platform (i.e., the position and the yaw angle). To obtain the full pose reference of a static robotic platform, it is possible to manually measure three prisms with a single RTS [3]. Furthermore, this setup provides a quantitative way to analyze uncertainty through inter-prism distances. These distances can be compared with values that were accurately determined in a controlled environment. For a moving platform, Vaidis _et al._[4] developed the first method to compute and interpolate six-DOF poses of a robot, with the measurements of three RTSs. In this paper, we build on this method by providing a continuous six-DOF pose uncertainty model that relies on RTSs measurements.
Many noise sources can be modeled and used to estimate the uncertainty of a RTS's measurement. Each noise model has an impact on different parts of a RTS processing pipeline, from the raw measurements of the RTS to the estimated Cartesian position of a prism. Most uncertainty sources are directly related to the devices (measuring instruments and prisms). Distance and angle uncertainties can be estimated with manufacturer's specifications, or with experimental results, both obtained in laboratories. Outside these controlled environments, the atmospheric factors (e.g., temperature, pressure, and humidity) need to be considered, due to EDM sensitivity [5]. As such, a variation of 1 \({}^{\circ}\)C can lead to an error of 0.2 mm on a measured distance of 200 m [15]. The noise of a Robotic Total Station's electronic compensator can be estimated through manufacturer specifications, yet the associated uncertainty is often disregarded when conducting precise surveying [16]. Moreover, time synchronization errors and uncertainties can occur in the communication between a RTS and an external controller or data acquisition system [17]. When using multiple RTSs, the accuracy of the extrinsic calibration between all RTSs influences the accuracy of the estimated prism positions. Vaidis _et al._[7] implemented a pipeline to filter outliers on RTS data and proposed an extrinsic calibration method that corrects the error on the poses, yet uncertainty remained. Finally, a moving target creates some additional uncertainties that are difficult to quantify. This noise comes from the limitations of the RTS's angular tracking system, especially at high prism speeds and accelerations [18]. When using multiple prisms, the inter-prism distances can be used to filter imprecise results with a threshold on prism speeds [4]. This paper examines all these sources of uncertainty, to model the global uncertainty of each RTS measurement under different atmospheric conditions.
There are two main ways to model the total uncertainty of a RTS, based on the aforementioned sources of uncertainty: either with an approach that is based on the _Guide to the expression of Uncertainty in Measurement_ (GUM), or with MC simulations. The GUM [19] divides uncertainties into two types: those obtained from statistical analysis on a series of observations (defined as _Type A_), and those expressed by average manufacturer-specified or user-defined values (defined as _Type B_). With the GUM method, an uncertainty budget of a RTS allows one to express the total uncertainty of this RTS as an isotropic noise [20]. This method works well for noise sources that can be linearized, but it can be complex to implement for non-linear noise, such as weather conditions. For this reason, MC simulations are widely used to determine the uncertainty of RTSs, whether for simple models [21], or very complex models taking into account non-linear noise, such as atmospheric factors [8]. Moreover, the resulting uncertainty is modeled as anisotropic. Generally, a MC method relies on \(10^{3}\) to \(10^{5}\) samples to generate coherent results, making this method computationally expensive for large datasets [22]. Both of these methods give an estimate of a prism's position uncertainty, yet this alone is unsuitable for mobile robotics unless the uncertainty is propagated into the reference frame of the robotic platform. Several algorithms exist to propagate uncertainty in a system. An Unscented Kalman filter can
be used to estimate the resulting noise [23]. The Unscented transform method has been carried out to tackle computational resources issues, with as accurate results for uncertainty estimation as with MC [24]. This method is based on the key idea that it should be easier to approximate a probability distribution than to approximate an arbitrary nonlinear function. Yet, the Unscented transform is only applied to points of a specific covariance distribution at a time. Since three-RTSs positioning systems give three different covariance distributions, other methods that can process multiple distributions are more appropriate. Other studies have used Lie Algebra to link and interpolate the pose of a system to its uncertainty. Barfoot _et al._[25] formalized ways to work with noise in \(\mathfrak{se}(3)\) and applied them to propagate the noise from a camera over a trajectory. Anderson _et al._[26] developed a library called Simultaneous Trajectory Estimation And Mapping (STEAM) that uses GPs to interpolate the covariance matrix of a system for nonlinear optimization problems with continuous-time components. In this article, we combine the research of Ulrich [8] and Anderson _et al._[26] to propagate the uncertainty to the pose of a robotic platform.
## III Theory
We first present our approach for modeling uncertainty of RTS measurements with the MC method. Then, we show how we interpolate data with GPs for prism measurement uncertainties. Next, we describe how we use another Monte Carlo method to propagate uncertainty from interpolated prism measurements to six-DOF vehicle poses.
### _Robotic Total Station noise models_
As highlighted in Section II, the uncertainty on the measurement from a RTS is impacted by different noise sources. Each noise source can be defined with a stochastic model, hence the possibility to use a MC method to estimate the resulting combination of all sources of uncertainty on a single RTS measurement. We defined a trajectory \(\mathcal{P}^{i}\) in the frame \(\mathcal{F}^{i}\), where \(i\in\{1,2,3\}\) is the index of a single RTS, as a set of normalized homogeneous prism coordinate measurements \(\{\mathbf{p}_{1}^{i},\ldots,\mathbf{p}_{n_{i}}^{i}\}\) such that \(\mathbf{p}_{k}^{i}\) is the \(k^{\text{th}}\) measurement of \(\mathcal{P}^{i}\) with \(k\in\{1,n_{i}\}\) and \(n_{i}\in\mathbb{N}^{\text{*}}\) is the number of measurements for the \(i\)-th RTS. By merging all different kinds of noises with a MC method, we are able to determine the spatial covariance \(\mathbf{\Sigma}_{k}^{i}\) around each measurement \(\mathbf{p}_{k}^{i}\). In the following paragraphs, we define five uncertainty models with their parameters that were used to describe the noise encountered during our deployments with multiple RTSs.
**RTS instrument noises -** These noise sources are directly coming from multiple errors in the instrument calibration, namely the vertical collimation error, the centering error, the horizontal collimation error, and the eccentricity error. They alter the raw measurements given by the RTS, namely the distance \(\rho\), and both the horizontal and vertical angular values, \(\phi\) and \(\theta\), which are used to compute prism coordinates. Their standard deviations \(\sigma_{\rho}\), \(\sigma_{\phi}\) and \(\sigma_{\theta}\), respectively for the distance, horizontal and vertical deviation, are given by manufacturers in the instrument specifications. Then, errors on measurements can be represented by a zero-mean normal distribution, respectively \(\epsilon_{\rho}\sim\mathcal{N}(0,\sigma_{\rho})\), \(\epsilon_{\phi}\sim\mathcal{N}(0,\sigma_{\phi})\) and \(\epsilon_{\theta}\sim\mathcal{N}(0,\sigma_{\theta})\).
**Tilt compensator -** Modern RTS are equipped with an electronic angular compensator that allows the instrument to correct pitch and roll values, with its estimated gravity vector. This compensator has an inherent noise \(\epsilon_{tilt}\) represented by a zero-mean normal distribution \(\epsilon_{tilt}\sim\mathcal{N}(0,\sigma_{tilt})\) as described by Lienhart _et al._[16].
**Atmospheric factors and weather -** Since distance measurements are taken with EDM, they are subject to the influence of atmospheric factors, specifically temperature \(T\), pressure \(P\), and humidity \(H\)[8]. These atmospheric factors are represented by uniform distributions \(\epsilon_{T}\), \(\epsilon_{P}\), and \(\epsilon_{H}\). According to equations proposed by Rueger _et al._[5], these uniform distributions will lead to the estimation of a correction factor \(\alpha\) (expressed in ppm) to rectify a measured distance \(\rho\). The aforementioned measurement noise sources (\(\epsilon_{\rho}\), \(\epsilon_{\phi}\), \(\epsilon_{\theta}\), \(\epsilon_{tilt}\) and the correction \(\alpha\)) are combined to include uncertainties to raw RTS measurements:
\[\widehat{\rho} =(\rho+\epsilon_{\rho})(1+\alpha), \tag{1}\] \[\widehat{\theta} =\theta+\epsilon_{\theta}+\epsilon_{tilt},\] (2) \[\widehat{\phi} =\phi+\epsilon_{\phi}+\epsilon_{tilt}\cot(\widehat{\theta}). \tag{3}\]
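As an illustration of Eqs. (1)-(3), the following is a minimal sketch of one MC draw that perturbs a raw RTS measurement; the distribution parameters follow Table I, the atmospheric correction \(\alpha\) is left as a placeholder since the Rueger et al. [5] formulas are not reproduced here, and all function and variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
ARCSEC = np.pi / (180 * 3600)  # arcseconds to radians

def perturb_measurement(rho, theta, phi, alpha=0.0):
    """One Monte Carlo draw of a noisy (range, vertical angle, horizontal angle) triplet."""
    # Instrument noises (Table I): range 4 mm + 2 ppm, angles 2", tilt compensator 0.5".
    eps_rho = rng.normal(0.0, 0.004 + 2e-6 * rho)
    eps_theta = rng.normal(0.0, 2 * ARCSEC)
    eps_phi = rng.normal(0.0, 2 * ARCSEC)
    eps_tilt = rng.normal(0.0, 0.5 * ARCSEC)

    # Eqs. (1)-(3); alpha (a placeholder here, expressed as a fraction) would come
    # from the atmospheric model driven by sampled temperature, pressure, humidity.
    rho_hat = (rho + eps_rho) * (1.0 + alpha)
    theta_hat = theta + eps_theta + eps_tilt
    phi_hat = phi + eps_phi + eps_tilt / np.tan(theta_hat)  # cot(theta_hat)
    return rho_hat, theta_hat, phi_hat

samples = np.array([perturb_measurement(100.0, 1.2, 0.7) for _ in range(10000)])
print(samples.mean(axis=0), samples.std(axis=0))
```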
**Time synchronization -** Data acquisition made by several RTSs leads to a time synchronization error \(\epsilon_{t_{s}}\), expressed in seconds. The resulting uncertainty \(\mathbf{\epsilon_{t}}\) alters the Cartesian coordinates of a prism and is related to the velocity \(\mathbf{v}_{k}^{i}\) at which it moves. According to Ulrich [8], this time synchronization uncertainty follows a normal distribution \(\mathbf{\epsilon_{t}}\sim\mathcal{N}(\mathbf{\mu_{t}},\mathbf{\sigma_{t}})\) that depends on the time synchronization error \(\epsilon_{t_{s}}\sim\mathcal{N}(\mu_{t_{s}},\sigma_{t_{s}})\) and the prism velocity \(\mathbf{v}_{k}^{i}\sim\mathcal{N}(\mathbf{\mu_{v}},\mathbf{\sigma_{v}})\):
\[\mathbf{\mu_{t}} =\mu_{t_{s}}\mathbf{\mu_{v}} \tag{4}\] \[\mathbf{\sigma_{t}^{2}} =\mu_{t_{s}}^{2}\mathbf{\sigma_{v}^{2}}+\sigma_{t_{s}}^{2}\mathbf{\mu_{v }^{2}}, \tag{5}\]
where \(\mu_{t_{s}}\) represents the mean time synchronization error, \(\sigma_{t_{s}}\) its standard deviation and \(\mathbf{\mu_{v}^{2}}=\left[\mu_{v_{x}}^{2}\quad\mu_{v_{y}}^{2}\quad\mu_{v_{z}}^{ 2}\right]^{\mathsf{T}}\) is the square of the average prism velocity vector with a covariance \(\mathbf{\sigma_{v}^{2}}=\left[\sigma_{v_{x}}^{2}\quad\sigma_{v_{y}}^{2}\quad \sigma_{v_{z}}^{2}\right]^{\mathsf{T}}\). The prisms' velocities are estimated by differentiating the prism Cartesian coordinates with respect to time, by considering computed uncertainties from eqs. (1) to (3), such that:
\[\mathbf{p}_{k}^{i}=\left[\widehat{\rho_{k}}\sin\widehat{\phi_{k}}\cos \widehat{\theta_{k}}\quad\widehat{\rho_{k}}\sin\widehat{\phi_{k}}\sin\widehat{ \theta_{k}}\quad\widehat{\rho_{k}}\cos\widehat{\phi_{k}}\right]^{\mathsf{T}} \tag{6}\] \[\mathbf{v}_{k}^{i}=\frac{\mathbf{p}_{k+1}^{i}-\mathbf{p}_{k}^{i}}{t_{k+1}-t_{ k}}. \tag{7}\]
The values of \(\mathbf{\mu_{v}}\) and \(\mathbf{\sigma_{v}}\) can be estimated for each prism position by applying a MC method to prism speeds given by eqs. (6) and (7). A time synchronization error \(\epsilon_{t_{s}}\) can be estimated over a span of time, by taking into account the rate at which the external system's clock diverges from the RTS's clock. The time synchronization method presented by Vaidis _et al._[4] yields time drift measurements, equal to the worst drift at the end of every time synchronization period (5 min
in the current case). When these measurements are recorded for all deployments on the field, they form a distribution of drifts, among which we can statistically determine the values of \(\mu_{t_{s}}\) and \(\sigma_{t_{s}}\). The estimated error \(\mathbf{\epsilon_{t}}\) is then added to each prism position.
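A minimal sketch of Eqs. (6)-(7): converting perturbed spherical measurements to Cartesian prism coordinates and differentiating them with respect to time to estimate prism velocities; the timestamps and trajectory used below are synthetic and purely illustrative.

```python
import numpy as np

def spherical_to_cartesian(rho, theta, phi):
    """Prism Cartesian coordinates from the range and the two angles, as in Eq. (6)."""
    x = rho * np.sin(phi) * np.cos(theta)
    y = rho * np.sin(phi) * np.sin(theta)
    z = rho * np.cos(phi)
    return np.stack([x, y, z], axis=-1)

# Synthetic measurement sequence at the 2.5 Hz rate reported in Section IV.
t = np.arange(0.0, 10.0, 0.4)
rho = 100.0 + 0.5 * t
theta = 1.2 + 0.01 * t
phi = 0.7 + 0.02 * t

p = spherical_to_cartesian(rho, theta, phi)       # positions p_k, Eq. (6)
v = np.diff(p, axis=0) / np.diff(t)[:, None]      # velocities v_k, Eq. (7)
print(p.shape, v.shape)                           # (25, 3) (24, 3)
```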
**Extrinsic calibration -** This calibration determines the rigid transformations \(\sfrac{W}{i}T\) between the reference frame \(\mathcal{F}^{W}\) and the frame \(\mathcal{F}^{i}\) of each RTS. Our previous work [7] exposed many extrinsic calibration methods, including the static Ground Control Points (GCPs) calibration, that will be used in this paper. As defined in [7], a GCP is a position measured on the ground with a static prism used as a target. A number \(n\) of GCPs is measured in an environment with all RTSs. The outcome of this calibration will have some noise, as the earlier-mentioned uncertainties on single measurements propagate in the process. As the extrinsic calibration is complex to model, its uncertainty is estimated by applying another MC method to each GCP. The instrument noises, tilt compensator noise, and atmospheric factors are considered for this MC method. Time synchronization was not included due to the static nature of GCPs, and because extrinsic calibration yields results that are independent of time. An extrinsic calibration is computed for each set of MC samples for each GCPs. The resulting rigid transformation is then applied to each prism trajectory as \(\mathcal{Q}^{i}=\sfrac{W}{i}\,\mathcal{T}\,\mathcal{P}^{i}\), where \(\mathcal{Q}^{i}=\{\mathbf{q}^{i}_{1},\dots,\mathbf{q}^{i}_{n_{i}}\}\) represents the prism trajectory of the \(i\)-th RTS in the global frame \(\mathcal{F}^{W}\). The extrinsic calibration uncertainty is estimated from the distribution of the points along those trajectories.
Applying all the noises on RTS measurements with a MC method enables us to estimate the covariance matrix \(\mathbf{\Sigma}^{i}_{k}\) of each measurement \(q^{i}_{k}\) in \(\mathcal{Q}^{i}\). In the rest of this paper, the frame of the first RTS \(\mathcal{F}^{1}\) is chosen as the global frame.
### _Prism position uncertainty interpolation_
The aim of trajectory evaluation for SLAM is to compare a reference trajectory with a six-DOF trajectory of a robotic platform computed from various sensors (e.g., lidar, Inertial Measurement Unit (IMU), GNSS), usually defined with different acquisition rates. Therefore, interpolation is required to synchronize both trajectories. A GP regression approach is chosen for this state estimation, as proposed by Anderson _et al._[27]. This allows us to represent the prism trajectories in continuous time in order to query position values for a desired timestamp. To guarantee a unique solution, we modelize a prior distribution of the potential trajectories, as a unidimensional GP, such that:
\[\mathbf{x}(t)\sim\mathcal{GP}(\tilde{\mathbf{x}}(t),\tilde{\mathbf{P}}(t,t^ {\prime})),\ \ t_{0}<t,t^{\prime} \tag{8}\] \[\mathbf{y_{n}}=\mathbf{g}(\mathbf{x}(t_{n}))+\mathbf{n_{n}},\ \ t_{1}<t_{n}<t_{N}, \tag{9}\]
where \(\mathbf{x}(t)\) represents the normalized homogeneous prism coordinates at time \(t\), \(\tilde{\mathbf{x}}(t)\) is the prior mean function, \(\tilde{\mathbf{P}}(t,t^{\prime})\) is the prior covariance function between two different times \(t\) and \(t^{\prime}\), \(\mathbf{y_{n}}\) are measurements, \(\mathbf{n_{n}}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\) is a Gaussian measurement noise, \(\mathbf{g}(\cdot)\) is a nonlinear measurement model, and \(\{t_{1},\dots,t_{n},\dots,t_{N}\}\) is a sequence of measurement times.
In this paper, \(\mathbf{y_{n}}\) are the measurements in \(\mathcal{Q}^{i}\), the covariance \(\mathbf{\Sigma}\) of \(\mathbf{n_{n}}\) is estimated by the MC method presented in Section III-A (i.e., \(\mathbf{\Sigma}^{i}_{k}\)), \(\mathbf{g}(\cdot)\) is the non-linear process of having the measurements in \(\mathcal{Q}^{i}\) by the RTS and \(t_{n}\) are the timestamps of \(\mathbf{q}^{i}_{k}\). The interpolated results \(\widehat{\mathcal{Q}}^{i}\), which are expressed by \(\mathbf{x}(t)\), are computed by the STEAM library [26] for desired query times. As a result, each estimated point \(\widehat{\mathbf{q}}^{i}_{j}\) in \(\widehat{\mathcal{Q}}^{i}\) has its associated estimated covariance matrix \(\widehat{\mathbf{\Sigma}}^{i}_{j}\) coming from the GP interpolation, where \(j\in\{1,J\}\) is the interpolated prism positions index, and \(J\in\mathbb{N}^{\star}\) is the total number of interpolated prism positions.
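The interpolation in this work is performed with the STEAM library; since that API is not reproduced here, the following is only a generic Gaussian-process regression sketch over one coordinate of a prism trajectory, with per-measurement noise standing in for the MC covariances, implemented with scikit-learn as an illustrative assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Measurement times (2.5 Hz), one Cartesian coordinate of a prism, and the
# per-measurement standard deviation standing in for the MC covariance estimates.
t_meas = np.arange(0.0, 10.0, 0.4)
q = np.sin(0.5 * t_meas) + 0.002 * np.random.default_rng(0).normal(size=t_meas.size)
sigma = np.full_like(t_meas, 0.002)

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=sigma**2, normalize_y=True)
gp.fit(t_meas[:, None], q)

# Query the continuous-time trajectory at the 10 Hz timestamps of the SLAM output.
t_query = np.arange(0.0, 10.0, 0.1)
mean, std = gp.predict(t_query[:, None], return_std=True)
print(mean.shape, std.max())  # interpolated positions and their uncertainty
```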
### _Uncertainty propagation to ground truth trajectory_
With only one RTS, the uncertainty \(\widehat{\mathbf{\Sigma}}^{i}\) on the reference trajectory can be exploited right away to evaluate the reference position of a robotic platform. However, the robotic platform's pose needs to be evaluated in six-DOF. With three RTSs, it is possible to obtain the reference pose by doing a point-to-point minimization between the triplets of measured prism coordinates, and the reference triplets measured in laboratory [4, 7]. Prism uncertainties can be propagated by applying a MC sampling with this point-to-point minimization.
After the GP interpolation, a setup of three RTS yields a set of three paths \(\{\widehat{\mathcal{Q}}^{1},\widehat{\mathcal{Q}}^{2},\widehat{\mathcal{Q}} ^{3}\}\) of interpolated measurements, with their respective covariance \(\{\widehat{\mathbf{\Sigma}}^{1},\widehat{\mathbf{\Sigma}}^{2},\widehat{\mathbf{\Sigma}}^{3}\}\). We define \(\widehat{\mathcal{Q}}_{j}\) as the \(j\)-th triplet of interpolated prism positions with its corresponding triplet \(\widehat{\mathbf{\Sigma}}_{j}\) of covariances, such that \(\widehat{\mathcal{Q}}_{j}=\{\widehat{\mathbf{q}}^{1}_{j},\widehat{\mathbf{q}}^{2}_{j}, \widehat{\mathbf{q}}^{3}_{j}\}\), and \(\widehat{\mathbf{\Sigma}}_{j}=\{\widehat{\mathbf{\Sigma}}^{1}_{j},\widehat{\mathbf{\Sigma}} ^{2}_{j},\widehat{\mathbf{\Sigma}}^{3}_{j}\}\), as shown in Figure 2. A reference triplet \(\mathcal{R}\) contains normalized homogeneous points \(\mathbf{r}_{i}\), where \(\mathcal{R}=\{\mathbf{r}_{1},\mathbf{r}_{2},\mathbf{r}_{3}\}\) with a covariance \(\mathbf{U}_{i}\) associated to each point \(\mathbf{r}_{i}\). These reference points are defined in another world frame \(\mathcal{F}^{L}\) and were statically estimated with measurements from a single RTS after each deployment. To apply a MC method, we sample \(M\) points \(\{s^{i}_{1},\dots,s^{i}_{m},\dots,s^{i}_{M}\}\) from every Gaussian distribution \(\mathcal{N}(\widehat{\mathbf{q}}^{i}_{j},\widehat{\mathbf{\Sigma}}^{i}_{j})\), and \(M\) points from the Gaussian distribution \(\mathcal{N}(\mathbf{r}_{i},\mathbf{U}_{i})\) defined for the triplet of points in \(\mathcal{R}\) along with their covariances \(\mathbf{U}_{i}\).
For every sample \(s_{m}\in\{s_{1},\dots,s_{M}\}\), we applied the point-to-point minimization with:
\[\sfrac{1}{L}\widehat{\mathbf{T}}_{j,m}=\operatorname*{arg\,min}_{\mathbf{T}}\sum_{i=1}^ {3}\Bigl{\lVert}\widehat{\mathbf{q}}^{i}_{j}-\mathbf{Tr}_{i}\Bigr{\rVert}_{2}^{2}, \tag{10}\]
where \(\sfrac{1}{L}\widehat{\mathbf{T}}_{j,m}\in\mathbb{R}^{4\times 4}\) is the resulting rigid transformation of the MC method between the frame \(\mathcal{F}^{L}\) and the global frame of prism measurements (\(\mathcal{F}^{1}\), in the current case), for a sample \(s_{m}\) in the \(j\)-th triplet.
The subsequent \(M\) poses form a distribution, of which we can extract an average translation and rotation defined by \(\mathbf{\xi}_{j}\in\mathbb{R}^{6}\) in the global frame \(\mathcal{F}^{W}\) for every \(j\in\{1,J\}\). The covariance \(\mathbf{\Lambda}_{j}\in\mathbb{R}^{6\times 6}\) of this distribution of poses yields the uncertainty on every vehicle pose \(\mathbf{\xi}_{j}\). The resulting reference trajectory is defined as the set \(\xi\) of poses
\(\left\{\boldsymbol{\xi}_{1},\ldots,\boldsymbol{\xi}_{j},\ldots,\boldsymbol{\xi}_{J}\right\}\). \(\Lambda\) is the set that contains the covariance \(\boldsymbol{\Lambda}_{j}\) of every pose \(\boldsymbol{\xi}_{j}\) along \(\xi\). These ground truth uncertainties can be used for the evaluation of six-DOF trajectories. In the next sections, we characterize the impact of each source of noise on the uncertainty models of the prism trajectories and of the reference trajectory.
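A minimal sketch of this propagation step: draw samples around each prism position and each reference point, solve the point-to-point minimization of Eq. (10) in closed form, and take the spread of the resulting poses as the pose uncertainty. The SVD-based (Arun/Kabsch) solver and the toy triplets below are implementation assumptions, not necessarily the solver used in this work.

```python
import numpy as np

def point_to_point(ref, meas):
    """Closed-form rigid transform (R, t) minimizing sum_i ||meas_i - (R ref_i + t)||^2."""
    c_ref, c_meas = ref.mean(axis=0), meas.mean(axis=0)
    H = (ref - c_ref).T @ (meas - c_meas)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_meas - R @ c_ref

rng = np.random.default_rng(0)
ref = np.array([[0.3, 0.0, 0.5], [-0.3, 0.0, 0.5], [0.0, 0.4, 0.7]])  # reference triplet R
meas = ref + np.array([1.0, 2.0, 0.0])        # interpolated prism triplet Q_j (toy values)
cov = np.diag([0.002, 0.002, 0.002]) ** 2     # toy per-point covariance (metres^2)

M = 1000
translations = np.empty((M, 3))
for m in range(M):
    ref_s = ref + rng.multivariate_normal(np.zeros(3), cov, size=3)
    meas_s = meas + rng.multivariate_normal(np.zeros(3), cov, size=3)
    _, t = point_to_point(ref_s, meas_s)
    translations[m] = t

# Mean pose translation and its covariance, i.e., part of xi_j and Lambda_j.
print(translations.mean(axis=0), np.cov(translations.T))
```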
## IV Experiments
We used three Trimble S7 RTSs to track three Trimble MultiTrack Active Target MT1000 prisms with a measurement rate of 2.5 Hz. Table I gives the different kinds of noises that were modeled, in accordance with the specifications of the _Trimble S7_. Following the GUM guidelines [19], these noises have been divided into two types (i.e., **A** and **B**). The former is determined through experimental values (e.g., extrinsic calibration, time synchronization error). The latter is given by the specifications of the measuring instrument (e.g., range, angle, tilt compensator), or from an environmental model (i.e., atmospheric factors).
As shown in Figure 1, all three prisms were mounted on a Clearpath Warthog Unmanned Ground Vehicle (UGV). A Robosense RS-32 and an XSens MTi-10 IMU were used as part of an Iterative Closest Point (ICP)-based SLAM framework, working at a rate of 10 Hz.2 The experiments were conducted from February 2022 to January 2023. They include 20 deployments, of which 18 took place on the campus of Université Laval and two were done in the Montmorency research forest, 75 km north of Quebec City. These 20 deployments allowed us to conduct 48 experiments, for a total of 50 km of RTS-tracked prism trajectories.
Footnote 2: [https://github.com/norlab-ulaval/norlab_icp_mapper](https://github.com/norlab-ulaval/norlab_icp_mapper)
The same procedure was applied during each experiment, in order to collect consistent and standardized data during the whole year. Also, each deployment was completed by accurately measuring the position of the three prisms, rigidly installed on the robot, with a single RTS. These measurements are used as reference points to compute the inter-prism distances, as a way to control the data quality for each experiment. The point-to-point minimization method presented in Section III-C also relies on these measurements. Weather conditions and atmospheric values were obtained through the weather service of _Environment and Climate Change Canada_.3
Footnote 3: [https://climate.weather.gc.ca/historical_data/search_historic_data_e.html](https://climate.weather.gc.ca/historical_data/search_historic_data_e.html)
## V Results
### _Influence of the sources of uncertainty over the results_
We first evaluated the impact of different sources of noise on the prism position uncertainty. These sources of noise are 1) the RTS instrument noises, 2) the tilt compensator noises, 3) the atmospheric factors, 4) the time synchronization, and 5) the extrinsic calibration. Every source of noise was represented by a distinct covariance matrix, on which we applied the Frobenius norm [25] to evaluate its effect on the prism position uncertainty. These uncertainties were also compared for different ranges, to determine how they are impacted by the RTS-prism distance.
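As a small illustration of this comparison metric, the sketch below computes the square root of the Frobenius norm of a covariance matrix, the quantity reported in Figures 3 and 5; the example covariance values are synthetic.

```python
import numpy as np

def frobenius_sqrt(cov):
    """Square root of the Frobenius norm of a covariance matrix, as in Figs. 3 and 5."""
    return np.sqrt(np.linalg.norm(cov, ord="fro"))

# Toy covariance of a prism position (values in metres^2).
cov = np.array([[4e-6, 1e-7, 0.0],
                [1e-7, 3e-6, 0.0],
                [0.0, 0.0, 5e-6]])
print(f"{frobenius_sqrt(cov) * 1e3:.2f} mm")
```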
\begin{table}
\begin{tabular}{l l l} \hline \hline **Influence factors** & _Distribution_ & _Values_ \\ \hline \hline **Extrinsic calibration** & & \\ \ \ - Translation & Normal & \(\sigma_{tx}\), \(\sigma_{ty}\), \(\sigma_{tz}\) \\ \ \ - Rotation & Normal & \(\sigma_{rx}\), \(\sigma_{ry}\), \(\sigma_{rz}\) \\ **Time synchronization** & & \\ \ \ - Velocity & Normal & \(\boldsymbol{\mu_{v}}\), \(\boldsymbol{\sigma_{v}}\) \\ \ \ - Time error & Normal & \(\mu_{t_{s}}=1.2\,\mathrm{ms}\), \(\sigma_{t_{s}}=0.8\,\mathrm{ms}\) \\ \hline \hline **Instrument** & & \\ \ \ - Distances & Normal & \(\sigma_{\rho}=4\,\mathrm{mm}+2\,\mathrm{ppm}\) \\ \ \ - Horizontal directions & Normal & \(\sigma_{\phi}=2''\) \\ \ \ - Vertical directions & Normal & \(\sigma_{\theta}=2''\) \\ **Tilt compensator** & & \\ \ \ - Angle bias & Normal & \(\sigma_{tilt}=0.5''\) \\ **Atmospheric factors** & & \\ \ \ - Temperature & Uniform & \(\sigma_{T}=[0,1\,^{\circ}\mathrm{C}]\) \\ \ \ - Pressure & Uniform & \(\sigma_{P}=[0,10\,\mathrm{hPa}]\) \\ \ \ - Humidity & Uniform & \(\sigma_{H}=[0,2\,\%]\) \\ \hline \hline \end{tabular}
\end{table}
Table I: Noise sources modeled for the Trimble S7 RTSs, with their distributions and parameter values.
Figure 3 shows that, for every noise source except the time synchronization noise, a longer range will lead to higher uncertainty. This relation is especially pronounced for the extrinsic calibration, with a median value of 0.91 mm in the range of 0 to 75 m, a median value of 1.88 mm in the range of 75 to 150 m, and a median value of 5.49 mm in the range of more than 150 m. Moreover, this noise source accounts for the majority of the total uncertainty over the complete range, with a median value of 1.32 mm. This observation is consistent with the description of extrinsic calibration given in Section III-A, as this calibration relies on measurements that are all impacted by the other sources of uncertainty, causing its covariance to be higher. However, this impact on the uncertainty is only considerable at long range, while the noises inherent to the RTS have the highest median regardless of the distance. With a median value of 2.1 mm, we confirmed that the uncertainty level from the instrument is in the range of the manufacturer's specifications. This noise also increases with long-range measurements, with a median value of 2.78 mm for distances of more than 150 m. Meanwhile, the other sources of noise were less significant. The time synchronization noise has a median value of 0.4 mm, and does not depend on the measurement range. Similarly, the atmospheric factors have a median value of 0.16 mm, while the tilt noise has a median value of 0.06 mm. Both of these noise sources increase with the measurement range.
In a field deployment, it would be important to keep in mind the two factors that have the highest influence on the results. Therefore, it is crucial to achieve a good extrinsic calibration, as it is the main source of uncertainty for long-range measurements. Otherwise, it is important to gather as much data as possible with ranges lower than 150 m. The median for all sources on the complete range is close to the median for shorter ranges, as we gathered more data at short distances than at long distances: 80 % of the data was taken with distances between 0 and 75 m, 14 % between 75 and 150 m and 6 % for more than 150 m. Consequently, the results could be impaired by the lack of long-range measurements. Overall, since the RTS has an inherent noise, better results could be obtained with other instruments that would be more precise.
### _Trajectories with uncertainty_
We used the pipeline from [7] to filter the raw prism measurements to increase the accuracy of the results. The modules (1 and 2) from this pipeline were used with the parameters \(\tau_{r}=2\) m s\({}^{-1}\), \(\tau_{a}=\tau_{e}=1\) deg s\({}^{-1}\), \(\tau_{s}=3\) s and \(\tau_{l}=2\) s. Instead of using linear interpolation in the third module, we computed a GP interpolation with the STEAM library. This GP was used to interpolate the uncertainties from the MC method, as explained in Section III-B.
An example of this interpolation is shown in Figure 4, which represents the results of a deployment at the Montmorency forest. The interpolated prism measurements are displayed with red, blue, and green dots, along with their uncertainties as shaded ellipsoids. The orange dots represent measurements from a GNSS system on the robot that took data at a rate of 5 Hz. As in the fourth module of the pipeline in [7], the uncertainty has been filtered for values over 20 cm, while the inter-prism distances are kept under 10 cm to ensure that the values are precise enough for ground truth generation. The point-to-point method described in Section III-C propagates the prism uncertainties to the reference pose of the Warthog, as shown with black dots in Figure 4. The six-DOF pose and uncertainties on the ground truth trajectory can be compared with an estimated robot trajectory using metrics other than the Euclidean norm.
Even though RTS measurements are more accurate than GNSS (2-3 cm), they provide fewer data points over time. Therefore, with the GP interpolation, the uncertainty on the RTS measurements increases over time. It can reach as much as 5 cm, as shown in the zoomed section of Figure 4. This issue can be solved by using RTSs with a higher measurement rate. Moreover, the MC method used with the point-to-point method spreads the error on the final robot pose. Finally, as RTSs require a direct line of sight to a prism, fewer data points can be measured in obstructed environments such as forests. This constraint is visible in Figure 4, where the RTS-estimated poses only appear in areas with a direct line of sight from the RTSs.
### _Impact of models over pose-uncertainty results_
Figure 5 shows that the uncertainty on the position and orientation of a robot is not prominently affected by a single source of noise. For instance, no matter the source of uncertainty, the medians are 2.5 m and 0.76 rad for the position and the orientation, respectively. This stability might come from the point-to-point minimization, which smooths the trajectory and therefore minimizes some of the errors that could be caused by the different sources of uncertainty (i.e., uncertainty inherent to the RTS, the tilt, the atmospheric conditions, the extrinsic calibration, and the time synchronization).
Fig. 3: Influence of the noise sources from Table I, as a function of the measured distance between an RTS and its assigned prism. The square root of the Frobenius norm is used to estimate the similarity between covariance matrices.
The values given by the square root of the Frobenius norm for the robot position (Figure 5) are an order of magnitude larger than the uncertainty computed on a prism position (Figure 3). This can be related to the GP interpolation in the pipeline, as the interpolation drastically increases the uncertainty in proportion to the speed of the vehicle. Moreover, the point-to-point minimization propagates the prism uncertainties onto the vehicle pose uncertainty. These results can be compared to those obtained by Vaidis _et al._ [4], who show the same kind of uncertainty on the final pose of a vehicle, with a comparable amount of uncertainty on the prism positions.
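The scalar summary used in Figures 3 and 5, the square root of the Frobenius norm of a covariance matrix, can be computed directly; the matrix below is only a placeholder.

```python
# Scalar summary of a covariance matrix: square root of its Frobenius norm,
# as used to compare uncertainties in Figures 3 and 5.
import numpy as np

cov = np.array([[4e-4, 1e-5, 0.0],      # placeholder 3x3 position covariance [m^2]
                [1e-5, 6e-4, 2e-5],
                [0.0,  2e-5, 5e-4]])
score = np.sqrt(np.linalg.norm(cov, ord="fro"))
print(f"sqrt Frobenius norm: {score:.4f}")
```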
## VI Conclusion
In this paper, we proposed an MC method to model the uncertainties coming from multiple RTSs, with the intent of better comparing six-DOF trajectories. The estimated uncertainty of a prism measurement is interpolated with a GP and propagated to the estimated six-DOF pose of a robotic platform with an MC method combined with a point-to-point minimization. We have highlighted that the main source of noise when using multiple RTSs, besides the uncertainty inherent to the instrument, comes from the extrinsic calibration. Our model has demonstrated that the uncertainty on a prism measurement is proportional to the distance between that prism and an RTS. Moreover, no single source of noise has a dominant impact on the uncertainty of a pose computed with the point-to-point minimization. This can be explained by the minimization method, which smooths the effect of the different noise sources toward an average value.
Future work includes optimizing our extrinsic calibration method to minimize the resulting uncertainties. Other atmospheric factors, such as snow or rain, would also need to be experimentally characterized. The uncertainty of GNSSs could be modeled in the same manner to compare it with the uncertainty obtained with our method. This would allow us to evaluate localization and mapping algorithms by merging information from RTS-based and GNSS-based ground-truth trajectories.
## Acknowledgment
This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) through the grant CRDPJ 527642-18 SNOW (Self-driving Navigation Optimized for Winter).
|
2301.02551
|
Denser glasses relax faster: a competition between rejuvenation and
aging during in-situ high pressure compression at the atomic scale
|
A fascinating feature of metallic glasses is their ability to explore
different configurations under mechanical deformations. This effect is usually
observed through macroscopic observables, while little is known on the
consequence of the deformation at atomic level. Using the new generation of
synchrotrons, we probe the atomic motion and structure in a metallic glass
under hydrostatic compression, from the onset of the perturbation up to a
severely-compressed state. While the structure indicates reversible
densification under compression, the dynamic is dramatically accelerated and
exhibits a hysteresis with two regimes. At low pressures, the atomic motion is
heterogeneous with avalanche-like rearrangements suggesting rejuvenation, while
under further compression, aging leads to a super-diffusive dynamics triggered
by internal stresses inherent to the glass. These results highlight the
complexity of the atomic motion in non-ergodic systems and support a theory
recently developed to describe the surprising rejuvenation and strain hardening
of metallic glasses under compression.
|
A. Cornet, G. Garbarino, F. Zontone, Y. Chushkin, J. Jacobs, E. Pineda, T. Deschamps, S. Li, A. Ronca, J. Shen, G. Morard, N. Neuber, M. Frey, R. Busch, I. Gallino, M. Mezouar, G. Vaughan, B. Ruta
|
2023-01-06T15:01:38Z
|
http://arxiv.org/abs/2301.02551v1
|
Denser glasses relax faster: a competition between rejuvenation and aging during _in-situ_ high pressure compression at the atomic scale
###### Abstract
A fascinating feature of metallic glasses is their ability to explore different configurations under mechanical deformations. This effect is usually observed through macroscopic observables, while little is known about the consequences of the deformation at the atomic level. Using the new generation of synchrotrons, we probe the atomic motion and structure in a metallic glass under hydrostatic compression, from the onset of the perturbation up to a severely-compressed state. While the structure indicates reversible densification under compression, the dynamics are dramatically accelerated and exhibit a hysteresis with two regimes. At low pressures, the atomic motion is heterogeneous with avalanche-like rearrangements suggesting rejuvenation, while under further compression, aging leads to super-diffusive dynamics triggered by internal stresses inherent to the glass. These results highlight the complexity of the atomic motion in non-ergodic systems and support a theory recently developed to describe the surprising rejuvenation and strain hardening of metallic glasses under compression.
## Introduction
Every glass has its own story, which is encoded in the evolution of its properties. Once a glass is formed by rapidly cooling from the melt, its final state depends on the applied temperature protocol and spontaneously evolves with time[1]. Fast cooling rates create glasses trapped in more energetically unstable configurations with larger structural disorder, lower elastic moduli and larger frozen-in free volume than slow cooling protocols[2]. Upon successive annealing, the glass ages and relaxes towards energetically more stable minima in the potential energy landscape (PEL), continuously exploring different configurations. This process, called physical aging, is particularly strong in metallic glasses (MGs), and modifies the mechanical, structural and thermal properties of the material[3, 4]. In stark contrast, fast thermal cycling or mechanical deformation can rejuvenate the system, driving the glass into energetically less favoured configurations with increased plasticity[5, 6, 7, 8, 9, 10, 11, 12]. In some cases, the rejuvenated MG would be equivalent to quenched glasses theoretically obtainable with cooling rates
much faster than those reachable in a laboratory [6]. In the presence of almost hydrostatic compression, this rejuvenation leads to the suppression of shear banding and the inhibition of catastrophic mechanical failures, making deformed MGs appealing for technological applications [7]. Diffraction studies suggest a broadening of the interatomic distances in severely deformed MGs, which is opposite to the well-known increase of structural order during physical aging [13]. Changes in correlation lengths at medium range order (MRO) have also been reported in pre-deformed Pd-based MGs and are accompanied by an acceleration of the microscopic relaxation dynamics, possibly due to an increase in free volume [14, 15].
The majority of studies deal with ex-situ compressed glasses, while little is known about the microscopic physical mechanisms occurring during compression, owing to the experimental difficulty of in-situ experiments under high pressure. Theoretical works ascribe the pressure-induced rejuvenation and strain hardening of MGs to the creation of an additional local minimum in the PEL associated with rearrangements of the energy for cage dynamics [16, 17]. This process would lead to the occurrence of two distinguishable dynamical regimes under pressure, whose existence has not been experimentally observed so far. By combining in-situ high pressure, high energy X-Ray Photon Correlation Spectroscopy (XPCS) and high energy X-ray Diffraction (XRD) at a 4th generation synchrotron source, we here provide experimental evidence of how the atomic dynamics evolve under the application of pressure in a Pt\({}_{42.5}\)Cu\({}_{27}\)Ni\({}_{8.5}\)P\({}_{21}\) MG, unveiling a complex, non-monotonic behaviour which is in agreement with recent theoretical works.
Figure 1: **Dynamical rejuvenation under densification at room temperature.** a) Sketch of an XPCS experiment showing the sample within the diamond anvil cell, the diffracted intensity corresponding to the structure factor, the portion of reciprocal space probed by the detector and a typical speckle pattern. b) top of FSDP covered during the XPCS experiments (the intensity
integrated across the detector area), measured at atmospheric pressure and at 3.3 GPa. Glitches in the I(Q) come from the detector. c) Corresponding intermediate scattering functions showing the acceleration of the dynamics with pressure.
## Results
So far, the relatively low flux of high-energy coherent x-rays in 3\({}^{\mathrm{rd}}\) generation synchrotrons limited the use of XPCS in bulky sample environments, including diamond anvil cells (DACs) and other high-pressure apparatus. The current development of 4\({}^{\mathrm{th}}\) generation synchrotron sources, such as the upgraded ESRF synchrotron (France), provides a monochromatic high flux (10\({}^{12}\) photon/s) of coherent x-rays at energies as high as 21 keV [18] with an unprecedentedly high quality, allowing for pressure-dependent studies. A schematic view of the XPCS experiments is shown in Fig. 1a) and S1. The scattered speckle pattern is collected in a wide angle geometry covering the maximum of the first sharp diffraction peak (FSDP) of the glass, which is at about q\({}_{1}\)=2.87 A\({}^{\mathrm{-1}}\) for our as-cast Pt\({}_{42.5}\)Cu\({}_{27}\)Ni\({}_{8.5}\)P\({}_{21}\) MG at ambient temperature and atmospheric pressure (Fig. 1b). By increasing pressure, the maximum of the FSDP shifts toward high scattering vectors, as shown in Fig. 1b) for 3.3 GPa. As the FSDP originates from the medium range order, in the absence of important structural rearrangements its position can be related to the macroscopic density of the glass [19]. The agreement between the thermal expansion coefficient of 3.85x10\({}^{-5}\) K\({}^{-1}\) obtained by us with high energy XRD (Supplementary Fig. S1) and the 3.95x10\({}^{-5}\) K\({}^{-1}\) value reported in literature dilatometry data [20] supports the validity of the density-MRO relation for Pt\({}_{42.5}\)Cu\({}_{27}\)Ni\({}_{8.5}\)P\({}_{21}\). Therefore, the continuous shift toward high scattering vectors, q, in Fig. 1b) reflects the monotonic rise in the glass density as the pressure increases.
The evolution of the internal dynamics of the glass accompanying the density change can be described by the pressure dependence of the intermediate scattering function (ISF), F(\(\delta\)t), which monitors the temporal decay of the electron density fluctuations at the probed \(q\) and pressure. As shown in Fig. 1c), a pressure increase from 1 atm to 3.3 GPa results in a dramatic shift of more than one order of magnitude in the ISF toward smaller \(\delta\)t, which implies a pressure-induced acceleration of the atomic dynamics of the same magnitude. Fitting the data with the Kohlrausch-Williams-Watts (KWW) phenomenological model \(|F(\delta t)|^{2}=e^{-2(\delta t/\tau)^{\beta}}\), with \(\tau\) the relaxation time and \(\beta\) the shape exponent, we find relaxation times \(\tau\) = 571 s and \(\tau\) = 38 s for 1 atm and 3.3 GPa, respectively, at 300 K. This pressure-induced acceleration of the dynamics by a factor 15 suggests a rejuvenation of the glass under _in-situ_ hydrostatic compression and is larger than that observed in _ex-situ_ deformed Pd-based MGs (factor 3.2 at 300 K) [14] and around a single isolated shear band (factor 3.3 at 300 K) in a Zr\({}_{65}\)Cu\({}_{25}\)Al\({}_{10}\) glass [21].
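As an illustration, such a KWW fit can be sketched with SciPy as follows; the delay times and ISF values are synthetic placeholders rather than measured data.

```python
# Minimal sketch: fitting the KWW model |F(dt)|^2 = exp(-2 (dt/tau)^beta)
# to an intermediate scattering function, as used to extract tau and beta.
import numpy as np
from scipy.optimize import curve_fit

def kww(dt, tau, beta):
    return np.exp(-2.0 * (dt / tau) ** beta)

# Synthetic placeholder data mimicking an ISF decay (tau ~ 38 s, beta ~ 1.8).
dt = np.logspace(-1, 3, 60)                       # delay times [s]
isf2 = kww(dt, 38.0, 1.8) + 0.01 * np.random.default_rng(1).normal(size=dt.size)

popt, pcov = curve_fit(kww, dt, isf2, p0=(50.0, 1.5),
                       bounds=([1e-2, 0.3], [1e4, 3.0]))
tau_fit, beta_fit = popt
print(f"tau = {tau_fit:.1f} s, beta = {beta_fit:.2f}")
```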
The overall evolution of the structure and dynamics during HP compression is reported in Fig. 2. The static structure factor, S(Q), varies only slightly with pressure, and exhibits a linear shift of q\({}_{1}\) with pressure of 4.91x10\({}^{-3}\) \(\AA^{-1}\)/GPa up to 7 GPa, which is completely reversible with no hysteresis within the uncertainty of our measurement (Fig. 2a) and b)). In contrast, the collective atomic dynamics exhibits a complex evolution during the compression stage, as shown by selected two-time correlation functions (TTCFs, Fig. 2c) and ISFs (Fig. 2d) measured after similar elapsed times from the pressure change at the different nominal pressures. The TTCF is a time-resolved representation of the ISFs, where the width of the high correlation contour is proportional to \(\tau\), the characteristic time of the rearrangements at the microscopic length scale.
At atmospheric pressure, high correlation values remain for most of the scan, indicating almost arrested dynamics over the 600 s total acquisition time. This is the classical picture of a glass well below the glass transition temperature, where large-scale dynamics are frozen and only slow local atomic rearrangements occur. The dramatic pressure-induced acceleration at 3.3 GPa corresponds to a sharp and narrow high-correlation contour and a fast decay of the ISF. As pressure increases from 3.3 GPa to 6.3 GPa, a non-monotonic evolution of the dynamics occurs, as evidenced by the larger width of the high correlation contours in the TTCFs and the shifts of the ISFs to slower dynamics at high pressure.
To better clarify the nature of the atomic motion under hydrostatic compression, Fig. 2e) shows relaxation times averaged over different scans acquired over a period of 3 h at each pressure, covering thus both the early-stage deformation and the severely-compressed state. Two distinct dynamical regimes can be identified: an acceleration of the particle dynamics up to 3.3 GPa, followed by a progressive slow-down at larger pressure values, suggesting the existence of a rejuvenation and a relaxation regime at low and high pressure, respectively. These results have been confirmed by repeating the experimental protocol in a different XPCS experiment on a second sample (see Methods for further details). Interestingly, the pressure-induced acceleration of the dynamics is visible even at the lowest pressure of 0.1 GPa, which corresponds to the preloading of the cell, where hardly any structural change is visible from XRD, showing the great sensitivity of the dynamics with respect to pressure. Artefacts related to the cell assembly, including the PTM, have been excluded as they give rise only to a static background, free of any dynamical contribution (Fig. S8). It is interesting to note that although this acceleration by a factor 2.5 in a small pressure interval is significant, it remains small when compared to the pressure-induced shifts of the structural relaxation times reported in softer molecular liquid glass-formers [22, 23].

Figure 2: **Pressure dependence of structure and dynamics.** a) Static structure factor measured with high energy XRD under compression. b) Corresponding maximum of the FSDP during both compression and decompression. c) TTCFs of selected scans acquired at 0 (1 atm), 3.3 and 6.3 GPa for similar elapsed times after the pressure change. d) Selected ISFs showing the transition from rejuvenation to relaxation with increasing pressure. Black dotted lines correspond to KWW fits to the data. e) Averaged relaxation time during compression (full symbols). Data of a second as-cast sample measured during a different XPCS experiment are reported as well to confirm the reproducibility of the results (empty symbols).
To visualize how the dynamics vary with time during isobars in both the rejuvenation and relaxation regimes, Fig. 3 reports TTCFs measured at the two extreme pressures of 3.3 and 6.3 GPa as a function of the elapsed time after the pressure change. At low pressure, rejuvenation leads to heterogeneous dynamics with relaxation times fluctuating around an average constant value, as evidenced by the variation of the thickness of the red contour in the TTCFs at 3.3 GPa. We rule out the presence of possible artefacts, such as fluctuations in the incident flux and potential sample movement (Fig. S2 and S3). At about 6800 s and 7100 s at 3.3 GPa, complete decorrelation happens over one pixel in the TTCF, which is evidence for massive atomic rearrangements with a time scale lower than our acquisition time of 0.1 s, while a steady acceleration of the dynamics is visible after 11600 s. We note that this heterogeneous dynamical regime does not stabilize over time during our experiment, as fluctuations are still visible in the TTCF after 12000 s in the severely compressed state.
In sharp contrast to the heterogeneous rejuvenation regime, the TTCFs at 6.3 GPa show smoothly and continuously slowing dynamics, with decorrelation times growing with the time elapsed since the pressure change. The transition from the low-pressure heterogeneous but constant dynamical regime to the homogeneous, high-pressure aging regime is not sharp but continuous, as visible from the evolution of \(\langle\tau\rangle\) in Fig. 2e) and from the TTCFs at 4.9 GPa, which show this intermediate regime, where both physical aging and fast massive atomic rearrangements are observed (Fig. S3). The existence of the two dynamical regimes is also confirmed by the reproducibility of the results in a second experiment (Fig. S4). The last row of Fig. 3 corresponds to the TTCFs acquired at 3.3 GPa during decompression. It is highly similar to the aging regime visible at 6.3 GPa in compression, and does not match the heterogeneous dynamics observed at the corresponding pressure in compression. The pressure evolution of the dynamics is therefore not fully reversible, and exhibits a hysteresis, with a slow-down of the atomic motion by as much as a factor of 10 during decompression (Fig. S5), in contrast with the apparently elastic behaviour of the structure under deformation (Fig. 2b).
The ever-slowing dynamics at 6.3 GPa strongly resemble the physical aging usually observed in thermally activated structural relaxations, associated with the interplay between density changes and MRO ordering processes [24, 25, 26]. In this regime, the corresponding ISFs evaluated at successively larger waiting times, \(t_{w}\), elapsed from the pressure change, shift continuously toward longer decay times (Fig. 4a) and can be rescaled into a master curve when normalizing \(\delta\)t by \(\tau\) (Fig. 4b). The validity of the temporal scaling confirms the homogeneous nature of the collective motion. The corresponding evolution of \(\tau\) as a function of \(t_{w}\) is reported in the inset of Fig. 4b) and echoes the results obtained in MGs at atmospheric pressure and high temperature [26], that is, a first rapid aging regime which obeys the phenomenological equation \(\tau(t_{w})=\tau_{0}\exp(t_{w}/\tau^{*})\), followed by a constant dynamical state (last point at \(t_{w}\)=8000 s, excluded from the fit), less visible here. The yellow line in the inset corresponds to a fit of the previous equation to \(\tau(t_{w})\). It yields \(\tau^{*}\)=2300 s and \(\tau_{0}\)=34 s, respectively compatible with and ten times smaller than atmospheric-pressure, high-temperature literature data [25, 26, 27]. This means that despite the rejuvenation at early stages after pressure compression (and therefore the smaller value of \(\tau_{0}\)), the rate of aging is similar in both temperature and pressure studies.
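The exponential aging law can be fitted in the same way; the waiting times and relaxation times below are illustrative placeholders chosen to be of the same order as the reported values.

```python
# Sketch: fitting the aging law tau(t_w) = tau0 * exp(t_w / tau_star)
# to relaxation times extracted from successive ISFs at fixed pressure.
import numpy as np
from scipy.optimize import curve_fit

def aging(tw, tau0, tau_star):
    return tau0 * np.exp(tw / tau_star)

tw = np.array([500., 1500., 3000., 4500., 6000.])     # waiting times [s]
tau = np.array([42., 65., 125., 240., 460.])          # KWW relaxation times [s]

popt, _ = curve_fit(aging, tw, tau, p0=(30.0, 2000.0))
print(f"tau0 = {popt[0]:.0f} s, tau* = {popt[1]:.0f} s")
```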
Figure 3: **Temporal evolution of the atomic motion during isobars.** TTCFs from scans acquired during compression (3.3 and 6.3 GPa) and during decompression (3.3 GPa), showing heterogeneous dynamics and physical aging at low and high pressure, respectively, and the hysteresis evolution of the dynamics during decompression.
Interestingly, the same physical mechanism seems to control the atomic rearrangements in both the rejuvenation and relaxation regimes. All data can be described by compressed ISFs with an averaged compressed shape parameter, \(\beta\), ranging from 1.6 to 2, depending on the degree of heterogeneity of the dynamics. Similar compressed values of \(\beta\) have been reported in all MGs in temperature studies [26, 28, 29]. Thanks to the high signal-to-noise ratio of the data and the large area detector used during the XPCS measurements, we can evaluate the dynamics of the glass at different wave-vectors \(q\) even in the non-ergodic state, bypassing the problem of aging [27]. Although the probed q-range is limited by the size of the detector (Fig. S6), the relaxation time follows a \(\tau(q)=1/(cq^{\alpha})\) dependence on the probed wave-vector, with 0<\(\alpha\)<1. This is shown in Fig. 4c) for 6.3 GPa, where the fit yields \(\alpha\)=0.36\(\pm\)0.04. The wave-vector dependence of the dynamics and the constant compressed shape of the ISFs imply that the ISFs can be described by \(|F(\delta t)|=e^{-(\delta t/\tau(q))^{\beta}}=e^{-\left((c\,\delta t)^{\theta}q\right)^{k}}\) with \(\theta=1/\alpha=2.74\) and \(k=\alpha\cdot\beta=0.73\) at 6.3 GPa [30]. This expression confirms the complex nature of the dynamics of MGs and contrasts with the high-temperature diffusive motion of liquid metals, which would instead correspond to \(\tau(q)=1/(Dq^{2})\) and thus to \(|F(\delta t)|=e^{-Dq^{2}\delta t}\), with D the diffusion coefficient.
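A sketch of the power-law fit of the q-dependent relaxation times, together with the derived exponents \(\theta\) and \(k\), is given below; the q values and relaxation times are placeholders standing in for the binned detector data of Fig. S6.

```python
# Sketch: fitting tau(q) = 1/(c * q**alpha) to relaxation times obtained
# from q-binned speckle data, and deriving theta = 1/alpha, k = alpha*beta.
import numpy as np
from scipy.optimize import curve_fit

def tau_of_q(q, c, alpha):
    return 1.0 / (c * q ** alpha)

q = np.array([2.70, 2.78, 2.86, 2.94, 3.02])          # probed wave-vectors [1/A]
tau = np.array([224.8, 222.5, 220.3, 218.1, 216.0])   # KWW relaxation times [s]

popt, _ = curve_fit(tau_of_q, q, tau, p0=(3e-3, 0.4))
c_fit, alpha_fit = popt
beta = 2.0                                             # averaged KWW shape exponent
theta, k = 1.0 / alpha_fit, alpha_fit * beta
print(f"alpha = {alpha_fit:.2f}, theta = {theta:.2f}, k = {k:.2f}")
```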
To characterize the evolution of the dynamics over the complete compression/decompression cycle, we have defined a dynamical heterogeneity parameter in the following way. We first extract the temporal evolution of the correlations \(C(t,\delta t=\tau)=\langle I(t)\cdot I(t+\tau)\rangle/\langle I\rangle^{2}\) at a fixed delay time between frames corresponding to the structural relaxation time \(\tau\) obtained from the KWW analysis of the individual ISFs (red curve in Fig. 5a). We then compute distributions of the correlation values observed at a single pressure (Fig. 5b) by averaging all the different histograms of \(C(t,\delta t=\tau)\) at this pressure. Heterogeneous dynamics lead to broad, potentially multimodal distributions, as illustrated by the distribution obtained at 3.3 GPa, where two main distinct contributions are visible in addition to the long tail at large values. Overall, the distributions broaden toward both low and high correlation values when pressure increases from 0.1 GPa to 3.3 GPa, and shrink afterwards. As the width of the distribution translates directly into the behaviour of the dynamics, we define a heterogeneity parameter \(\Delta C\) as the smallest width that contains 90% of the values of the distribution (as shown in Fig. 5b at 0.1 GPa). Similar results are observed also for lower percentages of \(\Delta C\) (Fig. S9). The evolution of this heterogeneity parameter traces the pathway of the dynamics during the compression-decompression cycle, and is displayed in Fig. 5c). The compression regime is inversely related to the evolution of \(\langle\tau\rangle\), with the bell-shaped curve centred around 3.3 GPa, and corresponds to the dynamical transition between the rejuvenated and relaxed regimes described above. Interestingly, the decompression pathway shows the hysteresis deduced from the TTCFs at 3.3 GPa for increasing and decreasing pressures (Fig. 3). Decreasing the pressure does not significantly impact the dynamical behaviour until 0.5 GPa, with a heterogeneity that remains relatively constant, possibly going through a limited increase. At 0.5 GPa, the heterogeneity rises up to a value similar to the maximum observed in compression. This pressure step corresponds to a fully deflated membrane in the DAC, and one could associate the dynamic fluctuations with mechanical instabilities of the cell. However, the pressure stability of 0.04 GPa over the course of the measurement rules out this possible artefact.

Figure 4: **Aging and wave-vector dependence of the dynamics in the homogeneous regime.** a) ISFs measured at 6.3 GPa as a function of the elapsed time from the pressure change. Black dotted lines correspond to KWW fits to the data. Aging is visible through the shift to long delay times with increasing waiting time (from left to right ISFs). b) Scaling of the ISFs as a function of the reduced time \(\delta t/\tau\). Inset shows the evolution of the corresponding \(\tau\) and the best fit to the equation \(\tau(t_{w})=\tau_{0}\exp(t_{w}/\tau^{*})\) (line). c) Wave-vector dependence of the dynamics at 6.3 GPa: top) KWW shape parameter; and bottom) relaxation time (symbols) and integrated intensity in the detector (grey line) measured at 6.3 GPa.
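As an illustration, the heterogeneity parameter \(\Delta C\) defined above (the narrowest interval containing 90% of the correlation values at a fixed delay \(\delta t=\tau\)) can be computed with a few lines of code; the correlation trace below is a random placeholder, not measured data.

```python
# Sketch: dynamical heterogeneity parameter DeltaC, defined as the smallest
# interval containing 90% of the correlation values C(t, dt = tau).
import numpy as np

def delta_c(corr_values, fraction=0.90):
    """Smallest width of an interval containing `fraction` of the values."""
    v = np.sort(np.asarray(corr_values))
    n_in = int(np.ceil(fraction * v.size))
    widths = v[n_in - 1:] - v[:v.size - n_in + 1]   # all candidate intervals
    return widths.min()

rng = np.random.default_rng(0)
corr = 1.0 + 0.05 * rng.normal(size=5000)           # placeholder C(t, dt=tau) values
print(f"DeltaC (90%) = {delta_c(corr):.3f}")
```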
## Discussion
The existence of two dynamical regimes controlling the atomic motion of MGs under hydrostatic pressure is consistent with results from recent theoretical works, which suggest that increasing pressure leads to the formation of a second metastable higher-energy state in the potential energy landscape [16, 17]. In this picture, fast dynamics correspond to temperature-assisted transitions within this two-level system, which leads to rejuvenation at low pressures. With a further increase of pressure the second metastable state vanishes, and the dynamics then revert to the slow structural relaxation, similarly to our data [16, 17]. It would be interesting to know whether the model could also describe the heterogeneous-to-homogeneous evolution of the particle motion during compression.
Figure 5: **Dynamical pathway during the full compression-decompression cycle.** a) Typical evolution of \(C(t_{1},t_{2})=\langle I(t_{1})\cdot I(t_{2})\rangle/\langle I\rangle^{2}\) at a fixed delay time \(\delta t=t_{1}-t_{2}=\tau\). b) Corresponding distributions of \(C(t,\delta t=\tau)\) during the compression stage, for \(\tau\) the structural relaxation time obtained from the KWW fits of the ISFs. Distributions are offset vertically for clarity. c) Dynamical heterogeneity \(\Delta C\) as a function of the applied pressure. This parameter represents the width of the distributions in panel b), defined as the smallest interval that contains 90% of the correlations. The first point corresponds to the loading pressure of 0.1 GPa. The fixed delay time \(\delta t=\tau\) is not accessible at 1 atm because the dynamics are too slow to observe a full decorrelation in the TTCFs.
The decorrelation events observed in the rejuvenation regime, especially when fast and complete decorrelation occurs, are the sign of cascade or avalanche-like cooperative relaxation mechanisms, where local relaxation events trigger neighbouring events in a chain reaction [31]. While the trigger for thermally activated relaxation in MGs is highly localized and independent of the stability of the system [32, 33], this chain reaction implies a high spatial density of local minima in the PEL of the glass [34], as isolated minima do not interact with each other. Such avalanche-like dynamics have been reported as an aging mechanism in a similar Pd-based metallic glass [29], in a mechanically stressed metallic glass ribbon [28], and as a mediator of aging and/or crystallization in a hard-sphere glass [33, 35]. Regardless of the final structural state (aged glass or crystal), Yanagishima et al. showed that the avalanche events statistically appear in regions of lower local density and bond orientational order [33], reinforcing the heterogeneity of the PEL mentioned above at low pressures. Therefore, the avalanche-like dynamics observed at low pressures witness a higher degree of inhomogeneity in the glass structure in this pressure range, in agreement with the as-cast nature of our glass. As individual avalanches do not necessarily increase the local order in the glass [33], and a longer time is necessary for the aging trend to emerge at low pressures, the rejuvenation regime persists for several steps in pressure and for many hours per pressure without any signature of relaxation. The transition from rejuvenation to aging also hints at an effect of the excess free volume, which is present in the as-cast glass but seems to be greatly reduced during the relaxation at high pressures, as suggested by the dynamical hysteresis, even if further measurements would be necessary to investigate this aspect further. XRD studies report the occurrence of elastic deformations during the compression of MGs, supporting the idea of a homogeneous fractal network model for the glass [36], as opposed to the heterogeneous structural model of liquid-like regions of loosely bonded atoms embedded in a solid-like matrix [3, 37, 38]. Our work shows that the presence of apparently reversible structural changes under hydrostatic compression (Fig. 2b) is not a sufficient condition to ensure a simple elastic structural mechanism under compression, as they can be accompanied by a dramatic hysteresis of the dynamics (Fig. 3 and S5).
The \(\tau(q)=1/(cq^{\alpha})\) wave-vector dependence of the relaxation time implies a super-diffusive collective particle motion in the glass at all pressures, which differs from the well-known structural dependence of the relaxation time observed in supercooled liquids in the proximity of the FSDP [39, 40]. Above the glass transition temperature, the equilibrium dynamics are associated with cage-escape processes, and the long-time collective motion is sub-diffusive, leading to a stretched exponential decay of the ISFs, described thus by a value of \(\beta\)\(\leq\)1 [40, 41]. In the glassy state, the atomic mobility of MGs originates from fast secondary relaxation processes, such as the \(\beta\)- and \(\gamma\)-processes [3, 42]. These processes control the stress response of the material in the non-ergodic state and have been associated with cooperative string-like particle motions in nanometric liquid-like regions [43, 44]. Compressed ISFs and super-diffusive dynamics have been reported in many different complex systems such as colloidal gels, clays, concentrated emulsions, oxides and soft colloids [45, 30, 46]. In these systems, the anomalous dynamics have been associated with the presence of random local stresses in the materials, which are then released, triggering the faster-than-exponential collective dynamics [46, 30, 47, 48, 41]. In MGs this stress propagation could be related to the kinetics of structural rearrangements induced by the stress field controlled by the \(\beta\)- and \(\gamma\)-relaxation processes. Further studies will be necessary to clarify the nature of the collective dynamics in MGs and their evolution paths under annealing and pressure.
## Methods
Glass synthesis: We prepared a PtCuNi precursor by arc-melting the pure metallic components (purity >99.95%) under a Ti-gettered Ar atmosphere (purity >99.999%). We then inductively alloyed the elemental P with the PtCuNi precursor in a fused-silica tube under Ar atmosphere. In order to obtain an oxide content as low as possible, the alloy was subjected to a fluxing treatment in dehydrated B\({}_{2}\)O\({}_{3}\) for more than 6 hours at 1473 K. The ribbons were produced by melt spinning of the master alloy on a rotating copper wheel under a high-purity Ar atmosphere. The resulting glass ribbons of Pt\({}_{42.5}\)Cu\({}_{27}\)Ni\({}_{8.5}\)P\({}_{21}\) at.% had a thickness of 20 \(\upmu\)m and a width of 2 mm.
High Pressure: the sample was cut from the as-cast 20 \(\upmu\)m thick ribbon to a rough shape of 50x50x20 \(\upmu\)m\({}^{3}\). The sample was subsequently pre-loaded at 0.1 GPa in a membrane-driven diamond anvil cell with a ruby sphere and a 4:1 methanol/ethanol mixture as pressure-transmitting medium (PTM), to ensure a perfectly hydrostatic compression up to 10 GPa [49]. The DAC was equipped with 600 \(\upmu\)m (culet size) diamonds and a pre-indented, laser-drilled stainless steel gasket to make a 60 \(\upmu\)m x 300 \(\upmu\)m (height x diameter) experimental volume. The compression cycle up to 7 GPa is shown in Fig. S1: similar pressures were reached in compression and decompression, and the elapsed time at each pressure was around three hours in compression and one hour in decompression. The pressure was measured from the wavelength of the chromium \({}^{2}\)E\(\rightarrow\)\({}^{4}\)A\({}_{2}\) transition in a ruby sphere before and after each pressure change, and a dedicated pressure protocol on the membrane ensured a pressure variation lower than 0.12 GPa at all pressures (Fig. S7).
X-Ray Diffraction: The structure of the metallic glass under pressure was monitored by two different runs of x-ray diffraction. The first run, conducted at beamline ID27 at the ESRF synchrotron, France, reproduced the pressure protocol of the XPCS experiment. The experiment was performed using an incident energy of 33 keV, an EIGER2 X CdTe 9M detector (active area = 233.1 x 244.7 mm\({}^{2}\), pixel size = 75 \(\upmu\)m) and a DAC loaded with a 4:1 methanol/ethanol mixture as PTM, a sample and a ruby sphere for pressure determination. The background was collected at each pressure by measuring the scattering pattern of a location inside the DAC next to the sample. The maximum scattering vector probed in this run is q=12 A\({}^{-1}\). Azimuthal integration of the 2D scattered patterns was performed using the pyFAI Python library [50, 51] to yield 1D diffraction patterns, and the computation of the (background-corrected) Faber-Ziman structure factor with Krogh-Moe-Norman normalization [52] was performed using the Python-based Amorpheus software [53].
To assess quantitatively the link between the peak position and the sample density, x-ray diffraction data were collected as a function of temperature at atmospheric pressure to compare the shift of the first sharp diffraction peak to the coefficient of thermal expansion measured by dilatometry (Fig. S1). The XRD data were collected at the beamline ID15a [54] at the ESRF synchrotron, France. Data were acquired using an incident beam energy of 68.5 keV, and the scattered diffraction patterns were collected with a Pilatus3 X CdTe 2M detector (active area = 253.7 x 288.8 mm\({}^{2}\), pixel size = 172 \(\upmu\)m). A sample-to-detector distance of 1.087 m was chosen to maximize the resolution on the first sharp diffraction peak. The background was acquired under the same conditions without a sample. Diffraction patterns were azimuthally integrated using routines from the pyFAI library [50, 51], with locally implemented corrections for outlier rejection, background, polarization of the X-rays and detector geometry, response, and transparency, to yield 1D diffraction patterns.
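A minimal azimuthal-integration sketch with pyFAI might look as follows; the geometry file, image names, and parameters are placeholders, and the locally implemented corrections mentioned above are not reproduced.

```python
# Sketch: azimuthal integration of a 2D scattering image into a 1D I(q)
# pattern with pyFAI (geometry and file names are placeholders).
import fabio
import pyFAI

ai = pyFAI.load("calibration.poni")            # detector geometry from a PONI file
img = fabio.open("glass_frame_0001.edf").data  # raw 2D detector image
bkg = fabio.open("background_0001.edf").data

q, intensity = ai.integrate1d(img - bkg, npt=2000, unit="q_A^-1",
                              polarization_factor=0.99)
# `q` is in 1/Angstrom; `intensity` is the azimuthally averaged signal,
# from which the structure factor S(Q) can then be normalized.
```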
XPCS: In order to optimize high-energy and high-pressure XPCS studies, we performed three different XPCS campaigns for a total of 3 weeks of beamtime at beamline ID10 at the ESRF synchrotron, France. The main data have been collected using a 20.95 keV partially coherent monochromatic beam with a photon flux of 4.2x10\({}^{11}\) photon/s, focused by a 2D Be lens transfocator to 50.5x14.2 \(\upmu\)m\({}^{2}\) (HxV, FWHM) and cut by a pair of slits to an illumination area of 8x8 \(\upmu\)m\({}^{2}\) on the sample. The second sample was measured in a second run with an incident energy set to 21.67 keV and a flux of 7.3x10\({}^{11}\) photon/s, focused to a beamsize of 5.2x4.2 \(\upmu\)m\({}^{2}\) (HxV, FWHM) on the sample. To record the speckle patterns, we placed an Eiger2 4M CdTe detector (active area = 155.1 x 162.2 mm\({}^{2}\), pixel size = 75 \(\upmu\)m) 5 meters downstream, at an angle corresponding to the pressure-dependent position of the FSDP, whose maximum is at 2.79 A\({}^{-1}\) at atmospheric pressure and 25\({}^{\circ}\)C. The top part of this FSDP is reconstructed by integrating the intensity in the detector, allowing the monitoring of the position of the peak during the measurements. An additional PILATUS detector has also been employed to control the evolution of the structure in a broader Q range during the experiment. A constant acquisition time of 0.1 s/frame was kept throughout the whole XPCS experiment, with scans ranging from 6000 frames to 14000 frames depending on \(\tau\). Intensity-intensity correlation functions, g\({}_{2}\)(t), and TTCFs are extracted from the successive speckle patterns using the event correlator method described in [55]. The ISFs are then obtained from the g\({}_{2}\)(t) through the Siegert relation \(g_{2}(q,\delta t)=~{}1~{}+\gamma\cdot|F(q,\delta t)|^{2}\), whose validity in non-ergodic systems is assured by the use of large area detectors [55, 57]. In this expression \(\gamma\) is the experimental contrast related to the degree of coherence of the beam. TTCFs have been evaluated from the normalized correlation \(\langle I(t_{1})\cdot I(t_{2})\rangle/\langle I\rangle^{2}\) between all pairs of scattering patterns recorded during a scan at a given q. The main diagonal corresponds to the elapsed time of the measurement with _t(frame 1) = t(frame 2) = t_, while any point off this diagonal expresses the correlation value at a certain delay time \(\delta t\) = t(frame 2) - t(frame 1) after the first frame is recorded. To quantify the evolution of the dynamics with pressure, we extracted the characteristic times \(\tau\) of all scans acquired during the compression by fitting KWW functions to the F(q,\(\delta t\)) data. We further averaged the different values of \(\tau\) at a single pressure to get an average \(\langle\tau\rangle\) for each isobar. No contribution to the dynamics has been observed from the background (diamonds and pressure-transmitting medium), as shown in Fig. S8. For the analysis of Fig. 5, the computation of \(\langle I(t)\cdot I(t+\tau)\rangle/\langle I\rangle^{2}\) has been done on the raw data, i.e. taking into account also the aging within each scan. Although this potentially leads to an overestimation of the heterogeneity parameter in the aging regime, we found this effect to be very limited, leading to a well-defined transition between the rejuvenation and aging regimes.
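For illustration, a direct (pixel-averaged) estimator of the TTCF and g\({}_{2}\), with the ISF recovered through the Siegert relation, can be sketched as follows; the speckle frames are random placeholders and the event-correlator algorithm of Ref. [55] is not reproduced.

```python
# Sketch: two-time correlation function (TTCF) and g2 from a stack of speckle
# frames, with the ISF recovered through the Siegert relation.
import numpy as np

rng = np.random.default_rng(0)
frames = rng.poisson(2.0, size=(500, 4000)).astype(float)  # (time, pixels) placeholder

# TTCF(t1, t2) = <I(t1) I(t2)>_pixels / (<I(t1)> <I(t2)>)
mean_t = frames.mean(axis=1)
ttcf = (frames @ frames.T) / frames.shape[1] / np.outer(mean_t, mean_t)

# g2(dt): average the TTCF along diagonals at fixed delay dt
n_t = frames.shape[0]
g2 = np.array([np.mean(np.diagonal(ttcf, offset=k)) for k in range(1, n_t)])

contrast = 0.05                                   # experimental contrast gamma
isf2 = (g2 - 1.0) / contrast                      # |F(q, dt)|^2 via the Siegert relation
```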
## References
* [1] Ediger, M. D. & Harrowell, P. Perspective: Supercooled liquids and glasses. _J. Chem. Phys._**137**, 080901 (2012).
* [2] Debenedetti, P. G. & Stillinger, F. H. Supercooled liquids and the glass transition. _Nature_**410**, 259-267 (2001).
* [3] Wang, W. H. Dynamic relaxations and relaxation-property relationships in metallic glasses. _Progress in Materials Science_**106**, 100561 (2019).
* [4] Wang, W. H. The elastic properties, elastic models and elastic perspectives of metallic glasses. _Progress in Materials Science_**57**, 487-656 (2012).
* [5] Sun, Y., Concustell, A. & Greer, A. L. Thermomechanical processing of metallic glasses: extending the range of the glassy state. _Nat Rev Mater_**1**, 1-14 (2016).
* [6] Pan, J. _et al._ Extreme rejuvenation and softening in a bulk metallic glass. _Nat Commun_**9**, 560 (2018).
* [7] Pan, J., Ivanov, Y. P., Zhou, W. H., Li, Y. & Greer, A. L. Strain-hardening and suppression of shear-banding in rejuvenated bulk metallic glass. _Nature_**578**, 559-562 (2020).
* [8] Egami, T., Tong, Y. & Dmowski, W. Deformation in Metallic Glasses Studied by Synchrotron X-Ray Diffraction. _Metals_**6**, 22 (2016).
* [9] Liu, C. & Fan, Y. Emergent Fractal Energy Landscape as the Origin of Stress-Accelerated Dynamics in Amorphous Solids. _Phys. Rev. Lett._**127**, 215502 (2021).
* [10] Ketov, S. V. _et al._ Rejuvenation of metallic glasses by non-affine thermal strain. _Nature_**524**, 200-203 (2015).
* [11] Ding, G. _et al._ Ultrafast extreme rejuvenation of metallic glasses by shock compression. _Science Advances_**5**, eaaw6249 (2019).
* [12] Tong, Y., Dmowski, W., Bei, H., Yokoyama, Y. & Egami, T. Mechanical rejuvenation in bulk metallic glass induced by thermo-mechanical creep. _Acta Materialia_**148**, 384-390 (2018).
* [13] Dmowski, W. _et al._ Structural rejuvenation in a bulk metallic glass induced by severe plastic deformation. _Acta Materialia_**58**, 429-438 (2010).
* [14] Zhou, H. _et al._ X-ray photon correlation spectroscopy revealing the change of relaxation dynamics of a severely deformed Pd-based bulk metallic glass. _Acta Materialia_**195**, 446-453 (2020).
* [15] Qiao, J. C., Pelletier, J. M., Kou, H. C. & Zhou, X. Modification of atomic mobility in a Ti-based bulk metallic glass by plastic deformation or thermal annealing. _Intermetallics_**28**, 128-137 (2012).
* [16] Phan, A. D., Zaccone, A., Lam, V. D. & Wakabayashi, K. Theory of Pressure-Induced Rejuvenation and Strain Hardening in Metallic Glasses. _Phys. Rev. Lett._**126**, 025502 (2021).
* Rapid Research Letters_**15**, 2100235 (2021).
* [18] Mezouar, M. & Garbarino, G. Exploring phase transitions and the emergence of structural complexity at the ESRF extremely brilliant source. _J. Phys.: Condens. Matter_**33**, 244005 (2021).
* [19] Yavari, A. R. _et al._ Excess free volume in metallic glasses measured by X-ray diffraction. _Acta Materialia_**53**, 1611-1619 (2005).
* [20] Stolpe, M. Synchrotron x-ray diffraction studies of bulk metallic glass forming liquids and glasses. (Saarlandische Universitats- und Landesbibliothek, 2019). doi:10.22028/D291-32093.
* [21] Kuchemann, S., Liu, C., Dufresne, E. M., Shin, J. & Maass, R. Shear banding leads to accelerated aging dynamics in a metallic glass. _Phys. Rev. B_**97**, 014204 (2018).
* [22] Niss, K., Dalle-Ferrier, C., Tarjus, G. & Alba-Simionesco, C. On the correlation between fragility and stretching in glass-forming liquids. _J. Phys.: Condens. Matter_**19**, 076102 (2007).
* [23] Paluch, M., Grzybowska, K. & Grzybowski, A. Effect of high pressure on the relaxation dynamics of glass-forming liquids. _J. Phys.: Condens. Matter_**19**, 205117 (2007).
* [24] Giordano, V. M. & Ruta, B. Unveiling the structural arrangements responsible for the atomic dynamics in metallic glasses during physical aging. _Nat Commun_**7**, 10344 (2016).
* [25] Ruta, B. _et al._ Atomic-Scale Relaxation Dynamics and Aging in a Metallic Glass Probed by X-Ray Photon Correlation Spectroscopy. _Phys. Rev. Lett._**109**, 165701 (2012).
* [26] Ruta, B., Pineda, E. & Evenson, Z. Relaxation processes and physical aging in metallic glasses. _J. Phys.: Condens. Matter_**29**, 503002 (2017).
* [27] Ruta, B., Baldi, G., Monaco, G. & Chushkin, Y. Compressed correlation functions and fast aging dynamics in metallic glasses. _J. Chem. Phys._**138**, 054508 (2013).
* [28] Luo, P. _et al._ Nonmonotonous atomic motions in metallic glasses. _Phys. Rev. B_**102**, 054108 (2020).
* [29] Evenson, Z. _et al._ X-Ray Photon Correlation Spectroscopy Reveals Intermittent Aging Dynamics in a Metallic Glass. _Phys. Rev. Lett._**115**, 175701 (2015).
* [30] Cipelletti, L. _et al._ Universal non-diffusive slow dynamics in aging soft matter. _Faraday Discuss._**123**, 237-251 (2003).
* [31] Trachenko, K. & Zaccone, A. Slow stretched-exponential and fast compressed-exponential relaxation from local event dynamics. _J. Phys.: Condens. Matter_**33**, 315101 (2021).
* [32] Fan, Y., Iwashita, T. & Egami, T. How thermally activated deformation starts in metallic glass. _Nat Commun_**5**, 5083 (2014).
* [33] Yanagishima, T., Russo, J. & Tanaka, H. Common mechanism of thermodynamic and mechanical origin for ageing and crystallization of glasses. _Nat Commun_**8**, 15954 (2017).
* [34] Fan, Y., Iwashita, T. & Egami, T. Crossover from Localized to Cascade Relaxations in Metallic Glasses. _Phys. Rev. Lett._**115**, 045501 (2015).
* [35] Sanz, E. _et al._ Avalanches mediate crystallization in a hard-sphere glass. _Proceedings of the National Academy of Sciences_**111**, 75-80 (2014).
* [36] Chen, S. _et al._ Reversible linear-compression behavior of free volume in a metallic glass. _Phys. Rev. B_**105**, 144201 (2022).
* [37] Wang, Z., Sun, B. A., Bai, H. Y. & Wang, W. H. Evolution of hidden localized flow during glass-to-liquid transition in metallic glass. _Nat Commun_**5**, 5823 (2014).
* [38] Wagner, H. _et al._ Local elastic properties of a metallic glass. _Nature Mater_**10**, 439-442 (2011).
* [39] Ruta, B. _et al._ Wave-Vector Dependence of the Dynamics in Supercooled Metallic Liquids. _Phys. Rev. Lett._**125**, 055701 (2020).
* [40] Neuber, N. _et al._ Disentangling structural and kinetic components of the \(\alpha\)-relaxation in supercooled metallic liquids. _Commun Phys_**5**, 1-10 (2022).
* [41] Chaudhuri, P., Berthier, L. & Kob, W. Universal Nature of Particle Displacements close to Glass and Jamming Transitions. _Phys. Rev. Lett._**99**, 060604 (2007).
* [42] Yu, H.-B., Wang, W.-H. & Samwer, K. The \(\beta\) relaxation in metallic glasses: an overview. _Materials Today_**16**, 183-191 (2013).
* [43] Yu, H.-B., Richert, R. & Samwer, K. Structural rearrangements governing Johari-Goldstein relaxations in metallic glasses. _Science Advances_**3**, e1701577 (2017).
* [44] Chang, C. _et al._ Liquid-like atoms in dense-packed solid glasses. _Nat. Mater._**21**, 1240-1245 (2022).
* [45] Angelini, R. & Ruzicka, B. Non-diffusive dynamics in a colloidal glass: Aging versus rejuvenation. _Colloids and Surfaces A: Physicochemical and Engineering Aspects_**483**, 316-320 (2015).
* [46] Gnan, N. & Zaccarelli, E. The microscopic role of deformation in the dynamics of soft colloids. _Nat. Phys._**15**, 683-688 (2019).
* [47] Ferrero, E. E., Martens, K. & Barrat, J.-L. Relaxation in Yield Stress Systems through Elastically Interacting Activated Events. _Phys. Rev. Lett._**113**, 248301 (2014).
* [48] Bouzid, M., Colombo, J., Barbosa, L. V. & Del Gado, E. Elastically driven intermittent microscopic dynamics in soft solids. _Nat Commun_**8**, 15846 (2017).
* [49] Klotz, S., Chervin, J.-C., Munsch, P. & Marchand, G. L. Hydrostatic limits of 11 pressure transmitting media. _J. Phys. D: Appl. Phys._**42**, 075413 (2009).
* [50] Ashiotis, G. _et al._ The fast azimuthal integration Python library: pyFAI. _J Appl Cryst_**48**, 510-519 (2015).
* [51] Kieffer, J., Petitdemange, S. & Vincent, T. Real-time diffraction computed tomography data reduction. _J Synchrotron Rad_**25**, 612-617 (2018).
* [52] Krogh-Moe, J. A method for converting experimental X-ray intensities to an absolute scale. _Acta Cryst_**9**, 951-953 (1956).
* [53] Boccato, S. _et al._ Amorpheus: a Python-based software for the treatment of X-ray scattering data of amorphous and liquid systems. _High Pressure Research_**42**, 69-93 (2022).
* [54] ID15A at the ESRF - a beamline for high speed operando X-ray diffraction, diffraction tomography and total scattering. _J Synchrotron Rad_**27**, 515-528 (2020).
* [55] Chushkin, Y., Caronna, C. & Madsen, A. A novel event correlation scheme for X-ray photon correlation spectroscopy. _J Appl Cryst_**45**, 807-813 (2012).
* [56] Bartsch, E., Frenz, V., Baschnagel, J., Schartl, W. & Sillescu, H. The glass transition dynamics of polymer micronetwork colloids. A mode coupling analysis. _J. Chem. Phys._**106**, 3743-3756 (1997).
* [57] Cipelletti, L. & Weitz, D. A. Ultralow-angle dynamic light scattering with a charge coupled device camera based multispeckle, multitau correlator. _Review of Scientific Instruments_**70**, 3214-3221 (1999).
## Acknowledgements
We acknowledge ESRF (Grenoble, France), for the provision of experimental facilities. Parts of this research were carried out at ID10 and ID27 beamlines under the LTP project HC4529. We gratefully thank M. di Michiel for providing in-house experimental time at the ID15a beamline and for his assistance during the experiment. We would also like to thank T. Poreba, K. Lhoste and D. Duran for assistance. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement No 948780). All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials.
## Competing Interests
The authors declare that there are no financial or non-financial competing interests.
## Author Contributions
B. R., A. C., Y. C. and F. Z. conceived the study. N.N. and M.F. prepared the samples. G. G., J. J. and M. M. provided technical and scientific support for all high pressure experiments. A. C., B. R., F.Z., Y. C., S. L., T. D., N. N., M. F., E. P., J. S. and A.R. conducted the HP-XPCS experiments at beamline ID10. A.C., B.R., S. L., G.V. and M. di M. conducted the high temperature XRD measurements at beamline ID15A. A.C., J.S., B.R. and G.G. performed the HP-XRD experiments at beamline ID27. A.C. analysed all data with the support of B.R., Y.C., G.V., G.G. and G.M.. A.C. and B.R. wrote the manuscript with inputs from all authors.
Supplementary Materials to "Denser glasses relax faster: a competition between rejuvenation and aging during _in-situ_ high pressure compression at the atomic scale"
### A. Cornet et al.
Similar pressures were reached upon compression and decompression to allow direct comparison. The first step in compression at 0.1 GPa corresponds to the preloading of the cell, and the pressures of the subsequent steps are 1.5, 3.3, 4.9, 6.3, and 7 GPa. The last decompression step at 0.5 GPa corresponds to the situation where the diamond anvil cell (DAC) membrane is fully deflated, but the DAC remains mechanically locked. The top of the First Sharp Diffraction Peak (FSDP) can be reconstructed by integrating the intensity collected on the detector during a complete XPCS scan. The quantitative estimate of the peak position q\({}_{1}\) is used to verify the consistency of the XPCS experiment with the X-ray diffraction (XRD) experiment. Here, the continuous shift of the FSDP toward high scattering vectors confirms the XRD results shown in the main text. As q\({}_{1}\) is linked to the characteristic distance \(d\) of the medium range order of the glass by q\({}_{1}\)=2\(\pi\)/\(d\), we can quantitatively assess the validity of the link between q\({}_{1}\) and the macroscopic density of the glass by comparing the coefficient of thermal expansion (CTE) derived from the high energy XRD measurement to the CTE obtained from dilatometry. The CTE \(\alpha\) is inferred from the evolution of the relative volume change (V(T)-V(25\({}^{\circ}\)C))/V(25\({}^{\circ}\)C) \(\propto\) (q\({}_{1}\)(25\({}^{\circ}\)C)/q\({}_{1}\)(T))\({}^{3}\)-1. We obtain \(\alpha\) = 3.85x10\({}^{-5}\)\(K^{-1}\) in the glassy state, in good agreement with the reported value obtained by dilatometry\({}^{1}\), \(\alpha\) = 3.95x10\({}^{-5}\)\(K^{-1}\). This shows that the FSDP is directly linked to the macroscopic density for this glass.
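A minimal sketch of this CTE estimate from the temperature dependence of the peak position is given below; the q\({}_{1}\)(T) values are illustrative placeholders chosen to give a CTE of the reported order of magnitude.

```python
# Sketch: coefficient of thermal expansion estimated from the temperature
# dependence of the first sharp diffraction peak position q1(T).
import numpy as np

T = np.array([300., 350., 400., 450., 500.])              # temperature [K]
q1 = np.array([2.8700, 2.8682, 2.8663, 2.8645, 2.8626])   # FSDP position [1/A]

# Relative volume change with respect to the first point: V/V0 = (q1_0/q1)^3
dV_over_V = (q1[0] / q1) ** 3 - 1.0
alpha = np.polyfit(T, dV_over_V, 1)[0]   # volumetric CTE [1/K], slope of dV/V vs T
print(f"CTE ~ {alpha:.2e} 1/K")
```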
Although the intensity impinging on the sample is in general stable, some fluctuations can occur during a full week of beamtime, usually due to adjustments of the electron beam in the storage ring and to the refilling mode. As the magnitude of these fluctuations is usually small compared to the total intensity, the data are not affected. In Figure S2 we report selected TTCFs and the corresponding trace (total intensity in the detector as a function of time), which show two different situations. In the left panel and at the beginning of the central panel, heterogeneous dynamics appear while the trace is stable. Conversely, in the central and right panels the trace shows fluctuations related to the re-fill of the storage ring, with no influence on the dynamics. This demonstrates that fluctuations of the incoming beam intensity are not responsible for the observation of heterogeneous dynamics. These data also allow us to rule out possible sample movements as sources of induced decorrelations. The position of the sample was monitored by a microscope before and after each scan. Large movements on the scale of 10 \(\mu\)m (>5 times the Rayleigh criterion) would then have been detected, which is not the case. We also exclude smaller, micrometre-scale movements: if the decorrelation were associated only with a change of the scattering volume, the decorrelation time \(\tau\) should be identical before and after the event, which is generally not the case. In both the left and central panels, the 'decorrelations' in the TTCFs lead to different dynamic profiles, which implies that the relaxation of the system has changed. Another example is reported in Fig. S3, as described below.
It is well known that some pressure-transmitting media (PTM) can alter the properties of a glass under compression. This is the case, for instance, of gas loading with He, which can enter the large open network of silica glasses[2]. Due to its large molecular structure, this is not the case for the alcohol mixture chosen here as PTM, even in the presence of large open structures[2].
## 3 Additional data on the rejuvenation and relaxation regimes
Fig. S3 shows the TTCFs measured at 4.9 GPa in compression. The data show aging regimes separated by a cascade relaxation at 9600 s. The mix of cascade relaxation and aging between the two well-defined dynamical regimes at 3.3 and 6.3 GPa shows that the transition is not abrupt but continuous. The third TTCF at 4.9 GPa is also a further confirmation of the absence of sample movement as the source of the heterogeneous dynamical regime observed at low pressures. If sample movement caused the decorrelation event at 9600 s, the decorrelation time \(\tau\) should be identical before and after the event, which is clearly not the case. The TTCFs at 0.5 GPa at the end of the decompression stage show that the heterogeneous dynamical regime is eventually recovered, but at a lower pressure compared to compression, in accordance with the evolution of the dynamical heterogeneity introduced in Figure 5.

Figure S2: Trace (total intensity on the detector) and selected TTCFs in compression at 3.3 GPa (left panels, elapsed time = 9700 s), 4.9 GPa (central panels, elapsed time = 9800 s) and in decompression at 3.3 GPa (right panels, elapsed time = 860 s).
## 4 Repeatability: results from a second experiment
We verified the repeatability of the results on a new sample measured in a different run. As shown in Fig. 2e and Fig. S4, the results of both runs overlap and show the same heterogeneous vs homogeneous transition with pressure, which confirms the robustness of the data shown in this study.
## 5 Hysteresis evolution of the dynamics under pressure compression and decompression
The hysteresis shown in the main text from the comparison of the TTCFs at 3.3 GPa and the complete pathway of the heterogeneity parameter is also visible from the characteristic relaxation time of the intermediate scattering function. In Figure S5 we represent intermediate scattering functions (ISFs) from the compression and decompression stages at 1.5 GPa and 3.3 GPa. To provide an accurate comparison, we chose to plot the curves obtained at the most similar elapsed times \(t_{w}\) (defining the duration of the isobar). Both the shift of the curves and the relaxation times inferred from the KWW model fitted to the data show slower dynamics during the decompression stage, confirming the hysteretic behaviour.
## 6 Wave-vector dependent XPCS study
To determine the dependence of the ISF on the scattering vector q, we applied a binning to the raw data. In Figure S6 we plot a colour representation of the total scattered intensity in the detector within a single scan of 7000 frames with an acquisition time of 0.1 s. The distribution of the intensity clearly shows the maximum of the diffraction peak. The grey area corresponds to the raw mask applied during the pre-processing of the data, which covers the shadow of the vacuum tube between the sample and the detector and Kossel lines from the diamonds. In the right panel, the segmentation of the unmasked area of the detector into several bands at different q is visible. The TTCFs and ISFs were then extracted from the data corresponding to each of these bins.
As XPCS is extremely sensitive to any structural change, it is essential to minimize any potential pressure drift during the measurements. The typical pressure increase after reaching the desired set-point for our DAC configuration is shown in the upper right panel, and can be higher than 1 GPa over one hour. To mitigate this issue, we determined by how much the membrane pressure can be lowered to stabilize the pressure on the sample. More precisely, we measured by how much the membrane pressure can be decreased after an initial increase before any change in the sample pressure is seen (down to our precision of 0.01 GPa), as shown in the left panels. We applied this loading protocol during the measurements, and we report all pressure drifts, taken as the pressure difference between the beginning and the end of a pressure step. The pressure drifts are then limited to about 0.1 GPa. Importantly, this drift is random and does not depend on the nominal pressure, so it is not responsible for the two-regime pressure effect on the dynamics reported in this study.
## 8 Diamond Anvil Cell XPCS background
Fig. S8 contains two ISFs obtained under pressure with the beam focused either on the sample or in the experimental volume next to the sample. The absence of decorrelation in the second case demonstrates that the contribution of the diamond anvil cell, which comprises contributions from the diamonds and the pressure-transmitting medium, is purely static. The sample dynamics probed with XPCS are therefore not affected by the high-pressure cell.
## 9 Dynamical heterogeneity parameter
The dynamical heterogeneity parameter \(\Delta\)C characterizes the level of inhomogeneity of the glass atomic-scale dynamics. This parameter corresponds to the smallest width that encompasses 90% of the distribution of the correlation values at a fixed delay time \(\tau\) for each pressure. This 90% threshold was chosen to reflect the extreme values taken by the correlation values. However, we show that the evolution of \(\Delta\)C with respect to pressure does not strictly depend on the value of this threshold. In Fig. S9 we reproduce Fig. 5c of the main text for different threshold values: 90%, 80%, 70%, and 60% (upper left, upper right, lower left and lower right panels respectively).
The result obtained for a width of 90% is reproduced quantitatively down to a width of 70%, and qualitatively down to a width of 60%. Overall, this confirms the robustness of the heterogeneity parameter \(\Delta\)C.
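As an illustration of the procedure described above (our addition, with an assumed function name and input; not the code used for the analysis), the smallest width encompassing a given fraction of the correlation values can be computed as follows; the threshold scan of Fig. S9 then amounts to repeating it for coverages of 90%, 80%, 70%, and 60%.

```python
import numpy as np

def heterogeneity_parameter(corr_values, coverage=0.90):
    """Smallest interval width containing `coverage` of the correlation values
    taken at a fixed delay time tau (one Delta C value per pressure)."""
    c = np.sort(np.asarray(corr_values, dtype=float))
    n = c.size
    k = int(np.ceil(coverage * n))          # number of points the interval must contain
    widths = c[k - 1:] - c[:n - k + 1]      # widths of all windows of k consecutive values
    return widths.min()

# threshold scan as in Fig. S9:
# deltas = [heterogeneity_parameter(c_tau, cov) for cov in (0.9, 0.8, 0.7, 0.6)]
```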
|
2310.14472
|
The dressing field method for diffeomorphisms: a relational framework
|
The dressing field method is a tool to reduce gauge symmetries. Here we
extend it to cover the case of diffeomorphisms. The resulting framework is a
systematic scheme to produce Diff(M)-invariant objects, which has a natural
relational interpretation.
Its precise formulation relies on a clear understanding of the bundle
geometry of field space. By detailing it, among other things we stress the
geometric nature of field-independent and field-dependent diffeomorphisms, and
highlight that the heuristic "extended bracket" for field-dependent vector
fields often featuring in the covariant phase space literature can be
understood as arising from the Fr\"olicher-Nijenhuis bracket. Furthermore, by
articulating this bundle geometry with the covariant phase space approach, we
give a streamlined account of the elementary objects of the (pre)symplectic
structure of a Diff(M)-theory: Noether charges and their bracket, as induced by
the standard prescription for the presymplectic potential and 2-form. We give
conceptually transparent expressions allowing to read the integrability
conditions and the circumstances under which the bracket of charge is Lie, and
the resulting Poisson algebras of charges are central extensions of the Lie
algebras of field-independent ($\mathfrak{diff}(M)$) and field-dependent vector
fields.
We show that, applying the dressing field method, one obtains a
Diff(M)-invariant and manifestly relational formulation of a general
relativistic field theory. Relying on results just mentioned, we easily derive
the "dressed" (relational) presymplectic structure of the theory. This
reproduces or extends results from the gravitational edge mode and
gravitational dressing literature. In addition to simplified technical
derivations, the conceptual clarity of the framework supplies several insights
and allows us to dispel misconceptions.
|
Jordan T. Francois Andre
|
2023-10-23T01:01:40Z
|
http://arxiv.org/abs/2310.14472v3
|
# The dressing field method for diffeomorphisms: a relational framework
###### Abstract
The dressing field method is a tool to reduce gauge symmetries. Here we extend it to cover the case of diffeomorphisms. The resulting framework is a systematic scheme to produce \(\mathrm{Diff}(M)\)-invariant objects, which has a natural relational interpretation.
Its precise formulation relies on a clear understanding of the bundle geometry of field space. By detailing it, among other things we stress the geometric nature of field-independent and field-dependent diffeomorphisms, and elucidate the origin of the heuristic "extended bracket" for field-dependent vector fields often appearing in the covariant phase space literature. Furthermore, by articulating this bundle geometry with the covariant phase space approach, we give a streamlined account of the elementary objects of the (pre)symplectic structure of a \(\mathrm{Diff}(M)\)-theory: Noether charges and their bracket, as induced by the standard prescription for the presymplectic potential and 2-form. We give conceptually transparent expressions allowing one to read the integrability conditions and the circumstances under which the bracket of charges is Lie, and under which the resulting Poisson algebras of charges are central extensions of the Lie algebras of field-independent (\(\mathfrak{diff}(M)\)) and field-dependent vector fields.
We show that, applying the dressing field method, one obtains a \(\mathrm{Diff}(M)\)-invariant and manifestly relational formulation of a general relativistic field theory. Relying on results just mentioned, we easily derive the "dressed" (relational) presymplectic structure of the theory. This reproduces or extends results from the gravitational edge mode and gravitational dressing literature. In addition to simplified technical derivations, the conceptual clarity of the framework supplies several insights and allows us to dispel misconceptions.
**Keywords**: Relationality, bundle geometry, covariant phase space, gravitational dressings, edge modes.
###### Contents
* 1 Introduction
* 2 Geometry of field space
* 2.1 Field space as a principal bundle
* 2.1.1 Natural transformation groups
* 2.2 Differential structure
* 2.2.1 Tangent bundle and subbundles
* 2.2.2 Differential forms and their derivations
* 2.3 General vertical transformations, and gauge transformations
* 2.4 Connections on field space
* 2.4.1 Ehresmann connections
* 2.4.2 Twisted connections
* 2.5 Associated bundles, fundamental representation of \(\mathrm{Diff}(M)\), and integration on \(M\)
* 2.5.1 Associated bundle of regions
* 2.5.2 Integration map
* 3 The dressing field method
* 3.1 Building basic forms via dressing
* 3.1.1 Dressing field and flat connections
* 3.2 Residual symmetries
* 3.2.1 Residual symmetries of the first kind
* 3.2.2 Residual symmetries of the second kind
* 3.3 Dressed regions and integrals
* 3.4 Discussion
* 4 Covariant phase space methods and bundle geometry
* 4.1 Covariant phase space for \(\text{Diff}(M)\)-theories
* 4.1.1 Noether charges for field-independent gauge parameters
* 4.2 Vertical and gauge transformations
* 4.2.1 Noether charges for field-dependent gauge parameters
* 5 Relational formulation
* 5.1 Basic presymplectic structure
* 5.2 Charges for residual symmetries and their bracket
* 5.3 Relational interpretation of the DFM
* 6 Conclusion
* A Appendix
* A.1 Lie algebra (anti)-isomorphisms
* A.2 Pushforward by a vertical diffeomorphism of field space
* A.3 Assumptions on the set of fields and the Lagrangian functional
* A.4 Condition for the bracket (218) to be Lie
* A.5 Concrete expression of the map (219)
## 1 Introduction
The dressing field method (DFM) is an algorithm to build basic forms on a bundle, i.e. to obtain gauge-invariants in gauge theories. It was first introduced systematically in [1, 2, 3], and its implications regarding the philosophy of gauge theories were first expounded in [4]. Its applications range from the construction of twistors and tractors in conformal Cartan geometry [5, 6], to a reformulation of electroweak physics dispensing with the notion of spontaneous symmetry breaking (SSB) [7]. More recently, it was shown in [8, 9] that it is the geometric underpinning of the notion of _edge modes_ introduced in recent years in the study of the presymplectic structure of gauge theories over bounded regions [10, 11, 12, 13, 14, 15, 16, 17, 18] - which was first hinted at in [19].
The DFM was first developed for theories with internal gauge symmetries, i.e. Yang-Mills theories and gauge theories of gravity formulated via Cartan geometry. The interested reader can find self-contained expositions in sections 3 and 4.3.1 of [8], section 2.3 of [9], or chapter 5 of [7]. See also [20, 21] for another mathematical development and applications. Here, we aim to provide a detailed exposition of its natural extension to theories with \(\text{Diff}(M)\)-symmetry, and to show how the resulting framework encompasses and unifies a diversity of notions in the old and new literature on gravity (to be named below). The proper mathematical formulation of the DFM requires some familiarity with the geometry of the field space \(\Phi\) of \(\text{Diff}(M)\)-theories. The paper is thus organised as follows.
In section 2, in a systematic manner, we lay out the elementary notions pertaining to the geometry of \(\Phi\) as an infinite dimensional fiber bundle with structure group \(\text{Diff}(M)\). First, we will be interested in the bundle structure of \(\Phi\): the smooth action of \(\text{Diff}(M)\), the definition of its group of vertical diffeomorphisms (otherwise known as "transformations under _field-dependent_ gauge parameters"), giving in particular the geometric definition of gauge transformations. Then, we consider the differential structure: tangent bundle and remarkable vector fields, the de Rham complex, remarkable spaces of differential forms, and the graded algebra of their derivations.
This will be the occasion to show that the heuristic bracket for field-dependent gauge parameters (vector fields) introduced in [22] and [23], then again in [24] - and often used in covariant phase space literature, see e.g. [25, 26, 27, 28] - is but a special case of the Frolicher-Nijenhuis bracket for vector-valued differential forms on \(\Phi\). The field space Lie derivative along such field-dependent vector fields is simply the Nijenhuis-Lie derivative [29]. Of course, both these facts are true when an internal gauge group is taken as structure group of \(\Phi\) instead of \(\mathrm{Diff}(M)\).
Once these elementary notions are properly introduced, we will mention two natural notions of connections that \(\Phi\) can be endowed with: Ehresmann connections, and a generalisation known as "twisted connections" [30]. The latter are directly relevant to understanding the geometry behind classical and/or quantum \(\mathrm{Diff}(M)\)-anomalies.
Finally, we will give an account of integration on \(M\) as a natural pairing operation on a space we call the _associated bundle of regions_: a fiber bundle associated to field space via the defining representation of the structure group \(\mathrm{Diff}(M)\), i.e. its defining action on open sets \(U\subset M\). This allows us to show how field-dependent diffeomorphisms act (non-trivially) on integrals.
Section 3 details the dressing field method (DFM) for diffeomorphisms. It is a systematic procedure to produce _basic_ forms on \(\Phi\), invariant under vertical diffeomorphisms, provided one can identify or build a _dressing field_ - a notion we define precisely. We consider the question of residual symmetries, arising in case of partial symmetry reduction, or of possible ambiguities in the choice of dressing field. We show that in the latter case, the space of dressed fields has its own bundle structure, mirroring that of \(\Phi\). The notion of dressed integrals is then defined, and we discuss its relevance to physical applications (to be detailed later), especially regarding the variational principle.
The conceptual implications of the DFM are discussed, and contact with the literature is made. It is argued that the framework developed, technically implementing Einstein's point coincidence argument to field theory [31, 32, 33], has a natural relational interpretation.
We then aim to showcase a natural application of the DFM to the covariant presymplectic structure of \(\mathrm{Diff}(M)\)-theories, to highlight how it reproduces results pertaining to this literature.
In section 4, we thus give our account of covariant phase space methods for \(\mathrm{Diff}(M)\)-theories, making explicit use of the bundle geometry of \(\Phi\) exposed in section 2. Given a choice of Lagrangian, we define Noether currents and charges for both field-independent and field-dependent vector fields, and show that, equipped with the bracket induced by the presymplectic 2-form, their Poisson algebras are respectively central extensions of the Lie algebras of the structure group of field space, \(\mathfrak{diff}(M)\), and of the group of vertical diffeomorphisms of field space. We give (hopefully) conceptually transparent and concrete expressions for all relevant objects.
Then we move to deriving geometrically the vertical transformations of key objects: Lagrangian, field equations, presymplectic potential and 2-form. This will show how and why \(\mathrm{Diff}(M)\)-theories enjoy a much larger "covariance group" under field-dependent diffeomorphisms, as hinted at in [22]. This is used to derive naturally the previously mentioned Noether charges for field-dependent parameters together with their bracket.
Our goal here is twofold. Firstly, we aim to be both synthetic and systematic in our presentation of elementary covariant phase space notions: by streamlining their technical derivation and improving the conceptual clarity, we hope to lower the cost of entry for newcomers (PhDs and postdocs most notably, whose time is precious). Secondly, as stated above, we will rely on the derived results to showcase the DFM.
This we do in a final short section 5, where the DFM is applied to easily write down the dressed version of a \(\mathrm{Diff}(M)\)-theory together with its _basic_ presymplectic structure. This will make manifest that the DFM is the unifying geometric framework underpinning various notions featuring in the literature on gravity: notably, in the past decade, that of _gravitational edge modes_ as introduced in [10] and developed e.g. in [12, 15, 16, 17, 28, 34, 35], and of _gravitational dressings_ as proposed in [36, 37, 38, 39, 40] - see also [41] - as well as the more recent idea of "dynamical reference frames" as expounded in [42, 43], or yet that of "embedding maps/fields" as advocated in [35, 44, 45, 46].
But a more fundamental fact, we will argue again, is that the DFM makes manifest the _relational_ character of general relativistic physics: By systematically implementing a notion of _relational observables_[47], or \(\mathrm{Diff}(M)\)-_Dirac variables_, it allows one to reformulate a general relativistic field theory in a \(\mathrm{Diff}(M)\)-invariant and _manifestly relational_ way. This provides some conceptual insights. In particular it dispels the often encountered notion that "boundaries break \(\mathrm{Diff}(M)\)-invariance" (a statement equivalent to the famous hole argument). It will also suggest a resolution as to why general relativistic theories could be successfully experimentally tested before the issue of their observables was solved. The technical relational formulation provided by the DFM encompasses various notions of "scalar coordinatisation" proposed for GR, e.g. [48, 49, 50, 51, 52].
We conclude by taking stock of our main results and hint at future applications and developments. Technical appendices complete the main text.
## 2 Geometry of field space
In this section we give a sense of the bundle geometry of field space as an infinite dimensional manifold. We aim for a correct conceptual description rather than for a perfectly rigorous account; we thus refer to the relevant literature, e.g. [53; 54], to back the soundness of extending standard notions defined in the finite dimensional context to the infinite dimensional setting.
### Field space as a principal bundle
As we are interested in \(\operatorname{Diff}(M)\)-theories, our field space is \(\Phi=\Gamma(\mathcal{T})\) where \(\mathcal{T}\) denotes collectively any kind of tensor or pseudo-tensor bundles. Generically these are vector and affine bundles associated to the frame bundle \(LM\) of \(M\), and, if we are interested in gauge field theories based on a principal bundle \(P(M,H)\) with structure group \(H\), it may include the affine bundle of (Ehresmann or Cartan) connections \(\mathcal{C}\) and/or any vector bundles \(E\) associated to \(P\) via representations of \(H\). If we have furthermore a \(SO\)-reduction \(OM\) of \(LM\), we may consider also spinor bundles \(\mathbb{S}\) associated to \(OM\) via spin representations, and a fortiori the bundles \(\mathbb{S}\otimes E\) whose sections would describe gauge-charged spinors. A set of sections of these bundles will be noted collectively as \(\phi\in\Phi\), i.e. a single point in field space.
The group \(\operatorname{Diff}(M)\) has a natural action by pullback on sections of those bundles: Given \(\phi\in\Phi\) and \(\psi\in\operatorname{Diff}(M)\) we have \(\phi^{\psi}:=\psi^{*}\phi\) - where the left-hand side is a notation for the pullback on the right-hand side. We will use it when convenient. Due to the well-known relation \((f\circ g)^{*}=g^{*}\circ f^{*}\) for any two smooth maps \(M\xrightarrow{g}N\xrightarrow{f}Q\), this action is a right-action.1 We therefore note,
Footnote 1: This is clear on forms, a.k.a. covariant tensors. On vector fields and more general contravariant tensors, the pullback action is \(\psi^{*}:=(\psi^{-1})_{*}\), the pushforward by the inverse diffeomorphism. So, on general (mixed) tensors, \(\psi^{*}\) is indeed a well-defined right action of \(\operatorname{Diff}(M)\).
\[\Phi\times\operatorname{Diff}(M) \to\Phi, \tag{1}\] \[(\phi,\psi) \mapsto R_{\psi}\phi:=\psi^{*}\phi,\]
with indeed, for another \(\psi^{\prime}\in\operatorname{Diff}(M)\): \(R_{\psi^{\prime}}R_{\psi}\phi:=\psi^{\prime*}\psi^{*}\phi=(\psi\circ\psi^{\prime})^{*}\phi=:R_{\psi\circ\psi^{\prime}}\phi\). The field space is fibered by the action of \(\operatorname{Diff}(M)\), the fiber through a point \(\phi\) being its orbit \(\mathcal{O}(\phi)\) under diffeomorphisms. The set of orbits, or moduli space, we denote \(\Phi/\operatorname{Diff}(M)=:\mathcal{M}\). Under adequate restrictions of either \(\Phi\) or \(\operatorname{Diff}(M)\), this means that the field space \(\Phi\) can be understood as an infinite dimensional principal fiber bundle over \(\mathcal{M}\), its _base_ space, with structure group \(\operatorname{Diff}(M)\).23 We note,
Footnote 2: We refer to Michor for rigorous results on how to describe a fibered space with the non-compact structure group \(\operatorname{Diff}(M)\).
Footnote 3: In particular, this requires that all points \(\phi\) have trivial stability groups, meaning that we a priori reject Killing symmetries of \(\phi\). Otherwise one is dealing with \(\Phi\) as a stratified manifold.
\[\Phi\xrightarrow{\phantom{g}\pi}\mathcal{M}, \tag{2}\] \[\phi\mapsto\pi(\phi)=:[\phi]\]
The projection \(\pi\) of course satisfies the usual relation \(\pi\circ R_{\psi}=\pi\). The fiber over a point \([\phi]\in\mathcal{M}\) can be noted \(\pi^{-1}([\phi])=\mathcal{O}(\phi)\), and it is diffeomorphic to the structure group \(\operatorname{Diff}(M)\) as a manifold.
As a fiber bundle, \(\Phi\) is locally trivialisable. Meaning, for any open subset \(\mathcal{U}\subset\mathcal{M}\) there is a morphism of \(\operatorname{Diff}(M)\)-spaces \(t:\Phi_{|\mathcal{U}}\to\mathcal{U}\times\operatorname{Diff}(M)\), \(\phi\mapsto(\pi(\phi),\psi(\phi))\), called a _trivialisation_. In other words, it is a local coordinate system on \(\Phi\), and \(\psi\) is the fiber coordinate of \(\phi\). It is s.t. \(t(\phi^{\psi})=R_{\psi}\,t(\phi)=t(\phi)^{\psi}=(\pi(\phi),\psi(\phi)^{\psi})\). A trivialisation is equivalent to the datum of a _local section_, \(\boldsymbol{\sigma}:\mathcal{U}\to\Phi_{|\mathcal{U}}\), \([\phi]\mapsto\boldsymbol{\sigma}([\phi])=:\phi_{0}\), s.t. \(\pi\circ\boldsymbol{\sigma}=\operatorname{id}_{\mathcal{U}}\). One considers that \(t(\phi_{0})=([\phi],\operatorname{id})\), so that any other point \(\phi\) of the fiber over \([\phi]\) can be reached via the action of the structure group: \(\phi=R_{\psi}\boldsymbol{\sigma}([\phi])=\boldsymbol{\sigma}([\phi])^{\psi}\). Unless \(\Phi\) is trivial, there is no global section \(\boldsymbol{\sigma}:\mathcal{M}\to\Phi\). A choice of local section \(\boldsymbol{\sigma}\) is a _gauge fixing_ - a choice of representative in each gauge orbit over \(\mathcal{U}\) - and the previous statement simply expresses the inexistence of a globally valid gauge fixing procedure, i.e. the Gribov-Singer obstruction [55; 56].
Two choices of local sections \(\mathbf{\sigma}^{\prime}\) and \(\mathbf{\sigma}\) are related via \(\mathbf{\sigma}^{\prime}=R_{\mathbf{g}}\mathbf{\sigma}\), where \(\mathbf{g}:\mathcal{U}\rightarrow\mathrm{Diff}(M)\), \([\phi]\mapsto\mathbf{g}([\phi])\), belongs to the set of _transition functions_ of \(\Phi\).4 We stress that this means that a choice of gauge-fixing/local section _cannot be invariant_ under a further action of the structure group \(\mathrm{Diff}(M)\).
Footnote 4: The latter constitute _passive_ gauge transformations, or _gluings_, on \(\mathcal{M}\). We will be concerned throughout this paper with _active_ gauge transformations associated to vertical automorphisms on \(\Phi\) (and more generally with vertical diffeomorphisms of \(\Phi\)). Conceptually, these two are in the same relationship as coordinate changes, a.k.a. passive diffeomorphisms, and \(\mathrm{Diff}(M)\), a.k.a active diffeomorphisms.
#### 2.1.1 Natural transformation groups
As an infinite dimensional manifold \(\Phi\) has a diffeomorphism group \(\mathbf{Diff}(\Phi)\), but _as a principal bundle_ its maximal transformation group is its group of automorphisms \(\mathbf{Aut}(\Phi):=\{\Xi\in\mathbf{Diff}(\Phi)\,|\,\Xi\circ R_{\psi}=R_{\psi} \circ\Xi\}\). Its elements preserve the fibration structure, and thus project naturally as elements of \(\mathbf{Diff}(\mathcal{M})\).
The subgroup of _vertical_ diffeomorphisms \(\mathbf{Diff}_{v}(\Phi):=\{\Xi\in\mathbf{Diff}(\Phi)\,|\,\pi\circ\Xi=\pi\}\) induces the identity transformation on \(\mathcal{M}\): its elements move along fibers, therefore to \(\Xi\in\mathbf{Diff}_{v}(\Phi)\) corresponds a unique \(\mathbf{\psi}:\Phi\rightarrow\mathrm{Diff}(M)\) s.t. \(\Xi(\phi)=R_{\mathbf{\psi}(\phi)}\phi:=[\mathbf{\psi}(\phi)]^{*}\phi\). In other words \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\). It should be noticed that the map composition law for \(\mathbf{Diff}_{v}(\Phi)\) gives rise to a peculiar composition operation for \(C^{\infty}(\Phi,\mathrm{Diff}(M))\). Indeed, for \(\Xi,\Xi^{\prime}\in\mathbf{Diff}_{v}(\Phi)\) to which correspond \(\mathbf{\psi},\mathbf{\psi}^{\prime}\in C^{\infty}(\Phi,\mathrm{Diff}(M))\), one has: \(\Xi^{\prime}\circ\Xi(\phi)=R_{\mathbf{\psi}^{\prime}(\Xi(\phi))}\,\Xi(\phi)=R_{\mathbf{\psi}^{\prime}(\Xi(\phi))}\,R_{\mathbf{\psi}(\phi)}\phi=R_{\mathbf{\psi}(\phi)\circ\mathbf{\psi}^{\prime}(\Xi(\phi))}\phi\). Thus,
\[\Xi^{\prime}\circ\Xi\in\mathbf{Diff}_{v}(\Phi)\quad\text{corresponds to}\quad\mathbf{ \psi}\circ(\mathbf{\psi}^{\prime}\circ R_{\mathbf{\psi}})\in C^{\infty}(\Phi,\mathrm{ Diff}(M)). \tag{3}\]
Notice that we must distinguish the composition law \(\circ\) of maps on \(\Phi\) from the composition law \(\circ\) of maps on \(M\), even though both are written with the same symbol.
This generalises the better known subgroup of _vertical automorphisms_ \(\mathbf{Aut}_{v}(\Phi):=\{\Xi\in\mathbf{Aut}(\Phi)\,|\,\pi\circ\Xi=\pi\}=\mathbf{Diff}_{v}(\Phi)\cap\mathbf{Aut}(\Phi)\), isomorphic to the _gauge group_
\[\mathbf{Diff}(M):=\{\mathbf{\psi}:\Phi\rightarrow\mathrm{Diff}(M)\,|\,\mathbf{\psi}(\phi^{\psi})=\psi^{-1}\circ\mathbf{\psi}(\phi)\circ\psi\} \tag{4}\]
via \(\Xi(\phi)=R_{\mathbf{\psi}(\phi)}\,\phi\) still.5 The defining equivariance of elements \(\mathbf{\psi}\) of \(\mathbf{Diff}(M)\) implies that to \(\Xi^{\prime}\circ\Xi\in\mathbf{Aut}_{v}(\Phi)\) corresponds \(\mathbf{\psi}^{\prime}\circ\mathbf{\psi}\in\mathbf{Diff}(M)\): i.e. the composition operation \(\circ\) in \(\mathbf{Aut}_{v}(\Phi)\) translates to the usual (pointwise) composition operation \(\circ\) of the group \(\mathbf{Diff}(M)\) - as expected and familiar in the Yang-Mills case.
Footnote 5: It is indeed easy to check that the equivariance condition on \(\mathbf{\psi}\in\mathbf{Diff}(M)\) flows from the definition of an automorphism: On the one hand \(\Xi\circ R_{\psi}(\phi):=\Xi(\phi^{\psi})=R_{\mathbf{\psi}(\phi^{\psi})}\,\phi^{\psi}=[\mathbf{\psi}(\phi^{\psi})]^{*}\phi^{\psi}=[\mathbf{\psi}(\phi^{\psi})]^{*}\psi^{*}\phi=[\psi\circ\mathbf{\psi}(\phi^{\psi})]^{*}\phi\). On the other hand \(R_{\psi}\circ\Xi(\phi):=R_{\psi}\,R_{\mathbf{\psi}(\phi)}\phi=\psi^{*}[\mathbf{\psi}(\phi)]^{*}\phi=[\mathbf{\psi}(\phi)\circ\psi]^{*}\phi\). The equality of both expressions imposes: \(\mathbf{\psi}(\phi^{\psi})=\psi^{-1}\circ\mathbf{\psi}(\phi)\circ\psi\), or \(R^{\star}_{\psi}\mathbf{\psi}=\psi^{-1}\circ\mathbf{\psi}\circ\psi\). This condition is thus necessary if a map \(\mathbf{\psi}\) is to induce a vertical automorphism of \(\Phi\).
We have that \(\mathbf{Aut}(\Phi)\) is in the normaliser of \(\mathbf{Diff}_{v}(\Phi)\), and since a group is a subgroup of its own normaliser, we get: \(N_{\mathbf{Diff}(\Phi)}(\mathbf{Diff}_{v}(\Phi))=\mathbf{Diff}_{v}(\Phi)\cup\mathbf{Aut}(\Phi)\). As a special case, we have \(N_{\mathbf{Diff}(\Phi)}(\mathbf{Aut}_{v}(\Phi))=\mathbf{Aut}(\Phi)\), i.e. \(\mathbf{Aut}_{v}(\Phi)\trianglelefteq\mathbf{Aut}(\Phi)\), which gives the short exact sequence (SES) of groups characteristic of a principal bundle:
\[\mathbf{Diff}(M)\simeq\mathbf{Aut}_{v}(\Phi)\rightarrow\mathbf{Aut}(\Phi) \rightarrow\mathbf{Diff}(\mathcal{M}), \tag{5}\]
where the image of each arrow is in the kernel of the next. We remark that, as physical degrees of freedom are gauge invariant, they should be in the moduli space \(\mathcal{M}\). Therefore, if anything is deserving of the name of "_physical_ symmetries", it is the right-most group \(\mathbf{Diff}(\mathcal{M})\).
In the physics literature, especially on the covariant phase space approach to gravity theories, one often encounters the notion of "_field-dependent_" gauge transformations, or diffeomorphisms. We submit that the group \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\) is the correct mathematical embodiment of this notion. We nonetheless stress that stricto sensu, geometric (active) gauge transformations on \(\Phi\) are defined by the action of the subgroup \(\mathbf{Aut}_{v}(\Phi)\simeq\mathbf{Diff}(M)\). Still, the linearised action of \(\mathbf{Diff}_{v}(\Phi)\) gives rise to the so-called "extended bracket" for infinitesimal field-dependent gauge transformations, or diffeomorphisms, as introduced by Bergmann & Komar [22] and Salisbury & Sundermeyer [23], then again by Barnich & Troessaert (BT) in [24] - we will further indicate that this bracket is just the Frolicher-Nijenhuis (FN) bracket of vector-valued differential forms [29]. Obviously, the structure group \(\mathrm{Diff}(M)\) supplies the notion of "_field-independent_" gauge transformations or diffeomorphisms.
The linearization of (5) is the Atiyah Lie algebroid of the principal bundle \(\Phi\). To understand it, and more generally the differential structure of \(\Phi\), let us first recall a general result that we will use often.
Consider the manifolds \(M\), \(N\) and their tangent bundles \(TM\), \(TN\), together with a diffeomorphism \(\psi:M\to N\) and the flow \(\phi_{\tau}:N\to N\) of a vector field \(X\in\Gamma(TN)\), s.t. \(X_{|\phi_{0}}=\frac{d}{d\tau}\phi_{\tau}|_{\tau=0}\in T_{\phi_{0}}N\). One defines the flow
\[\varphi_{\tau}:=\psi^{-1}\circ\phi_{\tau}\circ\psi:M\to M \tag{6}\]
of a vector field \(Y\in\Gamma(TM)\) related to \(X\) as:
\[Y:=\tfrac{d}{d\tau}\left(\psi^{-1}\circ\phi_{\tau}\circ\psi\right) \big{|}_{\tau=0}=(\psi^{-1})_{*}\,X\circ\psi. \tag{7}\]
Indeed as maps we have \(M\xrightarrow{\psi}N\xrightarrow{X}TN\xrightarrow{(\psi^{-1})_{*}}TM\). Their composition gives the above vector field \(Y:M\to TM\). We say that \(Y\) and \(X\) are \(\psi\)_-related_. It is a standard result that \(\psi\)-relatedness is a morphism of Lie algebras, that is: \([Y,Y^{\prime}]=(\psi^{-1})_{*}[X,X^{\prime}]\circ\psi\). We will therefore remember that \(X\in\Gamma(TN)\) and \(Y\in\Gamma(TM)\) are \(\psi\)-related (7) when their flows are \(\psi\)_-conjugated_ (6).
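As a simple illustration (our addition, with symbols chosen only for this example), take \(M=N=\mathbb{R}\), \(\psi(x)=2x\), and \(X=\partial_{y}\) on \(N\), whose flow is \(\phi_{\tau}(y)=y+\tau\). Then
\[\varphi_{\tau}(x)=\psi^{-1}\circ\phi_{\tau}\circ\psi\,(x)=x+\tfrac{\tau}{2},\qquad\text{so that}\qquad Y=\tfrac{d}{d\tau}\,\varphi_{\tau}\big{|}_{\tau=0}=\tfrac{1}{2}\,\partial_{x}=(\psi^{-1})_{*}\,X\circ\psi,\]
in agreement with (7).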
The following lemma can be considered a generalisation of the above: Suppose \(\phi\) is some tensor field on \(M\), and \(\mathfrak{L}_{X}\phi\) its Lie derivative along \(X\in\Gamma(TM)\) with flow \(\varphi_{\tau}\). Then, for \(\psi\in\mathrm{Diff}(M)\) we have,
\[\psi^{*}(\mathfrak{L}_{X}\phi) =\psi^{*}\tfrac{d}{d\tau}\varphi_{\tau}^{*}\phi\big{|}_{\tau=0}= \tfrac{d}{d\tau}(\varphi_{\tau}\circ\psi)^{*}\phi\big{|}_{\tau=0}=\tfrac{d}{d \tau}(\varphi_{\tau}\circ\psi)^{*}\,(\psi^{-1})^{*}\psi^{*}\phi\big{|}_{\tau=0},\] \[=\tfrac{d}{d\tau}(\psi^{-1}\circ\varphi_{\tau}\circ\psi)^{*}\, \psi^{*}\phi\big{|}_{\tau=0},\] \[=:\mathfrak{L}_{(\psi^{-1})_{*}X\circ\psi}\,(\psi^{*}\phi). \tag{8}\]
### Differential structure
As a manifold, \(\Phi\) has a tangent bundle \(T\Phi\), a cotangent bundle \(T^{\star}\Phi\), or more generally a space of forms \(\Omega^{\star}(\Phi)\). We will consider both structures in turn. It will be useful to distinguish easily the pushforward and pullback on \(M\) and \(\Phi\); we thus reserve \(*\) to denote these notions on \(M\), and use \(\star\) for their counterparts on \(\Phi\).
#### 2.2.1 Tangent bundle and subbundles
Sections of the tangent bundle, \(\mathfrak{X}:\Phi\to T\Phi\), are vector fields on \(\Phi\); we note \(\mathfrak{X}\in\Gamma(T\Phi)\). They form a Lie algebra under the bracket of vector fields \([\,\ ]:\Gamma(T\Phi)\times\Gamma(T\Phi)\to\Gamma(T\Phi)\). We may write a vector field at \(\phi\in\Phi\) as \(\mathfrak{X}_{|\phi}=\tfrac{d}{d\tau}\Psi_{\tau}(\phi)\big{|}_{\tau=0}\), with \(\Psi_{\tau}\in\mathrm{Diff}(\Phi)\) its flow s.t. \(\Psi_{\tau=0}(\phi)=\phi\). Considered as derivations of the algebra of functions \(C^{\infty}(\Phi)\) we would write, in analogy with the finite dimensional case: \(\mathfrak{X}=\mathfrak{X}(\phi)\tfrac{\delta}{\delta\phi}\), where \(\tfrac{\delta}{\delta\phi}\) is of course the functional differentiation w.r.t. \(\phi\), and \(\mathfrak{X}(\phi)\) are the functional components.
The pushforward by the projection is the map \(\pi_{\star}:T_{\phi}\Phi\to T_{\pi(\phi)}\mathcal{M}=T_{[\phi]}\mathcal{M}\). The pushforward by the right action of \(\psi\in\mathrm{Diff}(M)\) is the map \(R_{\psi\star}:T_{\phi}\Phi\to T_{\psi^{*}\phi}\Phi\). In general \(R_{\psi\star}\mathfrak{X}_{\phi}\neq\mathfrak{X}_{[\phi^{*}\phi}\), meaning that a generic vector field "rotates" as it is pushed vertically along fibers. For this reason, in general \(\pi_{\star}\mathfrak{X}\) is not a well-defined vector field on the base \(\mathcal{M}\): Indeed, at \([\phi]\in\mathcal{M}\) the vector obtained would vary depending on where on the fiber over \([\phi]\) the projection is taken.
There is a remarkable Lie subalgebra of \(\Gamma(T\Phi)\), the _right-invariant_ vector fields:
\[\Gamma_{\text{inv}}(T\Phi):=\left\{\mathfrak{X}\in\Gamma(T\Phi)\,|\,R_{\psi\star}\mathfrak{X}_{|\phi}=\mathfrak{X}_{|\psi^{*}\phi}\right\}. \tag{9}\]
Those do not rotate as they are pushed vertically. Consequently, they project as well-defined vector fields on \(\mathcal{M}\): Indeed, for \(\mathfrak{X}\in\Gamma_{\text{inv}}(T\Phi)\), we have \(\pi_{\star}\mathfrak{X}_{|\psi^{*}\phi}=\pi_{\star}R_{\psi\star}\mathfrak{X}_{|\phi}=(\pi\circ R_{\psi})_{\star}\mathfrak{X}_{|\phi}=\pi_{\star}\mathfrak{X}_{|\phi}=:\mathfrak{Y}_{|[\phi]}\in T_{[\phi]}\mathcal{M}\). The result is independent of where on the fiber over \([\phi]\) the projection is taken, so \(\pi_{\star}\mathfrak{X}=:\mathfrak{Y}\in\Gamma(T\mathcal{M})\) is a well defined vector field. Furthermore, the defining property of invariant vector fields implies that their flow is an automorphism of \(\Phi\): Indeed we have,
\[R_{\psi\star}\mathfrak{X}_{|\phi}=\tfrac{d}{d\tau}R_{\psi}\Psi_{\tau}(\phi)\big{|}_{\tau=0}\quad\text{and}\quad\mathfrak{X}_{|R_{\psi}\phi}=\tfrac{d}{d\tau}\Psi_{\tau}(R_{\psi}\phi)\big{|}_{\tau=0}. \tag{10}\]
The equality of both implies \(R_{\psi}\circ\Psi_{\tau}=\Psi_{\tau}\circ R_{\psi}\). Therefore, the Lie subalgebra of right-invariant vector fields is the Lie algebra of the group \(\mathbf{Aut}(\Phi)\):
\[\mathbf{aut}(\Phi)=(\Gamma_{\text{inv}}(T\Phi);[\,\ ]). \tag{11}\]
The tangent bundle \(T\Phi\) has a canonical subbundle, the _vertical tangent bundle_ defined as \(V\Phi:=\ker\pi_{\star}\). In other words, vertical vector fields are elements of \(\Gamma(V\Phi):=\{\mathfrak{X}\in\Gamma(T\Phi)|\,\pi_{\star}\,\mathfrak{X}=0\}\). Since \(V\Phi\) is a subbundle, \(\Gamma(V\Phi)\) is a Lie subalgebra of \(\Gamma(T\Phi)\), even an ideal. Indeed, since \(\pi_{\star}:\Gamma(T\Phi)\to\Gamma(T\mathcal{M})\) is a Lie algebra morphism, we have, for \(\mathfrak{X}\in\Gamma(V\Phi)\) and \(\mathfrak{Y}\in\Gamma(T\Phi)\): \(\pi_{\star}[\mathfrak{X},\mathfrak{Y}]=[\pi_{\star}\mathfrak{X},\pi_{\star} \mathfrak{Y}]=[0,\pi_{\star}\mathfrak{Y}]=0\), i.e. \([\mathfrak{X},\mathfrak{Y}]\in\Gamma(V\Phi)\).
Vertical vector fields can be understood as arising from the linearization of the right action of \(\mathrm{Diff}(M)\). Their characterization requires that we describe the Lie algebra of \(\mathrm{Diff}(M)\). It is well known that as a vector space it is given by \(\Gamma(TM)\), the vector fields on \(M\), but as a Lie algebra its Lie bracket is _minus_ the bracket of vector fields:
\[\mathfrak{diff}(M)=(\Gamma(TM);-[\,,\,]). \tag{12}\]
We may use the notation \([X,Y]_{\mathfrak{diff}(M)}:=-[X,Y]_{\Gamma(TM)}\) when useful.
Let us have a look at the types of vertical vector fields induced by the respective actions of \(\mathfrak{diff}(M)\) (the Lie algebra of the structure group), of the Lie algebra of the gauge group \(\mathbf{Diff}(M)\), and of \(\mathfrak{diff}_{v}(\Phi)\).
A _fundamental_ vertical vector field at \(\phi\in\Phi\) generated by \(X=\frac{d}{d\tau}\psi_{\tau}\big{|}_{\tau=0}\in\mathfrak{diff}(M)\) with flow \(\psi_{\tau}\in\mathrm{Diff}(M)\) is:
\[X^{v}_{|\phi}:=\frac{d}{d\tau}R_{\psi_{\tau}}\phi\big{|}_{\tau=0}=\frac{d}{d\tau}\psi^{*}_{\tau}\phi\big{|}_{\tau=0}=:\mathfrak{L}_{X}\phi, \tag{13}\]
where \(\mathfrak{L}_{X}\phi\) is the spacetime Lie derivative of the field \(\phi\) along the vector field \(X\in\Gamma(TM)\). The spacetime Lie derivative is also given by the Cartan formula \(\mathfrak{L}_{X}=[\iota_{X},d]=\iota_{X}d+d\iota_{X}\), with \(d\) the de Rham exterior derivative on \(M\) - it is a derivation of degree \(0\) of the algebra \(\Omega^{\bullet}(M)\) of forms on \(M\), since \(\iota_{X}\) is of degree \(-1\) and \(d\) is of degree \(1\). Naturally, \(X^{v}_{|\phi}\) is tangent to the \(\mathrm{Diff}(M)\)-orbit \(\mathcal{O}(\phi)\) at \(\phi\in\Phi\), hence the name. As expected, it satisfies \(\pi_{\star}X^{v}\equiv 0\), since \(\pi_{\star}X^{v}_{|\phi}=\frac{d}{d\tau}\pi\circ R_{\psi_{\tau}}\phi\big{|}_{\tau=0}=\frac{d}{d\tau}\pi(\phi)\big{|}_{\tau=0}\). As is standard, one shows (see Appendix A.1) that the map \(|^{v}:\mathfrak{diff}(M)\to\Gamma(V\Phi)\), \(X\mapsto X^{v}\), is a Lie algebra morphism: i.e. \(([X,Y]_{\mathfrak{diff}(M)})^{v}=(-[X,Y]_{\Gamma(TM)})^{v}=[X^{v},Y^{v}]\).
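For instance (a standard special case, added here for illustration), if \(\phi\) is a metric field \(g\), the fundamental vertical vector field generated by \(X\in\mathfrak{diff}(M)\) has components
\[(X^{v}_{|g})_{\mu\nu}=(\mathfrak{L}_{X}g)_{\mu\nu}=X^{\rho}\partial_{\rho}g_{\mu\nu}+g_{\rho\nu}\partial_{\mu}X^{\rho}+g_{\mu\rho}\partial_{\nu}X^{\rho}=\nabla_{\mu}X_{\nu}+\nabla_{\nu}X_{\mu},\]
the last equality holding for the Levi-Civita connection of \(g\).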
The pushforward by the right-action of a fundamental vertical vector field is:
\[R_{\psi\star}X^{v}_{|\phi} :=\frac{d}{d\tau}R_{\psi}\circ R_{\psi_{\tau}}\phi\big{|}_{\tau=0} =\frac{d}{d\tau}R_{\psi_{\tau}\circ\psi}\phi\big{|}_{\tau=0},\] \[=\frac{d}{d\tau}R_{(\psi^{-1}\circ\psi_{\tau}\circ\psi)}R_{\psi} \,\phi\big{|}_{\tau=0}=\frac{d}{d\tau}R_{(\psi^{-1}\circ\psi_{\tau}\circ\psi) }\,\psi^{*}\phi\big{|}_{\tau=0},\] \[=:\left((\psi^{-1})_{*}\,X\circ\psi\right)^{v}_{|\psi^{*}\phi}. \tag{14}\]
Where we use the relation between \(\psi\)-relatedness of vector fields and \(\psi\)-conjugation of their flows.6 A fundamental vector field generated by the action of the (Lie algebra of the) structure group is thus not right-invariant.
Footnote 6: This can be understood as the incarnation of the lemma (8) on field space.
But the fundamental vector fields induced by \(\mathfrak{diff}(M)\), the Lie algebra of the gauge group \(\mathbf{Diff}(M)\), are. For \(\mathbf{\psi}_{\tau}\in\mathbf{Diff}(M)\), we have of course \(\mathbf{X}=\frac{d}{d\tau}\,\mathbf{\psi}_{\tau}\,\big{|}_{\tau=0}\in\mathfrak{diff}(M)\). Now, given the definition (4) of the gauge group, the transformation property of whose elements can be written \(R^{\star}_{\psi}\mathbf{\psi}=\psi^{-1}\circ\mathbf{\psi}\circ\psi\) - and reminding the link between \(\psi\)-relatedness of vector fields and \(\psi\)-conjugation of their flows - the former is,
\[\mathfrak{diff}(M):=\left\{\,\mathbf{X}:\Phi\to\Gamma(TM)\,|\,R^{\star}_{\psi}\mathbf{X}=(\psi^{-1})_{*}\,\mathbf{X}\circ\psi\,\right\}. \tag{15}\]
This transformation property can also be written as: \(\mathbf{X}(\phi^{\psi})=\mathbf{X}(\psi^{*}\phi)=(\psi^{-1})_{*}\,\mathbf{X}(\phi)\circ\psi\). We may remark that the infinitesimal version is given by the Lie derivative on \(\Phi\) along the corresponding fundamental vector field:
\[\mathbf{L}_{X^{v}}\mathbf{X}=X^{v}(\mathbf{X})=\frac{d}{d\tau}\,R^{\star}_{\psi_{\tau}}\mathbf{X}\,\Big{|}_{\tau=0}=\frac{d}{d\tau}\,(\psi^{-1}_{\tau})_{*}\,\mathbf{X}\circ\psi_{\tau}\,\big{|}_{\tau=0}=:\mathfrak{L}_{X}\mathbf{X}=[X,\mathbf{X}]_{\Gamma(TM)}. \tag{16}\]
Which is natural, and as one would expect. Now a fundamental vector field generated by \(\mathbf{X}\in\mathfrak{diff}(M)\) is,
\[\mathbf{X}^{v}_{|\phi}:=\frac{d}{d\tau}R_{\mathbf{\psi}_{\tau}(\phi)}\phi\big{|}_{\tau=0}=\frac{d}{d\tau}(\mathbf{\psi}_{\tau}(\phi))^{*}\phi\big{|}_{\tau=0}=:\mathfrak{L}_{\mathbf{X}}\phi. \tag{17}\]
Its pushforward by the right-action of the structure group is:
\[R_{\psi\star}\mathbf{X}^{v}_{|\phi}:=\frac{d}{d\tau}R_{\psi}\circ R_{\mathbf{\psi}_{\tau}(\phi)}\phi\big{|}_{\tau=0}=\frac{d}{d\tau}R_{(\psi^{-1}\circ\mathbf{\psi}_{\tau}(\phi)\circ\psi)}\,R_{\psi}\,\phi\big{|}_{\tau=0}=\frac{d}{d\tau}R_{\mathbf{\psi}_{\tau}(\psi^{*}\phi)}\,\psi^{*}\phi\big{|}_{\tau=0}=:\mathbf{X}^{v}_{|\psi^{*}\phi}. \tag{18}\]
These are right-invariant, as advertised. Furthermore, one shows (see Appendix A.1) that the map \(|^{v}:\mathfrak{diff}(M)\to\Gamma_{\text{inv}}(V\Phi)\), \(\mathbf{X}\mapsto\mathbf{X}^{v}\), is a Lie algebra _anti_-morphism: i.e. \(([\mathbf{X},\mathbf{Y}]_{\mathfrak{diff}(M)})^{v}=(-[\mathbf{X},\mathbf{Y}]_{\Gamma(TM)})^{v}=-[\mathbf{X}^{v},\mathbf{Y}^{v}]\). Therefore, since the Lie subalgebra of right-invariant vertical vector fields is the Lie algebra of the group \(\mathbf{Aut}_{v}(\Phi)\), we have
\[\mathfrak{diff}(M)\simeq\mathfrak{aut}_{v}(\Phi)=(\Gamma_{\text{ inv}}(V\Phi);[\,\ ]). \tag{19}\]
We can thus write the infinitesimal version of (5), i.e. the SES describing the Atiyah Lie algebroid of the bundle \(\Phi\):
\[0\to\mathfrak{diff}(M)\simeq\mathfrak{aut}_{v}(\Phi)\xrightarrow{\ |^{v}\ }\mathfrak{aut}(\Phi)\xrightarrow{\ \pi_{\star}\ }\mathfrak{diff}(\mathcal{M})\to 0, \tag{20}\]
where again, the "_physical_ symmetries" are in the right-most Lie algebra \(\mathfrak{diff}(\mathcal{M})\).7
Footnote 7: A splitting of this SES, i.e. the datum of a map \(\mathfrak{aut}(\Phi)\to\mathfrak{diff}(M)\) – or equivalently of a map \(\mathfrak{diff}(\mathcal{M})\to\mathfrak{aut}(\Phi)\) – which would allow to decompose a (right-invariant) vector field on \(\Phi\) as a sum of a gauge element and a vector field on \(\mathcal{M}\), is supplied by a choice of Ehresmann connection 1-form on \(\Phi\). See later.
Finally, consider the Lie algebra of the group of vertical diffeomorphisms \(\mathbf{Diff}_{v}(\Phi)\):
\[\mathfrak{diff}_{v}(\Phi):=\left\{\mathbf{X}^{v}_{|\phi}=\frac{d}{d\tau}\mathbf{\Xi}_{\tau}(\phi)\big{|}_{\tau=0}=\frac{d}{d\tau}R_{\mathbf{\psi}_{\tau}(\phi)}\phi\big{|}_{\tau=0}\in\Gamma(V\Phi)\right\}, \tag{21}\]
where \(\mathbf{\Xi}_{\tau}\in\mathbf{Diff}_{v}(\Phi)\), and \(\mathbf{\psi}_{\tau}\in C^{\infty}(\Phi,\text{Diff}(M))\) is the flow of \(\mathbf{X}:\Phi\to\Gamma(TM)\simeq\mathfrak{diff}(M)\). Therefore, we have that \(\mathfrak{diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathfrak{diff}(M))\). The pushforward of \(\mathbf{X}^{v}\) by the action of the structure group \(\text{Diff}(M)\) is the same as for a fundamental vector field (14),
\[R_{\psi\star}\mathbf{X}^{v}_{|\phi}=\left((\psi^{-1})_{*}\mathbf{X}\circ\psi\right)^{v}_{|\psi^{*}\phi}. \tag{22}\]
Now, here the map \(|^{v}:C^{\infty}(\Phi,\mathfrak{diff}(M))\to\Gamma(V\Phi)\) is also a Lie algebra morphism, yet the bracket on \(C^{\infty}(\Phi,\mathfrak{diff}(M))\) is not simply the bracket in \(\mathfrak{diff}(M)\) but an "extended" bracket that takes into account the \(\Phi\)-dependence of its elements. Indeed, one shows that for \(\mathbf{X}^{v},\mathbf{Y}^{v}\in\mathfrak{diff}_{v}(\Phi)\):
\[[\mathbf{X}^{v},\mathbf{Y}^{v}] =-[\mathbf{X},\mathbf{Y}]^{v}_{\Gamma(TM)}+[\mathbf{X}^{v}(\mathbf{Y})]^{v}-[\mathbf{ Y}^{v}(\mathbf{X})]^{v},\] \[=\left\{[\mathbf{X},\mathbf{Y}]_{\mathfrak{diff}(M)}+\mathbf{X}^{v}(\mathbf{Y})- \mathbf{Y}^{v}(\mathbf{X})\right\}^{v}=:\{\mathbf{X},\mathbf{Y}\}^{v}. \tag{23}\]
The result is proven in the short note [57] (for the finite dimensional case). The bracket on \(C^{\infty}(\Phi,\mathfrak{diff}(M))\) is thus,
\[\{\mathbf{X},\mathbf{Y}\}:=[\mathbf{X},\mathbf{Y}]_{\mathfrak{diff}(M)}+\mathbf{X}^{ v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X}). \tag{24}\]
From this follows naturally that,
\[[\mathbf{L}_{\mathbf{X}^{v}},\mathbf{L}_{\mathbf{Y}^{v}}]=\mathbf{L}_{[\mathbf{X}^{v}, \mathbf{Y}^{v}]}=\mathbf{L}_{[\mathbf{X},\mathbf{Y}]^{v}}. \tag{25}\]
We may observe that the result (23) reproduces the results of Appendix A.1: i.e. on the one hand if \(\mathbf{X},\mathbf{Y}\to X,Y\in\mathfrak{diff}(M)\) the second and third terms of the extended bracket vanish and \([X^{v},Y^{v}]=(-[X,Y]_{\Gamma(TM)})^{v}=([X,Y]_{\mathfrak{diff}(M)})^{v}\), and on the other hand if \(\mathbf{X},\mathbf{Y}\in\mathfrak{diff}(M)\) we may use (16) to obtain the second and third terms of the extended bracket, so that \([\mathbf{X}^{v},\mathbf{Y}^{v}]=([\mathbf{X},\mathbf{Y}]_{\Gamma(TM)})^{v}=(-[\mathbf{X},\mathbf{Y}]_{\mathfrak{diff}(M)})^{v}\). Naturally, the relation (23) is the infinitesimal counterpart of (3), with (24) reflecting the peculiar composition law in \(C^{\infty}(\Phi,\mathrm{Diff}(M))\).
To the best of my knowledge, the bracket (24) was first introduced in physics by Bergmann & Komar [22] (for \(\phi=g_{\mu\nu}\)) and Salisbury & Sundermeyer [23] in their investigations of the largest possible symmetry group of General Relativity - see eq.(3.1)-(3.2) in [22] and eq.(2.1) in [23]. It was later reintroduced by Barnich & Troessaert in their study [24] of asymptotic symmetries of gravity in the flat limit at null infinity. This Bergmann-Komar-Salisbury-Sundermeyer (BKSS) bracket \([\mathbf{X},\mathbf{Y}]_{\text{BKSS}}=-[\mathbf{X},\mathbf{Y}]\) appears in eq.(8) in [24] and more recently e.g. in eq.(3.3) in [26], eq.(2.12) in [27], eq.(2.21) in [25], eq.(2.3) in [28], eq.(1.1) in [35]. As these references show, this bracket is commonly used in the literature concerned with the covariant symplectic structure of gravity and with the analysis of its asymptotic symmetries (BMS and extensions). Some have tried to interpret this bracket as a Lie algebroid bracket (see [58] eq. (28) - or (3.11) of the preprint). In addition to showing here how (24) actually arises simply from standard bundle geometry, we show in the next section below that it is also an instance in degree 0 of the Frolicher-Nijenhuis bracket of vector valued forms.
Finally, let us state an elementary yet important result (proven in Appendix A.2): The pushforward by a vertical diffeomorphism \(\Xi\in\mathbf{Diff}_{v}(\Phi)\), to which corresponds \(\mathbf{\psi}\in C^{\infty}(\Phi,\mathrm{Diff}(M))\), is a map \(\Xi_{\star}:T_{\phi}\Phi\to T_{\Xi(\phi)}\Phi=T_{\mathbf{\psi}(\phi)^{*}\phi}\Phi\). Applied on a generic \(\mathfrak{X}\in\Gamma(T\Phi)\) it results in,
\[\Xi_{\star}\mathfrak{X}_{|\phi} =R_{\mathbf{\psi}(\phi)\star}\mathfrak{X}_{|\phi}+\left\{\mathbf{\psi}(\phi)^{-1}_{*}\,\mathbf{d\psi}_{|\phi}(\mathfrak{X}_{|\phi})\right\}^{v}_{|\Xi(\phi)},\] \[=R_{\mathbf{\psi}(\phi)\star}\left(\mathfrak{X}_{|\phi}+\left\{\mathbf{d\psi}_{|\phi}(\mathfrak{X}_{|\phi})\circ\mathbf{\psi}(\phi)^{-1}\right\}^{v}_{|\phi}\right). \tag{26}\]
The proof is independent from the equivariance of \(\Xi\sim\mathbf{\psi}\), thus holds the same for \(\Xi\in\mathbf{Aut}_{\nu}(\Phi)\sim\mathbf{\psi}\in\mathbf{Diff}(M)\). This relation can be used to obtain the formula for repeated pushforwards: e.g. to obtain the result for \((\Xi^{\prime}\circ\Xi)_{\star}\mathfrak{X}_{i\phi}\), per (3), one only needs to substitute \(\mathbf{\psi}\to\mathbf{\psi}\circ(\mathbf{\psi}^{\prime}\circ R_{\mathbf{\psi}})\). In case \(\Xi,\Xi^{\prime}\in\mathbf{Aut}_{\nu}(\Phi)\), one substitutes \(\mathbf{\psi}\to\mathbf{\psi}^{\prime}\circ\mathbf{\psi}\). This is key to the geometric definition of general vertical transformations and gauge transformations on field space.
#### 2.2.2 Differential forms and their derivations
As a manifold, \(\Phi\) has a space of forms \(\Omega^{\bullet}(\Phi)\), together with the graded Lie algebra of its derivations \(\mathrm{Der}_{\bullet}\left(\Omega^{\bullet}(\Phi)\right)=\bigoplus_{k}\mathrm{Der}_{k}\left(\Omega^{\bullet}(\Phi)\right)\) whose graded bracket is \([D_{k},D_{l}]=D_{k}\circ D_{l}-(-)^{kl}D_{l}\circ D_{k}\), with \(D_{i}\in\mathrm{Der}_{i}\left(\Omega^{\bullet}(\Phi)\right)\).
The de Rham complex of \(\Phi\) is \((\Omega^{\bullet}(\Phi);\mathbf{d})\) with \(\mathbf{d}\in\mathrm{Der}_{1}\) the de Rham (exterior) derivative, which is nilpotent - \(\mathbf{d}^{2}=0=\nicefrac{{1}}{{2}}[\mathbf{d},\mathbf{d}]\) - and defined via the Koszul formula. Given the exterior product \(\wedge\) defined as usual on scalar-valued forms, we have that \((\Omega^{\bullet}(\Phi,\mathbb{K}),\wedge,\mathbf{d})\) is a differential graded algebra.8
Footnote 8: The exterior product can also be defined on the space \(\Omega^{\bullet}(\Phi,\mathbf{\Lambda})\) of variational differential forms with values in an algebra \((\mathbf{\Lambda},\cdot)\), using the product in \(\mathbf{\Lambda}\) instead of the product in the field \(\mathbb{K}\). So \(\left(\Omega^{\bullet}(\Phi,\mathbf{\Lambda}),\wedge,\mathbf{d}\right)\) is again a differential graded algebra. On the other hand, an exterior product cannot be defined on \(\Omega^{\bullet}(\Phi,\mathbf{V})\) where \(\mathbf{V}\) is merely a vector space.
One may define the subset of vector-field valued differential forms \(\Omega^{\bullet}(\Phi,T\Phi)=\Omega^{\bullet}(\Phi)\otimes T\Phi\). Then, the subalgebra of "_algebraic_" derivations is defined by the condition \(D_{|\Omega^{0}(\Phi)}=0\); they have the form \(\iota_{\mathbf{K}}\in\mathrm{Der}_{k-1}\) for \(\mathbf{K}\in\Omega^{k}(\Phi,T\Phi)\), with \(\iota\) the inner product. For \(\omega\otimes\mathfrak{X}\in\Omega^{\bullet}(\Phi,T\Phi)\) we have: \(\iota_{\mathbf{K}}(\omega\otimes\mathfrak{X}):=\iota_{\mathbf{K}}\omega\otimes\mathfrak{X}=\omega\circ\mathbf{K}\otimes\mathfrak{X}\). On \(\Omega^{\bullet}(\Phi,T\Phi)\), the _Nijenhuis-Richardson bracket_ (or _algebraic_ bracket) is defined by:
\[[\mathbf{K},\mathbf{L}]_{\mathrm{NR}}:=\iota_{\mathbf{K}}\mathbf{L}-(-)^{(k-1)(l-1)}\iota_{\bm {L}}\mathbf{K}. \tag{27}\]
It generalises the inner contraction of a form on a vector field and makes the map,
\[\iota:\Omega^{\bullet}(\Phi,T\Phi) \to\mathrm{Der}_{\bullet}\left(\Omega^{\bullet}(\Phi)\right) \tag{28}\] \[\mathbf{K} \mapsto\iota_{\mathbf{K}}\]
a graded Lie algebra morphism;
\[[\iota_{\mathbf{K}},\iota_{\mathbf{L}}]=\iota_{[\mathbf{K},\mathbf{L}]_{\mathrm{NR}}}. \tag{29}\]
The _Nijenhuis-Lie derivative_ is the map,
\[\mathbf{L}:=[\iota,\mathbf{d}]:\Omega^{\bullet}(\Phi,T\Phi) \to\mathrm{Der}_{\bullet}\left(\Omega^{\bullet}(\Phi)\right)\] \[\mathbf{K} \mapsto\mathbf{L}_{\mathbf{K}}:=\iota_{\mathbf{K}}\mathbf{d}-(-)^{k-1}\mathbf{d}\iota_{ \mathbf{K}}\]
We have \(\mathbf{L}_{\mathbf{K}}\in\mathrm{Der}_{k}\) for \(\mathbf{K}\in\Omega^{k}(\Phi,T\Phi)\). It generalises the Lie derivative along vector fields, \(\mathbf{L}_{\mathbf{X}}\in\mathrm{Der}_{0}\). It is such that \([\mathbf{L}_{\mathbf{K}},\mathbf{d}]=0\), and it is a morphism of graded Lie algebras:
\[[\mathbf{L}_{\mathbf{K}},\mathbf{L}_{\mathbf{J}}]=\mathbf{L}_{[\mathbf{K},\mathbf{J}]_{\text{FN}}}, \tag{30}\]
where \([\mathbf{K},\mathbf{J}]_{\text{FN}}\) is the _Frolicher-Nijenhuis bracket_. Explicitly, for \(\mathbf{K}=K\otimes\mathfrak{X}\in\Omega^{k}(\Phi,T\Phi)\) and \(\mathbf{J}=J\otimes\mathfrak{Y}\in\Omega^{l}(\Phi,T\Phi)\), it is:
\[[\mathbf{K},\mathbf{J}]_{\text{FN}}=K\wedge J\otimes[\mathfrak{X},\mathfrak{Y}]+K\wedge\mathbf{L}_{\mathfrak{X}}J\otimes\mathfrak{Y}-\mathbf{L}_{\mathfrak{Y}}K\wedge J\otimes\mathfrak{X}+(-)^{k}\big{(}\mathbf{d}K\wedge\iota_{\mathfrak{X}}J\otimes\mathfrak{Y}+\iota_{\mathfrak{Y}}K\wedge\mathbf{d}J\otimes\mathfrak{X}\big{)}. \tag{31}\]
We further have the relations:
\[[\mathbf{L}_{\mathbf{K}},\iota_{\mathbf{J}}]=\iota_{[\mathbf{K},\mathbf{J}]_{\text{FN}}}-(-)^{k(l-1)}\mathbf{L}_{(\iota_{\mathbf{K}}\mathbf{J})}, \tag{32}\]
\[[\iota_{\mathbf{J}},\mathbf{L}_{\mathbf{K}}]=\mathbf{L}_{(\iota_{\mathbf{K}}\mathbf{J})}+(-)^{k}\iota_{[\mathbf{J},\mathbf{K}]_{\text{FN}}}.\]
We refer to [29] (Chap II, section 8) for a systematic presentation of these notions in the finite dimensional case, and for proofs of the above relations.
The FN bracket (31) reproduces as a special case the bracket (23): Indeed, specialising the above formulae in degree 0, for \(\mathbf{f}=f\otimes\mathfrak{X}\) and \(\mathbf{g}=g\otimes\mathfrak{Y}\in\Omega^{0}(\Phi,T\Phi)\) we have:
\[\begin{split}[\mathbf{f},\mathbf{dg}]_{\text{NR}}&=\iota_{\mathbf{f}}\mathbf{dg}-(-)^{0}\iota_{\mathbf{dg}}\mathbf{f}=(f\wedge\iota_{\mathfrak{X}}\mathbf{d}g)\otimes\mathfrak{Y}=(f\wedge\mathbf{L}_{\mathfrak{X}}g)\otimes\mathfrak{Y},\\ [\mathbf{df},\mathbf{g}]_{\text{NR}}&=\iota_{\mathbf{df}}\mathbf{g}-(-)^{0}\iota_{\mathbf{g}}\mathbf{df}=-[\mathbf{g},\mathbf{df}]_{\text{NR}}=-(g\wedge\mathbf{L}_{\mathfrak{Y}}f)\otimes\mathfrak{X}.\end{split} \tag{33}\]
So that,
\[[\mathbf{f},\mathbf{g}]_{\text{FN}} =f\wedge g\otimes[\mathfrak{X},\mathfrak{Y}]+f\wedge\mathbf{L}_{\mathfrak{X}}g\otimes\mathfrak{Y}-\mathbf{L}_{\mathfrak{Y}}f\wedge g\otimes\mathfrak{X},\] \[=f\wedge g\otimes[\mathfrak{X},\mathfrak{Y}]+[\mathbf{f},\mathbf{dg}]_{\text{NR}}-[\mathbf{g},\mathbf{df}]_{\text{NR}}, \tag{34}\]
and
\[[\mathbf{L}_{\mathbf{f}},\iota_{\mathbf{g}}]=\iota_{[\mathbf{f},\mathbf{g}]_{\text{FN}}}. \tag{35}\]
Now, the map \(|^{v}:C^{\infty}(\Phi,\mathfrak{diff}(M))\to\Gamma(V\Phi)\), \(\mathbf{X}\mapsto\mathbf{X}^{v}\), allows to think of \(\mathbf{X}^{v}\in\mathfrak{diff}_{v}(\Phi)\) as a (vertical) vector-valued 0-form on \(\Phi\), i.e. \(\mathbf{X}^{v}\in\Omega^{0}(\Phi,V\Phi)\subset\Omega^{\bullet}(\Phi,T\Phi)\). Therefore, the Nijenhuis-Richardson and Frolicher-Nijenhuis brackets naturally apply: in particular, specialising further eq.(33) we have, for \(\mathbf{X}^{v},\mathbf{Y}^{v}\in\Omega^{0}(\Phi,V\Phi)\),
\[\begin{split}[\mathbf{X}^{v},\mathbf{d}\mathbf{Y}^{v}]_{\text{NR}}&=\{\iota_{\mathbf{X}^{v}}\mathbf{d}\mathbf{Y}\}^{v}=\{\mathbf{X}^{v}(\mathbf{Y})\}^{v},\\ [\mathbf{d}\mathbf{X}^{v},\mathbf{Y}^{v}]_{\text{NR}}&=-[\mathbf{Y}^{v},\mathbf{d}\mathbf{X}^{v}]_{\text{NR}}=-\{\iota_{\mathbf{Y}^{v}}\mathbf{d}\mathbf{X}\}^{v}=-\{\mathbf{Y}^{v}(\mathbf{X})\}^{v},\end{split} \tag{36}\]
so that the FN bracket for 0-forms (34) specialises to:
\[[\mathbf{X}^{v},\mathbf{Y}^{v}]_{\text{FN}} =\left([\mathbf{X},\mathbf{Y}]_{\mathfrak{diff}(M)}\right)^{v}+[\mathbf{X}^{v},\mathbf{d}\mathbf{Y}^{v}]_{\text{NR}}-[\mathbf{Y}^{v},\mathbf{d}\mathbf{X}^{v}]_{\text{NR}},\] \[=\left(-[\mathbf{X},\mathbf{Y}]_{\Gamma(TM)}+\mathbf{X}^{v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X})\right)^{v}=\{\mathbf{X},\mathbf{Y}\}^{v}. \tag{37}\]
Then of course we have the following special cases of identities eq.(29), (32), and (30) among derivations in \(\mathrm{Der}^{\bullet}\):
\[[\iota_{\mathbf{X}^{v}},\iota_{\mathbf{d}\mathbf{Y}^{v}}] =\iota_{[\mathbf{X}^{v},\mathbf{d}\mathbf{Y}^{v}]_{\text{NR}}}=\iota_{\{\iota_{\mathbf{X}^{v}}\mathbf{d}\mathbf{Y}\}^{v}}, \tag{38}\] \[[\mathbf{L}_{\mathbf{X}^{v}},\iota_{\mathbf{Y}^{v}}] =\iota_{[\mathbf{X}^{v},\mathbf{Y}^{v}]_{\text{FN}}},\] (39) \[[\mathbf{L}_{\mathbf{X}^{v}},\mathbf{L}_{\mathbf{Y}^{v}}] =\mathbf{L}_{[\mathbf{X}^{v},\mathbf{Y}^{v}]_{\text{FN}}}=\mathbf{L}_{[\mathbf{X},\mathbf{Y}]^{v}}. \tag{40}\]
As one should expect, (40) reproduces (25). These identities were derived heuristically in the appendices of various works, sometimes at quite a computational cost. For example, eq.(40)/(25) reproduces e.g. eq.(3.1) in [26], eq.(2.13) in [27], or eq.(A.9) in [28]. We show here that they flow naturally from well-established general geometric structures.
**Remarkable forms** The structure group \(\mathrm{Diff}(M)\) of \(\Phi\) acts on a form \(\mathbf{\alpha}\in\Omega^{\bullet}(\Phi)\) by pullback, \(R^{\star}_{\psi}\mathbf{\alpha}\), defining its _equivariance_. The action by pullback of \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\), i.e. \(\mathbf{\alpha}^{\mathbf{\psi}}:=\Xi^{\star}\mathbf{\alpha}\), defines _general vertical transformations_, while the action by pullback of \(\mathbf{Aut}_{v}(\Phi)\simeq\mathbf{Diff}(M)\) similarly defines _gauge transformations_.
Let us express such a generic form at \(\phi\in\Phi\) as,
\[\mathbf{\alpha}_{|\phi}=\alpha(\wedge^{\bullet}\mathbf{d}\phi_{|\phi};\phi), \tag{41}\]
where \(\mathbf{d}\phi\in\Omega^{1}(\Phi)\) is the basis 1-form on \(\Phi\) (the infinite dimensional analogue of \(dx^{\mu}\) on a manifold \(M\)), and \(\alpha(\ ;\ )\) is the functional expression of \(\mathbf{\alpha}\), alternating multilinear in the first arguments, and whose dependence in the second argument \(\phi\) is a priori arbitrary - but often in physics it will be polynomial. Then, its equivariance and general vertical transformation are:
\[R^{\star}_{\psi}\mathbf{\alpha}_{|\phi^{\psi}} =\alpha(\wedge^{\bullet}R^{\star}_{\psi}\mathbf{d}\phi_{|\phi^{\psi}};\ R_{\psi}\phi)=\alpha(\wedge^{\bullet}R^{\star}_{\psi}\mathbf{d}\phi_{|\phi^{\psi}};\ \phi^{\psi}),\quad\text{for}\ \ \psi\in\mathrm{Diff}(M), \tag{42}\] \[\mathbf{\alpha}^{\mathbf{\psi}}_{|\phi}:=\Xi^{\star}\mathbf{\alpha}_{|\Xi(\phi)}=\alpha(\wedge^{\bullet}\Xi^{\star}\mathbf{d}\phi_{|\Xi(\phi)};\ \Xi(\phi))=\alpha(\wedge^{\bullet}\Xi^{\star}\mathbf{d}\phi_{|\phi^{\mathbf{\psi}}};\ \phi^{\mathbf{\psi}}),\quad\text{for}\ \ \Xi\in\mathbf{Diff}_{v}(\Phi)\sim\mathbf{\psi}\in C^{\infty}(\Phi,\mathrm{Diff}(M)). \tag{43}\]
The infinitesimal equivariance and general vertical transformations are given by the (Nijenhuis-)Lie derivative along the elements of \(\Gamma(V\Phi)\) generated respectively by \(\mathfrak{diff}(M)\) and \(C^{\infty}(\Phi,\mathfrak{diff}(M))\):
\[\mathbf{L}_{X^{v}}\mathbf{\alpha}=\tfrac{d}{d\tau}R^{\star}_{\psi_{\tau}}\mathbf{\alpha}\big{|}_{\tau=0}\quad\text{ with }\ X\in\mathfrak{diff}(M),\qquad\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\alpha}=\tfrac{d}{d\tau}\Xi_{\tau}^{\star}\mathbf{\alpha}\big{|}_{\tau=0}\quad\text{ with }\ \mathbf{X}\in C^{\infty}(\Phi,\mathfrak{diff}(M)), \tag{44}\]
Concrete results may be computed explicitly. But for some types of forms, there are shortcuts for structural reasons. In that regard, and as part of the elementary bundle geometry of field space, we may describe forms of special interest.
First, _equivariant_ forms are those whose equivariance is well-behaved in some sense. _Standard_ equivariant forms are valued in representations \((\rho,\mathbf{V})\) of the structure group \(\mathrm{Diff}(M)\) and s.t.:
\[\Omega^{\bullet}_{\mathrm{eq}}(\Phi,\rho):=\left\{\mathbf{\alpha}\in\Omega^{\bullet}(\Phi,\mathbf{V})\,|\,R^{\star}_{\psi}\mathbf{\alpha}_{|\phi^{\psi}}=\rho(\psi)^{-1}\mathbf{\alpha}_{|\phi}\right\}. \tag{45}\]
The infinitesimal version of the equivariance property is \(\mathbf{L}_{X^{v}}\mathbf{\alpha}=-\rho_{\ast}(X)\mathbf{\alpha}\).
The latter is a subspace of _twisted_ equivariant forms [30]: their equivariance is controlled by a 1-cocycle for the action of \(\mathrm{Diff}(M)\) on \(\Phi\), i.e. a map:
\[\begin{split} C:\Phi\times\mathrm{Diff}(M)&\to G,\quad G\text{ some Lie group (possibly infinite dimensional)}.\\ (\phi,\psi)&\mapsto C(\phi;\psi)\qquad\text{ s.t.}\quad C (\phi;\psi^{\prime}\circ\psi)=C(\phi;\psi^{\prime})\cdot C(\phi^{\psi^{\prime} };\psi).\end{split} \tag{46}\]
Manifestly, \(\phi\)-independent 1-cocycles are just group morphisms, thus typical 1-cocycles generalise representations. From the 1-cocycle property (46) it follows that \(C(\phi;\mathsf{id}_{M})=\mathsf{id}_{G}=C(\phi^{\psi};\mathsf{id}_{M})\), thus that \(C(\phi;\psi)^{-1}=C(\phi^{\psi};\psi^{-1})\) (take \(\psi^{\prime}=\psi\) and replace \(\psi\) by \(\psi^{-1}\) in (46)). If \(\mathbf{V}\) is a \(G\)-space, twisted equivariant forms are defined as,
\[\Omega^{\bullet}_{\mathrm{eq}}(\Phi,C):=\left\{\mathbf{\alpha}\in\Omega^{\bullet}(\Phi,\mathbf{V})\,|\,R^{\star}_{\psi}\mathbf{\alpha}_{|\phi^{\psi}}=C(\phi;\psi)^{-1}\mathbf{\alpha}_{|\phi}\right\}. \tag{47}\]
The 1-cocycle relation (46) ensures compatibility with the right action: \(R^{\star}_{\psi^{\prime}}R^{\star}_{\psi}=R^{\star}_{\psi^{\prime}\circ\psi}\). The infinitesimal equivariance is \(\mathbf{L}_{X^{\prime}}\mathbf{\alpha}=-a(X;\phi)\mathbf{\alpha}\), where \(a(X,\phi):=\tfrac{d}{dt}\,C(\phi,\psi_{\tau})_{\tau=0}\) is a 1-cocycle for the action of \(\mathsf{bif}(M)\) on \(\Phi\):
\[a:\Phi\times\mathsf{bif}(M) \to\mathfrak{g},\quad\text{$\mathfrak{g}$ the Lie algebra of $G$}. \tag{48}\] \[(\phi,X) \mapsto a(X;\phi)\qquad\text{ s.t.}\quad X^{\nu}\cdot a(Y;\phi)-Y^{ \nu}\cdot a(X;\phi)+\left[a(X;\phi)\,,\ a(Y;\phi)\right]_{\mathfrak{g}}=a([X,Y]_{\mathrm{bif}};\phi).\]
The infinitesimal 1-cocycle relation (48) ensures compatibility with the right action: \([\mathbf{L}_{X^{\prime}},\mathbf{L}_{X^{\prime}}]=\mathbf{L}_{[X^{\prime},Y^{\prime}]}= \mathbf{L}_{[(X,Y_{\mathrm{bif}})^{\nu}}\). The reader familiar with gauge anomalies may discern that it is a non-Abelian generalisation of the Wess-Zumino consistency condition for a \(\mathrm{Diff}(M)\) (Einstein) anomaly \(a(X;\phi)\) - reproduced for \(G\) Abelian.
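As a quick consistency check (ours, using only the definitions (46)-(47)), the finite compatibility statement can be verified directly: for \(\mathbf{\alpha}\in\Omega^{\bullet}_{\mathrm{eq}}(\Phi,C)\),
\[R^{\star}_{\psi^{\prime}}R^{\star}_{\psi}\,\mathbf{\alpha}=R^{\star}_{\psi^{\prime}}\big(C(\ \,;\psi)^{-1}\mathbf{\alpha}\big)=C(\phi^{\psi^{\prime}};\psi)^{-1}\,C(\phi;\psi^{\prime})^{-1}\mathbf{\alpha}_{|\phi}=\big(C(\phi;\psi^{\prime})\cdot C(\phi^{\psi^{\prime}};\psi)\big)^{-1}\mathbf{\alpha}_{|\phi}=C(\phi;\psi^{\prime}\circ\psi)^{-1}\mathbf{\alpha}_{|\phi},\]
which is the defining equivariance (47) for the composite \(\psi^{\prime}\circ\psi\), as it must be since \(R^{\star}_{\psi^{\prime}}R^{\star}_{\psi}=R^{\star}_{\psi^{\prime}\circ\psi}\).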
The subspace of _invariant_ forms consists of those whose equivariance is trivial: \(\Omega^{\bullet}_{\mathrm{inv}}(\Phi)=\left\{\mathbf{\alpha}\in\Omega^{\bullet}(\Phi)\,|\,R^{\star}_{\psi}\mathbf{\alpha}=\mathbf{\alpha}\,\right\}\). Infinitesimally we have \(\mathbf{L}_{X^{v}}\mathbf{\alpha}=0\).
The space of _horizontal_ forms is \(\Omega^{\bullet}_{\mathrm{hor}}(\Phi)=\left\{\mathbf{\alpha}\in\Omega^{\bullet}(\Phi)\,|\,\iota_{X^{v}}\mathbf{\alpha}=0\right\}\). Now, a form which is both equivariant and horizontal is said to be _tensorial_. We have thus standard tensorial forms
\[\Omega^{\bullet}_{\mathrm{tens}}(\Phi,\rho):=\left\{\mathbf{\alpha}\in\Omega^{ \bullet}(\Phi,\mathbf{V})\,|\,R^{\star}_{\psi}\mathbf{\alpha}=\rho(\psi)^{-1}\mathbf{ \alpha},\ \ \&\ \iota_{X^{\prime}}\mathbf{\alpha}=0\right\}. \tag{49}\]
And similarly, we have the generalisation: the space of _twisted_ tensorial forms,
\[\Omega^{\bullet}_{\mathrm{tens}}(\Phi,C):=\left\{\mathbf{\alpha}\in\Omega^{\bullet}( \Phi,\mathbf{V})\,|\,R^{\star}_{\psi}\mathbf{\alpha}=C(\phi;\psi)^{-1}\mathbf{\alpha},\ \ \&\ \ \iota_{X^{\prime}}\mathbf{\alpha}=0\right\}. \tag{50}\]
In either case, we have obviously \(\Omega^{0}_{\mathrm{tens}}(\Phi)=\Omega^{0}_{\mathrm{eq}}(\Phi)\).
One observes that any field theory, given by a Lagrangian \(L:\Phi\to\Omega^{n}(M)\), \(\phi\mapsto L(\phi)\), with \(n=\)dim\(M\), or an action \(S:\Phi\to\mathbb{R}\), \(\phi\mapsto S(\phi)\), is actually a twisted tensorial 0-form. Indeed, let us define \(Z\in\Omega^{0}(\Phi,\mathbb{C}^{\ast})\), i.e. \(Z:\Phi\to\mathbb{C}^{\ast}\), by \(\phi\mapsto Z(\phi):=\exp iS(\phi)\). The action of \(\mathrm{Diff}(M)\) is \(R^{\star}_{\psi}L=\psi^{\ast}L\); one may then define the object \(C:\Phi\times\mathrm{Diff}(M)\to U(1)\), by \(C(\phi;\psi):=\exp-i\int c(\phi;\psi)\), where \(c(\ ;\psi):=R^{\star}_{\psi}L-L\). This implies that \(R^{\star}_{\psi}Z=C(\ ;\psi)^{-1}Z\). Furthermore, it is easily checked that \(c(\phi;\psi^{\prime}\circ\psi)=c(\phi;\psi^{\prime})+c(\phi^{\psi^{\prime}};\psi)\), i.e. \(C(\phi;\psi)\) satisfies (46), meaning that it is a \(U(1)\)-valued \(\mathrm{Diff}(M)\)-1-cocycle. Therefore \(Z\in\Omega^{0}_{\mathrm{tens}}(\Phi,C)\).
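The cocycle check alluded to is a one-line computation. With the convention \(R^{\star}_{\psi}L=\psi^{\ast}L\) just stated, so that \(c(\phi;\psi)=\psi^{\ast}L(\phi)-L(\phi)\), one has
\[c(\phi;\psi^{\prime}\circ\psi)=(\psi^{\prime}\circ\psi)^{\ast}L(\phi)-L(\phi)=\big(\psi^{\ast}\psi^{\prime\ast}L(\phi)-\psi^{\prime\ast}L(\phi)\big)+\big(\psi^{\prime\ast}L(\phi)-L(\phi)\big)=c(\phi^{\psi^{\prime}};\psi)+c(\phi;\psi^{\prime}),\]
the last step using \(\psi^{\prime\ast}L(\phi)=L(\phi^{\psi^{\prime}})\).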
The infinitesimal equivariance is thus \(\mathbf{L}_{X^{v}}Z=-a(X;\phi)Z\), with \(-a(X;\ )=\frac{d}{d\tau}\,C(\ ;\psi_{\tau})^{-1}\big{|}_{\tau=0}=\frac{d}{d\tau}\,i\int c(\ ;\psi_{\tau})\big{|}_{\tau=0}=i\int\frac{d}{d\tau}\big(R^{\star}_{\psi_{\tau}}L-L\big)\big{|}_{\tau=0}=i\int\frac{d}{d\tau}\psi^{\star}_{\tau}L\big{|}_{\tau=0}=i\int\alpha(X;\ )\). In other words, \(a(X;\phi)\) is the classical \(\mathrm{Diff}(M)\)-anomaly. In case \(Z\) is instead the path integral of \(S\), the quantity \(a(X;\phi)\) is the quantum \(\mathrm{Diff}(M)\)-anomaly, or Einstein anomaly [59; 60]. In both cases, since \(\mathrm{Lie}\,U(1)=i\mathbb{R}\) is abelian, (48) is \(X^{v}\cdot a(Y;\phi)-Y^{v}\cdot a(X;\phi)=a([X,Y]_{\mathfrak{diff}(M)};\phi)\), i.e. the Wess-Zumino consistency condition for the anomaly.
Remark that this view of the non-trivial equivariance of \(Z=\exp{iS}\) relies on a convention for the action of \(\mathrm{Diff}(M)\) on integrals (over \(M\)) that is not the one we use (and argue for) in this paper.9 We elaborate on this issue in section 2.5, in connection to a wider discussion of associated bundles (whose space of sections are the 0-degree of the spaces of tensorial forms (49)-(50)) and of the fundamental representation of \(\mathrm{Diff}(M)\).
Footnote 9: This is not the case for the path integral \(Z(\phi)=\int d\phi\exp\tfrac{i}{\hbar}S(\phi)\) defining quantum theories. Even with our convention, giving an invariant action \(S\), the twisted equivariance of \(Z\) comes from the non-trivial transformation of the measure \(\mathbf{d}\phi\): the 1-cocycle \(C(\phi;\psi):=\exp-i\int c(\phi;\psi)\) is then the integrated quantum \(\mathrm{Diff}(M)\)-anomaly. See e.g. chapter 12 of [59].
Let us highlight the fact that the de Rham derivative \(\mathbf{d}\) does not preserve the space of tensorial forms (one loses the property of horizontality).10 This is one reason one needs to introduce an adequate notion of _connection_ on \(\Phi\) so as to define a _covariant derivative_ on the space of tensorial forms. For standard tensorial forms, one needs an Ehresmann connection 1-form, while for twisted tensorial forms one needs a generalisation called _twisted_ connection [30]. See section 2.4 below.
Footnote 10: This is easy to check: starting with a 1-form \(\mathbf{\alpha}\), using the Koszul formula for \(\mathbf{d}\) we have that for any \(\mathfrak{X},\mathfrak{Y}\in\Gamma(T\Phi)\): \(\mathbf{d}\mathbf{\alpha}(\mathfrak{X},\mathfrak{Y})=(\mathbf{L}_{\mathfrak{X}}\mathbf{\alpha})(\mathfrak{Y})-\mathfrak{Y}\cdot\mathbf{\alpha}(\mathfrak{X})\). So, for \(\mathfrak{X}=X^{v}\), \(\mathbf{d}\mathbf{\alpha}(X^{v},\mathfrak{Y})=(\mathbf{L}_{X^{v}}\mathbf{\alpha})(\mathfrak{Y})-\mathfrak{Y}\cdot\mathbf{\alpha}(X^{v})\). If \(\mathbf{\alpha}\) is horizontal, \(\mathbf{d}\mathbf{\alpha}(X^{v},\mathfrak{Y})=(\mathbf{L}_{X^{v}}\mathbf{\alpha})(\mathfrak{Y})\), which is non-zero unless \(\mathbf{\alpha}\) is also invariant. See next.
Finally, forms that are both invariant and horizontal are called _basic_:
\[\Omega^{\bullet}_{\mathrm{basic}}(\Phi):=\left\{\mathbf{\alpha}\in\Omega^{\bullet}(\Phi)\,|\,R^{\star}_{\psi}\mathbf{\alpha}=\mathbf{\alpha}\ \ \&\ \ \iota_{X^{v}}\mathbf{\alpha}=0\right\}. \tag{51}\]
This space _is_ preserved by \(\mathbf{d}\),11 which means that \((\Omega^{\bullet}_{\mathrm{basic}}(\Phi),\mathbf{d})\) is a subcomplex of the de Rham complex of \(\Phi\): the _basic subcomplex_. Alternatively, basic forms can be defined as \(\mathrm{Im}(\pi^{\star})\) (hence the name), that is:
Footnote 11: See previous footnote. In other words, \(\mathbf{d}\) is a covariant derivative for basic forms!
\[\Omega^{\bullet}_{\mathrm{basic}}(\Phi):=\left\{\mathbf{\alpha}\in\Omega^{\bullet}(\Phi)\,|\,\exists\,\mathbf{\beta}\in\Omega^{\bullet}(\mathcal{M})\ \mathrm{s.t.}\ \mathbf{\alpha}=\pi^{\star}\mathbf{\beta}\right\}. \tag{52}\]
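That forms pulled back from the base are indeed basic is immediate (a quick check) from \(\pi\circ R_{\psi}=\pi\) and \(\pi_{\star}X^{v}=0\):
\[R^{\star}_{\psi}(\pi^{\star}\mathbf{\beta})=(\pi\circ R_{\psi})^{\star}\mathbf{\beta}=\pi^{\star}\mathbf{\beta},\qquad\iota_{X^{v}}(\pi^{\star}\mathbf{\beta})=\mathbf{\beta}(\pi_{\star}X^{v},\ldots)=0.\]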
Since \([\mathbf{d},\pi^{\star}]=0\), the cohomology of the basic subcomplex is isomorphic to the cohomology \((\Omega^{\bullet}(\mathcal{M}),\mathbf{d})\) of the moduli space: The cohomology of \((\Omega^{\bullet}_{\mathrm{basic}}(\Phi),\mathbf{d})\) is known as the _equivariant cohomology_ of \(\Phi\).
Remark that the "form" analogue of \(\Gamma_{\mathrm{inv}}(T\Phi)\) - projecting to well defined vector fields in \(\Gamma(T\mathcal{M})\) - is not \(\Omega^{\bullet}_{\mathrm{inv}}(\Phi)\), but \(\Omega^{\bullet}_{\mathrm{basic}}(\Phi)\): Only the latter, per the second definition, projects to well-defined forms in \(\Omega^{\bullet}(\mathcal{M})\). It will be a main endeavor of this paper to show how, given some form \(\mathbf{\alpha}\in\Omega^{\bullet}(\Phi)\), to formally build its basic counterpart \(\mathbf{\alpha}^{b}\in\Omega^{\bullet}_{\mathrm{basic}}(\Phi)\). This will be achieved via the dressing field method, section 3.
### General vertical transformations, and gauge transformations
As already stated above, the general vertical transformation of a form \(\mathbf{\alpha}\in\Omega^{\bullet}(\Phi)\) is its pullback by \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\): \(\mathbf{\alpha}^{\mathbf{\psi}}:=\Xi^{\star}\mathbf{\alpha}\). The notation on the left-hand side is operationally defined by the right-hand side, and justified by the fact that the vertical transformation is expressible in terms of the generating element \(\mathbf{\psi}\in C^{\infty}(\Phi,\mathrm{Diff}(M))\) associated to \(\Xi\in\mathbf{Diff}_{v}(\Phi)\). Performing two vertical transformations, using (3), one has
\[(\mathbf{\alpha}^{\mathbf{\psi}})^{\mathbf{\psi}^{\prime}}:=\Xi^{\prime\star}\,\Xi^{\star}\mathbf{\alpha}=(\Xi\circ\Xi^{\prime})^{\star}\mathbf{\alpha}=:\mathbf{\alpha}^{\,\mathbf{\psi}^{\prime}\circ(\mathbf{\psi}\circ\Xi^{\prime})}. \tag{53}\]
In the case of gauge transformations defined by the action of \(\mathbf{Aut}_{v}(\Phi)\simeq\mathbf{Diff}(M)\), since the defining equivariance of gauge group elements is \(R^{\star}_{\psi^{\prime}}\mathbf{\psi}=\mathbf{\psi}\circ R_{\psi^{\prime}}=\psi^{\prime\,-1}\circ\mathbf{\psi}\circ\psi^{\prime}\), the above expression simplifies to a more familiar form
\[(\mathbf{\alpha}^{\phi})^{\psi^{\prime}}=\mathbf{\alpha}^{\psi\circ\psi^{\prime}}. \tag{54}\]
Naturally, infinitesimal general vertical transformations are given by the action of \(\mathbf{diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathfrak{diff}(M))\) via the Nijenhuis-Lie derivative:
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\alpha}=\begin{cases}\frac{d}{d\tau}\,\Xi^{\star}_{\tau}\mathbf{\alpha}\big{|}_{\tau=0}\\ \left[\iota_{\mathbf{X}^{v}},\mathbf{d}\right]\mathbf{\alpha}\end{cases}\qquad\mathrm{so}\qquad\left[\mathbf{L}_{\mathbf{X}^{v}},\mathbf{L}_{\mathbf{Y}^{v}}\right]\mathbf{\alpha}=\mathbf{L}_{[\mathbf{X}^{v},\mathbf{Y}^{v}]}\mathbf{\alpha}=\mathbf{L}_{\{\mathbf{X},\mathbf{Y}\}^{v}}\mathbf{\alpha}. \tag{55}\]
The second equation, using (25)/(40), is the infinitesimal reflection of (53). For infinitesimal gauge transformations, given by the action of \(\mathbf{aut}_{v}(\Phi)\simeq\mathfrak{diff}(M)\), this simplifies to \([\mathbf{L}_{\mathbf{X}^{v}},\mathbf{L}_{\mathbf{Y}^{v}}]\,\mathbf{\alpha}=\mathbf{L}_{(-[\mathbf{X},\mathbf{Y}]_{\mathfrak{diff}(M)})^{v}}\,\mathbf{\alpha}=\mathbf{L}_{([\mathbf{X},\mathbf{Y}]_{\Gamma(TM)})^{v}}\,\mathbf{\alpha}\).
To be more explicit, it is possible to take a further step: For any \(\mathfrak{X},\mathfrak{Y},\ldots\in\Gamma(T\Phi)\), given (26) and the duality between pullback and pushforward, one has that
\[\begin{split}\mathbf{\alpha}^{\mathbf{\psi}}_{|\phi}(\mathfrak{X}_{|\phi},\ldots)=\Xi^{\star}\mathbf{\alpha}_{|\Xi(\phi)}(\mathfrak{X}_{|\phi},\ldots)&=\mathbf{\alpha}_{|\phi^{\mathbf{\psi}(\phi)}}\left(R_{\mathbf{\psi}(\phi)\star}\left(\mathfrak{X}_{|\phi}+\left\{\mathbf{d}\mathbf{\psi}_{|\phi}(\mathfrak{X}_{|\phi})\circ\mathbf{\psi}(\phi)^{-1}\right\}_{|\phi}^{v}\right),\ldots\right),\\ &=R^{\star}_{\mathbf{\psi}(\phi)}\,\mathbf{\alpha}_{|\phi^{\mathbf{\psi}(\phi)}}\left(\mathfrak{X}_{|\phi}+\left\{\mathbf{d}\mathbf{\psi}_{|\phi}(\mathfrak{X}_{|\phi})\circ\mathbf{\psi}(\phi)^{-1}\right\}_{|\phi}^{v},\ldots\right).\end{split} \tag{56}\]
Clearly, therefore, the vertical transformation of a form is controlled by its equivariance and verticality properties. In particular, from (56) it follows that the vertical transformation of a _tensorial_ form is simply given by its equivariance:
\[\begin{split}\text{For}\ \ \mathbf{\alpha}\in\Omega^{\bullet}_{ \mathrm{cens}}(\Phi,\rho),\quad\mathbf{\alpha}^{\psi}=\rho(\mathbf{\psi})^{-1}\mathbf{ \alpha}.\\ \text{For}\ \ \mathbf{\alpha}\in\Omega^{\bullet}_{\mathrm{cens}}(\Phi,C ),\quad\mathbf{\alpha}^{\psi}=C(\mathbf{\psi})^{-1}\mathbf{\alpha}.\end{split} \tag{57}\]
In the second line we introduced the shorter notation \([C(\mathbf{\psi})](\phi):=C(\phi;\mathbf{\psi}(\phi))\), so the map \(C(\mathbf{\psi})\) has thus a double dependency on the point \(\phi\in\Phi\). Due to their horizontality, their vertical transformation is homogeneous, tensorial so to speak, hence their name. It should be remarked that \(\mathbf{\alpha}^{\psi}=\Xi^{\star}\mathbf{\alpha}\not\in\Omega^{\bullet}_{\mathrm{cens }}(\Phi,\rho)\) unless \(\mathbf{\psi}\in\mathbf{Diff}(M)\sim\Xi\in\mathbf{Aut}_{\nu}(\Phi)\) - see [57] - making _gauge transformations_ special indeed.
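For instance, for a standard tensorial 1-form the first line of (57) is read off (56) in one step (a short check): the pullback by \(R_{\mathbf{\psi}(\phi)}\) preserves vertical vectors, so horizontality kills the vertical correction term, and equivariance does the rest,
\[\mathbf{\alpha}^{\mathbf{\psi}}_{|\phi}(\mathfrak{X}_{|\phi})=R^{\star}_{\mathbf{\psi}(\phi)}\,\mathbf{\alpha}_{|\phi^{\mathbf{\psi}(\phi)}}\Big(\mathfrak{X}_{|\phi}+\big\{\mathbf{d}\mathbf{\psi}_{|\phi}(\mathfrak{X}_{|\phi})\circ\mathbf{\psi}(\phi)^{-1}\big\}^{v}_{|\phi}\Big)=R^{\star}_{\mathbf{\psi}(\phi)}\,\mathbf{\alpha}_{|\phi^{\mathbf{\psi}(\phi)}}(\mathfrak{X}_{|\phi})=\rho(\mathbf{\psi}(\phi))^{-1}\,\mathbf{\alpha}_{|\phi}(\mathfrak{X}_{|\phi}).\]
The twisted case is identical, with \(C(\phi;\mathbf{\psi}(\phi))^{-1}\) in place of \(\rho(\mathbf{\psi}(\phi))^{-1}\).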
It is standard, and easy to show, that the infinitesimal version of (56) is, by definition (55), simply,
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\alpha}=\tfrac{d}{d\tau}\,R^{\star}_{\mathbf{\psi}_{\tau}}\mathbf{\alpha}\,\big{|}_{\tau=0}+\iota_{[\mathbf{dX}]^{v}}\mathbf{\alpha}, \tag{58}\]
where of course \(\mathbf{X}=\tfrac{d}{d\tau}\,\mathbf{\psi}_{\tau}\big{|}_{\tau=0}\) and \(\mathbf{\psi}=\mathbf{\psi}(\phi)\in\mathrm{Diff}(M)\) in the equivariance term is \(\phi\)-constant. Remark that \([\mathbf{dX}]^{\nu}\) can be considered an element of \(\Omega^{1}(\Phi,V\Phi)\), so \(\iota_{|\mathbf{dX}|^{\gamma}}\) is an algebraic derivation (of degree \(0\)) of the type discussed in section 2.2.2. Again, it is clear that a vertical transformation depends on both the equivariance and the verticality properties of a form. For \(\mathbf{\alpha}\in\Omega^{\bullet}_{\mathrm{eq}}(\phi)\), depending if it is standard or twisted equivariant, (58) specialises to:
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\alpha}=\begin{cases}-\rho_{\star}(\mathbf{X})\,\mathbf{\alpha}+\iota_{[\mathbf{dX}]^{v}}\mathbf{\alpha},\\ -a(\mathbf{X})\,\mathbf{\alpha}+\iota_{[\mathbf{dX}]^{v}}\mathbf{\alpha},\end{cases} \tag{59}\]
where we introduce the notation \([a(\mathbf{X})](\phi):=a(\mathbf{X}(\phi);\phi)\) for the linearised \(1\)-cocycle. And in particular for \(\rho(\psi)^{-1}=\psi^{\star}\) the pullback representation, i.e. for \(\Omega^{\bullet}(M)\)- (or tensor-) valued forms \(\mathbf{\alpha}\), (58) gives naturally:
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\alpha}=\mathfrak{L}_{\mathbf{X}}\mathbf{\alpha}+\iota_{[\mathbf{dX}]^{v}}\mathbf{\alpha}. \tag{60}\]
We remark that the above formula features in an indirect way in (and clarifies the meaning of) the so-called "anomaly operator" appearing in recent covariant phase space literature: \(\Delta_{\mathbf{X}}:=\mathbf{L}_{\mathbf{X}^{v}}-\mathfrak{L}_{\mathbf{X}}-\iota_{[\mathbf{dX}]^{v}}\) in our notations - compare e.g. to [61] eq.(2.39), [34] eq.(2.1), [26] eq.(2.13), [27] eq.(2.3), or [62] eq.(2.2). Of course, this operator can be non-zero on \(\Phi\) only in theories that are fundamentally "non-general-relativistic" in admitting background non-dynamical structures or fields "breaking" \(\mathrm{Diff}(M)\)-covariance (see section 3 for further elaboration on this comment).
The infinitesimal versions of (57), for tensorial forms, are then obviously:
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\alpha}=-\rho_{\star}(\mathbf{X})\mathbf{\alpha},\quad\text{ or }\quad\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\alpha}=-a(\mathbf{X})\mathbf{\alpha}. \tag{61}\]
From applying (25)/(40), the commutativity property of the Nijenhuis-Lie derivative, on a twisted tensorial form \(\mathbf{\alpha}\): \([\mathbf{L}_{\mathbf{X}^{\prime}},\mathbf{L}_{\mathbf{Y}^{\prime}}]\,\mathbf{\alpha}=\mathbf{L}_{[\mathbf{X} ^{\prime},\mathbf{Y}^{\prime}]_{\mathrm{N}}}\,\mathbf{\alpha}=\mathbf{L}_{[\mathbf{X},\mathbf{Y}]^{ \prime}}\,\mathbf{\alpha}\), we get the following relation for the infinitesimal \(1\)-cocycle,
\[\begin{split}\mathbf{X}^{v}(a(\mathbf{Y};\phi))-\mathbf{Y}^{v}(a(\mathbf{X};\phi))-a(\{\mathbf{X},\mathbf{Y}\};\phi)+[a(\mathbf{X};\phi),a(\mathbf{Y};\phi)]_{\mathfrak{g}}=0,\\ \mathbf{X}^{v}(a(\underline{\mathbf{Y}};\phi))-\mathbf{Y}^{v}(a(\underline{\mathbf{X}};\phi))-a([\mathbf{X},\mathbf{Y}]_{\mathfrak{diff}(M)};\phi)+[a(\mathbf{X};\phi),a(\mathbf{Y};\phi)]_{\mathfrak{g}}=0.\end{split} \tag{62}\]
One obtains the second line from the first using the FN bracket (37)/(23): The notation \(\underline{\mathbf{Y}},\underline{\mathbf{X}}\) is meant to indicate that the elements \(\mathbf{Y},\mathbf{X}\) are considered \(\phi\)-independent in the formula, so that \(\mathbf{X}^{v},\mathbf{Y}^{v}\) pass through it. Therefore, (62) simply reproduces the defining infinitesimal \(1\)-cocycle property (48) even for \(\phi\)-dependent \(\mathfrak{diff}(M)\)-parameters.
As examples, we may consider the case of elements of the gauge group \(\mathbf{Diff}(M)\) and its Lie algebra \(\mathbf{diff}(M)\). These are 0-forms, so trivially horizontal, and their equivariance is specified by definition: they are thus tensorial 0-forms. For \(\boldsymbol{\eta}\in\mathbf{Diff}(M)\), whose equivariance is given by \(\psi\)-conjugation, see (4), and for \(\boldsymbol{Y}\in\mathbf{diff}(M)\), whose equivariance is given by \(\psi\)-relatedness, see (15), we have thus:
\[\boldsymbol{\eta}^{\boldsymbol{\phi}}=\boldsymbol{\psi}^{-1} \circ\boldsymbol{\eta}\circ\boldsymbol{\psi},\qquad\text{and}\quad\boldsymbol {Y}^{\boldsymbol{\psi}}=(\boldsymbol{\psi}^{-1})_{*}\boldsymbol{Y}\circ \boldsymbol{\psi} \tag{63}\]
The infinitesimal gauge transformation of \(\boldsymbol{Y}\) is thus: \(\boldsymbol{L}_{\boldsymbol{X}^{v}}\boldsymbol{Y}=\mathfrak{L}_{\boldsymbol{X}}\boldsymbol{Y}=[\boldsymbol{Y},\boldsymbol{X}]_{\mathfrak{diff}(M)}\) - in coherence with its infinitesimal equivariance given by (16).
And as a special case of the above, or given their definition, basic forms are strictly gauge invariant:
\[\text{For}\;\;\boldsymbol{\alpha}\in\Omega^{\bullet}_{\text{basic} }(\Phi):\quad\boldsymbol{\alpha}^{\boldsymbol{\psi}}=\boldsymbol{\alpha}, \quad\text{so}\quad L_{\boldsymbol{X}^{\boldsymbol{\alpha}}}\boldsymbol{ \alpha}=0. \tag{64}\]
That is another way to detect a variational form that would induce (or come from) a corresponding form on the moduli space \(\mathcal{M}\), where the physical degrees of freedom (d.o.f.) presumably live.
An example worth looking at closely is that of the basis 1-form \(\boldsymbol{d}\phi\in\Omega^{1}(\Phi)\), given that its vertical transformation \(\boldsymbol{d}\phi^{\boldsymbol{\psi}}:=\Xi^{\star}\boldsymbol{d}\phi\) intervenes in the general formula (43) for the vertical transformation of a generic form. To compute it geometrically via (56) we must know the equivariance and verticality properties of \(\boldsymbol{d}\phi\). These are given by definition:
\[R^{\star}_{\phi}\boldsymbol{d}\phi:=\psi^{*}\boldsymbol{d}\phi, \qquad\text{and}\qquad\iota_{\boldsymbol{X}^{\boldsymbol{\cdot}}} \boldsymbol{d}\phi:=\mathfrak{L}_{\boldsymbol{X}}\phi. \tag{65}\]
That is, the equivariance is controlled by the pullback representation, while the verticality just reproduces the infinitesimal (field-independent) diffeo/gauge transformation of the field \(\phi\). It is therefore almost immediate that its vertical transformation is,
\[\boldsymbol{d}\phi^{\boldsymbol{\psi}}:=\Xi^{\star}\boldsymbol{d} \phi=\boldsymbol{\psi}^{*}(\boldsymbol{d}\phi+\mathfrak{L}_{[\boldsymbol{d} \phi\phi^{-1}]}\phi). \tag{66}\]
It is maybe pedagogically useful to see it unfold in more detail:
\[\begin{split}\boldsymbol{d}\phi^{\boldsymbol{\psi}}_{|\phi}(\mathfrak{X}_{|\phi}):=\Xi^{\star}\boldsymbol{d}\phi_{|\Xi(\phi)}(\mathfrak{X}_{|\phi})&=\boldsymbol{d}\phi_{|\phi^{\boldsymbol{\psi}(\phi)}}\left(R_{\boldsymbol{\psi}(\phi)\star}\left(\mathfrak{X}_{|\phi}+\left\{\boldsymbol{d}\boldsymbol{\psi}_{|\phi}(\mathfrak{X}_{|\phi})\circ\boldsymbol{\psi}(\phi)^{-1}\right\}^{v}_{|\phi}\right)\right),\\ &=R^{\star}_{\boldsymbol{\psi}(\phi)}\,\boldsymbol{d}\phi_{|\phi^{\boldsymbol{\psi}(\phi)}}\left(\mathfrak{X}_{|\phi}+\left\{\boldsymbol{d}\boldsymbol{\psi}_{|\phi}(\mathfrak{X}_{|\phi})\circ\boldsymbol{\psi}(\phi)^{-1}\right\}^{v}_{|\phi}\right),\\ &=\boldsymbol{\psi}(\phi)^{\ast}\Big(\boldsymbol{d}\phi_{|\phi}(\mathfrak{X}_{|\phi})+\mathfrak{L}_{[\boldsymbol{d}\boldsymbol{\psi}_{|\phi}(\mathfrak{X}_{|\phi})\circ\boldsymbol{\psi}(\phi)^{-1}]}\phi\Big),\\ &=\left(\boldsymbol{\psi}(\phi)^{\ast}\big(\boldsymbol{d}\phi_{|\phi}+\mathfrak{L}_{[\boldsymbol{d}\boldsymbol{\psi}_{|\phi}\circ\boldsymbol{\psi}(\phi)^{-1}]}\phi\big)\right)(\mathfrak{X}_{|\phi}),\end{split}\]
This reproduces a result now standard in the literature on the covariant phase space of gravity: See e.g. eq.(3.5) and (3.6) in [10] where the relation is proven in Appendix B via more heuristic algebraic computations. Notice the gain in efficiency stemming from the geometric approach. By (60), the infinitesimal version is
\[\boldsymbol{L}_{\boldsymbol{X}^{\boldsymbol{\cdot}}}\boldsymbol{d} \phi=\mathfrak{L}_{\boldsymbol{X}}\boldsymbol{d}\phi+\mathfrak{L}_{ \boldsymbol{dX}}\phi. \tag{67}\]
The same could be found via \(\boldsymbol{L}_{\boldsymbol{X}^{\boldsymbol{\cdot}}}\boldsymbol{d}\phi= \boldsymbol{d}(\iota_{\boldsymbol{X}^{\boldsymbol{\cdot}}}\boldsymbol{d}\phi)= \boldsymbol{d}(\mathfrak{L}_{\boldsymbol{X}}\phi)\).
### Connections on field space
As we've observed above, the exterior derivative \(\boldsymbol{d}\) does not preserve the space \(\Omega^{\bullet}_{\mathrm{tens}}(\Phi)\) of standard/twisted tensorial forms. To build a first order linear differential operator that does, the covariant derivative, one needs to endow \(\Phi\) with an adequate notion of connection 1-form.
#### 2.4.1 Ehresmann connections
An Ehresmann connection 1-form \(\mathbf{\omega}\in\Omega^{1}_{\mathrm{eq}}(\Phi,\mathfrak{diff}(M))\) is defined by:
\[\begin{split}\mathbf{\omega}_{|\phi}(X^{v}_{|\phi})&=X,\quad\text{for }X\in\mathfrak{diff}(M),\\ R^{\star}_{\psi}\mathbf{\omega}_{|\phi^{\psi}}&=\psi_{\ast}^{-1}\,\mathbf{\omega}_{|\phi}\circ\,\psi.\end{split} \tag{68}\]
Infinitesimally, the equivariance of the connection under \(\mathfrak{diff}(M)\) is,
\[\mathbf{L}_{X^{v}}\mathbf{\omega}=\tfrac{d}{d\tau}R^{\star}_{\psi_{\tau}}\mathbf{\omega}\big{|}_{\tau=0}=\tfrac{d}{d\tau}\,\psi_{\tau\,\ast}^{-1}\,\mathbf{\omega}\circ\,\psi_{\tau}\big{|}_{\tau=0}=[X,\mathbf{\omega}]_{\Gamma(TM)}=[\mathbf{\omega},X]_{\mathfrak{diff}(M)}. \tag{69}\]
The space of connections \(\mathcal{C}\) is an affine space modelled on the vector space \(\Omega^{1}_{\mathrm{tens}}(\Phi,\mathfrak{diff}(M))\): Indeed, it is clear that for \(\mathbf{\omega},\mathbf{\omega}^{\prime}\in\mathcal{C}\), we have that \(\mathbf{\beta}:=\mathbf{\omega}^{\prime}-\mathbf{\omega}\in\Omega^{1}_{\mathrm{tens}}(\Phi,\mathfrak{diff}(M))\). Or, given \(\mathbf{\omega}\in\mathcal{C}\) and \(\mathbf{\beta}\in\Omega^{1}_{\mathrm{tens}}(\Phi,\mathfrak{diff}(M))\), we have that \(\mathbf{\omega}^{\prime}=\mathbf{\omega}+\mathbf{\beta}\in\mathcal{C}\). This means that in general one cannot add connections.12
Footnote 12: Only affine combinations like \(\mathbf{\omega}_{\tau}:=\tau\mathbf{\omega}^{\prime}+(1-\tau)\mathbf{\omega}=\mathbf{\omega}+ \tau\mathbf{\beta}\), \(\tau\in[0,1]\), are possible. Then the midpoint \(\omega_{\tau\sim\sim}=\tfrac{1}{2}(\mathbf{\omega}+\mathbf{\omega}^{\prime})\) is an averaged sum of two connections.
A connection allows to define the horizontal subbundle \(H\Phi:=\ker\mathbf{\omega}\) complementary to the vertical subbundle, \(T\Phi=V\Phi\oplus H\Phi\). The horizontal projection is thus the map \(\mathfrak{h}:T\Phi\to H\Phi\), \(\mathfrak{X}\mapsto\mathfrak{X}^{h}:=\mathfrak{X}-[\mathbf{\omega}(\mathfrak{X})]^ {\nu}\), as clearly \(\mathbf{\omega}(\mathfrak{X}^{h})=0\).
The covariant derivative is thus defined as \(\mathbf{D}:=\mathbf{d}\circ\mathfrak{h}^{\dagger}:\Omega^{\bullet}_{\mathrm{eq}}(\Phi,\rho)\rightarrow\Omega^{\bullet+1}_{\mathrm{tens}}(\Phi,\rho)\). On tensorial forms it has the familiar expression, \(\mathbf{D}:\Omega^{\bullet}_{\mathrm{tens}}(\Phi,\rho)\rightarrow\Omega^{\bullet+1}_{\mathrm{tens}}(\Phi,\rho)\), \(\mathbf{\alpha}\mapsto\mathbf{D}\mathbf{\alpha}=\mathbf{d}\mathbf{\alpha}+\rho_{\ast}(\mathbf{\omega})\mathbf{\alpha}\). This is the first order linear operator we were looking for.
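As a quick sanity check (ours) that \(\mathbf{D}\) indeed lands in horizontal forms: for a tensorial 1-form \(\mathbf{\alpha}\) and \(\phi\)-independent \(X\in\mathfrak{diff}(M)\), using the formula of footnote 10 for \(\mathbf{d}\), horizontality \(\iota_{X^{v}}\mathbf{\alpha}=0\) and equivariance \(\mathbf{L}_{X^{v}}\mathbf{\alpha}=-\rho_{\ast}(X)\mathbf{\alpha}\),
\[\big(\mathbf{d}\mathbf{\alpha}+\rho_{\ast}(\mathbf{\omega})\mathbf{\alpha}\big)(X^{v},\mathfrak{Y})=(\mathbf{L}_{X^{v}}\mathbf{\alpha})(\mathfrak{Y})-\mathfrak{Y}\cdot\mathbf{\alpha}(X^{v})+\rho_{\ast}(\mathbf{\omega}(X^{v}))\mathbf{\alpha}(\mathfrak{Y})-\rho_{\ast}(\mathbf{\omega}(\mathfrak{Y}))\mathbf{\alpha}(X^{v})=-\rho_{\ast}(X)\mathbf{\alpha}(\mathfrak{Y})+\rho_{\ast}(X)\mathbf{\alpha}(\mathfrak{Y})=0.\]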
The curvature 2-form is defined as \(\mathbf{\Omega}:=\mathbf{d}\mathbf{\omega}\circ\mathfrak{h}^{\dagger}\), from which follows that \(\mathbf{\Omega}\in\Omega^{2}_{\mathrm{tems}}(\Phi,\mathsf{bif}(M))\). As is well known, it is also given by Cartan structure equation,
\[\mathbf{\Omega}=\mathbf{d}\mathbf{\omega}+\tfrac{1}{2}[\mathbf{\omega},\mathbf{\omega }]_{\mathsf{bif}(M)}. \tag{70}\]
From this it is easy to see that it satisfies the Bianchi identity, \(\mathbf{D}\mathbf{\Omega}=\mathbf{d}\mathbf{\Omega}+[\mathbf{\omega},\mathbf{\Omega}]_{\mathfrak{diff}(M)}=0\). On tensorial forms we have \(\mathbf{D}\circ\mathbf{D}=\rho_{\ast}(\mathbf{\Omega})\). We may remark that the FN bracket (37)(23) plays a key role in the last step of proving (70), which requires showing that both sides of the equality vanish on \(\mathbf{X}^{v},\mathbf{Y}^{v}\in\mathbf{diff}_{v}(\Phi)\) with \(\mathbf{X},\mathbf{Y}\in C^{\infty}(\Phi,\mathfrak{diff}(M))\): This is the case for the left-hand side by definition - since \((\mathbf{X}^{v})^{h}\equiv 0\) - while for the right-hand side one has,
\[\mathbf{d}\mathbf{\omega}(\mathbf{X}^{\nu},\mathbf{Y}^{\nu})+[\mathbf{\omega}(\mathbf{X }^{\nu}),\mathbf{\omega}(\mathbf{Y}^{\nu})]_{\mathsf{bif}(M)} =\mathbf{X}^{\nu}(\mathbf{\omega}(\mathbf{Y}^{\nu}))-\mathbf{Y}^{\nu}(\mathbf{\omega}( \mathbf{X}^{\nu}))-\mathbf{\omega}([\mathbf{X}^{\nu},\mathbf{Y}^{\nu}])+[\mathbf{X},\mathbf{Y}]_{ \mathsf{bif}(M)}\] \[=\mathbf{X}^{\nu}(\mathbf{Y})-\mathbf{Y}^{\nu}(\mathbf{X})-\mathbf{\omega}((\mathbf{X}, \mathbf{Y})^{\nu})+[\mathbf{X},\mathbf{Y}]_{\mathsf{bif}(M)}\] \[=[\mathbf{X},\mathbf{Y}]_{\mathsf{bif}(M)}+\mathbf{X}^{\nu}(\mathbf{Y})-\mathbf{Y}^{ \nu}(\mathbf{X})-\mathbf{\{X},\mathbf{Y}\}\equiv 0.\]
The Koszul formula for \(\mathbf{d}\) was used in the first line, and (23) in the last to conclude.
Given the defining equivariance and verticality properties (68) of a connection, it is easy, using (26)/(56), to see that its general vertical transformation under \(\mathsf{Diff}_{\nu}(\Phi)\simeq C^{\infty}(\Phi,\mathsf{Diff}(M))\) is
\[\mathbf{\omega}^{\mathbf{\psi}}:=\Xi^{\star}\mathbf{\omega}=\mathbf{\psi}_{\ast}^{-1}\,\mathbf{ \omega}\circ\mathbf{\psi}+\mathbf{\psi}_{\ast}^{-1}\mathbf{d}\mathbf{\psi}. \tag{71}\]
Naturally, the formula is the same for its gauge transformation under \(\mathbf{Aut}_{v}(\Phi)\simeq\mathbf{Diff}(M)\): The difference arises upon repeated transformations of each type, as we stressed in section 2.3, see (53)-(54). Here again, it should be remarked that \(\mathbf{\omega}^{\mathbf{\psi}}=\Xi^{\star}\mathbf{\omega}\notin\mathcal{C}\) unless \(\mathbf{\psi}\in\mathbf{Diff}(M)\sim\Xi\in\mathbf{Aut}_{v}(\Phi)\) - see [57] - highlighting again the special status of _gauge transformations_. Infinitesimally, by (55), transformations of a connection under \(\mathbf{diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathfrak{diff}(M))\) are given by the Nijenhuis-Lie derivative,
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\omega}=\mathbf{dX}+[\mathbf{\omega},\mathbf{X}]_{\mathfrak{diff}(M)}. \tag{72}\]
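It is instructive to see (72) drop out of the general formula (58), given the defining properties (68)-(69) of the connection (a short check):
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\omega}=\tfrac{d}{d\tau}\,R^{\star}_{\mathbf{\psi}_{\tau}}\mathbf{\omega}\,\big{|}_{\tau=0}+\iota_{[\mathbf{dX}]^{v}}\mathbf{\omega}=[\mathbf{\omega},\mathbf{X}]_{\mathfrak{diff}(M)}+\mathbf{dX},\]
the first term by (69) (with the \(\phi\)-dependence of \(\mathbf{X}\) frozen in the equivariance term), the second because \(\mathbf{\omega}\) returns the generating element of any vertical vector, \(\mathbf{\omega}([\mathbf{dX}(\mathfrak{Y})]^{v})=\mathbf{dX}(\mathfrak{Y})\).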
Infinitesimal gauge transformations, under \(\mathsf{aut}_{\nu}(\Phi)\simeq\mathsf{bif}(M)\), are given by the same relation, but can be written \(\mathbf{L}_{\mathbf{X}^{\nu}}\mathbf{\omega}=\mathbf{DX}\) as \(\mathbf{X}\in\mathsf{bif}(M)\) is a tensorial 0-form. In the same way, finite and infinitesimal general vertical transformations of the curvature are given, as special cases of (57) and (61), by:
\[\mathbf{\Omega}^{\mathbf{\psi}}:=\Xi^{\star}\mathbf{\Omega}=\mathbf{\psi}_{\ast}^{-1}\,\mathbf{ \Omega}\circ\mathbf{\psi},\qquad\text{so}\qquad\mathbf{L}_{\mathbf{X}^{\nu}}\mathbf{\Omega}=[ \mathbf{\Omega},\mathbf{X}]_{\mathsf{bif}(M)}. \tag{73}\]
The same hold for its gauge transformations, with the same caveat expressed above.
The above, eq. (71), allows us to state a little lemma that we will occasionally use: For \(\mathbf{\alpha},\mathbf{D}\mathbf{\alpha}\in\Omega^{\bullet}_{\text{tens}}(\Phi,\rho)\), we have on the one hand \(\mathbf{d}\,\mathbf{\Xi}^{\star}\mathbf{\alpha}=\mathbf{d}(\rho(\psi)^{-1}\mathbf{\alpha})\). On the other hand, by \(\mathbf{\Xi}^{\star}\mathbf{D}\mathbf{\alpha}=\rho(\psi)^{-1}\mathbf{D}\mathbf{\alpha}\), we have that
\[\mathbf{\Xi}^{\star}\mathbf{d}\mathbf{\alpha} =\rho(\psi)^{-1}\mathbf{D}\mathbf{\alpha}-\mathbf{\Xi}^{\star}(\rho_{*}(\mathbf{ \omega})\mathbf{\alpha})=\rho(\psi)^{-1}\mathbf{d}\mathbf{\alpha}+\rho(\psi)^{-1}\rho_{*}( \mathbf{\omega})\mathbf{\alpha}-\rho_{*}(\omega^{\psi})\mathbf{\alpha}^{\psi}=\rho(\psi)^{ -1}\mathbf{d}\mathbf{\alpha}-\rho_{*}(\psi_{*}^{-1}\mathbf{d}\psi)\rho(\psi)^{-1}\mathbf{ \alpha},\] \[=\rho(\psi)^{-1}\left(\mathbf{d}\mathbf{\alpha}-\rho_{*}(\mathbf{d}\psi\circ \psi^{-1})\alpha\right).\]
By naturality of the pullback \([\mathbf{\Xi}^{\star},\mathbf{d}]=0\), we obtain the identity:
\[\mathbf{d}(\rho(\psi)^{-1}\mathbf{\alpha})=\rho(\psi)^{-1}\left(\mathbf{d}\mathbf{\alpha}- \rho_{*}(\mathbf{d}\psi\circ\psi^{-1})\,\mathbf{\alpha}\right) \tag{74}\]
In particular, for the pullback representation, \(\rho(\psi)^{-1}=\psi^{*}\) and \(-\rho_{*}(X)=\mathfrak{L}_{X}\), this is:
\[\mathbf{d}(\psi^{\star}\mathbf{\alpha})=\psi^{*}\left(\mathbf{d}\mathbf{\alpha}+\mathfrak{L}_{ \mathbf{d}\psi\circ\psi^{-1}}\mathbf{\alpha}\right). \tag{75}\]
The latter special case of the lemma appears in the covariant phase space literature, e.g. in [10] and [12]. Despite the superficial similarity, it ought not to be confused with (66), as the two results are distinct geometric statements.
**Horizontalisation of forms.** The definition of the curvature showcases the operation dual to the horizontal projection of vector fields: the horizontalisation of forms,
\[{}^{|h}:\Omega^{\bullet}(\Phi)\to\Omega^{\bullet}_{\text{hor}}(\Phi),\quad\mathbf{ \alpha}\mapsto\mathbf{\alpha}^{h}:=\mathbf{\alpha}\circ|^{h}. \tag{76}\]
Of course, \({}^{|h}:\Omega^{\bullet}_{\text{eq}}(\Phi)\to\Omega^{\bullet}_{\text{tems}}(\Phi)\) and \({}^{|h}:\Omega^{\bullet}_{\text{inv}}(\Phi)\to\Omega^{\bullet}_{\text{basic}}(\Phi)\). So, a connection can be used to extract the _basic_ version of some forms, invariant ones.
In recent years, this has been used in the covariant phase space literature to offer a possible solution of the boundary problem arising in the study of the (pre)symplectic structure of a gauge field theory over a bounded region. The technical core of the problem is that the invariant presymplectic potential and 2-form lack horizontality, due to boundary contributions to their verticality properties, and thus fail to be invariant under \(\mathbf{Diff}_{v}(\Phi)\) or \(\mathbf{Aut}_{v}(\Phi)\) - an issue we will spell out in more detail in section 4. The proposal, developed by Gomes and Riello, was to use a connection to horizontalise the presymplectic potential - see [63, 64, 65, 25], and [66, 67, 68] for a more philosophical discussion of the proposal.
For a 1-form \(\mathbf{\alpha}\in\Omega^{1}(\Phi)\) we have the explicit expression in terms of the connection
\[\mathbf{\alpha}^{h}:=\mathbf{\alpha}-\iota_{[\mathbf{\omega}]^{v}}\mathbf{\alpha}\quad\in\Omega^{\bullet}_{\text{hor}}(\Phi). \tag{77}\]
In particular for the basis 1-form \(\mathbf{d}\phi\in\Omega^{\bullet}_{\text{eq}}(\Phi)\), with properties (65):
\[\mathbf{d}\phi^{h}:=\mathbf{d}\phi-\iota_{[\mathbf{\omega}]^{v}}\mathbf{d}\phi=\mathbf{d}\phi-\mathfrak{L}_{\mathbf{\omega}}\phi\quad\in\Omega^{\bullet}_{\text{tens}}(\Phi). \tag{78}\]
We remark that, applying the covariant derivative to it as a tensorial form, one finds that \(\mathbf{D}(\mathbf{d}\phi^{h})=-\mathfrak{L}_{\mathbf{\Omega}}\phi\). So, from (78) we have that the horizontalisation map (76) is also written explicitly as,
\[\mathbf{\alpha}=\alpha(\mathbf{\wedge}^{\bullet}\mathbf{d}\phi;\phi)\quad\mapsto\quad\mathbf{ \alpha}^{h}=\alpha(\mathbf{\wedge}^{\bullet}\mathbf{d}\phi^{h};\phi). \tag{79}\]
Then, (77) is also written as,
\[\mathbf{\alpha}^{h}=\mathbf{\alpha}-\alpha(\mathfrak{L}_{\mathbf{\omega}}\phi;\phi)\quad\in\Omega^{\bullet}_{\text{hor}}(\Phi). \tag{80}\]
This relation is key to the horizontalisation of the presymplectic potential \(\mathbf{\theta}\) via a connection, i.e. to the extraction of a _basic counterpart_ \(\mathbf{\theta}^{b}\) of \(\mathbf{\theta}\in\Omega^{\bullet}_{\text{inv}}(\Phi)\). For an invariant 1-form \(\mathbf{\alpha}\in\Omega^{1}_{\text{inv}}(\Phi)\), we have \(\mathbf{\alpha}^{h}=:\mathbf{\alpha}^{b}\,\in\Omega^{1}_{\text{basic}}(\Phi)\), and one shows that the following relation between the covariant derivative and the horizontalised form holds,
\[\mathbf{D}\mathbf{\alpha}=\mathbf{d}\mathbf{\alpha}^{b}+\iota_{[\mathbf{\Omega}]^{v}}\mathbf{\alpha}=\mathbf{d}\mathbf{\alpha}^{b}+\alpha(\mathfrak{L}_{\mathbf{\Omega}}\phi;\phi). \tag{81}\]
Therefore for a flat connection \(\dot{\omega}\), \(\dot{\mathbf{D}}\mathbf{\alpha}=\mathbf{d}\mathbf{\alpha}^{b}\). The formula (81) generalises the special case applying to the presymplectic potential \(\mathbf{\alpha}^{b}=\mathbf{\theta}^{b}\) discussed by Gomes & Riello (in the YM case) in corollary 3.2 and section 3.4 of [65], also at the end of section 3.1. in [69].
**Non-uniqueness.** One may highlight that, since a priori the choice of connection is not unique (nor canonical),13 neither is the horizontalisation map (76). For any \(\mathbf{\beta}\in\Omega^{1}_{\text{tens}}(\Phi,\mathfrak{diff}(M))\), both \(\mathbf{\omega}\) and \(\mathbf{\omega}^{\prime}=\mathbf{\omega}+\mathbf{\beta}\) are valid choices to implement (76). Given \(\mathbf{\alpha}\in\Omega^{\bullet}(\Phi)\), the corresponding horizontal forms obtained are \(\mathbf{\alpha}^{h}_{\mathbf{\omega}}\) and \(\mathbf{\alpha}^{h}_{\mathbf{\omega}^{\prime}}\).
Footnote 13: Unless there is a natural choice, as would be e.g. the case if a connection was derived from a natural metric on \(\Phi\): This is what happens in the Yang-Mills (YM) case, where \(\Phi\) is the space of connection \(\mathcal{A}\) of a \(H\)-bundle \(P\), on which a bundle metric induces the Singer-DeWitt connection. This is the approach followed in [63, 64, 65, 25]. See section 2.2 of [9] for a discussion.
One may ask how they are related. The answer is easily found for 1-forms: Consider the affine path \(\mathbf{\omega}_{\tau}:=\mathbf{\omega}+\tau\mathbf{\beta}\), with \(\tau\in[0,1]\), in \(\mathcal{C}\). Given \(\mathbf{\alpha}\in\Omega^{1}(\Phi)\), define
\[\mathbf{\alpha}^{h}_{\mathbf{\omega}_{\tau}} =\mathbf{\alpha}-\iota_{[\mathbf{\omega}_{\tau}]}\mathbf{\alpha}\quad\in \Omega^{1}_{\text{hor}}(\Phi).\] \[\text{So},\quad\int_{0}^{1}d\tau\ \tfrac{d}{d\tau}\mathbf{\alpha}^{h}_{\mathbf{\omega}_{ \tau}} =\mathbf{\alpha}^{h}_{\mathbf{\omega}^{\prime}}-\mathbf{\alpha}^{h}_{\mathbf{\omega}}=- \iota_{[\mathbf{\beta}]}\mathbf{\alpha},\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\
#### 2.4.2 Twisted connections
These, we may observe, are the type of mathematical objects that underlie the "relational observables" introduced in [43]: Indeed, eq.(3.183)-(3.185) in section 3.4.4. display the fact that, up to a left/right action convention, these "relational observables" are twisted equivariant/tensorial 0-forms.14 Then, eq.(3.190) is an instance of the \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\) transformation (57) of twisted tensorial forms.
Footnote 14: We have a right action convention, so that e.g. if \(V\) are purely vectors the equivariance (87) is \(R^{*}_{\psi}\boldsymbol{\alpha}=[C(\phi;\psi)^{-1}]\). \(\boldsymbol{\alpha}\), to be compared to the equivariance eq.(3.183) in [43]. Compare also the cocycle property (86) to their eq.(3.185).
A _twisted_ connection 1-form \(\boldsymbol{\varpi}\in\Omega^{1}_{\mathrm{eq}}(\Phi,\mathfrak{g})\) is defined by:
\[\begin{split}\boldsymbol{\varpi}_{|\phi}(X^{\nu}_{|\phi})& =a(X,\phi)\ \in\mathfrak{g},\quad\text{for}\ X\in\mathfrak{iv}\mathfrak{if}(M),\\ R^{*}_{\phi}\boldsymbol{\varpi}_{|\phi^{\mu}}& =\mathrm{Ad}(C(\phi;\psi))^{-1}\ \boldsymbol{\varpi}_{|\phi}+C(\phi;\psi)^{-1}\boldsymbol{d}C(\ \ ;\psi)_{|\phi}.\end{split} \tag{88}\]
In the special case \(G=\mathrm{Diff}(M)\), the equivariance of \(\boldsymbol{\varpi}\in\Omega^{1}_{\mathrm{eq}}(\Phi,\mathfrak{iv}\mathfrak{ if}(M))\) is
\[R^{*}_{\psi}\boldsymbol{\varpi}_{|\phi^{\mu}}=[C(\phi;\psi))^{-1}]_{*}\ \boldsymbol{\varpi}_{|\psi}\circ C(\phi;\psi)+[C(\phi;\psi))^{-1}]_{*} \boldsymbol{d}C(\ \ ;\psi)_{|\psi}, \tag{89}\]
to be compared to that (68) of an Ehresmann connection. The infinitesimal equivariance under \(\mathfrak{iv}\mathfrak{iv}\mathfrak{if}(M)\) is
\[\boldsymbol{L}_{X^{\nu}}\boldsymbol{\varpi}=\frac{d}{dt}\,R^{*}_{\phi_{\nu}} \boldsymbol{\varpi}\,\big{|}_{\tau=0}=\boldsymbol{d}a(X;\ )+[\boldsymbol{\varpi},a(X;\ )]_{ \mathfrak{g}} \tag{90}\]
Or, in the case \(G=\mathrm{Diff}(M)\), \(\boldsymbol{L}_{X^{\nu}}\boldsymbol{\varpi}=\boldsymbol{d}a(X;\ )+[ \boldsymbol{\varpi},a(X;\ )]_{\mathfrak{en}(M)}\).
The space of twisted connections \(\tilde{\mathcal{C}}\) is an affine space modelled on the vector space \(\Omega^{1}_{\mathrm{tens}}(\Phi,\mathfrak{g})\): Clearly, for \(\boldsymbol{\varpi},\boldsymbol{\varpi}^{\prime}\in\tilde{\mathcal{C}}\), we have that \(\boldsymbol{\beta}:=\boldsymbol{\varpi}^{\prime}-\boldsymbol{\varpi}\in\Omega^{1}_{\mathrm{tens}}(\Phi,\mathfrak{g})\). Or, given \(\boldsymbol{\varpi}\in\tilde{\mathcal{C}}\) and \(\boldsymbol{\beta}\in\Omega^{1}_{\mathrm{tens}}(\Phi,\mathfrak{g})\), we have that \(\boldsymbol{\varpi}^{\prime}=\boldsymbol{\varpi}+\boldsymbol{\beta}\in\tilde{\mathcal{C}}\). This means that in general one cannot add twisted connections; only affine paths are possible.
A _twisted covariant derivative_ is defined as \(\tilde{\boldsymbol{D}}:\Omega^{\bullet}_{\mathrm{eq}}(\Phi,C)\to\Omega^{ \bullet+1}_{\mathrm{tens}}(\Phi,C)\), \(\boldsymbol{\alpha}\mapsto\tilde{\boldsymbol{D}}\boldsymbol{\alpha}:= \boldsymbol{d}\boldsymbol{\alpha}+\rho_{*}(\boldsymbol{\varpi})\boldsymbol{\alpha}\). This is the first order linear operator adapted to twisted equivariant/tensorial forms.
As a relevant example, consider \(Z=\exp iS\in\Omega^{0}_{\mathrm{eq}}(\Phi,C)\) as defined earlier to illustrate (50): we claim that \(\tilde{\boldsymbol{D}}Z\in\Omega^{1}_{\mathrm{tens}}(\Phi,C)\). Indeed, in this case \(\boldsymbol{\varpi}\in\Omega^{1}_{\mathrm{eq}}(\Phi,\mathbb{R})\), so (88) gives \(\boldsymbol{\varpi}_{|\phi}(X^{v}_{|\phi})=a(X;\phi)=-i\int\alpha(X;\phi)\) - the classical/quantum \(\mathrm{Diff}(M)\)-anomaly - and \(R^{\star}_{\psi}\boldsymbol{\varpi}_{|\phi^{\psi}}=\boldsymbol{\varpi}_{|\phi}-i\boldsymbol{d}C(\ ;\psi)_{|\phi}\). From this it is easily verified that \(R^{\star}_{\psi}\tilde{\boldsymbol{D}}Z=C(\phi;\psi)^{-1}\tilde{\boldsymbol{D}}Z\) and \(\tilde{\boldsymbol{D}}Z(X^{v})=0\). We may remark that, in this case, \(\tilde{\boldsymbol{D}}Z=(i\boldsymbol{d}S+\boldsymbol{\varpi})Z\). Now since \(Z\) and \(\tilde{\boldsymbol{D}}Z\) are both tensorial, it means that
\[i\boldsymbol{d}S+\boldsymbol{\varpi}\ \in\Omega^{1}_{\mathrm{basic}}(\Phi). \tag{91}\]
This is a kind of generalisation of the Wess-Zumino trick to build counterterms and "improve" an action by restoring its gauge invariance: the original trick is recovered for a flat \(\boldsymbol{\varpi}_{\circ}\), equivalent to a dressing field, as we show in section 3.1 below. Conversely, the WZ trick is seen to come from a special case of the twisted covariant derivative.
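For completeness, the horizontality claim \(\tilde{\boldsymbol{D}}Z(X^{v})=0\) used above is a one-line check, relying only on the twisted tensoriality of \(Z\), \(\boldsymbol{L}_{X^{v}}Z=-a(X;\phi)Z\), and the verticality property (88) of \(\boldsymbol{\varpi}\):
\[\tilde{\boldsymbol{D}}Z\,(X^{v})=\iota_{X^{v}}\boldsymbol{d}Z+\boldsymbol{\varpi}(X^{v})\,Z=-a(X;\phi)\,Z+a(X;\phi)\,Z=0.\]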
Contrary to an Ehresmann connection, a twisted connection does not define an equivariant horizontal distribution on \(\Phi\). Yet, its curvature 2-form can be defined, via the Cartan structure equation
\[\tilde{\boldsymbol{\Omega}}:=\boldsymbol{d}\boldsymbol{\varpi}+\tfrac{1}{2}[ \boldsymbol{\varpi},\boldsymbol{\varpi}]_{\mathfrak{g}}\ \in\Omega^{2}_{\mathrm{tens}}(\Phi,\mathfrak{g}) \tag{92}\]
It satisfies the Bianchi identity, \(\tilde{\boldsymbol{D}}\tilde{\boldsymbol{\Omega}}=\boldsymbol{d}\tilde{\boldsymbol{\Omega}}+[\boldsymbol{\varpi},\tilde{\boldsymbol{\Omega}}]_{\mathfrak{g}}=0\). And it is still true that \(\tilde{\boldsymbol{D}}\circ\tilde{\boldsymbol{D}}=\rho_{*}(\tilde{\boldsymbol{\Omega}})\). We may observe that the horizontality of \(\tilde{\boldsymbol{\Omega}}\) encodes interesting information. For \(X^{v}\), \(Y^{v}\in\Gamma(V\Phi)\) with \(X,Y\in\mathfrak{diff}(M)\), we have
\[\tilde{\boldsymbol{\Omega}}(X^{\nu},Y^{\nu}) =X^{\nu}(\omega(Y^{\nu}))-Y^{\nu}(\omega(X^{\nu}))-\omega([X^{ \nu},Y^{\nu}])+[\omega(X^{\nu}),\omega(Y^{\nu})]_{\mathfrak{g}}\] \[0 =X^{\nu}(a(Y;\phi))-Y^{\nu}(a(X;\phi))-a([X,Y]_{\mathfrak{en}(M)}; \phi)+[a(X;\phi),a(Y;\phi)]_{\mathfrak{g}}, \tag{93}\]
which reproduces the infinitesimal 1-cocycle property (48) generalizing the Wess-Zumino consistency condition on gauge/Diff(\(M\)) anomalies.
Given the defining equivariance and verticality properties (88) of a twisted connection, one shows using (26)/(56), that its general vertical transformation under \(\mathbf{Diff}_{\nu}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\) is
\[\boldsymbol{\varpi}^{\phi}:=\Xi^{\star}\boldsymbol{\varpi}=\mathrm{Ad}\big{(} C(\boldsymbol{\psi})^{-1}\big{)}\,\boldsymbol{\varpi}+C(\boldsymbol{\psi})^{-1} \boldsymbol{d}C(\boldsymbol{\psi}), \tag{94}\]
where we remind the short-hand notation \([C(\mathbf{\psi})](\phi):=C(\phi;\mathbf{\psi}(\phi))\). In case \(G=\mathrm{Diff}(M)\) the above specialises to: \(\mathbf{\varpi}^{\mathbf{\psi}}=C(\mathbf{\psi})_{-}^{-1}\mathbf{\varpi}\circ C(\mathbf{\psi})+C( \mathbf{\psi})_{*}^{-1}\mathbf{d}C(\mathbf{\psi})\). Gauge transformations under \(\mathbf{Aut}_{v}(\Phi)\simeq\mathbf{Diff}(M)\) are given by the same formula: The difference being noticed upon repeated transformations of each type, as stressed by (53)-(54) in section 2.3. Infinitesimally, transformations of a twisted connection under \(\mathbf{bif}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathsf{bif}|(M))\) are given by the Nijenhuis-Lie derivative,
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\varpi}=\mathbf{d}a(\mathbf{X})+[\mathbf{\varpi},a(\mathbf{X})]_{\text{s}}. \tag{95}\]
where we remind that \([a(\mathbf{X})](\phi):=a(\mathbf{X}(\phi);\phi)\). Infinitesimal gauge transformations, under \(\mathbf{aut}_{v}(\Phi)\simeq\mathbf{bif}|(M)\), are given by the same relation. The difference being seen upon iteration, as reflected by the commutation property (25)/(40) of the Nijenhuis-Lie derivative along \(\mathbf{bif}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathsf{bif}|(M))\), \([\mathbf{L}_{\mathbf{X}^{v}},\mathbf{L}_{\mathbf{Y}^{v}}]=L_{[\mathbf{X}^{v},\mathbf{Y}^{v}]_{\mathbf{X} }}=L_{[\mathbf{X},\mathbf{Y}]^{v}}\), which, in the case \(\mathbf{aut}_{v}(\Phi)\simeq\mathbf{bif}|(M)\), simplifies \([\mathbf{L}_{\mathbf{X}^{v}},\mathbf{L}_{\mathbf{Y}^{v}}]=L_{[-[\mathbf{X},\mathbf{Y}]_{\text{bif}|(M )},v^{v}}=\mathbf{L}_{[\mathbf{X},\mathbf{Y}]_{\text{bif}|(M)},v^{v}}\) - as we observed in section 2.3.
Finite and infinitesimal general vertical transformations of the curvature are given by,
\[\bar{\mathbf{\Omega}}^{\mathbf{\psi}}:=\Xi^{*}\bar{\mathbf{\Omega}}=\mathrm{Ad}(C(\mathbf{\psi })^{-1})\ \bar{\mathbf{\Omega}},\qquad\text{so}\qquad\mathbf{L}_{\mathbf{X}^{v}}\bar{\mathbf{\Omega}}=[ \bar{\mathbf{\Omega}},\mathbf{a}(\mathbf{X})]_{\text{s}}. \tag{96}\]
or, \(\bar{\mathbf{\Omega}}^{\mathbf{\psi}}:=\Xi^{*}\bar{\mathbf{\Omega}}=\mathbf{\psi}_{-}^{-1}\bar {\mathbf{\Omega}}\circ\mathbf{\psi}\) and \(\mathbf{L}_{\mathbf{X}^{v}}\bar{\mathbf{\Omega}}=[\bar{\mathbf{\Omega}},\mathbf{X}]_{\text{aif}|(M)}\) in the case \(G=\mathrm{Diff}(M)\). These are special cases of (57) and (61). The same hold for its gauge transformations, with the usual caveat.
For \(\mathbf{X}^{v},\mathbf{Y}^{v}\in\mathbf{bif}_{v}(\Phi)\) with \(\mathbf{X},\mathbf{Y}\in C^{\infty}(\Phi,\mathsf{bif}|(M))\), we have
\[\bar{\mathbf{\Omega}}(\mathbf{X}^{v},\mathbf{Y}^{v}) =\mathbf{X}^{v}(a(\mathbf{Y}^{v}))-\mathbf{Y}^{v}(\omega(\mathbf{X}^{v}))-\omega( [\mathbf{X}^{v},\mathbf{Y}^{v}])+[\omega(\mathbf{X}^{v}),\omega(\mathbf{Y}^{v})]_{\text{s}} \tag{97}\] \[0 =\mathbf{X}^{v}(a(\mathbf{Y};\phi))-\mathbf{Y}^{v}(a(\mathbf{X};\phi))-a((\mathbf{X}, \mathbf{Y}];\phi)+[a(\mathbf{X};\phi),a(\mathbf{Y};\phi)]_{\text{s}},\] \[0 =\mathbf{X}^{v}(a(\underline{\mathbf{Y}};\phi))-\mathbf{Y}^{v}(a(\underline{ \mathbf{X}};\phi))-a([\mathbf{X},\mathbf{Y}]_{\text{aif}|(M)};\phi)+[a(\mathbf{X};\phi),a(\bm {Y};\phi)]_{\text{s}},\]
where the FN bracket (37)/(23) is used. This reproduces (62), where the notation of the last line was first used. As the relation holds for \(\mathbf{X}^{v},\mathbf{Y}^{v}\in\mathbf{aut}_{v}(\Phi)\), the horizontality properties (93)-(97) of the twisted curvature therefore encode the same generalisation of the Wess-Zumino consistency condition.
### Associated bundles, fundamental representation of \(\mathrm{Diff}(M)\), and integration on \(M\)
Starting from a principal bundle, one of the most natural constructions one can perform is that of its associated bundles. The standard theory specifies how a class of them is built from representations of the structure group, while a natural extension of this classic construction involves the use of 1-cocycles for the action of the structure group instead of representations [30]. Let us consider them in turn in our case, where the principal bundle is \(\Phi\) with structure group \(\mathrm{Diff}(M)\).
Given a representation space \((\rho,\mathbf{V})\) of \(\mathrm{Diff}(M)\), consider the direct product space \(\Phi\times\mathbf{V}\), with the two natural projections: \(\pi_{\Phi}:\Phi\times\mathbf{V}\to\Phi\) and \(\pi_{\mathbf{V}}:\Phi\times\mathbf{V}\to\mathbf{V}\). One defines a right action of \(\mathrm{Diff}(M)\) on \(\Phi\times\mathbf{V}\) by:
\[(\Phi\times\mathbf{V})\times\mathrm{Diff}(M) \to\Phi\times\mathbf{V}, \tag{98}\] \[((\phi,v),\psi) \mapsto\left(\psi^{*}\phi,\,\rho(\psi)^{-1}v\right)=(R_{\psi} \phi,\,\rho(\psi)^{-1}v)=:\bar{R}_{\psi}(\phi,v).\]
The bundle \(\mathbf{E}\) associated to \(\Phi\) via the representation \(\rho\) is then defined as the quotient of the product space \(\Phi\times\mathbf{V}\) by this right action:
\[\mathbf{E}=\Phi\times_{\rho}\mathbf{V}:=\Phi\times\mathbf{V}/\sim \tag{99}\]
where \((\phi^{\prime},v^{\prime})\sim(\phi,v)\) when \(\exists\,\psi\in\mathrm{Diff}(M)\) s.t. \((\phi^{\prime},v^{\prime})=\bar{R}_{\psi}(\phi,v)\). We may write \(\bar{\pi}_{\mathbf{E}}:\Phi\times\mathbf{V}\to\mathbf{E}\). A point in \(\mathbf{E}\) is thus an equivalence class \(e=[\phi,v]\). The projection of \(\mathbf{E}\xrightarrow{\pi_{\mathbf{E}}}\mathcal{M}\) is defined by \(\pi_{\mathbf{E}}([\phi,v]):=\pi(\phi)=[\phi]\). It is a standard result of bundle theory that sections of \(\mathbf{E}\) are in 1:1 correspondence with \(V\)-valued \(\rho\)-equivariant functions on \(\Phi\):
\[\Gamma(\mathbf{E}):=\{\mathbf{s}:\mathcal{M}\to\mathbf{E}\}\ \ \simeq\ \ \Omega_{\text{eq}}^{0}(\Phi,\rho):=\{\mathbf{\varphi}:\Phi\to\mathbf{V}\,|\,R_{\psi}^{ \star}\mathbf{\varphi}=\rho(\psi)^{-1}\mathbf{\varphi}\}. \tag{100}\]
The isomorphism being \(\mathbf{s}([\phi])=[\phi,\mathbf{\varphi}(\phi)]\). Naturally, equivariant functions are tensorial 0-forms, and like all equivariant forms their vertical transformations (gauge transformations) are given by (57)-(61):
\[\mathbf{\varphi}^{\mathbf{\psi}}=\rho(\mathbf{\psi})^{-1}\mathbf{\varphi},\quad\text{ and }\quad\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\varphi}=-\rho_{*}(\mathbf{X})\mathbf{\varphi}. \tag{101}\]
for \(\mathbf{\psi}\in C^{\infty}(\Phi,\mathrm{Diff}(M))\simeq\mathbf{Diff}_{v}(\Phi)\) and \(\mathbf{X}\in C^{\infty}(\Phi,\mathfrak{diff}(M))\simeq\mathbf{diff}_{v}(\Phi)\). A principal connection as discussed in section 2.4.1 is needed for their covariant differentiation.
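As a quick check (ours) that the correspondence \(\mathbf{s}([\phi])=[\phi,\mathbf{\varphi}(\phi)]\) is well defined on equivalence classes: replacing the representative \(\phi\) by \(\phi^{\psi}\) and using the equivariance of \(\mathbf{\varphi}\),
\[[\phi^{\psi},\,\mathbf{\varphi}(\phi^{\psi})]=[\phi^{\psi},\,\rho(\psi)^{-1}\mathbf{\varphi}(\phi)]=[\bar{R}_{\psi}(\phi,\mathbf{\varphi}(\phi))]=[\phi,\,\mathbf{\varphi}(\phi)],\]
so the section does not depend on the representative chosen in the class \([\phi]\).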
The right action of \(\mathrm{bit}(M)\) on \(\Phi\times\mathbf{V}\) defines the vertical subbundle \(V(\Phi\times\mathbf{V})\simeq V\Phi\oplus V\mathbf{V}\subset T(\Phi\times\mathbf{V}) \simeq T\Phi\oplus T\mathbf{V}\). By linearisation of (98), it is:
\[\begin{split}(\Phi\times\mathbf{V})\times\mathrm{bit}(M)& \to\,V(\Phi\times\mathbf{V})\simeq V\Phi\oplus V\mathbf{V},\\ ((\phi,v),X)\mapsto&\,X^{v}_{|(\phi,v)}:=\frac{d}{d \tau}\bar{R}_{\phi_{t}}(\phi,v)\,\big{|}_{\tau=0}&=\frac{d}{d \tau}\big{(}\psi^{+}_{\tau}\phi,\,\rho(\psi_{\tau})^{-1}v\big{)}\,\big{|}_{\tau =0}&=\big{(}\mathfrak{L}_{X}\phi,v\big{)}\oplus\big{(}\phi,- \rho_{*}(X)v\big{)},\\ &=\big{(}(\phi,v);(\mathfrak{L}_{X}\phi,-\rho_{*}(X)v)\big{)}\\ &=\big{(}\phi;\mathfrak{L}_{X}\phi\big{)}\oplus\big{(}v;-\rho_{*} (X)v\big{)}&=:X^{v}_{|\phi}+X^{v}_{|v}.\end{split} \tag{102}\]
Therefore, the tangent bundle of \(\mathbf{E}\) is defined as \(\mathbf{TE}:=T(\Phi\times\mathbf{V})/V(\Phi\times\mathbf{V})\)
The whole construction carries through if one replaces the representation \(\rho\) by a \(1\)-cocycle \(\,C\colon\Phi\times\mathrm{Diff}(M)\to G\), as defined by (46) in section 2.2.2. If \(\mathbf{V}\) is a \(G\)-space, one may define a right action of \(\mathrm{Diff}(M)\) on \(\Phi\times\mathbf{V}\) via: \((\Phi\times\mathbf{V})\times\mathrm{Diff}(M)\to\Phi\times\mathbf{V}\), \(((\phi,v),\psi)\mapsto\bar{R}_{\phi}(\phi,v)=(\psi^{+}\phi,\,C(\phi;\psi)^{-1}v)\). The _twisted_ bundle \(\mathbf{\widetilde{E}}\to\mathcal{M}\) associated to \(\Phi\) via the \(1\)-cocycle \(C\) is then defined as: \(\mathbf{\widetilde{E}}=\Phi\times_{C}\mathbf{V}:=\Phi\times\mathbf{V}/\sim\), with \((\psi^{+}\phi,\,C(\phi;\psi)^{-1}v)\sim(\phi,v)\). As above we have the isomorphism,
\[\Gamma(\mathbf{\widetilde{E}}):=\{\mathfrak{s}:\mathcal{M}\to\mathbf{\widetilde{E}}\} \leavevmode\nobreak\ \leavevmode\nobreak\ \simeq\leavevmode\nobreak\ \leavevmode\nobreak\ \Omega^{0}_{\mathrm{eq}}(\Phi,C):=\{\tilde{\mathbf{\varphi}}:\Phi\to V\,|\,R^{+}_ {\phi}\tilde{\mathbf{\varphi}}=C(\phi;\psi)^{-1}\tilde{\mathbf{\varphi}}\}, \tag{103}\]
where the latter is the space of _twisted_ equivariant function on \(\Phi\). As we highlighted in section 2.2.2, neglecting the action of \(\mathrm{Diff}(M)\) on the integration domain, the action functional \(S\) of a theory defines a twisted equivariant function \(Z=\exp(iS)\) on \(\Phi\) - i.e. a section of a twisted associated (\(\mathbb{C}\)-line) bundle. See [8; 9; 30]. In this paper, we adopt the more natural stance taking into account the natural action of \(\mathrm{Diff}(M)\) on integration domains,15 see below. The vertical transformation of such twisted equivariant functions is given by (57)-(61). A twisted connection as discussed in section 2.4.2 is needed for their covariant differentiation.
Footnote 15: Under which quantum functionals \(Z=\int d\phi\,\exp(\frac{i}{\hbar}S)\) remains twisted equivariant, i.e. sections of a twisted \(\mathbb{C}\)-bundle. See footnote 9.
#### 2.5.1 Associated bundle of regions
There is a particular bundle, \(\mathbf{E}=\bar{\mathbf{U}}(M)\), canonically associated to \(\Phi\), as it is built via the _defining representation_ of its structure group \(\mathrm{Diff}(M)\):16 the field (or \(\sigma\)-algebra) of open sets of \(M\), \(\mathbf{V}=\mathbf{U}(M):=\{U\subset M\,|\,U\text{ open set}\}\).17 The right action of \(\mathrm{Diff}(M)\) on the product space \(\Phi\times\mathbf{U}(M)\) is,
Footnote 16: In that respect it is the conceptual equivalent of e.g. \(TM\) for the frame bundle \(LM\) of \(M\). The tangent bundle can indeed be seen as associated to \(LM\) via the defining representation \(\mathbb{R}^{\oplus}\) of its structure group \(GL(n)\): \(TM=LM\times_{GL(n)}\mathbb{R}^{n}\).
Footnote 17: Purists will notice that this highlights the fact that \(\mathrm{Diff}(M)\) is a (Lie) pseudo-group rather than a (Lie) group. See e.g. [70]. Indeed, one may aim to generalise the whole formalism presented here to pseudo-groups, or even groupoids.
\[(\Phi\times\mathbf{U}(M))\times\mathrm{Diff}(M) \to\Phi\times\mathbf{U}(M), \tag{104}\] \[\big{(}(\phi,U),\psi\big{)} \mapsto\bar{R}_{\phi}(\phi,U):=\big{(}\psi^{+}\phi,\psi^{-1}(U) \big{)}.\]
Notice that the right action on \(\mathbf{U}(M)\), \(R_{\psi}U=\psi^{-1}(U)\) (preimage convention), instead of the familiar defining left action \(L_{\psi}U=\psi(U)\) (direct image convention), is not accidental here but required by the bundle geometry framework (where the right action convention is now settled standard practice). The associated bundle is thus:
\[\bar{\mathbf{U}}(M)=\Phi\times_{\mathrm{Diff}(M)}\mathbf{U}(M):=\Phi\times\mathbf{U}(M)/\sim. \tag{105}\]
Call it the "_associated bundle of regions_" of \(M\). Its space of sections \(\Gamma(\bar{\mathbf{U}}(M)):=\{\bar{\mathbf{s}}:\mathcal{M}\to\bar{\mathbf{U}}(M)\}\) is isomorphic to
\[\Omega^{0}_{\mathrm{eq}}(\Phi,\mathbf{U}(M)):=\{\mathbf{U}:\Phi\to\mathbf{U}(M)\,|\,R^{\star}_{\psi}\mathbf{U}=\psi^{-1}(\mathbf{U})\}. \tag{106}\]
We may interpret \(\phi\to\mathbf{U}(\phi)\) as being "field-dependent" open sets of \(M\), or yet, regions of \(M\) defined in a "\(\phi\)-_relative_" - and \(\mathrm{Diff}(M)\)-equivariant - way. Their transformations under \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\) and \(\mathbf{diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathbf{bit}(M))\) are respectively, by (57)-(61):
\[\mathbf{U}^{\mathbf{\psi}}=\mathbf{\psi}^{-1}(\mathbf{U}),\quad\text{ and }\quad\mathbf{L}_{\mathbf{X}^{v}}\mathbf{U}=-\mathbf{X}(\mathbf{U}). \tag{107}\]
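A minimal illustration (ours, not drawn from the main development): if the field content includes a scalar field \(\phi\), the region where it is positive is such a \(\phi\)-relative open set, since
\[\mathbf{U}(\phi):=\{x\in M\,|\,\phi(x)>0\}\quad\Rightarrow\quad\mathbf{U}(\phi^{\psi})=\mathbf{U}(\psi^{\ast}\phi)=\{x\in M\,|\,\phi(\psi(x))>0\}=\psi^{-1}\big(\mathbf{U}(\phi)\big),\]
which is exactly the equivariance property (106).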
An example of such equivariant functions is provided by a familiar operation: integration. To show this, in the next section, we frame integration as a special case of a natural construction over the product space \(\Phi\times\mathbf{U}(M)\).
#### 2.5.2 Integration map
To obtain an associated bundle, we defined above a right action of \(\mathrm{Diff}(M)\) on \(\Phi\times\mathbf{U}(M)\): \(\bar{R}_{\psi}(\phi,v):=(\psi^{*}\phi,\,\rho(\psi)^{-1}v)\). Let us now also define a corresponding action of \(C^{\infty}(\Phi,\mathrm{Diff}(M))\simeq\mathbf{Diff}_{v}(\Phi)\):
\[(\Phi\times V)\times C^{\infty}(\Phi,\mathrm{Diff}(M)) \to\Phi\times\mathbf{V}, \tag{108}\] \[((\phi,v),\psi) \mapsto(\psi^{*}\phi,\,\rho(\psi)^{-1}v)=(\Xi(\phi),\,\rho(\psi)^{ -1}v)=:\tilde{\Xi}(\phi,v).\]
It is easily checked that in particular, for \(\mathbf{\psi}\in\mathbf{Diff}(M)\), we have \(\tilde{\Xi}\circ\bar{R}_{\psi}=\bar{R}_{\psi}\circ\tilde{\Xi}\). The action of \(C^{\infty}(\Phi,\mathrm{biff}(M))\simeq\mathbf{biff}_{v}(\Phi)\) may be seen as the linearisation of the above, \((\Phi\times\mathbf{V})\times C^{\infty}(\Phi,\mathrm{biff}(M))\to V(\Phi\times\mathbf{V })\simeq V\Phi\oplus\mathbf{V}\mathbf{V}\subset T(P\times\mathbf{V})\), which is as (102) under the replacement \(X\to\mathbf{X}\).
The induced actions of \(\mathrm{Diff}(M)\) and \(C^{\infty}(\Phi,\mathrm{Diff}(M))\simeq\mathbf{Diff}_{v}(\Phi)\) on the space \(\Omega^{\bullet}(\Phi)\times\mathbf{V}\) are
\[\begin{split}(\Omega^{\bullet}(\Phi)\times\mathbf{V})\times\mathrm{ Diff}(M)&\to\Omega^{\bullet}(\Phi)\times\mathbf{V},\\ ((\alpha,v),\psi)&\mapsto(R_{\psi}^{\star}\alpha,\, \rho(\psi)^{-1}v)=:\bar{R}_{\psi}(\alpha,v).\end{split} \tag{109}\]
and
\[\begin{split}(\Omega^{\bullet}(\Phi)\times\mathbf{V})\times C^{ \infty}(\Phi,\mathrm{Diff}(M))&\to\Omega^{\bullet}(\Phi)\times \mathbf{V},\\ ((\alpha,v),\psi)&\mapsto(\Xi^{\star}\alpha,\, \rho(\psi)^{-1}v)=:\tilde{\Xi}(\alpha,v).\end{split} \tag{110}\]
The induced actions of \(\mathrm{bif}(M)\) and \(C^{\infty}(\Phi,\mathrm{bif}|(M))\simeq\mathbf{bif}_{v}(\Phi)\) are the linearisations:
\[\begin{split}((\alpha,v),X)&\mapsto\frac{d}{d\tau }\bar{R}_{\psi},(\alpha,v)\,\Big{|}_{\tau=0}=(\mathbf{L}_{\mathbf{X}^{\star}}\alpha,\,v )\oplus(\alpha,\,-\rho_{*}(X)v),\\ ((\alpha,v),\mathbf{X})&\mapsto\frac{d}{d\tau}\tilde{ \Xi}_{\tau}(\alpha,v)\,\Big{|}_{\tau=0}=(\mathbf{L}_{\mathbf{X}^{\star}}\alpha,\,v) \oplus(\alpha,\,-\rho_{*}(\mathbf{X})v)\end{split} \tag{111}\]
We further observe that given a representation \((\tilde{\mathbf{\rho}},\mathbf{W})\) of \(\mathrm{Diff}(M)\):
\[\begin{split}&\text{if}\;\;\mathbf{\alpha}\in\Omega^{\bullet}_{ \mathrm{col}}(\Phi,\mathbf{W})\;\;\text{then}\;\;\bar{R}_{\psi}(\alpha,v)=(R_{\psi }^{\star}\mathbf{\alpha},\,\rho(\psi)^{-1}v)=(\tilde{\rho}(\psi)^{-1}\alpha,\, \rho(\psi)^{-1}v),\\ &\text{if}\;\;\mathbf{\alpha}\in\Omega^{\bullet}_{\mathrm{tems}}( \Phi,\mathbf{W})\;\;\text{then}\;\;\tilde{\Xi}(\alpha,v)=(\Xi^{\star}\alpha,\, \rho(\psi)^{-1}v)=(\tilde{\rho}(\psi)^{-1}\alpha,\,\rho(\psi)^{-1}v),\end{split} \tag{112}\]
Linear versions are obviously read from (111). Remark that the exterior derivative \(\mathbf{d}\) on \(\Phi\) a priori extends to \(\Phi\times\mathbf{V}\) as \(\mathbf{d}\to\mathbf{\bar{d}}=\mathbf{d}\times\mathrm{id}\), which we may still write \(\mathbf{d}\) for simplicity. Yet, under the action of \(C^{\infty}(\Phi,\mathfrak{diff}(M))\simeq\mathfrak{diff}_{v}(\Phi)\), it will also act on the second factor \(\rho(\mathbf{\psi})^{-1}v\) due to the \(\phi\)-dependence of \(\mathbf{\psi}\).
Consider \((\tilde{\mathbf{\rho}},\mathbf{V}^{*})\) a representation of \(\mathrm{Diff}(M)\)_dual_ to \((\rho,\mathbf{V})\) w.r.t. a non-degenerate \(\mathrm{Diff}(M)\)-invariant _pairing_
\[\begin{split}\langle\;,\;\rangle:\mathbf{V}^{*}\times\mathbf{V}& \to\mathbb{R},\\ (w,v)&\mapsto\langle w,v\rangle,\quad\text{s.t.}\quad \langle\bar{\rho}(\psi)w,\rho(\psi)v\rangle=\langle w,v\rangle.\end{split} \tag{113}\]
Invariance of the pairing means that under \(\mathfrak{diff}(M)\), for which we have the induced representations \(\bar{\rho}_{*}\) and \(\rho_{*}\), the following identity holds:
\[\langle\bar{\rho}_{*}(X)w,v\rangle+\langle\,w,\rho_{*}(X)v\rangle=0 \tag{114}\]
For \(\mathbf{\alpha}\in\Omega^{\bullet}(\Phi,\mathbf{V}^{*})\), let us then define the operation \(\mathcal{I}\) on \(\Omega^{\bullet}(\Phi,\mathbf{V}^{*})\times\mathbf{V}\) by,
\[\begin{split}\mathcal{I}:\Omega^{\bullet}(\Phi,\mathbf{V}^{*})\times \mathbf{V}&\to\Omega^{\bullet}(\Phi),\\ (\mathbf{\alpha},v)&\mapsto\mathcal{I}(\mathbf{\alpha},v): =\langle\mathbf{\alpha},v\rangle.\end{split} \tag{115}\]
Which can also be seen as an object on \(\Phi\times\mathbf{V}\) by
\[\begin{split}\mathcal{I}(\mathbf{\alpha},\;):\Phi\times\mathbf{V}& \to\Lambda^{\bullet}(\Phi),\\ (\phi,v)&\mapsto\mathcal{I}(\mathbf{\alpha}_{|\phi},v): =\langle\mathbf{\alpha}_{|\phi},v\rangle.\end{split} \tag{116}\]
Naturally, we thus have:
\[\mathbf{d}\mathcal{I}(\mathbf{\alpha},\;)=\mathcal{I}(\mathbf{d}\mathbf{\alpha},\;),\quad \text{ and }\quad\iota_{\bar{x}}\mathcal{I}(\mathbf{\alpha},\;)=\mathcal{I}(\iota_{\bar{x}} \alpha,\;)\quad\text{ for }\bar{x}\in\Gamma(T\Phi). \tag{117}\]
The induced actions of \(\mathrm{Diff}(M)\) and \(C^{\infty}(\Phi,\mathrm{Diff}(M))\simeq\mathrm{Diff}_{\nu}(\Phi)\) on such an object are:
\[\begin{split}&\tilde{R}^{\star}_{\psi}\mathcal{I}(\alpha,\ )_{|(\psi^{+}\phi,\ \rho(\psi)^{-1}v)}:=\langle\,\ \rangle\circ\tilde{R}_{\psi}(\alpha,v)=\langle R^{\star}_{\psi}\alpha_{\psi^{+} \phi},\ \rho(\psi)^{-1}v\rangle,\\ &\tilde{\Xi}^{\star}\mathcal{I}(\alpha,\ )_{|(\Xi(\phi),\ \rho(\psi)^{-1}v)}:=\langle\,\ \rangle\circ\tilde{\Xi}(\alpha,v)=\langle\Xi^{\star}\alpha_{|\Xi(\phi)},\ \rho(\psi)^{-1}v\rangle.\end{split} \tag{118}\]
The induced actions of \(\mathfrak{diff}(M)\) and \(C^{\infty}(\Phi,\mathfrak{diff}(M))\simeq\mathfrak{diff}_{v}(\Phi)\) are thus (with simplified but obvious notation):
\[\begin{split}&\tfrac{d}{d\tau}\ \tilde{R}^{\star}_{\psi, \tau}\mathcal{I}(\alpha,v)\,\Big{|}_{\tau=0}=\langle\mathbf{L}_{X^{\star}}\alpha, \ v\rangle+\langle\mathbf{\alpha},\ -\rho_{\star}(X)v\rangle,\\ &\tfrac{d}{d\tau}\ \tilde{\Xi}^{\star}_{\tau}\mathcal{I}( \alpha,v)\,\Big{|}_{\tau=0}=\langle\mathbf{L}_{X^{\star}}\alpha,\ v\rangle+ \langle\mathbf{\alpha},\ -\rho_{\star}(\mathbf{X})v\rangle.\end{split} \tag{119}\]
We then observe that, if \(\mathbf{\alpha}\in\Omega^{\star}_{\mathrm{eq}}(\Phi,\mathbf{V}^{\star})\):
\[\begin{split}\tilde{R}^{\star}_{\psi}\mathcal{I}(\alpha,\ )_{|(\psi^{+}\phi,\ \rho(\psi)^{-1}v)}&:=\langle R^{\star}_{\psi}\alpha_{|\psi^{+} \phi},\ \rho(\psi)^{-1}v\rangle,\\ &=\langle\tilde{\varphi}(\psi)^{-1}\alpha_{|\phi},\ \rho(\psi)^{-1}v\rangle=\langle\alpha_{|\phi},\ v\rangle=:\mathcal{I}(\alpha, \ )_{|(\phi,\nu)}.\end{split} \tag{120}\]
From which follows, by (119), the identity:
\[\begin{split}\langle\mathbf{L}_{X^{\star}}\alpha,\ v\rangle+\langle\mathbf{\alpha},\ -\rho_{\star}(X)v\rangle&=0,\qquad X\in\mathfrak{diff}(M),\\ \langle-\bar{\rho}_{\star}(X)\,\alpha,\ v\rangle+\langle\mathbf{\alpha},\ -\rho_{\star}(X)v\rangle&=0.\end{split} \tag{121}\]
If \(\mathbf{\alpha}\in\Omega^{\star}_{\mathrm{tens}}(\Phi,\mathbf{V}^{*})\):
\[\begin{split}\tilde{\Xi}^{\star}\mathcal{I}(\alpha,\ )_{|(\Xi(\phi),\ \rho(\psi)^{-1}v)}&:=\langle\Xi^{\star}\alpha_{|\Xi(\phi)},\ \rho(\psi)^{-1}v\rangle,\\ &=\langle\bar{\rho}(\mathbf{\psi})^{-1}\alpha_{|\phi},\ \rho(\mathbf{\psi})^{-1}v\rangle=\langle\alpha_{|\phi},v\rangle=:\mathcal{I}(\alpha,\ )_{|(\phi,v)}.\end{split} \tag{122}\]
From which follows, by (119), the identity:
\[\begin{split}\langle\mathbf{L}_{\mathbf{X}^{\star}}\alpha,\ v\rangle+\langle\mathbf{\alpha},\ -\rho_{\star}(\mathbf{X})v\rangle&=0,\qquad\mathbf{X}\in C^{\infty}(\Phi,\mathfrak{diff}(M)),\\ \langle-\bar{\rho}_{\star}(\mathbf{X})\,\alpha,\ v\rangle+\langle\mathbf{\alpha},\ -\rho_{\star}(\mathbf{X})v\rangle&=0.\end{split} \tag{123}\]
This last case, \(\mathbf{\alpha}\) tensorial, ensures that \(\mathcal{I}(\alpha,\ )\) is "basic" on \(\Phi\times\mathbf{V}\), thus well-defined as an object on \(\mathbf{E}=\Phi\times\mathbf{V}/\sim\). Being constant along a \(\mathrm{Diff}(M)\)-orbit in \(\Phi\times\mathbf{V}\), \(\mathcal{I}(\alpha,\ )\) then allows one to define \(\varphi_{\mathcal{I}(\alpha)}\in\Omega^{0}_{\mathrm{eq}}(\Phi,\rho)\) via the prescription:
\[\begin{split}\varphi_{\mathcal{I}(\alpha)}(\phi)&:=\pi_{\mathbf{V}}(\phi,v)_{|\mathcal{I}(\alpha)=cst}=v,\\ \varphi_{\mathcal{I}(\alpha)}(\psi^{*}\phi)&:=\pi_{\mathbf{V}}(\psi^{*}\phi,\rho(\psi)^{-1}v)_{|\mathcal{I}(\alpha)=cst}=\rho(\psi)^{-1}v.\end{split} \tag{124}\]
For \(\mathbf{\alpha}\) tensorial one also has that \(\mathbf{d}\big(\mathcal{I}(\mathbf{\alpha},\ )^{\mathbf{\psi}}\big)=\mathbf{d}\,\mathcal{I}(\mathbf{\alpha},\ )\), i.e.
\[\mathbf{d}\,\langle\bar{\rho}(\mathbf{\psi})^{-1}\mathbf{\alpha},\,\rho(\mathbf{\psi})^{-1}v\rangle=\langle\mathbf{d}\mathbf{\alpha},\,v\rangle=\mathbf{d}\langle\mathbf{\alpha},\,v\rangle, \tag{125}\]
which is proven using the special form (75) of the lemma (74) and concluding by the invariance property (114).
The same computation also supplies the following identity:
\[\begin{split}\tilde{\Xi}^{\star}\langle\mathbf{d}\alpha,v\rangle& =\langle\mathbf{d}\alpha,v\rangle+\langle\mathbf{\alpha},-\rho_{\star}(\bm {d}\psi\circ\psi^{-1})v\rangle,\\ &=\langle\mathbf{d}\alpha,v\rangle+\langle\bar{\rho}_{\star}(\mathbf{d} \psi\circ\psi^{-1})\alpha,v\rangle.\end{split} \tag{126}\]
Let us specialise the above construction, considering the "fundamental" representation \(\mathbf{V}=\mathbf{U}(M)\) on which \(\mathrm{Diff}(M)\) acts via the preimage convention, and the representation \(\mathbf{V}^{*}=\Omega^{\mathrm{top}}(U)\) of volume forms on \(U\in\mathbf{U}(M)\) on which \(\mathrm{Diff}(M)\) acts by pullback. These are dual under the invariant _integration pairing_:
\[\begin{split}\langle\,,\,\,\rangle:\Omega^{\mathrm{top}}(U)\times\mathbf{U}(M)&\to\mathbb{R},\\ (\omega,U)&\mapsto\langle\omega,U\rangle:=\int_{U}\omega.\end{split} \tag{127}\]
The invariance property assumes the familiar form:
\[\langle\,\psi^{\star}\omega,\psi^{-1}(U)\rangle=\langle\omega,U\rangle\quad \to\quad\int_{\psi^{-1}(U)}\psi^{\star}\omega=\int_{U}\omega. \tag{128}\]
This, as a special case of (114) with \(-\bar{\rho}_{\star}(X)=\mathfrak{L}_{X}\) and \(-\rho_{\star}(X)=-X\), gives the identity:
\[\langle\,\mathfrak{L}_{X}\omega,U\rangle+\langle\,\omega,-X(U)\rangle=0\quad \to\quad\int_{U}\mathfrak{L}_{X}\omega+\int_{-X(U)}\omega=0, \tag{129}\]
which amounts to a continuity equation for the action of \(\mathfrak{diff}(M)\). The form is reminiscent of Stokes theorem, i.e. the duality between the de Rham derivative \(d\) on \(\Omega^{\star}(U)\) and the boundary operator \(\partial\) on \(\mathbf{U}(M)\), adjoint operators w.r.t. the integration pairing:
\[\langle\,d\omega,U\rangle=\langle\,\omega,\partial U\rangle\to\quad\int_{U}d \omega=\int_{\partial U}\!\omega, \tag{130}\]
degree and dimension being appropriately adjusted.
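As a minimal illustration of (129) - a toy check under simplifying assumptions, not part of the general construction - take \(M=\mathbb{R}\), \(U=[a,b]\), \(\omega=f\,dx\) and \(X=\xi\partial_{x}\), so that \(\mathfrak{L}_{X}\omega=(\xi f)^{\prime}dx\), while \(\int_{-X(U)}\omega\) is read as the first-order variation of the integral over the displaced region \(\psi_{\tau}^{-1}(U)\approx[a-\tau\xi(a),\,b-\tau\xi(b)]\). Then (129) reduces to
\[\int_{a}^{b}(\xi f)^{\prime}\,dx\;+\;\big(\xi(a)f(a)-\xi(b)f(b)\big)=0,\]
i.e. the fundamental theorem of calculus: the bulk term is exactly compensated by the flux through the displaced boundary.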
Considering \(\mathbf{\alpha}\in\Omega^{\star}(\Phi,\Omega^{\mathrm{top}}(U))\), the field-dependent volume forms, we define the integration map on \(\Phi\times\mathbf{U}(M)\):
\[\mathcal{I}(\mathbf{\alpha}_{|\phi},U)=\langle\alpha_{|\phi},U\rangle:=\int_{U} \mathbf{\alpha}_{|\phi}. \tag{131}\]
We will sometimes use the simplified notation \(\mathbf{\alpha}_{U}\) when more convenient. We have naturally \(\mathbf{d}\mathcal{I}(\mathbf{\alpha},U)=\mathcal{I}(\mathbf{d}\mathbf{\alpha},U)\), and \(\iota_{X}\mathcal{I}(\mathbf{\alpha},U)=\mathcal{I}(\iota_{X}\mathbf{\alpha},U)\) for \(\mathfrak{X}\in\Gamma(T\Phi)\) - as a special case of (117). The induced actions of \(\mathrm{Diff}(M)\) and \(C^{\infty}(\Phi,\mathrm{Diff}(M))\simeq\mathrm{\mathbf{Diff}}_{V}(\Phi)\) on integrals are:
\[\tilde{R}^{\star}_{\psi}\mathcal{I}(\mathbf{\alpha},\,\,\big{)}_{ \left(\psi^{+}\phi,\,\psi^{-1}(U)\right)} :=\langle R^{\star}_{\psi}\mathbf{\alpha}_{|\psi^{+}\phi},\,\psi^{-1} (U)\rangle=\int_{\psi^{-1}(U)}R^{\star}_{\psi}\mathbf{\alpha}_{|\psi^{+}\phi} \tag{132}\] \[\tilde{\Xi}^{\star}\mathcal{I}(\mathbf{\alpha},\,\,\big{)}_{\left( \Xi(\phi),\,\psi^{-1}(U)\right)} :=\langle\Xi^{\star}\mathbf{\alpha}_{|\Xi(\phi)},\,\psi^{-1}(U)\rangle= \int_{\psi^{-1}(U)}\Xi^{\star}\mathbf{\alpha}_{|\Xi(\phi)}. \tag{133}\]
One may notice that, in the latter case, the derivative \(\mathbf{d}\) will now act also on the transformed region \(\psi^{-1}(U)\) due to the \(\phi\)-dependence of \(\mathbf{\psi}\). Using the notation just mentioned, we may write the above as \(\mathbf{\alpha}_{U}^{\,\psi}\) and \(\mathbf{\alpha}_{U}^{\,\mathbf{\psi}}\) respectively. The induced actions of \(\mathfrak{diff}(M)\) and \(C^{\infty}(\Phi,\mathfrak{diff}(M))\simeq\mathfrak{diff}_{v}(\Phi)\) on integrals are thus (with simplified notation):
\[\begin{split}\tfrac{d}{d\tau}\,\tilde{R}^{\star}_{\psi_{\tau}}\mathcal{I}(\mathbf{\alpha},U)\,\big{|}_{\tau=0}=\langle\mathbf{L}_{X^{\star}}\mathbf{\alpha},\,U\rangle+\langle\mathbf{\alpha},\,-X(U)\rangle&=\int_{U}\mathbf{L}_{X^{\star}}\mathbf{\alpha}+\int_{-X(U)}\mathbf{\alpha},\\ \tfrac{d}{d\tau}\,\tilde{\Xi}^{\star}_{\tau}\,\mathcal{I}(\mathbf{\alpha},U)\,\big{|}_{\tau=0}=\langle\mathbf{L}_{\mathbf{X}^{\star}}\mathbf{\alpha},\,U\rangle+\langle\mathbf{\alpha},\,-\mathbf{X}(U)\rangle&=\int_{U}\mathbf{L}_{\mathbf{X}^{\star}}\mathbf{\alpha}+\int_{-\mathbf{X}(U)}\mathbf{\alpha}.\end{split} \tag{134}\]
When convenient, we may write the above as \(\delta_{X}\mathbf{\alpha}_{U}\) and \(\delta_{X}\mathbf{\alpha}_{U}\) respectively.
In case \(\boldsymbol{\alpha}\) is tensorial on \(\Phi\), we have the special case of the results (122) above, so that \(\boldsymbol{\alpha}_{U}=\mathcal{I}(\boldsymbol{\alpha}_{|\!\phi},U)\) is \(C^{\infty}(\Phi,\mathrm{Diff}(M))\)-invariant: \(\boldsymbol{\alpha}_{U}\psi=\boldsymbol{\alpha}_{U}\). From which follows, as a special case of (123) and (134), that:
\[\begin{split}\delta_{\mathbf{X}}\mathbf{\alpha}_{U}=0\quad\Rightarrow&\quad\langle\mathbf{L}_{\mathbf{X}^{*}}\mathbf{\alpha},\,U\rangle+\langle\mathbf{\alpha},\,-\mathbf{X}(U)\rangle=0,&\qquad\mathbf{X}\in C^{\infty}(\Phi,\mathfrak{diff}(M)),\\ &\quad\langle\mathfrak{L}_{\mathbf{X}}\mathbf{\alpha},\,U\rangle+\langle\mathbf{\alpha},\,-\mathbf{X}(U)\rangle=0&\quad\rightarrow\quad\int_{U}\mathfrak{L}_{\mathbf{X}}\mathbf{\alpha}+\int_{-\mathbf{X}(U)}\mathbf{\alpha}=0.\end{split} \tag{135}\]
This can be interpreted as a continuity equation of sorts. For \(\mathbf{\alpha}\) equivariant, we have \(\mathrm{Diff}(M)\)-invariance of its integral: \(\mathbf{\alpha}_{U}^{\,\psi}=\mathbf{\alpha}_{U}\), and (135) holds mutatis mutandis (\(\mathbf{X}\to X\)). For \(\mathbf{\alpha}\) tensorial, \(\mathbf{\alpha}_{U}=\mathcal{I}(\mathbf{\alpha},U)\) is thus well-defined on the bundle of regions \(\bar{\mathbf{U}}(M)=\Phi\times\mathbf{U}(M)/\sim\), and one may - quite formally still - define an equivariant \(\mathbf{U}(M)\)-valued function on \(\Phi\) as in (124). It also means that \(\mathbf{d}(\mathbf{\alpha}_{U}^{\,\mathbf{\psi}})=\mathbf{d}\mathbf{\alpha}_{U}\), i.e.
\[\boldsymbol{d}\,\langle\psi^{*}\boldsymbol{\alpha},\psi^{-1}(U)\rangle= \langle\boldsymbol{d}\boldsymbol{\alpha},U\rangle=\boldsymbol{d}\langle \boldsymbol{\alpha},U\rangle\quad\rightarrow\quad\boldsymbol{d}\int_{\psi^{ -1}(U)}\psi^{*}\boldsymbol{\alpha}=\int_{U}\boldsymbol{d}\boldsymbol{\alpha} =\boldsymbol{d}\int_{U}\boldsymbol{\alpha}. \tag{136}\]
Which is also proven in exactly the same way as (125), using the special form (75) of the lemma (74) and concluding by the invariance property (129). Also, specialising (126) we get the identity:
\[\begin{split}(\mathbf{d}\mathbf{\alpha}_{U})^{\mathbf{\psi}}&=\mathbf{d}\mathbf{\alpha}_{U}+\langle\mathfrak{L}_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\mathbf{\alpha},U\rangle,\\ \langle\mathbf{d}\mathbf{\alpha},U\rangle^{\mathbf{\psi}}&=\langle\mathbf{d}\mathbf{\alpha},U\rangle+\langle\mathfrak{L}_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\mathbf{\alpha},U\rangle.\end{split} \tag{137}\]
These are relevant to the variational principle.
Indeed, consider a Lagrangian \(L\in\Omega^{0}_{\mathrm{tens}}(\Phi,\Omega^{\mathrm{top}}(U))\), a tensorial \(0\)-form on \(\Phi\) and volume form on \(U\). The corresponding action functional is the invariant object on \(\Phi\times\mathbf{U}(M)\): \(S=\mathcal{I}(L,U)=\langle L,U\rangle=\int_{U}L\), or yet \(S(\phi)=\mathcal{I}(L(\phi),U)=\langle L(\phi),U\rangle=\int_{U}L(\phi)\), and s.t. \(S^{\psi}=\langle L^{\psi},\psi^{-1}(U)\rangle=\langle\psi^{*}L,\psi^{-1}(U)\rangle=\langle L,U\rangle=S\). The variational principle is expressed as \(\boldsymbol{dS}=\boldsymbol{d}\mathcal{I}(L,U)=\mathcal{I}(\boldsymbol{d}L,U)=\int_{U}\boldsymbol{d}L\equiv 0\). Under the action of \(C^{\infty}(\Phi,\mathrm{Diff}(M))\simeq\mathbf{Diff}_{v}(\Phi)\), we have on the one hand by (136) that \(\boldsymbol{d}(S^{\psi})=\boldsymbol{dS}\), from which follows as a special case of (135) that \(\delta_{\boldsymbol{X}}S=0\), i.e. the relation \(\langle\mathfrak{L}_{\boldsymbol{X}}L,U\rangle+\langle L,-\boldsymbol{X}(U)\rangle=0\). On the other hand, by (137) above, we have the relation:
\[\begin{split}(\boldsymbol{d}S)^{\boldsymbol{\psi}}&=\boldsymbol{d}S\,+\langle\mathfrak{L}_{\boldsymbol{d}\psi\circ\psi^{-1}}L,\,U\rangle\\ &=\boldsymbol{d}S\,+\langle d\,\iota_{\boldsymbol{d}\psi\circ\psi^{-1}}L,\,U\rangle=\boldsymbol{d}S\,+\langle\iota_{\boldsymbol{d}\psi\circ\psi^{-1}}L,\,\partial U\rangle.\end{split} \tag{138}\]
The fact that \(\boldsymbol{dS}\) and its transform differ by a boundary term implies that the variational principle remains well-defined under the action of \(C^{\infty}(\Phi,\mathrm{Diff}(M))\simeq\mathbf{Diff}_{v}(\Phi)\), and that the space of solutions \(\mathcal{S}\) is stable under field-dependent transformations.18 This, in other words, proves that \(\mathrm{Diff}(M)\)-covariant theories \(L\) enjoy a larger \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\)-covariance - a fact pointed out in the particular case of GR by Bergmann and Komar in 1971 [22].
Footnote 18: As will be shown more explicitly in section 4.2, when we obtain the transformation of the field equations (235).
The equivariant \(\boldsymbol{U}(M)\)-valued function associated to \(S\) is:
\[\begin{split}\boldsymbol{U}_{S}(\phi)&:=\pi_{U(M)}( \phi,U)_{S=cst}=U,\\ \boldsymbol{U}_{S}(\psi^{*}\phi)&:=\pi_{U(M)}(\psi^{* }\phi,\psi^{-1}(U))_{S=cst}=\psi^{-1}(U).\end{split} \tag{139}\]
Its \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\) and \(\mathfrak{diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathfrak{diff}(M))\) transformations are thus respectively, by (57)-(61):
\[\boldsymbol{U}_{S}^{\psi}=\psi^{-1}(\boldsymbol{U}_{S}),\quad\text{ and }\quad\boldsymbol{L}_{\boldsymbol{X}^{*}}\boldsymbol{U}_{S}=-\boldsymbol{X}( \boldsymbol{U}_{S}). \tag{140}\]
The function outputs the region on which the fields \(\phi\), and volume form \(\boldsymbol{\alpha}(\phi)=L(\phi)\), are defined and integrated over. So, it subverts the usual logic by having the region \(U\) individuated by the fields it hosts. This is not surprising since the viewpoint articulated here, "dual" to the usual one, reframes familiar geometric objects in a way that gives primacy to the bundle of fields \(\Phi\), and sees regions as representation spaces associated with \(\Phi\) and its transformation groups. This, we may say, is a way to define regions of \(M\) in a "\(\phi\)-_relative_" and \(\mathrm{Diff}(M)\)-_equivariant_ way. As we will see in section 3 next, the DFM will allow a step further, suggesting a way to define regions of _spacetime_ in a \(\phi\)-_relational_ and \(\mathrm{Diff}(M)\)-_invariant_ way - in accordance with the key insight of general relativistic physics.
## The dressing field method
We now turn to the main topic of this paper. The dressing field method (DFM) is a systematic algorithm to build basic forms on a bundle, i.e. to obtain gauge-invariants in gauge theories. It was developed primarily for internal gauge groups, i.e. for Yang-Mills theories and gauge gravity theories formulated via Cartan geometry [1, 2, 3, 4, 5, 6, 7]. Relatively complete and self-contained expositions can be found in [8, 9].
In this section we detail the natural extension of the DFM to theories with \(\mathrm{Diff}(M)\) symmetry. As the next sections will make manifest, it is the general unifying geometric framework for the notions of _gravitational edge modes_ as introduced in [10] and further developed e.g. in [12, 28, 34, 35], of _gravitational dressings_ as proposed in [36, 37, 38, 39, 40] - see also [41] - as well as the more recent idea of "dynamical reference frames" as expounded in [42, 43], or yet that of "embedding maps/fields" as advocated in [35, 44, 45, 46].
But a more fundamental fact, we will argue, is that the DFM is a framework that formally makes manifest the _relational_ character of general relativistic physics: It does so by systematically implementing a notion of _relational observables_, or _Dirac variables_, for \(\mathrm{Diff}(M)\)-theories [47].
### Building basic forms via dressing
One may profitably compare what follows to the DFM for theories with internal gauge group as presented in [8, 9]. One defines the space of \(\mathrm{Diff}(M)\)-dressing fields as,
\[\mathcal{D}r[N,M]:=\left\{\,u:N\to M\,|\,u^{\psi}:=\psi^{-1}\circ u,\,\,\,\, \mathrm{with}\,\,\,\psi\in\mathrm{Diff}(M)\right\}, \tag{141}\]
where \(N\) is a reference \(n\)-dim manifold. From this, the action of \(\mathfrak{diff}(M)\) on the dressing field \(u\) must be \(\delta_{X}u:=-X\circ u\). Given a field \(\phi\in\Phi\) on \(M\), acted upon by \(\mathrm{Diff}(M)\) as \(R_{\psi}\phi=\psi^{*}\phi\), the dressing map is defined as:
\[|^{u}:\Phi \to \Phi^{u},\] \[\phi \mapsto \phi^{u}:=\,u^{*}\phi. \tag{142}\]
Clearly, \(\phi^{u}\), called the _dressing_ of \(\phi\), is \(\mathrm{Diff}(M)\)-invariant: \((u^{*}\phi)^{\psi}:=(u^{\psi})^{*}(\phi^{\psi})=(\psi^{-1}\circ u)^{*}(\psi^{ *}\phi)=u^{*}\phi\). Which is unsurprising as the space \(\Phi^{u}\) of dressed fields is a subset of fields living on \(N\).
More generally, one defines a _field-dependent dressing field_ as a smooth map,
\[\mathbf{u}:\Phi \to \mathcal{D}r[N,M]\] \[\phi \mapsto \mathbf{u}(\phi),\qquad\mathrm{s.t.}\quad\quad R_{\psi}^{\star}\mathbf{u }=\psi^{-1}\circ\mathbf{u}\quad\mathrm{i.e.}\quad\mathbf{u}(\phi^{\psi})=\psi^{-1} \circ\mathbf{u}(\phi). \tag{143}\]
In other words, \(\mathbf{u}\) is an equivariant \(0\)-form on \(\Phi\), \(\mathcal{D}r[N,M]\) being a representation space for \(\mathrm{Diff}(M)\). Its infinitesimal equivariance, i.e. \(\mathfrak{diff}(M)\)-transformation, is then \(\mathbf{L}_{X^{v}}\mathbf{u}=-X\circ\mathbf{u}\). And its transformations under \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\) and \(\mathfrak{diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathfrak{diff}(M))\) are respectively:
\[\mathbf{u}^{\psi}:=\Xi^{\star}\mathbf{u}=\psi^{-1}\circ\mathbf{u},\qquad\mathrm{ and}\qquad\mathbf{L}_{X}\mathbf{u}=-\mathbf{X}\circ\mathbf{u}. \tag{144}\]
Such a \(\phi\)-dependent dressing field allows one to define the map
\[F_{\mathbf{u}}:\Phi \to \mathcal{M}\] \[\phi \mapsto F_{\mathbf{u}}(\phi):=\mathbf{u}(\phi)^{*}\phi\sim[\phi],\qquad\mathrm{ s.t.}\quad\ F_{\mathbf{u}}\circ R_{\psi}=F_{\mathbf{u}}. \tag{145}\]
Clearly, \(F_{\mathbf{u}}(\phi^{\psi})=\mathbf{u}(\phi^{\psi})^{*}(\phi^{\psi})=(\psi^{-1}\circ\mathbf{u}(\phi))^{*}\psi^{*}\phi=\mathbf{u}(\phi)^{*}\phi=:F_{\mathbf{u}}(\phi)\). This means that the image of \(F_{\mathbf{u}}\) is a "coordinatisation" of \(\mathcal{M}\) since each \(\mathrm{Diff}(M)\)-orbit \([\phi]\in\mathcal{M}\) of \(\phi\in\Phi\) is represented by a tensor field \(\mathbf{u}(\phi)^{*}\phi\) on \(N\) (rather than on \(M\)). A very natural understanding of the dressed field \(\mathbf{\phi}^{\mathbf{u}}:=\mathbf{u}(\phi)^{*}\phi\) is that it is a _relational variable_. Indeed, being \(\mathrm{Diff}(M)\)-invariant it represents the physical d.o.f. of a theory, but it also does so in a manifestly _relational_ way: the very expression \(\mathbf{u}(\phi)^{*}\phi\) is explicitly a field-dependent coordinatisation of the physical d.o.f., i.e. a definition of physical d.o.f. with respect to each other. The dressed fields \(\phi^{\mathbf{u}}\) are thus Dirac relational variables, or "complete observables" in the terminology of [47, 71]. And as we are about to see, the DFM is then a framework allowing one to systematically reformulate a theory in a manifestly \(\mathrm{Diff}(M)\)-invariant and relational way.
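A toy illustration may help fix ideas - an assumption-laden example, not part of the general construction. Take \(M\simeq N\simeq\mathbb{R}\) and suppose the field content includes a scalar "clock" field \(\chi\) with nowhere-vanishing gradient, so that \(\chi:M\to N\) is invertible. Then \(\mathbf{u}(\phi):=\chi^{-1}\in\mathcal{D}r[N,M]\) is a field-dependent dressing field: under \(\psi\in\mathrm{Diff}(M)\) one has \(\chi\mapsto\psi^{*}\chi=\chi\circ\psi\), so
\[\mathbf{u}(\phi^{\psi})=(\chi\circ\psi)^{-1}=\psi^{-1}\circ\chi^{-1}=\psi^{-1}\circ\mathbf{u}(\phi),\]
which is exactly (143). The dressed fields are then, e.g. for another scalar \(f\) and a metric \(g\) in \(\phi\),
\[f^{\mathbf{u}}=\mathbf{u}(\phi)^{*}f=f\circ\chi^{-1},\qquad g^{\mathbf{u}}=(\chi^{-1})^{*}g,\]
i.e. the remaining fields expressed as functions of the value of the clock field: manifestly \(\mathrm{Diff}(M)\)-invariant, and manifestly relational.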
Notice that the map (145) is a realisation of the bundle projection, \(F_{\mathbf{u}}\sim\pi\). Therefore, it becomes possible in principle to build basic forms on \(\Phi\) since, as seen in section 2.2.2, \(\Omega^{\bullet}_{\text{Basic}}(\Phi)=\text{Im}\,\pi^{\star}\simeq\text{Im}\,F^{ \star}_{\mathbf{u}}\). Given a form \(\alpha=\alpha(\wedge^{\bullet}\mathbf{d}\phi;\phi)\in\Omega^{\bullet}(\Phi)\), to build its basic counterpart we first consider its formal "version" on the base space, \(\bar{\alpha}=\alpha(\wedge^{\bullet}\mathbf{d}[\phi];[\phi])\in\Omega^{\bullet}( \mathcal{M})\), then simply define
\[\mathbf{\alpha}^{\mathbf{u}}:=F^{\star}_{\mathbf{u}}\bar{\alpha}=\alpha(\wedge^{\bullet}F^{ \star}_{\mathbf{u}}\mathbf{d}[\phi];F_{\mathbf{u}}(\phi))\quad\in\ \Omega^{\bullet}_{\text{Basic}}(\Phi). \tag{146}\]
This object, basic by definition, is called the _dressing_ of \(\alpha\). By construction it is invariant under \(\text{\bf Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\text{Diff}(M))\) - thus also gauge-invariant, under \(\text{\bf Aut}_{v}(\Phi)\simeq\text{\bf Diff}(M)\): \((\mathbf{\alpha}^{\mathbf{u}})^{\mathbf{\psi}}=\mathbf{\alpha}^{\mathbf{u}}\).
To get a more operational expression for \(\mathbf{\alpha}^{\mathbf{u}}\), one needs only to find the explicit results for \(F^{\star}_{\mathbf{u}}\mathbf{d}[\phi]\) which is a basis for basic forms. This is done by first finding the general result of the pushforward \(F_{\mathbf{u}\star}:T_{\phi}\Phi\to T_{F_{\mathbf{u}}(\phi)}\mathcal{M}\), \(\bar{x}_{\phi\phi}\mapsto F_{\mathbf{u}\star}\,\bar{x}_{\phi\phi}\). Indeed, given a generic \(\bar{x}\in\Gamma(T\Phi)\) with flow \(\varphi_{\tau}:\Phi\to\Phi\), s.t. \(\bar{x}_{\phi\phi}=\frac{d}{dt}\varphi_{\tau}(\phi)\big{|}_{\tau=0}=\bar{x}( \phi)\frac{\delta}{\delta\phi}\), one has \(F^{\star}_{\mathbf{u}}\mathbf{d}[\phi]_{F_{\mathbf{u}}(\phi)}(\bar{x}_{\mathrm{i}\phi})= \mathbf{d}[\phi]_{F_{\mathbf{u}}(\phi)}(F_{\mathbf{u}\star}\bar{x}_{\mathrm{i}\phi})\). So,
\[F_{\mathbf{u}\star}\bar{x}_{|\phi}:=F_{\mathbf{u}\star}\,\frac{d}{d\tau}\,\varphi_{\tau}(\phi)\,\big{|}_{\tau=0}=\frac{d}{d\tau}\,F_{\mathbf{u}}(\varphi_{\tau}(\phi))\big{|}_{\tau=0}=\frac{d}{d\tau}\,\mathbf{u}(\varphi_{\tau}(\phi))^{*}(\varphi_{\tau}(\phi))\big{|}_{\tau=0}=\frac{d}{d\tau}\,\mathbf{u}(\varphi_{\tau}(\phi))^{*}\phi\,\big{|}_{\tau=0}+\frac{d}{d\tau}\,\mathbf{u}(\phi)^{*}(\varphi_{\tau}(\phi))\,\big{|}_{\tau=0}.\]
The first contribution is, inserting \(\text{id}_{N}=\mathbf{u}(\phi)^{-1}\circ\mathbf{u}(\phi)\),
\[\frac{d}{dt}\,\mathbf{u}(\phi)^{*}\mathbf{u}(\phi)^{-1^{*}}\mathbf{u}(\varphi_{\tau}(\phi) )^{*}\phi\big{|}_{\tau=0}=\mathbf{u}(\phi)^{*}\frac{d}{dt}\,(\mathbf{u}(\varphi_{\tau} (\phi))\circ\mathbf{u}(\phi)^{-1})^{*}\phi\big{|}_{\tau=0}.\]
The term \(\mathbf{u}(\varphi_{\tau}(\phi))\circ\mathbf{u}(\phi)^{-1}\) is a curve in \(M\), so \(\frac{d}{dt}\,\mathbf{u}(\varphi_{\tau}(\phi))\circ\mathbf{u}(\phi)^{-1}\ \big{|}_{\tau=0}=\mathbf{d}\mathbf{u}_{\mathrm{i}\phi}(\bar{x}_{\mathrm{i}\phi})\circ \mathbf{u}(\phi)^{-1}\ \in\Gamma(TM)\). Therefore,
\[\frac{d}{d\tau}\,\mathbf{u}(\varphi_{\tau}(\phi))^{*}\phi\big{|}_{\tau=0}=\mathbf{u}(\phi)^{*}\frac{d}{d\tau}\,\big{(}\mathbf{u}(\varphi_{\tau}(\phi))\circ\mathbf{u}(\phi)^{-1}\big{)}^{*}\phi\big{|}_{\tau=0}=\mathbf{u}(\phi)^{*}\mathfrak{L}_{\mathbf{d}\mathbf{u}_{|\phi}(\bar{x}_{|\phi})\circ\mathbf{u}(\phi)^{-1}}\phi. \tag{147}\]
The second contribution is \(\frac{d}{dt}\,\mathbf{u}(\phi)^{*}(\varphi_{\tau}(\phi))\big{|}_{\tau=0}=:\frac{d} {dt}\,F_{\mathbf{u}(\phi)}(\varphi_{\tau}(\phi))\big{|}_{\tau=0}=F_{\mathbf{u}(\phi) \star}\bar{x}_{\mathrm{i}\phi}\), where \(\mathbf{u}(\phi)\) is considered \(\phi\)-constant/ independent. Now, this is indeed a vector (field) on \(\mathcal{M}\), so we can find its expression as a derivation by applying it to \(\mathbf{g}\in C^{\infty}(\mathcal{M})\):
\[[F_{\mathbf{u}(\phi)}\star\bar{x}(\mathbf{g})]([\phi]) =\frac{d}{dt}\,\mathbf{g}\,\Big{(}F_{\mathbf{u}(\phi)}(\varphi_{\tau}( \phi))\Big{)}\ \big{|}_{\tau=0},\] \[=\Big{(}\frac{\delta}{\delta\phi}\Big{)}\big{(}F_{\mathbf{u}(\phi)}( \phi)\big{)}\ \underbrace{\frac{d}{dt}\,F_{\mathbf{u}(\phi)}(\varphi_{\tau}(\phi)) \big{|}_{\tau=0}}_{\big{[}\bar{x}(F_{\mathbf{u}(\phi)})\big{|}_{\phi}\big{)}}=\Big{(} \frac{\delta}{\delta\phi}\Big{)}\big{(}\underbrace{u(\phi)^{*}\phi}_{\sim[ \phi]}\,\underbrace{\big{(}\frac{\delta}{\delta\phi}F_{\mathbf{u}(\phi)}(\phi) \big{)}}_{\delta\frac{\delta}{\delta\phi}\mathbf{u}(\phi)^{*}\phi=\mathbf{u}(\phi)^{*}} \bar{x}(\phi),\] \[=\Big{(}\frac{\delta}{\delta\phi}\Big{)}\big{(}[\phi]\big{)}\ \mathbf{u}(\phi)^{*}\bar{x}(\phi)= \big{[}\mathbf{u}(\phi)^{*}\bar{x}(\phi)\frac{\delta}{\delta[\phi]}(\mathbf{g})\big{]}([ \phi]).\]
Then, gathering the two contributions, we have
\[F_{\mathbf{u}\star}\bar{x}_{|\phi}=\mathbf{u}(\phi)^{*}\left(\bar{x}(\phi)+\mathfrak{L}_{\mathbf{d}\mathbf{u}_{|\phi}(\bar{x}_{|\phi})\circ\mathbf{u}(\phi)^{-1}}\phi\right)\,\frac{\delta}{\delta[\phi]_{|F_{\mathbf{u}}(\phi)}}. \tag{148}\]
From this we deduce, for any \(\bar{x}\in\Gamma(T\Phi)\):
\[\begin{split}F^{\star}_{\mathbf{u}}\mathbf{d}[\phi]_{|F_{\mathbf{u}}(\phi)}(\bar{x}_{|\phi})&=\mathbf{d}[\phi]_{|F_{\mathbf{u}}(\phi)}\left(F_{\mathbf{u}\star}\,\bar{x}_{|\phi}\right),\\ &=\mathbf{d}[\phi]_{|F_{\mathbf{u}}(\phi)}\,\left(\mathbf{u}(\phi)^{*}\left(\bar{x}(\phi)+\mathfrak{L}_{\mathbf{d}\mathbf{u}_{|\phi}(\bar{x}_{|\phi})\circ\mathbf{u}(\phi)^{-1}}\phi\right)\,\frac{\delta}{\delta[\phi]_{|F_{\mathbf{u}}(\phi)}}\right),\\ &=\mathbf{u}(\phi)^{*}\left(\bar{x}(\phi)+\mathfrak{L}_{\mathbf{d}\mathbf{u}_{|\phi}(\bar{x}_{|\phi})\circ\mathbf{u}(\phi)^{-1}}\phi\right),\\ &=\mathbf{u}(\phi)^{*}\Big{(}\mathbf{d}\phi_{|\phi}\left(\bar{x}_{|\phi}\right)+\mathfrak{L}_{\mathbf{d}\mathbf{u}_{|\phi}(\bar{x}_{|\phi})\circ\mathbf{u}(\phi)^{-1}}\phi\Big{)}\,,\\ &=\mathbf{u}(\phi)^{*}\big{(}\mathbf{d}\phi+\mathfrak{L}_{\mathbf{d}\mathbf{u}\circ\mathbf{u}^{-1}}\phi\big{)}_{|\phi}\left(\bar{x}_{|\phi}\right).\end{split}\]
We thus obtain the dressing of \(\mathbf{d}\phi\):
\[\mathbf{d}\phi^{\mathbf{u}}:=F^{\star}_{\mathbf{u}}\mathbf{d}[\phi]=\mathbf{u}^{*}\big{(}\mathbf{d}\phi+\mathfrak{L}_{\mathbf{d}\mathbf{u}\circ\mathbf{u}^{-1}}\phi\big{)}\quad\in\ \Omega^{1}_{\text{basic}}(\Phi), \tag{149}\]
where \(\mathbf{u}=\mathbf{u}(\phi)\). This can be compared e.g. to eq.(3.30) in [10], eq.(2.7) in [35], or eq.(2.8) in [13]. The result (149) can then be inserted into (146) to get the dressing of any form \(\mathbf{\alpha}\):
\[\mathbf{\alpha}^{\mathbf{u}}=\alpha(\wedge^{\mathbf{\star}}\mathbf{d}\phi^{\mathbf{u}};\,\phi^{\mathbf{ u}})\ \ \in\ \Omega^{\mathbf{\star}}_{\rm Basic}(\Phi), \tag{150}\]
which is basic by construction, thus invariant under \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\) - and under \(\mathbf{Aut}_{v}(\Phi)\simeq\mathbf{Diff}(M)\).
Given the formal similarity between \(\Xi(\phi)=\mathbf{\psi}(\phi)^{\star}\phi\) and \(F_{\mathbf{u}}(\phi)=\mathbf{u}(\phi)^{\star}\phi\) on the one hand, and between \(\mathbf{d}\phi^{\mathbf{\psi}}\) (66) and \(\mathbf{d}\phi^{\mathbf{u}}\) (149) on the other hand, which results in the formal similarity between \(\mathbf{\alpha}^{\mathbf{\psi}}\) (43) and \(\mathbf{\alpha}^{\mathbf{u}}\) (146)-(150), we can spell out the following rule of thumb to obtain the dressing of any form \(\mathbf{\alpha}\): First compute its \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\) transformation \(\mathbf{\alpha}^{\mathbf{\psi}}\), then substitute \(\mathbf{\psi}\to\mathbf{u}\) in the resulting expression to obtain \(\mathbf{\alpha}^{\mathbf{u}}\). We will use this rule of thumb from now on.
Let us illustrate it by discussing the case of the dressing of twisted tensorial forms. For this to be well defined, one needs only to assume that the 1-cocycle (46), \(C:\Phi\times\mathrm{Diff}(M)\to G\), controlling the equivariance (47) of these forms has a functional expression s.t. it can be meaningfully extended to \(C:\Phi\times\mathrm{Diff}(N,M)\to G\). If so, the map
\[C(\mathbf{u}):\Phi \to G,\] \[\phi \mapsto[C(\mathbf{u})](\phi):=C(\phi;\mathbf{u}(\phi)), \tag{151}\]
is well-defined and is a twisted equivariant 0-form:
\[[R^{\mathbf{\star}}_{\phi}C(\mathbf{u})](\phi) =[C(\mathbf{u})\circ R_{\phi}](\phi):=C(\phi^{\phi};\mathbf{u}(\phi^{\phi} ))=C(\phi^{\phi};\psi^{-1}\circ\mathbf{u}(\phi))=C(\phi^{\phi};\psi^{-1})\cdot C( \phi;\mathbf{u}(\phi)),\] \[=C(\phi;\psi)^{-1}\cdot C(\phi;\mathbf{u}(\phi)),\] \[=:[C(\ ;\psi)^{-1}\cdot C(\mathbf{u})](\phi), \tag{152}\]
where the cocycle property (46) is used. The linearisation is \(\mathbf{L}_{X^{r}}C(\mathbf{u})=\iota_{X^{r}}dC(\mathbf{u})=-a(X;\ )\cdot C(\mathbf{u})\). In other words, it is a dressing field for twisted forms. It follows that, as a special case of (57), its \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\) transformation is,
\[C(\mathbf{u})^{\mathbf{\psi}}=C(\mathbf{\psi})^{-1}\cdot C(\mathbf{u}). \tag{153}\]
Applying e.g. the rule of thumb to \(\mathbf{\alpha}\in\Omega^{\mathbf{\star}}_{\rm tens}(\Phi,C)\), by (57) we know that its \(\mathbf{Diff}_{v}(\Phi)\)-transformation is \(\mathbf{\alpha}^{\mathbf{\psi}}=C(\mathbf{\psi})^{-1}\mathbf{\alpha}\), from which we read its dressing to be
\[\mathbf{\alpha}^{\mathbf{u}}=C(\mathbf{u})^{-1}\mathbf{\alpha}\ \in\Omega^{\mathbf{\star}}_{\rm basic }(\Phi). \tag{154}\]
It is by construction invariant under vertical transformations \(\mathbf{Diff}_{v}(\Phi)\), thus also under gauge transformations \(\mathbf{Aut}_{v}(\Phi)\): which in this case is easily checked using (153),
\[(\mathbf{\alpha}^{\mathbf{u}})^{\mathbf{\psi}}=\big{(}C(\mathbf{u})^{\mathbf{\psi}} \big{)}^{-1}\mathbf{\alpha}^{\mathbf{\psi}}=\big{(}C(\mathbf{\psi})^{-1}\cdot C(\mathbf{u}) \big{)}^{-1}\cdot C(\mathbf{\psi})^{-1}\mathbf{\alpha}=\mathbf{C}(\mathbf{u})^{-1}\mathbf{\alpha}= \mathbf{\alpha}^{\mathbf{u}}. \tag{155}\]
Of course this specialises to the case of \(\mathbf{\alpha}\in\Omega^{\mathbf{\star}}_{\rm tens}(\Phi,\rho)\), so \(\mathbf{\alpha}^{\mathbf{u}}=\rho(\mathbf{u})^{-1}\mathbf{\alpha}\). Taking the case of an Ehresmann connection, in view of (71) our rule of thumb ensures that \(\mathbf{\omega}^{\mathbf{u}}=\mathbf{u}^{-1}_{*}\mathbf{\omega}\circ\mathbf{u}+\mathbf{u}^{-1}_{*}\mathbf{d}\mathbf{u}\in\Omega^{1}_{\rm basic}(\Phi)\). This allows us to write an analogue of the lemma (74)-(75), also occasionally useful: For \(\mathbf{\alpha},\mathbf{D}\mathbf{\alpha}\in\Omega^{\mathbf{\star}}_{\rm tens}(\Phi,\rho)\), we have on the one hand \(\mathbf{d}\,F^{\mathbf{\star}}_{\mathbf{u}}\bar{\mathbf{\alpha}}=\mathbf{d}(\rho(\mathbf{u})^{-1}\mathbf{\alpha})\). On the other hand, since \(F^{\mathbf{\star}}_{\mathbf{u}}\bar{\mathbf{D}}\bar{\mathbf{\alpha}}=\mathbf{d}\mathbf{\alpha}^{\mathbf{u}}+\rho_{\star}(\mathbf{\omega}^{\mathbf{u}})\mathbf{\alpha}^{\mathbf{u}}=\rho(\mathbf{u})^{-1}\mathbf{D}\mathbf{\alpha}\) (by our rule of thumb again), we have that,
\[\begin{split}F^{\mathbf{\star}}_{\mathbf{u}}\bar{\mathbf{d}}\bar{\mathbf{\alpha}}&=\rho(\mathbf{u})^{-1}\mathbf{D}\mathbf{\alpha}-\rho_{\star}(\mathbf{\omega}^{\mathbf{u}})\mathbf{\alpha}^{\mathbf{u}}=\rho(\mathbf{u})^{-1}\mathbf{d}\mathbf{\alpha}+\rho(\mathbf{u})^{-1}\,\rho_{\star}(\mathbf{\omega})\mathbf{\alpha}-\rho_{\star}(\mathbf{\omega}^{\mathbf{u}})\mathbf{\alpha}^{\mathbf{u}}=\rho(\mathbf{u})^{-1}\mathbf{d}\mathbf{\alpha}-\rho_{\star}(\mathbf{u}^{-1}_{*}\mathbf{d}\mathbf{u})\rho(\mathbf{u})^{-1}\mathbf{\alpha},\\ &=\rho(\mathbf{u})^{-1}\left(\mathbf{d}\mathbf{\alpha}-\rho_{\star}(\mathbf{d}\mathbf{u}\circ\mathbf{u}^{-1})\mathbf{\alpha}\right).\end{split}\]
By naturality of the pullback \([F^{\mathbf{\star}}_{\mathbf{u}},\mathbf{d}]=0\) (understood that the exterior derivatives belong to different spaces here) we obtain:
\[\mathbf{d}\big{(}\rho(\mathbf{u})^{-1}\mathbf{\alpha}\big{)}=\rho(\mathbf{u})^{-1}\left(\mathbf{d} \mathbf{\alpha}-\rho_{\star}(\mathbf{d}\mathbf{u}\circ\mathbf{u}^{-1})\,\mathbf{\alpha}\right) \tag{156}\]
In particular, for the pullback representation \(\rho(u)^{-1}=u^{\star}\), this is:
\[\mathbf{d}(\mathbf{u}^{\star}\mathbf{\alpha})=\mathbf{u}^{\star}\left(\mathbf{d}\mathbf{\alpha}+\mathfrak{L}_{\mathbf{d}\mathbf{u}\circ\mathbf{u}^{-1}}\mathbf{\alpha}\right). \tag{157}\]
The latter identity appears in the covariant phase space and edge mode literature, e.g. [10; 12; 41]. It must not be conflated with (149), as the two results have distinct geometric origins and meaning.
#### 3.1.1 Dressing field and flat connections
Given a field-dependent dressing field \(\mathbf{u}\), the object \(\mathbf{\omega}_{0}:=-\mathbf{d}\mathbf{u}\circ\mathbf{u}^{-1}\in\Omega^{1}(\Phi)\) is a _flat Ehresmann connection_. Indeed, on the one hand, by (144) we have,
\[\mathbf{\omega}_{0|\phi}(X^{v}_{|\phi})=-\mathbf{d}\mathbf{u}_{|\phi}(X^{v}_{|\phi})\circ\mathbf{u}(\phi)^{-1}=-\mathbf{L}_{X^{v}}\mathbf{u}\circ\mathbf{u}(\phi)^{-1}=X\circ\mathbf{u}(\phi)\circ\mathbf{u}(\phi)^{-1}=X\ \in\mathfrak{diff}(M). \tag{158}\]
On the other hand, by definition (143) and using the naturality of \(\mathbf{d}\) (commuting with pullbacks),
\[R^{\mathbf{\star}}_{\phi}\mathbf{\omega}_{0} =-R^{\star}_{\phi}\mathbf{d}\mathbf{u}\circ(R^{\star}_{\phi}\mathbf{u})^{-1}= -\mathbf{d}(R^{\star}_{\phi}\mathbf{u})\circ(R^{\star}_{\phi}\mathbf{u})^{-1}=-\mathbf{d}( \psi^{-1}\circ\mathbf{u})\circ(\psi^{-1}\circ\mathbf{u})^{-1}=\psi_{*}^{-1}\mathbf{d}\bm {u}\circ\mathbf{u}^{-1}\circ\psi,\] \[=\psi_{*}^{-1}\mathbf{\omega}_{0}\circ\psi. \tag{159}\]
These are indeed the defining properties (68) of an Ehresmann connection. It is immediate to see that
\[\mathbf{d}\mathbf{\omega}_{0}+\tfrac{1}{2}[\mathbf{\omega}_{0},\mathbf{\omega}_{0}]_{\text{ int}(\mathbf{\omega})}\equiv 0, \tag{160}\]
i.e. that a dressing field supplies a _flat_ connection on \(\Phi\). An immediate corollary is that,
\[\mathbf{\omega}_{0}^{\psi}=\mathbf{\psi}_{*}^{-1}\mathbf{\omega}_{0}\circ\mathbf{\psi}+\mathbf{ \psi}_{*}^{-1}\mathbf{d}\mathbf{\psi},\qquad\text{ and }\qquad\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\omega}_{0}=\mathbf{d}\mathbf{X}+[\mathbf{\omega}_{0},\mathbf{X}] _{\text{int}(\mathbf{\omega})}\,, \tag{161}\]
as a special case of (71) and (72). This may be compared to eq.(3.19) in [10].
Thus, (149) can also be written as
\[\mathbf{d}\phi^{\mathbf{u}}=\mathbf{u}^{*}(\mathbf{d}\phi-\mathfrak{L}_{\mathbf{\omega}_{0}}\phi)\quad\in\ \Omega^{1}_{\text{basic}}(\Phi). \tag{162}\]
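In the toy clock illustration of section 3.1 (with the simplifying assumptions stated there), (162) can be checked by hand: for \(\mathbf{u}=\chi^{-1}\), differentiating \(\chi\circ\chi^{-1}=\mathrm{id}_{N}\) in field space gives \(\mathbf{d}(\chi^{-1})=-\big(\tfrac{\mathbf{d}\chi}{\partial_{x}\chi}\big)\circ\chi^{-1}\), so \(\mathbf{\omega}_{0}=-\mathbf{d}\mathbf{u}\circ\mathbf{u}^{-1}=\tfrac{\mathbf{d}\chi}{\partial_{x}\chi}\,\partial_{x}\), and for a dressed scalar \(f^{\mathbf{u}}=f\circ\chi^{-1}\) one finds
\[\mathbf{d}f^{\mathbf{u}}=\Big(\mathbf{d}f-\tfrac{\mathbf{d}\chi}{\partial_{x}\chi}\,\partial_{x}f\Big)\circ\chi^{-1}=\mathbf{u}^{*}\big(\mathbf{d}f-\mathfrak{L}_{\mathbf{\omega}_{0}}f\big),\]
i.e. the variation of the relational variable is the variation of \(f\) "at fixed clock value", as one would expect.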
Being basic by construction, we know it is \(\mathbf{Diff}_{v}(\Phi)\)-invariant (and also gauge-invariant, under \(\mathbf{Aut}_{v}(\Phi)\)), but let us check it as an exercise: Using (66), (144) and (161), we have
\[\begin{split}(\mathbf{d}\phi^{\mathbf{u}})^{\mathbf{\psi}}&=(\mathbf{u}^{\mathbf{\psi}})^{*}(\mathbf{d}\phi^{\mathbf{\psi}}-\mathfrak{L}_{\mathbf{\omega}_{0}^{\mathbf{\psi}}}\phi^{\mathbf{\psi}}),\\ &=(\mathbf{\psi}^{-1}\circ\mathbf{u})^{*}\left(\mathbf{\psi}^{*}(\mathbf{d}\phi+\mathfrak{L}_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi)-\mathfrak{L}_{\mathbf{\psi}_{*}^{-1}\mathbf{\omega}_{0}\circ\mathbf{\psi}+\mathbf{\psi}_{*}^{-1}\mathbf{d}\mathbf{\psi}}\,\mathbf{\psi}^{*}\phi\right),\\ &=\mathbf{u}^{*}\mathbf{\psi}^{-1*}\,\mathbf{\psi}^{*}(\mathbf{d}\phi+\mathfrak{L}_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi)-\ \mathbf{u}^{*}\mathbf{\psi}^{-1*}\mathfrak{L}_{\mathbf{\psi}_{*}^{-1}\mathbf{\omega}_{0}\circ\mathbf{\psi}}\ \mathbf{\psi}^{*}\phi-\ \mathbf{u}^{*}\mathbf{\psi}^{-1*}\mathfrak{L}_{\mathbf{\psi}_{*}^{-1}\mathbf{d}\mathbf{\psi}}\,\mathbf{\psi}^{*}\phi,\\ &=\mathbf{u}^{*}\mathbf{d}\phi+\mathbf{u}^{*}\mathfrak{L}_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi\ -\ \mathbf{u}^{*}\mathfrak{L}_{\mathbf{\omega}_{0}}\phi\ -\ \mathbf{u}^{*}\mathfrak{L}_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi,\\ &=\mathbf{u}^{*}(\mathbf{d}\phi-\mathfrak{L}_{\mathbf{\omega}_{0}}\phi)=:\mathbf{d}\phi^{\mathbf{u}}.\end{split}\]
Where the lemma (8) is used in the 3rd to 4th line. Now, given (162), for \(\alpha=\alpha(\mathbf{d}\phi;\phi)\in\Omega^{1}_{\text{inv}}(\Phi)\), (150) may be written,
\[\begin{split}\mathbf{\alpha}^{\mathbf{u}}&=\alpha(\mathbf{u}^{*}(\mathbf{d}\phi-\mathfrak{L}_{\mathbf{\omega}_{0}}\phi);\phi^{\mathbf{u}})=\alpha(\mathbf{d}\phi-\mathfrak{L}_{\mathbf{\omega}_{0}}\phi;\phi)=\alpha-\alpha(\mathfrak{L}_{\mathbf{\omega}_{0}}\phi;\phi),\\ &=\alpha-\iota_{[\mathbf{\omega}_{0}(\cdot)]^{v}}\mathbf{\alpha}\quad\in\Omega^{1}_{\text{basic}}(\Phi).\end{split} \tag{163}\]
The formulae (162) and (163) may be compared to (78) and (77)/(80): It shows that using the DFM or Ehresmann connections to build basic counterparts of invariant 1-forms, \(\mathbf{\alpha}\in\Omega^{1}_{\text{inv}}(\Phi)\), gives formally very analogous results.19 More on this in section 3.2.2 below.
Footnote 19: An observation first hinted at in [25] in the context of the study of the covariant phase space approach to field theory (over bounded regions) with internal gauge symmetries: there \(\mathbf{\alpha}=\mathbf{\theta}\in\Omega^{1}_{\text{inv}}(\Phi)\) is the presymplectic potential of the theory. See also [9] for a detailed discussion of this point for this class of theories.
This also tells us something noteworthy: The existence of a flat connection, hence of a global dressing field, is a strong topological constraint on a bundle.20 In some cases, a dressing field indeed gives a global trivialisation of the bundle \(\Phi\), yet in general the field space may not be trivial and Gribov-like obstructions may exclude the existence of dressing fields other than local. A generic, non-flat, connection \(\mathbf{\omega}\) does not entail such constraints.
Footnote 20: In the finite dimensional case of a bundle P over a simply connected manifold, it implies that P is trivial.
In the same way, a dressing field may supply a _flat twisted connection_\(\mathbf{\varpi}_{0}:=-\mathbf{d}C(\mathbf{u})\cdot C(\mathbf{u})^{-1}\), satisfying the defining properties (88) - as can be checked using (152) and \(\mathbf{L}_{X^{v}}C(\mathbf{u})=-a(X;\ )\cdot C(\mathbf{u})\). From which follows that, as a special case of (94)-(95):
\[\mathbf{\varpi}_{0}^{\psi}=\text{Ad}\big{(}C(\mathbf{\psi})^{-1}\big{)}\,\mathbf{\varpi}_{ 0}+C(\mathbf{\psi})^{-1}\mathbf{d}C(\mathbf{\psi}),\qquad\text{and}\qquad\mathbf{L}_{\mathbf{X}^{v}} \mathbf{\varpi}_{0}=\mathbf{d}\mathbf{a}(\mathbf{X})+[\mathbf{\varpi}_{0},a(\mathbf{X})]_{\text{a}}. \tag{164}\]
Reprising the example of \(Z=\exp iS\in\Omega^{0}_{eq}(\Omega,C)\), the flat twisted connection is \(\varpi_{\!{}_{0}}:=-dC(\mathbf{u})\cdot C(\mathbf{u})^{-1}=i\int\mathbf{dc}(\ ;\mathbf{u})\). Then we may notice that the associated twisted covariant derivative is \(\bar{\mathbf{D}}_{\phi}Z:=\mathbf{d}Z+\varpi_{\!{}_{0}}Z=(i\mathbf{dS}+\varpi_{\!{}_{0}})Z\), which implies, as special case of (91),
\[i\mathbf{dS}+\varpi_{\!{}_{0}}=i\mathbf{d}\int L+c(\ ;\mathbf{u})\quad\in\ \Omega^{1}_{\rm basic }(\Phi). \tag{165}\]
The object \(L^{\prime}:=L+c(\ ;\mathbf{u})\) is what one may call a Wess-Zumino improved Lagrangian, \(c(\ ;\mathbf{u})\) playing the role of the _Wess-Zumino-Witten counterterm_ restoring gauge-invariance - see [59] chap.4, [72] chap.12, or [60] chap.15. Given the definition of the 1-cocycle \(c(\ ;\psi):=\psi^{*}L-L\), it is clear that \(L^{\prime}=\mathbf{u}^{*}L:=L^{\mathbf{u}}\in\Omega^{0}_{\rm basic}(\Phi)\), i.e. it is but the dressed Lagrangian one would obtain by applying directly the DFM (and its rule of thumb) to a Lagrangian \(L\in\Omega^{0}_{\rm eq}(\Phi)\) with equivariance \(R^{\star}_{\psi}L=\psi^{*}L\) and therefore gauge transformation \(L^{\mathbf{\psi}}=\mathbf{\psi}^{*}L\). Such dressed Lagrangians are key objects of our subsequent discussions.
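Spelling out the last identification and its invariance, a one-line check using only the definitions above (and assuming, as stated, the extension of the cocycle to \(\mathcal{D}r[N,M]\)):
\[L^{\mathbf{u}}:=L+c(\ ;\mathbf{u})=L+(\mathbf{u}^{*}L-L)=\mathbf{u}^{*}L,\qquad(L^{\mathbf{u}})^{\mathbf{\psi}}=(\mathbf{u}^{\mathbf{\psi}})^{*}L^{\mathbf{\psi}}=(\mathbf{\psi}^{-1}\circ\mathbf{u})^{*}\,\mathbf{\psi}^{*}L=\mathbf{u}^{*}\mathbf{\psi}^{-1*}\mathbf{\psi}^{*}L=\mathbf{u}^{*}L.\]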
### Residual symmetries
In the context of the DFM, two kinds of _residual_ symmetries are to be discussed: The (genuine) residual symmetry coming from the elimination of only part of the original symmetry, and a "new" symmetry that may arise (in replacement of the eliminated one) due to possible ambiguities in the choice or construction of a dressing field. Of course, both might arise simultaneously. Let us discuss them in turn.
#### 3.2.1 Residual symmetries of the first kind
Consider \({\rm Diff}_{0}(M)\subset{\rm Diff}(M)\) a subgroup of diffeomorphisms satisfying some condition (boundary conditions, compact support...). And suppose the dressing field \(\mathbf{u}:\Phi\ \to\ \mathcal{D}r[N,M]\) has defining equivariance given by
\[R^{\mathbf{*}}_{\psi}\mathbf{u}=\varphi^{-1}\circ\mathbf{u},\quad\text{for}\ \ \varphi\in{\rm Diff}_{0}(M), \tag{166}\]
i.e. \(\mathbf{u}(\varphi^{*}\phi)=\varphi^{-1}\circ\mathbf{u}(\phi)\). Application of the DFM then amounts to building a bundle projection \(\Phi\to\Phi/\operatorname{Diff}_{0}(M)\). For this to be a bundle map, i.e. for \(\Phi/\operatorname{Diff}_{0}(M)=:\Phi^{\mathbf{u}}\) to be a principal (sub)bundle in its own right, the quotient \({\rm Diff}(M)/\operatorname{Diff}_{0}(M)\) must be a group, thus \({\rm Diff}_{0}(M)\) needs to be a _normal_ subgroup of \({\rm Diff}(M)\): \({\rm Diff}_{0}(M)\triangleleft{\rm Diff}(M)\). Let us assume it is so,21 and denote \({\rm Diff}_{r}(M):={\rm Diff}(M)/\operatorname{Diff}_{0}(M)\) the (residual) structure group of \(\Phi^{\mathbf{u}}\).
Footnote 21: Such is e.g. the group \({\rm Diff}_{0}(M)=\operatorname{Diff}_{\rm c}(M)\) of compactly supported diffeomorphisms.
Any dressed object \(\mathbf{\alpha}^{\mathbf{u}}\) will then be \({\rm Diff}_{0}(M)\)-basic on \(\Phi\), i.e. invariant by construction under the action of \(C^{\infty}(\Phi,{\rm Diff}_{0}(M))\subset C^{\infty}(\Phi,{\rm Diff}(M))\simeq{\bf Diff}_{v}(\Phi)\) - and \({\bf Diff}_{0}(M)\subset{\bf Diff}(M)\) in particular. But it is expected to exhibit _residual transformations_ under \(C^{\infty}(\Phi,{\rm Diff}_{r}(M))\subset C^{\infty}(\Phi,{\rm Diff}(M))\simeq{\bf Diff}_{v}(\Phi)\) - and \({\bf Diff}_{r}(M)\subset{\bf Diff}(M)\) in particular. The transformation of \(\mathbf{\alpha}\) under \(C^{\infty}(\Phi,{\rm Diff}_{r}(M))\) being known by assumption, that of \(\mathbf{\alpha}^{\mathbf{u}}\) boils down to determining the residual \(C^{\infty}(\Phi,{\rm Diff}_{r}(M))\)-transformation of the dressing field \(\mathbf{u}\), itself controlled by its equivariance: \(R^{\star}_{\psi}\mathbf{u}=?\) for \(\psi\in{\rm Diff}_{r}(M)\).
In that regard, there are two noteworthy cases that we can treat in a systematic way. Let us e.g. consider \(\mathbf{\alpha}\in\Omega^{\bullet}_{\rm tens}(\Phi,\rho)\) and \(\mathbf{\omega}\in\mathcal{C}\), whose dressing by \(\mathbf{u}\) as in (166) above are \(\mathbf{\alpha}^{\mathbf{u}}\) and \(\mathbf{\omega}^{\mathbf{u}}\), thus \(C^{\infty}(\Phi,{\rm Diff}_{0}(M))\)-invariant.
**Proposition 1**.: _If \(\mathbf{u}:\Phi\ \to\ \mathcal{D}r[M,M]\) is s.t._
\[R^{\mathbf{*}}_{\psi}\mathbf{u}=\psi^{-1}\circ\mathbf{u}\circ\psi,\quad\text{ for }\ \psi\in{\rm Diff}_{r}(M), \tag{167}\]
_then \(\mathbf{\alpha}^{\mathbf{u}}\in\Omega_{\rm tens}(\Phi,\rho)\) and \(\mathbf{\omega}^{\mathbf{u}}\in\mathcal{C}\). Therefore, their residual \(C^{\infty}(\Phi;{\rm Diff}_{r}(M))\)-transformations are:_
\[(\mathbf{\alpha}^{\mathbf{u}})^{\psi}=\rho(\mathbf{\psi}^{-1})\mathbf{\alpha}^{\mathbf{u}},\qquad \text{ and }\qquad(\mathbf{\omega}^{\mathbf{u}})^{\psi}=\mathbf{\psi}_{*}^{-1}\mathbf{\omega}^{\mathbf{u}} \circ\mathbf{\psi}+\mathbf{\psi}_{*}^{-1}\mathbf{d}\mathbf{\psi}. \tag{168}\]
Indeed, firstly: It is easy to see that if \(\mathbf{\alpha}\) is horizontal, so is \(\mathbf{\alpha}^{\mathbf{u}}=\rho(\mathbf{u})^{-1}\mathbf{\alpha}\), and \(R^{\star}_{\psi}\mathbf{\alpha}^{\mathbf{u}}=\rho(R^{\star}_{\psi}\mathbf{u})^{-1}R^{\star}_{\psi}\mathbf{\alpha}=\rho(\psi^{-1}\circ\mathbf{u}\circ\psi)^{-1}\rho(\psi)^{-1}\mathbf{\alpha}=\rho(\psi)^{-1}\mathbf{\alpha}^{\mathbf{u}}\). Secondly, on the one hand, given that the linear version of (167) is
\[\mathbf{L}_{X^{v}}\mathbf{u}=\iota_{X^{v}}\mathbf{d}\mathbf{u}=\frac{d}{d\tau}\,R^{\star}_{\psi_{\tau}}\,\mathbf{u}\big{|}_{\tau=0}=-X\circ\mathbf{u}+\mathbf{u}_{*}X,\quad\text{ for }\ X:=\frac{d}{d\tau}\psi_{\tau}\,\big{|}_{\tau=0}\in\mathfrak{diff}_{\rm r}(M), \tag{169}\]
one has \(\mathbf{\omega}^{\mathbf{u}}(X^{v})=\mathbf{u}^{-1}_{\ast}\mathbf{\omega}(X^{v})\circ\mathbf{u}+\mathbf{u}^{-1}_{\ast}\mathbf{d}\mathbf{u}(X^{v})=\mathbf{u}^{-1}_{\ast}X\circ\mathbf{u}+\mathbf{u}^{-1}_{\ast}(-X\circ\mathbf{u}+\mathbf{u}_{\ast}X)=X\). Then, on the other hand, given (167) and (68), it is easy to show that: \(R_{\psi}^{\star}\mathbf{\omega}^{\mathbf{u}}=\psi^{-1}_{\ast}\,\mathbf{\omega}^{\mathbf{u}}\circ\psi\), for \(\psi\in\mathrm{Diff}_{\mathrm{r}}(M)\). So indeed, \(\mathbf{\omega}^{\mathbf{u}}\) satisfies the defining properties (68) of a \(\mathrm{Diff}_{\mathrm{r}}(M)\)-principal connection. The transformations (168) thus follow as a special case of (57) and (71). From both follows in particular that \(\mathbf{\Omega}^{\mathbf{u}}\in\Omega^{2}_{\mathrm{tens}}(\Phi,\mathfrak{diff}_{\mathrm{r}}(M))\), so \((\mathbf{\Omega}^{\mathbf{u}})^{\mathbf{\psi}}=\mathbf{\psi}^{-1}_{\ast}\,\mathbf{\Omega}^{\mathbf{u}}\circ\mathbf{\psi}\).
The other interesting case is:
**Proposition 2**.: _If \(\mathbf{u}:\Phi\ \to\ \mathcal{D}r[N,M]\) is s.t._
\[R_{\phi}^{\star}\mathbf{u}=\psi^{-1}\circ\mathbf{u}\circ C(\ ;\psi),\quad\text{ for }\ \psi\in\mathrm{Diff}_{\mathrm{r}}(M), \tag{170}\]
_and where \(C:\Phi\times\mathrm{Diff}_{\mathrm{r}}(M)\to\mathrm{Diff}(N)\) is a special case of 1-cocycle as defined in (46), thus satisfying the relation \(C(\phi;\psi^{\prime}\circ\psi)=C(\phi;\psi^{\prime})\circ C(\phi^{\psi^{ \prime}};\psi)\)._
_Then \(\mathbf{\alpha}^{\mathbf{u}}\in\Omega_{\mathrm{tens}}(\Phi,C)\) and \(\mathbf{\omega}^{\mathbf{u}}\in\bar{\mathcal{C}}\). Therefore, their residual \(C^{\infty}(\Phi,\mathrm{Diff}_{\mathrm{r}}(M))\)-transformations are:_
\[(\mathbf{\alpha}^{\mathbf{u}})^{\mathbf{\psi}}=\rho(C(\mathbf{\psi}))^{-1}\mathbf{\alpha}^{\mathbf{u}},\qquad\text{ and }\qquad(\mathbf{\omega}^{\mathbf{u}})^{\mathbf{\psi}}=C(\mathbf{\psi})^{-1}_{\ast}\,\mathbf{\omega}^{\mathbf{u}}\circ C(\mathbf{\psi})+C(\mathbf{\psi})^{-1}_{\ast}\mathbf{d}C(\mathbf{\psi}). \tag{171}\]
As above we have, firstly, that if \(\mathbf{\alpha}\) is horizontal, so is \(\mathbf{\alpha}^{\mathbf{u}}=\rho(\mathbf{u})^{-1}\mathbf{\alpha}\), and \(R_{\psi}^{\star}\mathbf{\alpha}^{\mathbf{u}}=\rho(R_{\psi}^{\star}\mathbf{u})^{-1}R_{\psi}^{\star}\mathbf{\alpha}=\rho(\psi^{-1}\circ\mathbf{u}\circ C(\ ;\psi))^{-1}\rho(\psi)^{-1}\mathbf{\alpha}=\rho(C(\ ;\psi))^{-1}\mathbf{\alpha}^{\mathbf{u}}\). That is, modulo \(\rho\), \(\mathbf{\alpha}^{\mathbf{u}}\) is a twisted tensorial form. Remark in particular that if \(\rho\) is the pullback action, \(\rho(\psi)^{-1}=\psi^{\star}\), then
\[R_{\psi}^{\star}\mathbf{\alpha}^{\mathbf{u}}=C(\ ;\psi)^{\ast}\mathbf{\alpha}^{\mathbf{u}},\quad\text{ so }\quad(\mathbf{\alpha}^{\mathbf{u}})^{\mathbf{\psi}}=C(\mathbf{\psi})^{\ast}\mathbf{\alpha}^{\mathbf{u}}. \tag{172}\]
Secondly, on the one hand, given the linear version of (170) is
\[\mathbf{L}_{X^{v}}\mathbf{u}=\iota_{X^{v}}\mathbf{d}\mathbf{u}=\tfrac{d}{d\tau}R_{\psi_{\tau}}^{\star}\mathbf{u}\big{|}_{\tau=0}=-X\circ\mathbf{u}+\mathbf{u}_{\ast}\,\tfrac{d}{d\tau}C(\ ;\psi_{\tau})\big{|}_{\tau=0}=-X\circ\mathbf{u}+\mathbf{u}_{\ast}\,a(X;\ ),\quad\text{ for }\ X\in\mathfrak{diff}_{\rm r}(M), \tag{173}\]
one has \(\mathbf{\omega}^{\mathbf{u}}(X^{v})=\mathbf{u}^{-1}_{\ast}\mathbf{\omega}(X^{v})\circ\mathbf{u}+\mathbf{u}^{-1}_{\ast}\mathbf{d}\mathbf{u}(X^{v})=\mathbf{u}^{-1}_{\ast}X\circ\mathbf{u}+\mathbf{u}^{-1}_{\ast}(-X\circ\mathbf{u}+\mathbf{u}_{\ast}\,a(X;\ ))=a(X;\ )\in\mathfrak{diff}(N)\). Then, on the other hand, given (170) and (68), it is easy to show that: \(R_{\psi}^{\star}\mathbf{\omega}^{\mathbf{u}}=C(\ ;\psi)^{-1}_{\ast}\,\mathbf{\omega}^{\mathbf{u}}\circ C(\ ;\psi)+C(\ ;\psi)^{-1}_{\ast}\mathbf{d}C(\ ;\psi)\), for \(\psi\in\mathrm{Diff}_{\mathrm{r}}(M)\). So indeed, \(\mathbf{\omega}^{\mathbf{u}}\) satisfies the defining properties (88)-(89) of a \(\mathrm{Diff}_{\mathrm{r}}(M)\)-twisted connection. The transformations (171)-(172) thus follow as a special case of (57) and (94). From both follows in particular that \(\mathbf{\Omega}^{\mathbf{u}}\in\Omega^{2}_{\mathrm{tens}}(\Phi,C)\), so \((\mathbf{\Omega}^{\mathbf{u}})^{\mathbf{\psi}}=C(\mathbf{\psi})^{-1}_{\ast}\,\mathbf{\Omega}^{\mathbf{u}}\circ C(\mathbf{\psi})\).
We observe that Proposition 2 reproduces (up to left/right-pushforward/pullback convention) the construction of "relational observables" in section 3.4.4 of [43] - which we already noticed to be twisted tensorial 0-forms in section 2.4.2 : compare indeed eq.(3.182) and eq. (3.185) to (170), then eq.(3.183) and eq.(3.190) to (172). In their language, the field-dependent dressing field \(\mathbf{u}(\phi)\) is referred to as a "frame field" (noted \(R[\phi]\)).
We may also signal that exact analogues of Proposition 1 and 2 are used in the (finite dimensional) context of Cartan conformal geometry to build the twistor/tractor connection and twistor and tractor fields - in an application of the DFM for field theories with internal gauge groups [5; 6], see also [3].22
Footnote 22: Considering the conformal Cartan geometry \((P,A)\), with \(A\) one among all conformal Cartan connections \(\mathcal{A}\), one can build a \(\mathcal{A}\)-dependent dressing field \(\mathbf{u}:\mathcal{A}\to\mathcal{D}r(K,K)\), \(A\mapsto\mathbf{u}(A)\), for \(\mathcal{K}\) the (abelian) subgroup of special conformal transformations. The residual symmetry is \(\mathcal{SO}(1,3)\times W\), i.e. the Lorentz group and Weyl rescalings. The dressing satisfies Proposition 1 w.r.t. Lorentz, and Proposition 2 w.r.t. Weyl rescalings. Using it to dress the conformal Cartan connection \(A\) itself one gets \(A^{n}\), which is then a _mixed_ connection: Ehresmann for \(\mathcal{SO}(1,3)\) and twisted for \({}^{n}\mathcal{W}\). This mixed Cartan connection \(A^{n}\) is none other than the twistor/tractor connection 1-form. Conformal tractors and twistors are obtained by dressing sections of the associated \(\mathbb{R}\)- and \(\mathbb{C}\)-vector bundles \(E\) and \(\mathbb{E}\) respectively. See [3; 5; 6]
#### 3.2.2 Residual symmetries of the second kind
The second kind of residual symmetry may arise even when the original \(\mathrm{Diff}(M)\)-symmetry has been entirely neutralised via DFM. Indeed, while by definition of the dressing field \(\mathbf{u}:\Phi\to\mathcal{D}r[N,M]\) its equivariance \(R^{\star}_{\psi}\mathbf{u}=\psi^{-1}\circ\mathbf{u}\) means that \(\mathrm{Diff}(M)\) acts on the target space of the map \(\mathbf{u}(\phi):N\to M\), there may a priori be a natural right action of \(\mathrm{Diff}(N)\) on its source space:
\[\begin{split}\mathcal{D}r[N,M]\times\mathrm{Diff}(N)&\to\mathcal{D}r[N,M],\\ (\mathbf{u},\varphi)&\mapsto\mathbf{u}\circ\varphi=:\mathbf{u}^{\varphi}.\end{split} \tag{174}\]
Another way to see it, is to remark that two candidate dressing fields \(\mathbf{u}^{\prime}\) and \(\mathbf{u}\) can a priori be related by an element \(\varphi\in\mathrm{Diff}(N)\) acting on their common source space, the model manifold \(N\): \(\mathbf{u}^{\prime}=\mathbf{u}\circ\varphi\). This indeed gives rise to the right action above, \(\mathbf{u}^{\prime}=\mathbf{u}^{\varphi}\).
Naturally, \(\mathrm{Diff}(N)\) does not act on \(\Phi\), a fact we may note by: \(\phi^{\varphi}=\phi\). But given that \(\phi^{\mathbf{u}}=F_{\mathbf{u}}(\phi):=\mathbf{u}^{*}\phi\), it must be the case that \(\mathrm{Diff}(N)\) acts on the dressed variable via its action on the dressing field:
\[\phi^{\mathbf{u}}\mapsto\phi^{\mathbf{u}^{\varphi}}:=(\mathbf{u}^{\varphi})^{*}\phi=(\mathbf{u}\circ\varphi)^{*}\phi=\varphi^{*}(\mathbf{u}^{*}\phi)=\varphi^{*}\phi^{\mathbf{u}}. \tag{175}\]
Seen another way, \(\phi^{\mathbf{u}}\) is a tensor field on \(N\), therefore \(\mathrm{Diff}(N)\) acts on it naturally via pullback \(\phi^{\mathbf{u}}\mapsto(\phi^{\mathbf{u}})^{\varphi}:=\varphi^{*}\phi^{\mathbf{u}}\). Either way, this means that the space \(\Phi^{\mathbf{u}}\) of dressed fields is fibered by the right action of \(\mathrm{Diff}(N)\):
\[\begin{split}\Phi^{\mathbf{u}}\times\mathrm{Diff}(N)&\rightarrow\Phi^{\mathbf{u}}, \\ (\phi^{\mathbf{u}},\varphi)&\mapsto R_{\varphi}\phi^{\mathbf{u}}:=\varphi^{*}\phi^{\mathbf{u}},\end{split} \tag{176}\]
it is thus a principal bundle \(\Phi^{\mathbf{u}}\xrightarrow{\pi}\Phi^{\mathbf{u}}/\mathrm{Diff}(N)=:\mathcal{M}^{\mathbf{u}}\). From there, the bundle geometry of \(\Phi^{\mathbf{u}}\) parallels that of \(\Phi\) described in previous sections. Its associated SES of groups is
\[\mathbf{Diff}(N)\simeq\mathbf{Aut}_{v}(\Phi^{\mathbf{u}})\rightarrow\mathbf{Aut}(\Phi^{\mathbf{u}})\rightarrow\mathbf{Diff}(\mathcal{M}^{\mathbf{u}}), \tag{177}\]
where \(\mathbf{Diff}(N):=\{\mathbf{\varphi}:\Phi^{\mathbf{u}}\rightarrow\mathrm{Diff}(N)\,|\,R_{\varphi}^{\star}\mathbf{\varphi}=\varphi^{-1}\circ\mathbf{\varphi}\circ\varphi\}\) is the gauge group of \(\Phi^{\mathbf{u}}\). The latter is a subgroup of \(C^{\infty}(\Phi^{\mathbf{u}},\mathrm{Diff}(N))\simeq\mathbf{Diff}_{v}(\Phi^{\mathbf{u}})\), acting on \(\Gamma(T\Phi^{\mathbf{u}})\) and \(\Omega^{\star}(\Phi^{\mathbf{u}})\) in the usual way. In particular, we may immediately write that, for \(\mathbf{\varphi}\in C^{\infty}(\Phi^{\mathbf{u}},\mathrm{Diff}(N))\), a (dressed) form \(\mathbf{\alpha}^{\mathbf{u}}=\alpha(\wedge^{\star}\mathbf{d}\phi^{\mathbf{u}};\phi^{\mathbf{u}})\in\Omega^{\star}(\Phi^{\mathbf{u}})\) transforms as:
\[(\mathbf{\alpha}^{\mu})^{\varphi} =\alpha(\wedge^{\star}(\mathbf{d}\phi^{\mu})^{\varphi};(\phi^{\mu})^ {\varphi}), \tag{178}\] \[\text{with}\qquad(\mathbf{d}\phi^{\mu})^{\varphi} =\mathbf{\varphi}^{*}(\mathbf{d}\phi^{\mu}+\Sigma_{[\mathbf{d}\varphi\varphi ^{-1}]}\phi^{\mu}). \tag{179}\]
In exact replica of (43) and (66) on the original field space \(\Phi\). From above, or first principles, one can also write: \((\mathbf{\alpha}^{\mathbf{u}})^{\mathbf{\varphi}}=R_{\varphi}^{\star}\mathbf{\alpha}^{\mathbf{u}}+\iota_{[\mathbf{d}\mathbf{\varphi}\circ\mathbf{\varphi}^{-1}]}\mathbf{\alpha}^{\mathbf{u}}\).
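In the toy clock illustration of section 3.1 (with the caveats stated there), this residual \(\mathrm{Diff}(N)\) action has a transparent reading: a second admissible dressing field \(\mathbf{u}^{\varphi}=\mathbf{u}\circ\varphi=\chi^{-1}\circ\varphi\) amounts to re-parametrising the clock readings by \(\varphi\in\mathrm{Diff}(N)\), and correspondingly
\[f^{\mathbf{u}^{\varphi}}=(\chi^{-1}\circ\varphi)^{*}f=f\circ\chi^{-1}\circ\varphi=\varphi^{*}f^{\mathbf{u}},\]
in line with (175): the relational description is unique only up to a choice of parametrisation of the reference (clock) values.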
**Affine changes of flat connection on field space.** As observed in section 3.1.1, a dressing field \(\mathbf{u}\) supplies a flat connection on \(\Phi\), \(\mathbf{\omega}_{\circ}:=-\mathbf{d}\mathbf{u}\circ\mathbf{u}^{-1}\in\mathcal{C}\). A second dressing field related to the first by the action of \(C^{\infty}(\Phi^{\mathbf{u}},\mathrm{Diff}(N))\simeq\mathbf{Diff}_{v}(\Phi^{\mathbf{u}})\) (or \(\mathbf{Diff}(N)\simeq\mathbf{Aut}_{v}(\Phi^{\mathbf{u}})\) as a special case), \(\mathbf{u}^{\prime}=\mathbf{u}^{\varphi}\), induces another flat connection related to the first as:
\[\mathbf{\omega}_{\circ}^{\prime}:=-\mathbf{d}\mathbf{u}^{\prime}\circ\mathbf{u}^{\prime-1}=- \mathbf{d}(\mathbf{u}\circ\mathbf{\varphi})\circ\mathbf{\varphi}^{-1}\circ\mathbf{u}^{-1}=-\mathbf{d} \mathbf{u}\circ\mathbf{u}^{-1}-\mathbf{u}_{*}(\mathbf{d}\mathbf{\varphi}\circ\mathbf{\varphi}^{-1}) \circ\mathbf{u}^{-1}=:\mathbf{\omega}_{\circ}+\mathbf{\beta}_{\circ}. \tag{180}\]
One must first observe that \(\mathbf{\beta}_{\circ}:=-\mathbf{u}_{*}(\mathbf{d}\mathbf{\varphi}\circ\mathbf{\varphi}^{-1})\circ\mathbf{u}^{-1}\in\Omega^{1}_{\mathrm{tens}}(\Phi,\mathfrak{diff}(M))\). Indeed, \(\mathbf{\varphi}\) being seen as \(\phi\)-dependent, \(\mathbf{d}\mathbf{\varphi}\circ\mathbf{\varphi}^{-1}\in\Gamma(TN)\) is a vector field near the identity in \(\mathrm{Diff}(N)\). So \(\mathbf{u}_{*}(\mathbf{d}\mathbf{\varphi}\circ\mathbf{\varphi}^{-1})\circ\mathbf{u}^{-1}\in\Gamma(TM)\) is the \(\mathbf{u}^{-1}\)-related vector field on \(M\) (remembering (7)) near the identity in \(\mathrm{Diff}(M)\). So \(\mathbf{\beta}_{\circ}\in\Omega^{1}(\Phi,\mathfrak{diff}(M))\).
Now, since by definition the \(\phi\)-dependence of \(\mathbf{\varphi}\) is via \(\phi^{\mathbf{u}}\), one has that \(R_{\psi}^{\star}\mathbf{\varphi}=\mathbf{\varphi}\) for \(\psi\in\mathrm{Diff}(M)\) - which also secures the fact that \(\mathbf{u}^{\prime}=\mathbf{u}^{\varphi}=\mathbf{u}\circ\mathbf{\varphi}\) remains a well-defined dressing field. Which is, at the linear level, \(\mathbf{L}_{X^{v}}\mathbf{\varphi}=\iota_{X^{v}}\mathbf{d}\mathbf{\varphi}=0\), for \(X^{v}\in\Gamma(V\Phi)\) and \(X\in\mathfrak{diff}(M)\). This implies that \(\mathbf{\beta}_{\circ}(X^{v})=0\), i.e. \(\mathbf{\beta}_{\circ}\in\Omega^{1}_{\mathrm{hor}}(\Phi,\mathfrak{diff}(M))\). This also allows one to work out that the equivariance is
\[R_{\psi}^{\star}\mathbf{\beta}_{\circ}=-R_{\psi}^{\star}(\mathbf{u}_{*}(\mathbf{d}\mathbf{ \varphi}\circ\mathbf{\varphi}^{-1})\circ\mathbf{u}^{-1})=-(\psi^{-1}\circ\mathbf{u})_{*}( \mathbf{d}\mathbf{\varphi}\circ\mathbf{\varphi}^{-1})\circ(\psi^{-1}\circ\mathbf{u})^{-1}=\psi_{ *}^{-1}\mathbf{\beta}_{\circ}\circ\psi, \tag{181}\]
as one would expect. So indeed \(\mathbf{\beta}_{\circ}\in\Omega^{1}_{\mathrm{ten}}(\Phi,\mathfrak{diff}(M))\). Thus, as per the discussion of Ehresmann connections in section 2.4.1: \(\mathbf{\omega}_{\circ}^{\prime}=\mathbf{\omega}_{\circ}+\mathbf{\beta}_{\circ}\in\mathcal{C}\). Said otherwise, the action of \(C^{\infty}(\Phi^{\mu},\mathrm{Diff}(N))\simeq\mathbf{Diff}_{\nu}(\Phi^{\mu})\) - or \(\mathbf{Diff}(N)\simeq\mathbf{Aut}_{\nu}(\Phi^{\mu})\) as a special case - on the space of dressing fields induces affine changes of flat connections (180).
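As a quick consistency check, using only the properties just established - the horizontality \(\mathbf{\beta}_{\circ}(X^{v})=0\) and the fact that \(\mathbf{\omega}_{\circ}\) is a connection - one verifies that \(\mathbf{\omega}_{\circ}^{\prime}\) still satisfies the defining verticality condition of an Ehresmann connection:
\[\mathbf{\omega}_{\circ}^{\prime}(X^{v})=\mathbf{\omega}_{\circ}(X^{v})+\mathbf{\beta}_{\circ}(X^{v})=X+0=X,\qquad\text{for }X\in\mathfrak{diff}(M),\]
while its equivariance follows from (181) together with that of \(\mathbf{\omega}_{\circ}\).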
One may then notice that, using (8), (179) can be rewritten as a relation between (basic) forms on \(\Phi\):
\[\begin{split}(\mathbf{d}\phi^{\mu})^{\varphi} &=\mathbf{\varphi}^{*}\big(\mathbf{d}\phi^{\mu}-\mathbf{u}^{*}\Sigma_{-\mathbf{u}_{*}[\mathbf{d}\mathbf{\varphi}\circ\mathbf{\varphi}^{-1}]\circ\mathbf{u}^{-1}}\phi\big),\\ &=\mathbf{\varphi}^{*}\big(\mathbf{d}\phi^{\mu}-\mathbf{u}^{*}\Sigma_{\mathbf{\beta}_{\circ}}\phi\big). \end{split} \tag{182}\]
Therefore, for \(\mathbf{\alpha}=\alpha(\mathbf{d}\phi;\phi)\,\in\Omega^{1}_{\mathrm{inv}}(\Phi)\), one has that
\[(\mathbf{\alpha}^{\mu})^{\varphi} =\mathbf{\alpha}^{\mu}-\alpha(\Sigma_{\mathbf{\beta}_{\circ}}\phi;\phi),\] \[=\mathbf{\alpha}^{\mu}-\iota_{\mathbf{\beta}_{\circ}}\,\alpha,\quad\in \Omega^{1}_{\mathrm{basic}}(\Phi). \tag{183}\]
These might be compared to (83)-(84) which reflects the ambiguity in the construction of basic forms via the horizontalization procedure performed using Ehresmann connections, discussed in section 2.4.1.
This discussion reinforces the observation made in section 3.1.1 above: on \(\mathbf{\alpha}\in\Omega^{1}_{\text{inv}}(\Phi)\), using the DFM and using Ehresmann connections to build a basic counterpart \(\mathbf{\alpha}^{b}\in\Omega^{1}_{\text{basic}}(\Phi)\) indeed give formally very analogous results, down to the "ambiguity" arising in both approaches. A key difference is that, when using Ehresmann connections, the ambiguity is an affine shift in \(\mathcal{C}\) that cannot in general be ascribed to the action of a group, while it is so (a priori) when using the DFM.
It must be observed that \((\phi^{\mathbf{u}})^{\varphi}\) is \(\text{Diff}(M)\)-invariant for all \(\varphi\in\text{Diff}(N)\), so any representative in the \(\text{Diff}(N)\)-orbit \(\mathcal{O}_{\text{Diff}(N)}[\phi^{\mathbf{u}}]\) of \(\phi^{\mathbf{u}}\) is as good a coordinatisation of \([\phi]\in\mathcal{M}\) as any other. In other words we have, a priori, \(\mathcal{O}_{\text{Diff}(N)}[\phi^{\mathbf{u}}]\simeq\mathcal{O}_{\text{Diff}(M)}[\phi]\), that is \(\mathcal{M}^{\mathbf{u}}\simeq\mathcal{M}\), which in turn implies that the "physical symmetries" are to be found in \(\text{\bf Diff}(\mathcal{M}^{\mathbf{u}})\simeq\text{\bf Diff}(\mathcal{M})\). Given the SES (177), it is clear that \(\text{\bf Diff}(N)\simeq\text{Aut}_{v}(\Phi^{\mathbf{u}})\) is isomorphic to the original gauge group \(\text{\bf Diff}(M)\simeq\text{Aut}_{v}(\Phi)\) - and by extension \(\text{\bf Diff}_{v}(\Phi^{\mathbf{u}})\simeq\text{\bf Diff}_{v}(\Phi)\). The new symmetry \(\text{\bf Diff}_{v}(\Phi^{\mathbf{u}})\supset\text{Aut}_{v}(\Phi^{\mathbf{u}})\) a priori does not enjoy a more direct physical interpretation than the original it replaces. This suggests that the terminology "physical symmetry" often encountered in the literature to describe transformations like (174), (176) and (179) is essentially misleading - these are also often called "surface symmetries" or "corner symmetries" in the literature on the covariant phase space approach to field theory, which will be discussed in sections 4 and 5.1.
One may wonder why bother to apply the DFM if one ends up with an equivalent gauge symmetry. The point is that a dressing field may simply be available in a theory as a matter of fact, and its existence has implications for the formal properties and interpretation of the theory that one may as well be aware of. Such would be the case e.g. if part of the symmetry is reducible, as discussed in e.g. 3.2.1, and therefore _artificial_: The theory is then trimmed from its superfluous formal structure, down to its most economical ("Ockamized") and essential version.
Furthermore, it may be that the constructive process by which one obtains a dressing field \(\mathbf{u}(\phi)\) from existing fields of the theory is such that the ambiguity reflected in the a priori relation (174) is minimal, so that in effect the \(\varphi\)'s form a subgroup of \(\text{Diff}(N)\), perhaps even a discrete one (the best possible situation would be that this subgroup is the trivial group). Clearly, this possibility is forfeited if a dressing field is introduced by hand in a theory - as is often the case in the edge mode literature. We further develop the discussion of these points in section 3.4.
### Dressed regions and integrals
As observed in section 2.5.2, for \(U\in\mathbf{U}(M)\) and \(\mathbf{\alpha}\in\Omega^{\bullet}(\Phi,\Omega^{\text{top}}(U))\), integrals \(\mathbf{\alpha}_{U}=\langle\alpha,U\rangle=\int_{U}\mathbf{\alpha}\) are objects on \(\Phi\times\mathbf{U}(M)\) (with values in \(\Omega^{\bullet}(\Phi)\)). Integrals that are invariant under the action of \(C^{\infty}(\Phi,\text{Diff}(M))\simeq\text{\bf Diff}_{v}(\Phi)\) as defined by (133) (i.e. integrals of tensorial integrand) are well-defined on the associated bundle of regions \(\tilde{U}(M):=\Phi\times\mathbf{U}(M)/\sim\), quotient of the product space by the action of \(\text{Diff}(M)\) (104): They have well-defined projection along \(\bar{\pi}:\Phi\times\mathbf{U}(M)\to\tilde{\mathbf{U}}(M)\), \((\phi,U)\mapsto[\phi,U]=[\psi^{\star}\phi,\psi^{-1}(U)]\), and can be said "basic" w.r.t. \(\bar{\pi}\).
In section 3.1, we defined dressed objects as being in \(\text{Im}\,\pi^{\star}\), i.e. basic on \(\Phi\), with the projection realised via a dressing field, \(F_{\mathbf{u}}\sim\pi\). And relying on the formal similarity of the actions of \(F_{\mathbf{u}}\) and \(\Xi\in\text{\bf Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\text{Diff}(M))\), the rule of thumb to obtain the dressing \(\mathbf{\alpha}^{\mathbf{u}}\) of a form \(\mathbf{\alpha}\) is to replace the field-dependent parameter \(\mathbf{\psi}\) in \(\mathbf{\alpha}^{\mathbf{\psi}}:=\Xi^{\bullet}\mathbf{\alpha}\) by the dressing field \(\mathbf{u}\).
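For instance, applying this rule of thumb to the basis 1-form \(\mathbf{d}\phi\) - using the explicit form of \(\mathbf{d}\phi^{\mathbf{\psi}}\) recalled in (230) below, with \(\mathbf{\psi}\) replaced by \(\mathbf{u}\) - gives, as a sketch:
\[\mathbf{d}\phi^{\mathbf{u}}=\mathbf{u}^{*}\big(\mathbf{d}\phi+\mathfrak{L}_{\mathbf{d}\mathbf{u}\circ\mathbf{u}^{-1}}\phi\big),\]
from which the dressing of any \(\mathbf{\alpha}=\alpha(\wedge^{\bullet}\mathbf{d}\phi;\phi)\) follows by substitution, \(\mathbf{\alpha}^{\mathbf{u}}=\alpha(\wedge^{\bullet}\mathbf{d}\phi^{\mathbf{u}};\phi^{\mathbf{u}})\).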
In the same way, we define dressed integrals as being basic on \(\Phi\times\mathbf{U}(M)\), i.e. in \(\text{Im}\,\bar{\pi}^{\star}\), with the projection realised as:
\[\begin{split}\bar{F}_{\mathbf{u}}:\Phi\times\mathbf{U}(M)& \to\tilde{\mathbf{U}}(M)\simeq\Phi^{\mathbf{u}}\times\mathbf{U}(N),\\ (\phi,U)&\mapsto\bar{F}_{\mathbf{u}}(\phi,U):=(F_{\mathbf{u}}(\phi),\mathbf{u}^{-1}(U))=(\phi^{\mathbf{u}},\mathbf{u}^{-1}(U)).\end{split} \tag{184}\]
We highlight the fact that the region \(U^{\mathbf{u}}:=\mathbf{u}^{-1}(U)\in\mathbf{U}(N)\) is a map \(U^{\mathbf{u}}:\Phi\times\mathbf{U}(M)\to\mathbf{U}(N)\) s.t. for \(\psi\in\text{Diff}(M)\):
\[(U^{\mathbf{u}})^{\psi}:=\bar{R}_{\psi}^{\star}\,U^{\mathbf{u}}=(R_{\psi}^{\star}\mathbf{u})^{-1}\circ\psi^{-1}(U)=(\psi^{-1}\circ\mathbf{u})^{-1}\circ\psi^{-1}(U)=\mathbf{u}^{-1}\circ\psi\circ\psi^{-1}(U)=\mathbf{u}^{-1}(U)=U^{\mathbf{u}}. \tag{185}\]
The same then holds for \(\mathbf{\psi}\in C^{\infty}(\Phi,\text{Diff}(M))\simeq\text{\bf Diff}_{v}(\Phi)\): \((U^{\mathbf{u}})^{\mathbf{\psi}}:=\bar{\Xi}^{\star}U^{\mathbf{u}}=U^{\mathbf{u}}\). Therefore, \(U^{\mathbf{u}}\) is a \(\phi\)-dependent \(\text{Diff}(M)\)-invariant region of _true spacetime_. It is indeed a key insight of GR that, by the conjunction of the hole argument and point-coincidence argument [31, 32, 33], spacetime is defined in a _relational_ and \(\text{Diff}(M)\)-invariant way by its physical field content. This fact is tacitly encoded by the \(\text{Diff}(M)\)-covariance of general relativistic theories; it is made manifest by the DFM: the \(U^{\mathbf{u}}\) are manifestly \(\text{Diff}(M)\)-invariant, \(\phi\)_-relationally_ defined regions of the physical spacetime - on which the relationally defined \(\text{Diff}(M)\)-invariant fields \(\phi^{\mathbf{u}}:=\mathbf{u}^{\star}\phi\) live, and are to be integrated over.
Let us further observe that, as a consequence, the (physical/relational) boundary \(\partial U^{\mathbf{u}}\) of a physical spacetime region is of necessity \(\mathrm{Diff}(M)\)-invariant. Thus, the often encountered claim that "boundaries in general relativistic physics break \(\mathrm{Diff}(M)\)-invariance" - and that the "edge modes" d.o.f. sometimes introduced as a consequence are the Goldstone bosons associated to that symmetry breaking - is deeply misguided. It is logically equivalent to the hole argument that temporarily confused Einstein about \(\mathrm{Diff}(M)\), and overlooks the resolution stemming from the key insight of his point coincidence argument: relationality.
So, on the space \(\Omega^{\bullet}(\Phi,\Omega^{\mathrm{top}}(U))\times\mathbf{U}(M)\) we define:
\[\begin{split}\tilde{F}_{\mathbf{u}}:\Omega^{\bullet}(\Phi,\Omega^{ \mathrm{top}}(U))\times\mathbf{U}(M)&\to\Omega^{\bullet}_{\mathrm{ basic}}(\Phi,\Omega^{\mathrm{top}}(N))\times\mathbf{U}(N)\\ (\mathbf{\alpha},U)&\mapsto\tilde{F}^{\bullet}_{\mathbf{u} }(\mathbf{\alpha},U):=(\mathbf{\alpha}^{\mathbf{u}},\mathbf{u}^{-1}(U)).\end{split} \tag{186}\]
With indeed \(\Omega^{\bullet}_{\mathrm{basic}}(\Phi,\Omega^{\mathrm{top}}(N))\simeq\Omega ^{\bullet}(\Phi^{\mathbf{u}},\Omega^{\mathrm{top}}(N))\), as dressed fields \(\phi^{\mathbf{u}}\) live on \(N\). Then, in formal analogy with (133), the dressing of an integral \(\mathbf{\alpha}_{U}:=\langle\mathbf{\alpha},U\rangle=\int_{U}\mathbf{\alpha}\) is:
\[(\mathbf{\alpha}_{U})^{\mathbf{u}}=\langle\ ,\ \rangle\circ\tilde{F}_{\mathbf{u}}(\mathbf{\alpha},U)=\langle\mathbf{\alpha}^{\mathbf{u}},\mathbf{u}^{-1}(U)\rangle=\int_{\mathbf{u}^{-1}(U)}\mathbf{\alpha}^{\mathbf{u}}=\int_{U^{\mathbf{u}}}\mathbf{\alpha}^{\mathbf{u}}. \tag{187}\]
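One may spell out the expected \(\text{Diff}(M)\)-invariance of this dressed integral, using (185) and the basicity of \(\mathbf{\alpha}^{\mathbf{u}}\):
\[\big((\mathbf{\alpha}_{U})^{\mathbf{u}}\big)^{\psi}=\int_{(U^{\mathbf{u}})^{\psi}}(\mathbf{\alpha}^{\mathbf{u}})^{\psi}=\int_{U^{\mathbf{u}}}\mathbf{\alpha}^{\mathbf{u}}=(\mathbf{\alpha}_{U})^{\mathbf{u}},\qquad\text{for }\psi\in\text{Diff}(M),\]
since \((\mathbf{\alpha}^{\mathbf{u}})^{\psi}=R_{\psi}^{\star}\mathbf{\alpha}^{\mathbf{u}}=\mathbf{\alpha}^{\mathbf{u}}\) by basicity and \((U^{\mathbf{u}})^{\psi}=U^{\mathbf{u}}\) by (185).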
### Discussion
Let us concisely sum up the main technical idea of this section, before offering some more conceptual comments. The DFM is a systematic and formal way to obtain objects, built from fields \(\phi\) and their variations \(\mathbf{d}\phi\), that are (partially or fully) invariant under diffeomorphisms of \(M\): _basic_ forms on field space \(\Phi\), in the bundle theoretic terminology. It is a _conditional_ proposition: _If_ one can find/build a dressing field, _then_ it gives the algorithm to produce these invariants (and analyse possible residual symmetries).
We must observe that the existence of global dressing fields is not guaranteed, global being meant in two ways. Firstly, it is clear that \(\phi\)-independent dressing fields of the type \(\mathcal{D}r[\mathbb{R}^{n},M]\), i.e. \(u:\mathbb{R}^{n}\to M\), are nothing but _global_ coordinate charts, which may not exist depending on the topology of \(M\). Topological obstructions remain relevant in the \(\phi\)-dependent case \(\mathbf{u}:\mathbb{R}^{n}\to M\), as they may imply that no field \(\phi\) is globally defined, or non-vanishing, everywhere on \(M\) and fit to serve as a global coordinatisation.23
Footnote 23: One may e.g. think of the Poincaré–Hopf theorem (or its special case, the Hairy Ball theorem) stating that there is no nowhere-vanishing vector field on a compact manifold with non-vanishing Euler characteristic. Admittedly, this particular case may be of limited (fundamental) physical importance, as spacetime is modelled by Lorentzian 4-dimensional manifolds requiring precisely the existence of such a nowhere-vanishing vector field – hence spacetime is either non-compact or has vanishing Euler characteristic. See [73] p.149.
Secondly, as noted in section 3.1.1, dressing fields globally defined _on field space_\(\Phi\) provide flat Ehresmann connections, which in turn imply the global triviality of \(\Phi\) as a bundle. A priori, if \(\Phi\) is globally non-trivial, dressing fields would only be available locally, over open subsets \(\mathcal{U}\subset\mathcal{M}\) (being compatible with the local triviality of \(\Phi\)).
Remark that the formalism extends familiar notions of standard differential geometry. Indeed, as just noted, \(\phi\)-independent dressings \(\mathbf{u}\to u\in\mathcal{D}r[\mathbb{R}^{n},U\subset M]\) are just coordinate charts of \(M\) (thus non-variational objects, \(\mathbf{d}u=0\)). In that case, dressed variables \(\phi^{u}\) are the usual coordinate representations of the fields \(\phi\), while the formula defining dressed integrals (187) reproduces the _definition_ of integration on \(M\) - the general case of (187) may thus be understood as defining _integration on spacetime_. Furthermore, \(\mathrm{Diff}(N)=\mathrm{Diff}(\mathbb{R}^{n})\) is then just the group of transition functions of \(M\) (i.e. coordinate changes), so that relations (176), (179), (178) and (188) reduce to standard coordinate change formulas for fields, field-dependent volume forms, and integrals.
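To make this explicit in the simplest case - a sketch, with \(u\) a \(\phi\)-independent chart over \(U\subset M\) - the dressed fields and dressed integrals read
\[\phi^{u}=u^{*}\phi,\qquad(\mathbf{\alpha}_{U})^{u}=\int_{u^{-1}(U)\subset\mathbb{R}^{n}}\alpha\big(\wedge^{\bullet}\mathbf{d}\phi^{u};\phi^{u}\big),\]
which is nothing but the familiar coordinate expression of the integral of \(\mathbf{\alpha}\) over \(U\).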
Explicit in the DFM, is the proposition that the dressing field is to be found among, or built from, the existing set of fields \(\phi\) of a theory: it is key to the natural relational interpretation of the formalism. This is the situation of most fundamental physical significance for the analysis of a \(\mathrm{Diff}(M)\)-theory.
It is not so for dressing fields introduced by _fiat_ as d.o.f. independent from the original field space \(\Phi\), i.e. admitting \(\mathbf{du}\neq 0\): It amounts to extending the original field space to \(\Phi^{\prime}=\Phi+u\), and thus to considering an altogether different theoretical framework where both the kinematics and the _character_ of the symmetry are altered. Two common situations are covered by this case of the DFM.
The first is when starting from a \(\mathrm{Diff}(M)\)-theory on \(\Phi\), and one tries to reduce the symmetry by introducing such an _ad hoc_ dressing field. When \(\mathrm{Diff}(M)\) is seemingly broken by a background structure, like a non-relational boundary, the move is often understood as "restoring \(\mathrm{Diff}(M)\)-invariance". This is typically the premise of the literature on "edge modes", see e.g. [10; 12; 35; 46]. An immediate drawback of such a move was pointed out earlier, in section 3.2.2: The original \(\mathrm{Diff}(M)\) symmetry is replaced by an isomorphic, and a priori no more physical, \(\mathrm{Diff}(N)\) symmetry - think again of the case \(N=\mathbb{R}^{n}\) and \(u\in\mathcal{D}\!r\mathbb{R}^{n},U\subset M\)]. Clearly, there is little chance for \(\mathrm{Diff}(N)\) to be a symmetry giving relevant _new_ information about the theory under consideration (not already given by \(\mathrm{Diff}(M)\)). And the _ad hoc_ dressing field being introduced non-constructively, there is no compelling argument for ever having \(\mathrm{Diff}(N)\) reduced to a subgroup (small or discrete) either. Therefore, all "issues" one had with \(\mathrm{Diff}(M)\) one has now with \(\mathrm{Diff}(N)\), defeating the purpose of introducing a dressing field in the first place.
The second situation is when starting from a theory devoid of \(\mathrm{Diff}(M)\) symmetry, and one tries to enforce it via the introduction of an _ad hoc_ dressing field. In that context, _ad hoc_ dressing fields are known as (gravitational) Stueckelberg fields, and the move is again seen as "restoring \(\mathrm{Diff}(M)\)-invariance". This is typically the case of the literature on massive gravity, bi-gravity, and sometimes string theory [74; 75; 76]. Such an artificially enforced \(\mathrm{Diff}(M)\) symmetry has no physical content, and is there mainly to hide background non-dynamical structures - often some reference metric (preserved by a subgroup of \(\mathrm{Diff}(M)\), which is the real symmetry of the theory). For that reason these \(\mathrm{Diff}(M)\) symmetries introduced via _ad hoc_ dressing fields, i.e. Stueckelberg fields, are called "_artificial_" in
the philosophy of physics literature [77, 78, 79, 80].24 The class of theories displaying such artificial Diff\((M)\) symmetries is thus "non-general-relativistic" in a deep sense: They are only superficially and formally akin to GR, but betray its key physical insight, i.e. the relationality and absence of background structures/fields that a substantive Diff\((M)\) symmetry encodes.
Footnote 24: Indeed, enforcing a symmetry via a Stueckelberg trick is an instance of “Kretschmannisation”: rewriting a theory so as to display a strictly formal symmetry without physical signature or content. The name stems from the famous “Kretschmann objection” against general covariance as a fundamental feature of GR, according to which any theory can be made Diff\((M)\)-invariant, or generally covariant. Analysis of the objection resulted in the realisation that one must distinguish between _substantive_ Diff\((M)\)/general covariance as a distinctive native feature of a theory, with physical content/signature, and _artificial_ Diff\((M)\)/general covariance which is implemented by hand (often via the introduction of extra structures/fields, e.g. a la Stueckelberg) and thus without physical content. A comparable crucial distinction exists for (internal) gauge symmetries, in response to a “generalised Kretschmann objection” to the gauge principle [4; 7; 79]: There, a key physical signature of substantive gauge symmetries is the trade-off between gauge invariance and locality of a theory, absent for artificial gauge symmetries.
Field-dependent dressing fields \(\mathbf{u}=\mathbf{u}(\phi)\), whose d.o.f. are extracted from the original field space \(\Phi\), are much more interesting. They allow in principle a formal implementation of a _relational description_ of the physics of general relativistic field theory. As already noted early in section 3.1, the invariant dressed fields \(\phi^{\mathbf{u}(\phi)}\) defined by (145) are readily understood as a relational description of the physical d.o.f., in that the d.o.f. of the fields \(\phi\) are _coordinatised relative to each other_. As a result, only Diff\((M)\)-invariant, physical relational d.o.f. are manifest.25 It has long been advocated that the (completely) relational character of general relativistic physics is indeed one of its key innovating features, tacitly encoded by the Diff\((M)\) covariance - as understood by the conjunction of the hole and point-coincidence arguments. See e.g. [31, 32, 33, 47, 71, 81, 82]. The DFM can be understood as a way to reformulate a general relativistic field theory in a Diff\((M)\)-invariant and manifestly relational way. Accordingly, one may suggest to rebrand General Relativity as "_General Relationality_". We shall here tacitly adopt this nomenclature shift whenever writing "GR".
Footnote 25: Said otherwise, by dressing \(\phi\mapsto\phi^{\mathbf{u}(\phi)}\), one reshuffles the d.o.f. in presence so as to exhibit only the relational and physical ones.
Relatedly, as observed in section 3.2.2, only by _constructively_ producing a \(\phi\)-dependent dressing field is there any chance for the "new" gauge symmetry Diff\((N)\) to be reduced to a small, discrete, or trivial subgroup. In concrete situations, it may be reduced to a discrete choice: that of reference coordinatizing field among the collection \(\phi\).
Finally, we draw attention to the fact that the DFM as developed here is the common geometric framework underpinning a number of notions appearing in recent years in the literature on gravity, or Diff\((M)\)-theories more generally: the already mentioned massive gravity and bi-gravity theories [74], but also the notion of "gravitational dressings" as proposed in [36, 37, 38, 39, 40], or yet that of "dynamical reference frames" as proposed in [42, 43]. Notably, dressing fields underlie the notion of "_gravitational edge modes_" as introduced by [10] in the covariant phase space approach to the presymplectic structure of gravity over bounded regions, which spurred some amount of activity since its inception - e.g. [11, 12, 13, 34] or [15, 16, 17], as well as the derived and closely related notion of "embedding fields" [35, 44, 45, 46, 83] - see also [41]. The following sections 4 and 5 will substantiate this claim. In addition to the technical streamlining stemming from the DFM, the insights it offers and the conceptual clarifications just discussed should be kept in mind when looking at the cited literature through the DFM lens.
## 4 Covariant phase space methods and bundle geometry
The covariant phase space approach was introduced to study the (pre)symplectic structure of gauge field theory. As (re-)introduced by [84, 85, 86], the aim was to associate a symplectic structure to a gauge field theory given by a Lagrangian \(L\) over some region \(U\subset M\), and to do so while keeping spacetime symmetries manifest (with in mind a possible covariant canonical or geometric quantization). Nowadays, it is mainly used to derive charges as physically interesting quantities, realising the symmetries of the theory through their Poisson algebra: it finds notable applications in the study of asymptotic symmetries and gravitational wave physics, as well as in the study of bounded subsystems and their symmetries (like black holes).26 Classical references are [89, 90], and modern introductions are [91, 92] (see also [93] for a compact summary).
Footnote 26: In this case, Noether charges capture e.g. the Bekenstein-Hawking entropy [87], or the Komar mass (see [88] def. 4.6, eq. (4.8), p.460).
Its roots go further back though. The recent review [94] gives historical context and shows the relation of covariant phase space methods to other approaches such as the multisymplectic formalism - about which we recommend [95] - and the variational bicomplex [96].
In what follows we propose our own account of the topic, articulating the covariant phase space approach for \(\mathrm{Diff}(M)\)-theories with the bundle geometry of field space \(\Phi\) as described in section 2. Our main goal in so doing is both to streamline the derivation of key results - such as charges for both field-independent and field-dependent gauge parameters, and their Poisson bracket - and to provide a clear understanding of new ones, such as the vertical/gauge transformations of the symplectic potential and 2-form, which are necessary to immediately apply the DFM and thus give the general form of the _basic presymplectic structure_ of a theory (encompassing existing constructions, e.g. "edge mode extended phase space").
### Covariant phase space for \(\mathrm{Diff}(M)\)-theories
Consider a theory given by the Lagrangian \(n\)-form \(L^{\prime}=\mathcal{L}^{\prime}\,dx^{n}\in\Omega^{0}(\Phi)\), \(n=\mathrm{dim}M\), s.t.:
\[L^{\prime}=L+d\ell:\Phi \rightarrow\Omega^{n}(M),\] \[\phi \mapsto L^{\prime}(\phi). \tag{192}\]
The two Lagrangians \(L^{\prime}\) and \(L\) belong to the same (\(n^{\mathrm{th}}\)) DeRham cohomology group of \(M\). The exact term \(\ell\) is called a boundary Lagrangian, as over a region \(U\in\mathbf{U}(M)\) with boundary \(\partial U\), the associated action functional \(S^{\prime}=\langle L^{\prime},U\rangle=\int_{U}L^{\prime}\) is: \(S^{\prime}=S+\langle\ell,\partial U\rangle=S+\int_{\partial U}\ell\). The variational principle, finding extrema of the action \(\mathbf{d}S^{\prime}=\langle\mathbf{d}L^{\prime},U\rangle\equiv 0\), requires considering the 1-form \(\mathbf{d}L^{\prime}\in\Omega^{1}(\Phi)\), which is generically s.t.:
\[\mathbf{d}L^{\prime}=\mathbf{d}L+d\,\mathbf{d}\ell=\mathbf{E}+d(\mathbf{\theta}+\mathbf{d}\ell)=\mathbf{E}+d\mathbf{\theta}^{\prime}, \tag{193}\]
where \(\mathbf{E}=E(\mathbf{d}\phi;\phi)\) is the field equations 1-form, and \(\mathbf{\theta}^{(\prime)}=\theta^{(\prime)}(\mathbf{d}\phi;\phi)\) is the presymplectic potential current. We see that both \(L^{\prime}\) and \(L\) have the same field equations but distinct presymplectic potential currents, \(\mathbf{\theta}^{\prime}=\mathbf{\theta}+\mathbf{d}\ell\) and \(\mathbf{\theta}\): this is what a boundary term/Lagrangian does; it contributes to the presymplectic structure by shifting the potential. The presymplectic 2-form current of the theory is defined by \(\mathbf{\Theta}:=\mathbf{d}\mathbf{\theta}^{\prime}=\mathbf{d}\mathbf{\theta}\in\Omega^{2}(\Phi)\) (since \(\mathbf{d}^{2}=0\)), i.e. \(\Theta(\mathbf{d}\phi\wedge\mathbf{d}\phi;\phi)\). It is the same for both \(L^{\prime}\) and \(L\), and is thus cohomologically well defined. Given a codimension 1 submanifold \(\Sigma\) of \(U\subset M\), the presymplectic potential and presymplectic 2-form are objects on \(\Omega^{\star}(\Phi)\times\mathbf{U}(M)\), so on \(\Phi\times\mathbf{U}(M)\):
\[\mathbf{\theta}_{\Sigma}:=\mathcal{I}(\mathbf{\theta},\Sigma)=\langle\mathbf{\theta},\Sigma\rangle=\int_{\Sigma}\mathbf{\theta}, \tag{194}\]
\[\mathbf{\Theta}_{\Sigma}:=\mathcal{I}(\mathbf{\Theta},\Sigma)=\langle\mathbf{\Theta},\Sigma\rangle=\int_{\Sigma}\mathbf{\Theta}=\mathbf{d}\mathcal{I}(\mathbf{\theta},\Sigma)=\mathcal{I}(\mathbf{d}\mathbf{\theta},\Sigma). \tag{195}\]
The configuration space is the space of solutions \(\mathcal{S}:=\{\phi\in\Phi\,|\,\mathbf{E}_{|\phi}=0\}\), a subbundle of \(\Phi\) whose base \(\mathcal{M}_{\mathcal{S}}:=\mathcal{S}/\,\mathrm{Diff}(M)\) is the reduced phase space. We remark that there is then a reduced version of the associated bundle of regions: \(\tilde{\mathbf{U}}(M)_{|\mathcal{S}}:=\mathcal{S}\times_{\mathrm{Diff}(M)_{0}}U(M)= \mathcal{S}\times\mathbf{U}(M)/\sim\). On-shell integrated forms are objects on \(\mathcal{S}\times\mathbf{U}(M)\), and those that are invariant under the induced action of \(\mathrm{Diff}_{\mathrm{v}}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\) are well-defined on \(\tilde{\mathbf{U}}(M)_{|\mathcal{S}}\rightarrow\mathcal{M}_{\mathcal{S}}\) - as discussed in section 2.5.2.
One original motivation of the formalism was to associate a symplectic space to the gauge theory \(L^{\prime}\) over \(U\subset M\): for this \(\mathbf{\Theta}_{\Sigma}\) must be well-defined on \(\tilde{\mathbf{U}}(M)_{|\mathcal{S}}\). This would happen if \(\mathbf{\Theta}\) is tensorial or basic, see again section 2.5.2, or if its transformation is a boundary term and either \(\partial\Sigma=\emptyset\) or boundary conditions (b.c.) are given. The trouble is, generically none of this occurs. In particular, when \(\partial\Sigma\neq\emptyset\) and no reasonable b.c. can be imposed, we have an occurrence of the "boundary problem".
Another instance of "boundary problem" arises when, studying the symmetries of a field theory, one establishes the relation between Noether charges and \(\mathbf{\Theta}_{\Sigma}\): Indeed, generically the Noether charges fail to be Hamiltonian generators of the action of \(\mathfrak{diff}(M)\) on \(\Phi\) (\(X^{v}\) are not Hamiltonian vector fields) due to boundary contributions that cannot be "integrated" into a redefinition of the charges (an issue typical of "open" systems). Said otherwise, Noether charges fail to be a moment map for \(\mathfrak{diff}(M)\).
To establish such boundary problems in their most generic form, in as clear a way as possible, in the following our goals are: First, to derive an expression for the Noether charges associated to the action of \(\mathsf{bif}(M)\) and exhibit their relation to \(\mathbf{\Theta}_{\Sigma}\). This will allow us to also address the question of the definition of their Poisson bracket.
Second, we will derive the vertical/gauge transformations - a.k.a. field-dependent gauge transformations - of all objects in presence, \(L^{(\prime)}\), \(\mathbf{d}L^{(\prime)}\), \(\mathbf{E}\), \(\mathbf{\theta}^{(\prime)}\) and \(\mathbf{\Theta}\): This will both highlight the general form of the first boundary problem (the "non-basicity" of \(\mathbf{\Theta}_{\Sigma}\)), and help to give its formal solution in the form of the dressed/basic presymplectic structure obtained via the DFM (applying its simple rule of thumb).
For these tasks, we will need the equivariance and verticality properties of these objects: Since they are all forms on \(\Phi\) with values in \(\Omega^{\bullet}(M)\), their \(\mathrm{Diff}(M)\)-equivariance is given by the pullback representation, and their \(\mathfrak{diff}(M)\)-equivariance is given by the Lie derivative representation. So, for \(\psi\in\mathrm{Diff}(M)\) and \(X\in\mathfrak{diff}(M)\simeq\Gamma(TM)\):
\[R^{\star}_{\psi}L^{\prime}=\psi^{*}L^{\prime}\quad\text{so}\quad\mathbf{L}_{X^{v}}L^{\prime}=\mathfrak{L}_{X}L^{\prime}=:\alpha(X;\phi), \tag{196}\]
\[R^{\star}_{\psi}\mathbf{d}L^{\prime}=\psi^{*}\mathbf{d}L^{\prime}\quad\text{so}\quad\mathbf{L}_{X^{v}}\mathbf{d}L^{\prime}=\mathfrak{L}_{X}\mathbf{d}L^{\prime}=\mathbf{d}\alpha(X;\phi), \tag{197}\]
\[R^{\star}_{\psi}\mathbf{E}=\psi^{*}\mathbf{E}\quad\text{so}\quad\mathbf{L}_{X^{v}}\mathbf{E}=\mathfrak{L}_{X}\mathbf{E}, \tag{198}\]
\[R^{\star}_{\psi}\mathbf{\theta}^{\prime}=\psi^{*}\mathbf{\theta}^{\prime}\quad\text{so}\quad\mathbf{L}_{X^{v}}\mathbf{\theta}^{\prime}=\mathfrak{L}_{X}\mathbf{\theta}^{\prime}=:\mathbf{\alpha}(X), \tag{199}\]
\[R^{\star}_{\psi}\mathbf{\Theta}=\psi^{*}\mathbf{\Theta}\quad\text{so}\quad\mathbf{L}_{X^{v}}\mathbf{\Theta}=\mathfrak{L}_{X}\mathbf{\Theta}=\mathbf{d}\mathbf{\alpha}(X). \tag{200}\]
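The last equality in (200) is just the statement that \(\mathbf{L}_{X^{v}}\) commutes with \(\mathbf{d}\) (for field-independent \(X\)), applied to (199):
\[\mathbf{L}_{X^{v}}\mathbf{\Theta}=\mathbf{L}_{X^{v}}\mathbf{d}\mathbf{\theta}^{\prime}=\mathbf{d}\,\mathbf{L}_{X^{v}}\mathbf{\theta}^{\prime}=\mathbf{d}\mathbf{\alpha}(X).\]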
The term \(\alpha(X;\phi)\) can be called the Lagrangian anomaly, a 0-form on \(\Phi\) linear in \(X\in\mathsf{bif}(M)\), while \(\mathbf{\alpha}(X)\) may be called the "symplectic anomaly", a 1-form on \(\Phi\) linear in \(X\). Given (193), the two are related by:
\[\mathbf{d}\alpha(X;\phi)=d\mathbf{\alpha}(X)+\mathfrak{L}_{X}\mathbf{E}=d\big(\mathbf{\alpha}(X)+\iota_{X}\mathbf{E}\big), \tag{201}\]
where we used the fact that \(d\mathbf{E}=0\), since \(\mathbf{E}\) is a top form on \(M\), so that \(\mathfrak{L}_{X}\mathbf{E}=d\iota_{X}\mathbf{E}\).
We may notice already that, on account of (200), a priori \(\mathrm{Diff}(M)\) does not act by symplectomorphisms. So we should expect an obstruction to \(X^{v}\in\Gamma(V\Phi)\) being Hamiltonian vector fields for a moment map/Noether charge.
Finally, the \(C^{\infty}(\Phi,\mathrm{Diff}(M))\simeq\mathrm{Diff}_{\nu}(\Phi)\)-transformations of the integrals of the above objects will be obtained using (133) from section 2.5.2.
#### 4.1.1 Noether charges for field-independent gauge parameters
Let us proceed anyway to find the expression of the Noether charges for the action of \(\mathfrak{diff}(M)\). As we've already established in section 2.2.2 (just below (50)), and reiterated just above, the \(\mathfrak{diff}(M)\)-equivariance of the Lagrangian as a 0-form on \(\Phi\) is
\[\begin{split}\mathbf{L}_{X^{v}}L^{\prime}=\iota_{X^{v}}\mathbf{d}L^{\prime}&=\mathfrak{L}_{X}L^{\prime}=d(\iota_{X}L^{\prime})=:d\beta(X,\phi)=\alpha(X;\phi),\\ &=\mathfrak{L}_{X}L+\mathfrak{L}_{X}d\ell=d(\iota_{X}L+\mathfrak{L}_{X}\ell)=:d\big(\iota_{X}L+\beta_{\ell}(X,\phi)\big).\end{split} \tag{202}\]
Where we used the fact that \(n\)-forms on \(M\) are \(d\)-closed. From this and (193) we get: \(-\iota_{X^{\nu}}\mathbf{E}=d(\iota_{X^{\nu}}\mathbf{\theta}^{\prime}-\beta(X,\phi))\). Which tells us that there is an on-shell (\(\mathbf{E}=0\)) \(d\)-closed quantity, the Noether (\(n-1\))-form current:
\[\begin{split} J(X;\phi):=&\iota_{X^{\nu}}\mathbf{ \theta}^{\prime}-\beta(X,\phi)-d\gamma(X;\phi),\\ =&\iota_{X^{\nu}}\mathbf{\theta}-\iota_{X}L-d\gamma(X; \phi),\hskip 14.226378pt\in\hskip 14.226378pt\Omega^{n-1}(M).\end{split} \tag{203}\]
By definition then, \(-\iota_{X^{\nu}}\mathbf{E}=dJ(X,\phi)\). The current is independent of the boundary Lagrangian \(\ell\), so depends only on the DeRham cohomology class \([L^{\prime}]=[L]\), but still defined a priori up to a \(d\)-exact term \(d\gamma(X;\phi)\) (the sign is conventional). Given a codimension 1 submanifold \(\Sigma\subset U\in\mathbf{U}(M)\), the Noether charge is defined on \(\Phi\times\mathbf{U}(M)\) as:
\[Q_{\Sigma}(X;\phi):=\langle J(X;\phi),\Sigma\rangle=\int_{\Sigma}J(X;\phi)= \int_{\Sigma}\iota_{X^{\nu}}\mathbf{\theta}-\iota_{X}L-d\gamma(X;\phi). \tag{204}\]
Notice this can be seen as a map
\[\begin{split} Q_{\Sigma}:\Phi&\to\mathfrak{diff}(M)^{*},\\ \phi&\mapsto Q_{\Sigma}(\ \cdot\ ;\phi),\end{split} \tag{205}\]
where \(\mathfrak{diff}(M)^{*}\) is the dual of \(\mathfrak{diff}(M)\), as indeed \(Q_{\Sigma}(\ \cdot\ ;\phi):\mathfrak{diff}(M)\to\mathbb{R}\), \(X\mapsto Q_{\Sigma}(X;\phi)\), is a linear map.
Now, in the spirit of the covariant phase space, all quantities ultimately must be expressed on-shell. So, throughout, we will endeavor to find expressions of our key objects in terms of the field equations. Case in point, using
(314) derived in A.3 (from the assumptions that we are dealing with the field space of a gauge theory given by a first-order Lagrangian), we find that the Noether current and charge are:
\[J(X;\phi) =d\big{(}\theta(\iota_{X}\phi;\phi)-\gamma(X;\phi)\big{)}\,-\,E( \iota_{X}\phi;\phi), \tag{206}\] \[Q_{\Sigma}(X;\phi) =\int_{\partial\Sigma}\theta(\iota_{X}\phi;\phi)-\gamma(X;\phi)\,- \int_{\Sigma}E(\iota_{X}\phi;\phi). \tag{207}\]
The charge is manifestly a boundary term on-shell. This result is consistent with the expectation that, the current being \(d\)-closed on-shell by construction, and assuming the region \(U\) has trivial topology, it should indeed be \(d\)-exact. The fact is made manifest by the above forms.27 Remark that from (206) follows \(\iota_{X^{v}}\mathbf{E}=dE(\iota_{X}\phi;\phi)\).
Footnote 27: Remark that had we postulated that there is another \(d\)-exact contribution to the \(\mathrm{Diff}(M)\)-anomaly (202), \(\alpha_{Y}(X;\phi)=d\beta_{Y}(X;\phi)\), the current would have been \(J(X;\phi)=d\big{(}\theta(\iota_{X}\phi;\phi)-\gamma(X;\phi)\big{)}-\beta_{Y}(X;\phi)-E(\iota_{X}\phi;\phi)\). But since by definition \(\iota_{X^{v}}\mathbf{E}=-dJ(X;\phi)\), it follows that on-shell \(d\beta_{Y}(X;\phi)=0\). So, assuming trivial topology, we get \(\beta_{Y}(X;\phi)=d\tilde{\gamma}(X;\phi)\): a term that can be reabsorbed in the \(d\)-ambiguity of the current, \(\gamma\mapsto\gamma+\tilde{\gamma}\).
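As a familiar point of comparison - a sketch only, with normalisation conventions and the contribution of the ambiguity term \(\gamma\) left aside - in vacuum GR (footnote 26) the on-shell charge (207) for a metric \(\phi=g\) is of the Komar form,
\[Q_{\Sigma}(X;g)\ \propto\ \int_{\partial\Sigma}\star\,dX^{\flat},\qquad X^{\flat}:=g(X,\ \cdot\ ),\]
recovering e.g. the Komar mass when \(X\) is a timelike Killing vector field.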
Then, one must assess the relation of the Noether charge \(Q_{\Sigma}(X;\phi)\) to the presymplectic 2-form \(\mathbf{\Theta}_{\Sigma}\), or of \(J(X;\phi)\) to \(\mathbf{\Theta}\). This will allow us to see under which conditions the Noether charge is "integrable" into a Hamiltonian generator for \(X^{v}\in\Gamma(V\Phi)\), or in other words, whether it defines via (205) a moment map for the action of \(\mathfrak{diff}(M)\) on field space \(\Phi\).
The most direct way is to look at the \(\mathfrak{bif}(M)\)-equivariance of \(\mathbf{\theta}^{\prime}\):
\[\begin{split}\mathbf{L}_{X^{v}}\mathbf{\theta}^{\prime}&=\mathfrak{L}_{X}\mathbf{\theta}^{\prime}=:\mathbf{\alpha}(X),\\ \hookrightarrow\quad\iota_{X^{v}}\mathbf{\Theta}&=-\mathbf{d}\iota_{X^{v}}\mathbf{\theta}^{\prime}+\mathbf{\alpha}(X),\\ &=-\mathbf{d}\iota_{X^{v}}\mathbf{\theta}-\mathbf{d}\beta_{\ell}(X;\phi)+\mathbf{\alpha}(X),\end{split} \tag{208}\]
It is easy to work out an expression for \(\mathbf{\alpha}(X)\) in terms of the field equation:
\[\begin{split}\mathbf{\alpha}(X):=\mathfrak{L}_{X}\mathbf{\theta}^{\prime}&=\iota_{X}d\mathbf{\theta}^{\prime}+d\iota_{X}\mathbf{\theta}^{\prime},\\ &=\iota_{X}(\mathbf{d}L^{\prime}-\mathbf{E})+d\iota_{X}\mathbf{\theta}^{\prime},\\ &=\iota_{X}(\mathbf{d}L-\mathbf{E})+d\iota_{X}\mathbf{\theta}+\mathbf{d}\beta_{\ell}(X;\phi).\end{split} \tag{209}\]
Inserting this into (208), and given the definition of the Noether current (203), we finally obtain:
\[\iota_{X^{\prime}}\mathbf{\Theta} =-\mathbf{d}J(X;\phi)+d\left(\iota_{X}\mathbf{\theta}-\mathbf{d}\gamma(X;\phi) \right)-\iota_{X}\mathbf{E}, \tag{210}\] \[\iota_{X^{\prime}}\mathbf{\Theta}_{\Sigma} =-\mathbf{d}Q_{\Sigma}(X;\phi)+\int_{\partial\Sigma}\iota_{X}\mathbf{ \theta}-\mathbf{d}\gamma(X;\phi)\,\,\,-\int_{\Sigma}\iota_{X}\mathbf{E}.\]
We observe that, even on-shell, there is a boundary obstruction preventing the Noether charge from being a moment map. The integrand of the boundary term in (210) is often called the "flux term" or "symplectic flux", and is sometimes interpreted as meaning that there is physical flux leaking through the boundary \(\partial\Sigma\), making the system "open".
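In the simplest situation - on-shell and with \(\partial\Sigma=\emptyset\), or for \(X\) vanishing at \(\partial\Sigma\) so that the flux term drops - the obstruction disappears and (210) reduces to
\[\iota_{X^{v}}\mathbf{\Theta}_{\Sigma}=-\mathbf{d}Q_{\Sigma}(X;\phi),\]
i.e. \(X^{v}\) is then Hamiltonian with generator \(Q_{\Sigma}(X;\phi)\).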
To "close" the system and reestablish integrability of the Noether charge, one needs to impose boundary conditions s.t. \(\mathbf{d}\gamma(X;\phi)=\iota_{X}\mathbf{\theta}\), or to restrict the admissible class of diffeomorphisms, considering \(X\in\mathfrak{diff}(M)\) vanishing at \(\partial\Sigma\). An instance of the first approach suggests itself when requiring that the variational principle for \(L^{\prime}\) be well-posed on a bounded region: \(\mathbf{d}S^{\prime}=0\Rightarrow\mathbf{E}=0\), which necessitates \(\mathbf{\theta}+\mathbf{d}\ell=0_{|\partial U\supset\partial\Sigma}\), in turn implying \(\iota_{X}\mathbf{\theta}=-\mathbf{d}\iota_{X}\ell\). In other words we set \(\gamma(X;\phi)=-\iota_{X}\ell\), so that on-shell:
\[Q_{\Sigma}(X;\phi)=\int_{\partial\Sigma}\theta(\iota_{X}\phi;\phi)+\iota_{X} \ell_{\mid S}\qquad\text{and}\qquad\iota_{X^{\prime}}\mathbf{\Theta}_{\Sigma}=-\bm {d}Q_{\Sigma}(X;\phi)_{\mid S}. \tag{211}\]
The boundary Lagrangian makes its return into the expression of the current/charge. This may be compared to Eqs. (2.46)-(2.47)/(2.50) of [92], where the presymplectic potential is assumed to decompose as \(\mathbf{\theta}=-\mathbf{d}\ell+d\mathbf{C}_{\mid\partial\Sigma}\) - or to Eqs. (2.20)-(2.21) of [34], where it is assumed that \(\mathbf{\theta}^{\prime}=\mathbf{\theta}-\mathbf{d}\ell+d\mathbf{C}_{\mid\partial\Sigma}\) - for some 1-form \(\mathbf{C}\) on \(\Phi\), so that a term \(\iota_{X^{v}}\mathbf{C}\) adds to the charge. In GR with \(\Lambda=0\), (211) applies for \(L^{\prime}=L_{\mathrm{EH}}+d\ell_{\mathrm{YGH}}\), with \(\ell_{\mathrm{YGH}}\) the York-Gibbons-Hawking boundary Lagrangian.
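Spelled out, the choice \(\gamma(X;\phi)=-\iota_{X}\ell\) together with the boundary condition \(\iota_{X}\mathbf{\theta}=-\mathbf{d}\iota_{X}\ell\) (for \(\phi\)-independent \(X\)) indeed cancels the flux term of (210):
\[\int_{\partial\Sigma}\iota_{X}\mathbf{\theta}-\mathbf{d}\gamma(X;\phi)=\int_{\partial\Sigma}-\mathbf{d}\iota_{X}\ell+\mathbf{d}\iota_{X}\ell=0,\]
which is how (211) arises.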
The relation (210) allows us to investigate the possibility of defining a Poisson bracket of Noether charges. For \(X,Y\in\mathfrak{diff}(M)\) generating \(X^{v},Y^{v}\in\Gamma(V\Phi)\), the standard prescription is:
\[\{Q_{\Sigma}(X;\phi),Q_{\Sigma}(Y;\phi)\}:=\mathbf{\Theta}_{\Sigma}(X^{v},Y^{v}). \tag{212}\]
We need only to work out the right-hand side. For this, using (208) we have,
\[\begin{split}\mathbf{\Theta}(X^{v},Y^{v})=\iota_{Y^{v}}\iota_{X^{v}}\mathbf{\Theta}=-\iota_{Y^{v}}\big(\mathbf{d}\iota_{X^{v}}\mathbf{\theta}^{\prime}-\mathbf{\alpha}(X)\big)&=-\mathbf{L}_{Y^{v}}\iota_{X^{v}}\mathbf{\theta}^{\prime}+\iota_{Y^{v}}\mathbf{\alpha}(X),\\ &=\iota_{[X^{v},Y^{v}]}\mathbf{\theta}^{\prime}-\iota_{X^{v}}\mathbf{L}_{Y^{v}}\mathbf{\theta}^{\prime}+\iota_{Y^{v}}\mathbf{\alpha}(X),\\ &=\iota_{\{[X,Y]_{\mathfrak{diff}(M)}\}^{v}}\mathbf{\theta}^{\prime}-\iota_{X^{v}}\mathbf{\alpha}(Y)+\iota_{Y^{v}}\mathbf{\alpha}(X).\end{split} \tag{213}\]
where we used (39) with (37) (or (23)/(24)), as well as (300) from appendix (A.1). The last two terms can be related to the Lagrangian anomaly which, as stated below (50), satisfies an Abelian version of the 1-cocycle relation (consistency condition) (48):
\[\iota_{X^{v}}\mathbf{d}\alpha(Y;\phi)-\iota_{Y^{v}}\mathbf{d}\alpha(X;\phi)=\alpha([X, Y]_{\mathfrak{bif}(M)};\phi) \tag{214}\]
stemming from the commutation relation of Nijenhuis-Lie derivative (40)-(37) (or (23)/(24)). Indeed, using (201) we get:
\[\iota_{X^{v}}d(\mathbf{\alpha}(Y)+\iota_{Y}\mathbf{E})-\iota_{Y^{v}}d( \mathbf{\alpha}(X)+\iota_{X}\mathbf{E})=d\beta([X,Y]_{\mathfrak{bif}(M)};\phi),\] \[\hookrightarrow\quad\iota_{X^{v}}\mathbf{\alpha}(Y)-\iota_{Y^{v}}\mathbf{ \alpha}(X)+\iota_{X^{v}}\iota_{Y}\mathbf{E}-\iota_{Y^{v}}\iota_{X}\mathbf{E}-\beta([X, Y]_{\mathfrak{bif}(M)};\phi)=:-d\mathcal{A}([X,Y];\phi). \tag{215}\]
The notation \(d\mathcal{A}([X,Y];\phi)\) is meant to indicate that it is bilinear and antisymmetric in \(X,Y\) (as the left-hand side shows). Inserting this into (213), and using the definition (203) of the Noether current, we get
\[\mathbf{\Theta}(X^{v},Y^{v}) =\iota_{\{[X,Y]_{\mathfrak{bif}(M)}\}}\mathbf{\theta}^{\prime}-\beta( [X,Y]_{\mathfrak{bif}(M)};\phi)+d\mathcal{A}([X,Y];\phi)\ +\iota_{X^{v}}\iota_{Y}\mathbf{E}-\iota_{Y^{v}}\iota_{X}\mathbf{E},\] \[=J([X,Y]_{\mathfrak{bif}(M)};\phi)+d\gamma([X,Y]_{\mathfrak{bif}(M )};\phi)+d\mathcal{A}([X,Y];\phi)\ +\iota_{X^{v}}\iota_{Y}\mathbf{E}-\iota_{Y^{v}}\iota_{X}\mathbf{E}. \tag{216}\]
Which gives the formal expression,
\[\{Q_{\Sigma}(X;\phi),Q_{\Sigma}(Y;\phi)\} =Q_{\Sigma}([X,Y]_{\mathfrak{bif}(M)};\phi)+\int_{\partial\Sigma} \mathcal{A}([X,Y];\phi)+\gamma([X,Y]_{\mathfrak{bif}(M)};\phi)+\int_{\Sigma} \iota_{X^{v}}\iota_{Y}\mathbf{E}-\iota_{Y^{v}}\iota_{X}\mathbf{E}. \tag{217}\]
Which is on-shell,
\[\{Q_{\Sigma}(X;\phi),Q_{\Sigma}(Y;\phi)\}=Q_{\Sigma}([X,Y]_{ \mathfrak{bif}(M)};\phi)+\mathcal{C}([X,Y];\phi)_{|\mathcal{S}}, \tag{218}\]
defining the map
\[\begin{split}\mathcal{C}(\ \ ;\phi):\mathfrak{bif}(M)\times \mathfrak{bif}(M)&\to C^{\infty}(\Phi),\\ (X,Y)&\mapsto\mathcal{C}([X,Y];\phi):=\int_{ \partial\Sigma}\mathcal{A}([X,Y];\phi)+\gamma([X,Y]_{\mathfrak{bif}(M)};\phi), \end{split} \tag{219}\]
which is clearly bilinear and antisymmetric. If it can be shown that it furthermore satisfies
\[\mathcal{C}([X,[Y,Z]_{\mathfrak{diff}(M)}];\phi)+\mathcal{C}([Y,[Z,X]_{\mathfrak{diff}(M)}];\phi)+\mathcal{C}([Z,[X,Y]_{\mathfrak{diff}(M)}];\phi)\equiv 0, \tag{220}\]
then it would be a 2-cocycle on \(\mathfrak{bif}(M)\), meaning that the Poisson bracket (218) is a Lie bracket28 and that the Poisson algebra of Noether charges, \(\mathrm{PAlg}[Q_{\Sigma}]\), is a central extension of \(\mathfrak{bif}(M)\): i.e. there is a short exact sequence of Lie algebras,
Footnote 28: Indeed, since by definition on-shell \(\mathcal{C}([X,Y];\phi)=\{Q_{\Sigma}(X;\phi),Q_{\Sigma}(Y;\phi)\}-Q_{\Sigma}([X,Y]_{\mathfrak{diff}(M)};\phi)\), (220) is equivalent to the Jacobi identity holding for both \([\ ,\ ]_{\mathfrak{diff}(M)}\) and the Poisson bracket \(\{\ ,\ \}\).
\[(C^{\infty}(\Phi);\ \cdot\ )\xrightarrow{\iota}\ \ \mathrm{PAlg}[Q_{\Sigma}]=(Q_{ \Sigma}(\ \ ;\phi);\{\,\ \ \})\xrightarrow{\pi}\ \mathfrak{bif}(M)=(\Gamma(TM);-[\ \,\ ]_{\Gamma(TM)}) \tag{221}\]
with \(\mathrm{Im}\,\iota\subset Z(\mathrm{PAlg}[Q_{\Sigma}])\) and \(Q_{\Sigma}(\ \cdot\ ;\phi):\mathfrak{diff}(M)\to\mathrm{PAlg}[Q_{\Sigma}]\), \(X\mapsto Q_{\Sigma}(X;\phi)\), a linear map s.t. \(\pi\circ Q_{\Sigma}(\ \cdot\ ;\phi)=\mathrm{id}_{\mathfrak{diff}(M)}\) (it is a section of \(\pi\), _not_ a Lie algebra morphism). Since \(\pi\) is a Lie algebra morphism, and since on-shell we have \(\mathcal{C}([X,Y];\phi)=\{Q_{\Sigma}(X;\phi),Q_{\Sigma}(Y;\phi)\}-Q_{\Sigma}([X,Y]_{\mathfrak{diff}(M)};\phi)\) by (218), it is clear that \(\mathcal{C}([X,Y];\phi)\in\ker\pi=\mathrm{Im}\,\iota\subset Z(\mathrm{PAlg}[Q_{\Sigma}])\).
There is no guarantee a priori that (220) holds. To ascertain the fact of the matter, we compute the 2-cocycle condition on \(\mathcal{C}\) in Appendix A.4, and from (321) we obtain:
\[\mathcal{C}([X,[Y,Z]_{\mathfrak{diff}(M)}];\phi)+c.p.=\int_{\Sigma}\big(\mathbf{L}_{X^{v}}\mathbf{\Theta}\big)(Y^{v},Z^{v})+c.p.=\int_{\Sigma}\big(\mathbf{d}\,\iota_{X^{v}}\mathbf{\Theta}\big)(Y^{v},Z^{v})+c.p. \tag{222}\]
Therefore, we see that the on-shell bracket (218) is a Poisson-Lie bracket if either **a)** fundamental vector fields \(X^{v}\in\Gamma(V\Phi)\) generated by \(\mathsf{bif}(M)\) act as symplectomorphisms - which, as we remarked already, is not generically the case on account of (200) - or **b)** if \(X^{v}\in\Gamma(V\Phi)\) are Hamiltonian, i.e. if the Noether charges are moment maps for \(\mathsf{bif}(M)\): Given (210), this happens **1)**_on-shell_ (as is already assumed here) and **2)** given adequate _boundary conditions_, as indeed (222) is written as
\[\mathcal{C}([X,[Y,Z]_{\mathfrak{diff}(M)}];\phi)+c.p.=\Big[\mathbf{d}\Big(\int_{\partial\Sigma}\iota_{X}\mathbf{\theta}-\mathbf{d}\gamma(X;\phi)\ -\int_{\Sigma}\iota_{X}\mathbf{E}\Big)\Big](Y^{v},Z^{v})+c.p., \tag{223}\]
which vanishes given adequate boundary conditions - such as those leading to (211) - the bulk term already dropping on-shell, so that (218) is then a genuine Poisson-Lie bracket.
### Vertical and gauge transformations
We now turn to the issue of the vertical/gauge transformations - a.k.a. _field-dependent_ gauge transformations - of the objects defined above. These are obtained geometrically, relying on the conceptual and technical resources of section 2. This is not an idle application: it will first allow us to see explicitly that the variational principle remains well-defined under the action of field-dependent diffeomorphisms, a non-trivial result. Then it will also allow us to extend the construction to charges and a Poisson bracket for field-dependent parameters (vector fields), in section 4.2.1. And finally, it will be the basis from which to apply the rule of thumb of the DFM, easily obtaining the basic (relational) counterparts of all objects considered, in section 5.1.
Let us recall that the group of general vertical diffeomorphisms of \(\Phi\) is \(\mathbf{Diff}_{\nu}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\), and contains in particular the vertical automorphism group/gauge group of \(\Phi\), \(\mathbf{Aut}_{\nu}(\Phi)\simeq\mathbf{Diff}(M)\): vertical transformations are given by the action of these groups via pullback. Their Lie algebras are \(\mathbf{diff}_{\nu}(\Phi)\simeq C^{\infty}(\Phi,\mathfrak{diff}(M))\) and \(\mathbf{aut}_{\nu}(\Phi)\simeq\mathfrak{diff}(M)\): these can be seen as elements of \(\Omega^{0}(\Phi,V\Phi)\subset\Omega^{\star}(\Phi,T\Phi)\), so infinitesimal vertical transformations are given by the action of these Lie algebras via the Nijenhuis-Lie derivative (30).
To find the vertical transformations of \(L^{\prime}\), \(\mathbf{dL}^{\prime}\),\(\mathbf{E}\), \(\mathbf{\theta}\) and \(\mathbf{\Theta}\), we need only to apply the natural geometric definition (56) of section 2.3: Collect the \(\mathrm{Diff}(M)\)-equivariance and verticality properties, and use (26)
\[\Xi_{\star}\mathfrak{X}=R_{\mathbf{\psi}\star}\mathfrak{X}+\left\{\mathbf{\psi}^{-1}_{*}\,\mathbf{d}\mathbf{\psi}(\mathfrak{X})\right\}^{v}=R_{\mathbf{\psi}\star}\left(\mathfrak{X}+\left\{\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}\right\}^{v}\right), \tag{228}\]
for \(\mathbf{\psi}\in C^{\infty}(\Phi,\mathrm{Diff}(M))\) (or \(\mathbf{Diff}(M)\)) corresponding to \(\Xi\in\mathbf{Diff}_{\nu}(\Phi)\) (or \(\mathbf{Aut}_{\nu}(\Phi)\)), to get (56) for \(\mathbf{\alpha}\in\Omega^{\bullet}(\Phi)\),
\[\begin{split}\mathbf{\alpha}^{\mathbf{\psi}}(\mathfrak{X},\ldots)=\Xi^{\star}\mathbf{\alpha}(\mathfrak{X},\ldots)=\mathbf{\alpha}(\Xi_{\star}\mathfrak{X},\ldots)&=\mathbf{\alpha}\left(R_{\mathbf{\psi}\star}\big(\mathfrak{X}+\left\{\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}\right\}^{v}\big),\ldots\right),\\ &=R_{\mathbf{\psi}}^{\star}\mathbf{\alpha}\left(\mathfrak{X}+\left\{\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}\right\}^{v},\ldots\right),\end{split} \tag{229}\]
whose linear version is given by (55)/(58). Alternatively, or as a crosscheck, given the expression \(\mathbf{\alpha}=\alpha(\wedge^{\bullet}\mathbf{d}\phi;\phi)\), one can rely on (66)
\[\mathbf{d}\phi^{\mathbf{\psi}}:=\Xi^{\star}\mathbf{d}\phi=\mathbf{\psi}^{\star}(\mathbf{d}\phi+\mathfrak{L}_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi) \tag{230}\]
to determine (by multi-linearity) (43): \(\mathbf{\alpha}^{\mathbf{\psi}}=\alpha(\wedge^{\bullet}\mathbf{d}\phi^{\mathbf{\psi}};\,\phi^{\mathbf{\psi}})\). Then, the transformations of the corresponding integrated quantities - \(S^{\prime}\), \(\mathbf{d}S^{\prime}\), \(\mathbf{\theta}_{\Sigma}\) and \(\mathbf{\Theta}_{\Sigma}\) - as objects on \(\Phi\times\mathbf{U}(M)\), are obtained via (133).
For \(L^{\prime}\in\Omega^{0}_{\mathrm{tens}}(\Phi)\), verticality is trivial, so the vertical transformation is controlled by its equivariance (196):
\[L^{\prime}\,\mathbf{\psi}:=\Xi^{\star}L^{\prime}=R_{\mathbf{\psi}}^{\star}L^{\prime}= \mathbf{\psi}^{\star}L^{\prime}. \tag{231}\]
From this, or from (58)-(59), it follows that the \(\mathbf{diff}_{\nu}(\Phi)\simeq C^{\infty}(\Phi,\mathfrak{diff}(M))\) transformation is simply:
\[\mathbf{L}_{\mathbf{X}^{\prime}}L^{\prime}=\mathfrak{L}_{\mathbf{X}}L^{\prime}=\alpha(\mathbf{X };\phi)=d\beta(\mathbf{X};\phi), \tag{232}\]
with \(\mathbf{X}\in C^{\infty}(\Phi,\mathfrak{diff}(M))\) a field-dependent vector field of \(M\).
For \(\mathbf{d}L^{\prime}\in\Omega^{1}_{\mathrm{eq}}(\Phi)\), from (196)-(197) we get \(R_{\mathbf{\psi}}^{\star}\,\mathbf{d}L^{\prime}=\psi^{\star}\mathbf{d}L^{\prime}\) and \(\iota_{\mathbf{X}^{\prime}}\mathbf{d}L^{\prime}=\alpha(X;\phi)\). So,
\[\mathbf{d}L^{\prime}\,\mathbf{\psi}(\bar{\mathbf{x}}):=\Xi^{\star}\mathbf{d}L^{ \prime}(\bar{\mathbf{x}}) =R_{\mathbf{\psi}}^{\star}\,\mathbf{d}L^{\prime}\left(\bar{\mathbf{x}}+\left\{ \mathbf{d}\mathbf{\psi}(\bar{\mathbf{x}})\circ\mathbf{\psi}^{-1}\right\}^{\nu}\right),\] \[=\mathbf{\psi}^{\star}\left(\mathbf{d}L^{\prime}(\bar{\mathbf{x}})+\alpha(\bm {d}\mathbf{\psi}(\bar{\mathbf{x}})\circ\mathbf{\psi}^{-1};\,\phi)\right),\] \[\hookrightarrow\quad\mathbf{d}L^{\prime}\,\mathbf{\psi} =\mathbf{\psi}^{\star}\left(\mathbf{d}L^{\prime}+\alpha(\mathbf{d}\mathbf{\psi}\circ \mathbf{\psi}^{-1};\,\phi)\right). \tag{233}\]
Of course, this could also have been obtained from (231), since the _naturality of the pullback_, \([\Xi^{\star},\mathbf{d}]=0\), implies \(\mathbf{d}L^{\prime}\,\mathbf{\psi}=\mathbf{d}(L^{\prime}\,\mathbf{\psi})\). The corresponding linear version is, from (233) or (58)-(59)/(60):
\[\mathbf{L}_{\mathbf{X}^{\prime}}\mathbf{d}L^{\prime}=\mathfrak{L}_{\mathbf{X}}\mathbf{d}L^{\prime}+ \alpha(\mathbf{d}\mathbf{X};\phi), \tag{234}\]
which could have equally been found via (232), since \([\mathbf{L}_{\mathbf{X}^{\prime}},\mathbf{d}]=0\).
For \(\mathbf{E}\in\Omega^{1}_{\text{eq}}(\Phi)\), we have \(R_{\psi}^{\star}\,\mathbf{E}=\psi^{\star}E\) and \(\iota_{X}\mathbf{E}=-dJ(X;\phi)=dE(\iota_{X}\phi;\phi)\). So,
\[\mathbf{E}^{\psi}(\mathbf{\bar{x}}):=\Xi^{\star}\mathbf{E}(\mathbf{\bar{x}})=R_{ \psi}^{\star}\,\mathbf{E}\left(\mathbf{\bar{x}}+\left\{d\mathbf{\psi}(\mathbf{\bar{x}})\circ\bm {\psi}^{-1}\right\}^{Y}\right)=\psi^{\star}\left(\mathbf{E}(\mathbf{\bar{x}})+dE(\iota _{d\mathbf{\psi}(\mathbf{\bar{x}})\circ\mathbf{\psi}^{-1}}\phi;\ \phi)\right),\] \[\hookrightarrow\ \ \mathbf{E}^{\psi}=\mathbf{\psi}^{\star}\left(\mathbf{E}+dE(\iota _{d\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi;\ \phi)\right). \tag{235}\]
This relation tells us that the action of field-dependent diffeomorphisms \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\text{Diff}(M))\) preserves the space of solutions \(\mathcal{S}:=\{\phi\in\Phi\,|\,\mathbf{E}_{|\phi}=0\}\), a fact essential for the covariant phase space approach,30 showing that \(\text{Diff}(M)\)-theories naturally enjoy a larger \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\text{Diff}(M))\)-covariance. As observed at the end of section 2.5.2, this fact was stressed in the case of GR by Bergmann and Komar [22]. The linear version of the above is,
Footnote 30: We may also observe – continuing footnote 27 – that had we postulated an additional contribution \(\alpha_{\gamma}(X;\phi)\) to the Lagrangian \(\text{Diff}(M)\)-anomaly \(\alpha(X;\phi)\), it would have entered the vertical property of \(\mathbf{E}\) through the (anomalous) current: \(\iota_{X}\mathbf{E}=-dJ(X;\phi)+\alpha_{\gamma}(X;\phi)=dE(\iota_{X}\phi;\phi)+ \alpha_{\gamma}(X;\phi)\). So we would have
\[\mathbf{E}^{\mathbf{\psi}}=\mathbf{\psi}^{\star}\left(\mathbf{E}+dE(\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi;\ \phi)+\alpha_{\gamma}(\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1};\ \phi)\right), \tag{236}\]
meaning that \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\text{Diff}(M))\) would generically not preserve \(\mathcal{S}\) (unless \(\alpha_{\gamma}(X;\phi)=d\beta_{\gamma}(X;\phi)\) and given adequate b.c.). However, such additional anomalies only arise in theories with background structures, i.e. theories that are not truly general relativistic, which we do not wish to consider.
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{E}=\mathfrak{L}_{\mathbf{X}}\mathbf{E}+dE(\iota_{\mathbf{d}\mathbf{X}}\phi;\phi). \tag{237}\]
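A short spelling-out of the preservation of \(\mathcal{S}\): at a solution \(\phi\in\mathcal{S}\) the field-equation form \(E(\ \cdot\ ;\phi)\) vanishes identically, so both terms inside the bracket of (235) vanish and
\[\mathbf{E}^{\mathbf{\psi}}_{\,|\phi}=\mathbf{\psi}^{\star}\big(\mathbf{E}_{|\phi}+dE(\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi;\phi)\big)=0,\qquad\text{for }\phi\in\mathcal{S}.\]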
Now, for \(\mathbf{\theta}\in\Omega^{1}_{\text{eq}}(\Phi)\) we have still \(R_{\psi}^{\star}\,\mathbf{\theta}=\psi^{\star}\mathbf{\theta}\), and by (314): \(\iota_{X}\mathbf{\theta}=d\theta(\iota_{X}\phi;\phi)+\iota_{X}L-\ E(\iota_{X}\phi;\phi)\). So,
\[\mathbf{\theta}^{\psi}(\mathbf{\bar{x}}):=\Xi^{\star}\mathbf{\theta}(\mathbf{\bar{ x}})=R_{\psi}^{\star}\,\mathbf{\theta}\left(\mathbf{\bar{x}}+\left\{d\mathbf{\psi}(\mathbf{ \bar{x}})\circ\mathbf{\psi}^{-1}\right\}^{Y}\right)=\mathbf{\psi}^{\star}\mathbf{\theta} \left(\mathbf{\bar{x}}+\dots\ \ \right),\] \[\hookrightarrow\ \ \mathbf{\theta}^{\psi}=\mathbf{\psi}^{\star}\left(\mathbf{\theta}+d \theta(\iota_{d\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi;\ \phi)+\iota_{d\mathbf{\psi}\circ\mathbf{\psi}^{-1}}L-\ E(\iota_{d\mathbf{\psi}\circ\mathbf{\psi }^{-1}}\phi;\phi)\right). \tag{238}\]
Linearly, we thus get
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\theta}=\mathfrak{L}_{\mathbf{X}}\mathbf{\theta}+d\theta(\iota_{\mathbf{d}\mathbf{X}}\phi;\ \phi)+\iota_{\mathbf{d}\mathbf{X}}L-E(\iota_{\mathbf{d}\mathbf{X}}\phi;\phi). \tag{239}\]
as also found via (58)-(59)/(60). For the presymplectic potential \(\mathbf{\theta}_{\Sigma}:=\int_{\Sigma}\mathbf{\theta}\), by (133) we thus have:
\[\begin{split}\mathbf{\theta}_{\Sigma}^{\mathbf{\psi}}:=\int_{\mathbf{\psi}^{-1}(\Sigma)}\mathbf{\theta}^{\mathbf{\psi}}&=\int_{\Sigma}\mathbf{\theta}+d\theta(\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi;\ \phi)+\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}L-E(\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi;\phi),\\ &=\mathbf{\theta}_{\Sigma}+\int_{\partial\Sigma}\theta(\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi;\ \phi)+\int_{\Sigma}\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}L-E(\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\phi;\phi). \end{split} \tag{240}\]
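In particular, on-shell and when \(\partial\Sigma=\emptyset\) (or given suitable fall-off/boundary conditions), the integrated presymplectic potential is invariant under \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\text{Diff}(M))\):
\[\mathbf{\theta}_{\Sigma}^{\mathbf{\psi}}=\mathbf{\theta}_{\Sigma\,|\mathcal{S}}.\]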
Turning finally to \(\mathbf{\Theta}\in\Omega^{2}_{\mathrm{eq}}(\Phi)\) we have \(R^{\star}_{\mathbf{\phi}}\,\mathbf{\Theta}=\psi^{\star}\mathbf{\Theta}\), and by (210): \(\iota_{X}\mathbf{\Theta}=-\mathbf{d}J(X;\phi)+d\left(\iota_{X}\mathbf{\theta}-\mathbf{d}\gamma( X;\phi)\right)-\iota_{X}E\). So, for \(\mathfrak{X},\mathfrak{Y}\in\Gamma(\mathcal{T}\Phi)\):
\[\mathbf{\Theta}^{\phi}(\mathfrak{X},\mathfrak{Y}):=\Xi^{\star}\mathbf{ \Theta}(\mathfrak{X},\mathfrak{Y}) =R^{\star}_{\mathbf{\phi}}\,\mathbf{\Theta}\left(\mathfrak{X}+\left\{\mathbf{ d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}\right\}^{v},\;\mathfrak{Y}+ \left\{\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}\right\}^{v}\right), \tag{243}\] \[=\mathbf{\psi}^{\star}\mathbf{\Theta}\left(\mathfrak{X}+\left[\mathbf{d}\mathbf{ \psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}\right]^{v},\;\mathfrak{Y}+\left[\mathbf{d} \mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}\right]^{v}\right),\] \[=\mathbf{\psi}^{\star}\left[\;\mathbf{\Theta}(\mathfrak{X},\mathfrak{Y}) +\mathbf{\Theta}\left(\mathfrak{X},\left[\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{ \psi}^{-1}\right]^{v}\right)+\mathbf{\Theta}\left(\left[\mathbf{d}\mathbf{\psi}(\mathfrak{ X})\circ\mathbf{\psi}^{-1}\right]^{v},\mathfrak{Y}\right)\right.\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\; \mathbf{\Theta}\left(\left[\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}\right] ^{v},\left[\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}\right]^{v}\right)\right]\]
The last term is related to the expression of the bracket (212), so by (216) it is immediately:
\[\mathbf{\Theta}\left(\left\{\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}\right\}^{v},\left\{\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}\right\}^{v}\right) =J([\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1},\;\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}]_{\mathfrak{diff}(M)};\phi) \tag{244}\] \[\quad+d\mathcal{A}([\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1},\;\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}];\phi)+d\gamma([\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1},\;\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}]_{\mathfrak{diff}(M)};\phi)\] \[\quad+\iota_{\left\{\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}\right\}^{v}}\iota_{\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}}\mathbf{E}-\iota_{\left\{\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}\right\}^{v}}\iota_{\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}}\mathbf{E}.\]
The two middle terms can be written at once via the "moment map/integrability relation" (210):
\[-\iota_{\mathfrak{X}}\,\iota_{\left\{\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}\right\}^{v}}\mathbf{\Theta}+\iota_{\mathfrak{Y}}\,\iota_{\left\{\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}\right\}^{v}}\mathbf{\Theta}= \tag{245}\] \[\quad-\iota_{\mathfrak{X}}\left(-\mathbf{d}J(\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1};\;\underline{\phi})+d\left(\iota_{\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}}\mathbf{\theta}-\mathbf{d}\,\mathbf{\gamma}(\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1};\;\underline{\phi})\right)-\iota_{\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}}\mathbf{E}\right)\] \[\quad+\iota_{\mathfrak{Y}}\left(-\mathbf{d}J(\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1};\;\underline{\phi})+d\left(\iota_{\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}}\mathbf{\theta}-\mathbf{d}\,\mathbf{\gamma}(\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1};\;\underline{\phi})\right)-\iota_{\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}}\mathbf{E}\right).\]
In the above expressions, it should be noticed that terms like \(\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}\) are considered \(\phi\)-independent so that in \(\iota_{X}\mathbf{d}\,\mathbf{J}\left(\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{\psi}^{-1}; \;\underline{\phi}\right)=\mathfrak{X}\cdot J\left(\mathbf{d}\mathbf{\psi}(\mathfrak{ Y})\circ\mathbf{\psi}^{-1};\;\underline{\phi}\right)\), the derivative \(\mathbf{d}\) or vector field \(\mathfrak{X}\) act only on the usual (underlined) \(\phi\)-dependence of \(J\).
Yet, noticing that \(\mathbf{d}(-\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1})+\frac{1}{2}[\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1},\;\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}]_{\mathfrak{diff}(M)}=0\) and using the Koszul formula for \(\mathbf{d}\), the current term in (244) is:
\[J\left(\left[\mathbf{d}(\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1})\right](\mathfrak{X}, \mathfrak{Y});\phi\right)=J\left(\mathfrak{X}\cdot\left[\mathbf{d}\mathbf{\psi}( \mathfrak{Y})\circ\mathbf{\psi}^{-1}\right];\;\phi\right)-J\left(\mathfrak{Y}\cdot \left[\mathbf{d}\mathbf{\psi}(\mathfrak{X})\circ\mathbf{\psi}^{-1}\right];\;\phi\right)-J \left([\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}]([\mathfrak{X},\mathfrak{Y}]);\;\phi \right).\]
This combines with the current terms in (245), so that:
\[J\left(\mathfrak{X}\cdot[\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{ \psi}^{-1}];\;\phi\right)-J\left(\mathfrak{Y}\cdot[\mathbf{d}\mathbf{\psi}( \mathfrak{X})\circ\mathbf{\psi}^{-1}];\;\phi\right)-J\left([\mathbf{d}\mathbf{\psi}\circ\mathbf{ \psi}^{-1}]([\mathfrak{X},\mathfrak{Y}]);\;\phi\right) \tag{246}\] \[+\mathfrak{X}\cdot J\left(\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{ \psi}^{-1};\;\underline{\phi}\right)-\mathfrak{Y}\cdot J\left(\mathbf{d}\mathbf{\psi}( \mathfrak{X})\circ\mathbf{\psi}^{-1};\;\underline{\phi}\right)\] \[=\mathfrak{X}\cdot J\left(\mathbf{d}\mathbf{\psi}(\mathfrak{Y})\circ\mathbf{ \psi}^{-1};\;\phi\right)-\mathfrak{Y}\cdot J\left(\mathbf{d}\mathbf{\psi}( \mathfrak{X})\circ\mathbf{\psi}^{-1};\;\phi\right)-J\left([\mathbf{d}\mathbf{\psi}\circ\mathbf{ \psi}^{-1}]([\mathfrak{X},\mathfrak{Y}]);\;\phi\right)\] \[=:\mathbf{d}[J(\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1};\;\phi)](\mathfrak{X}, \mathfrak{Y}).\]
We may then already write a nice expression for the \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\)-transformation of \(\mathbf{\Theta}\):
\[\mathbf{\Theta}^{\psi}=\mathbf{\psi}^{\star}\Big{[}\;\mathbf{\Theta}+\mathbf{d}J(\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1};\phi) \tag{247}\] \[\quad+\tfrac{1}{2}\left(d\mathcal{A}([\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1},\;\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}];\phi)+d\gamma([\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1},\;\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}]_{\mathfrak{diff}(M)};\phi)\right)\] \[\quad+d\left(\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\mathbf{\theta}-\mathbf{d}\,\mathbf{\gamma}(\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1};\;\underline{\phi})\right)\] \[\quad+\iota_{\left\{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}\right\}^{v}}\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\mathbf{E}-\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\mathbf{E}\;\Big{]}.\]
So, the transformation of the presymplectic 2-form is, by (133):
\[\mathbf{\Theta}_{\Sigma}{}^{\psi}:=\int_{\mathbf{\psi}^{-1}(\Sigma)}\mathbf{\Theta}^{\psi}=\int_{\Sigma}\mathbf{\Theta}+\mathbf{d}J(\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1};\phi)+\tfrac{1}{2}\left(d\mathcal{A}([\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1},\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}];\phi)+d\gamma([\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1},\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}]_{\mathfrak{diff}(M)};\phi)\right)\] \[\qquad\qquad+\,d\left(\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\mathbf{\theta}-\mathbf{d}\,\mathbf{\gamma}(\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1};\;\underline{\phi})\right)+\iota_{\left\{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}\right\}^{v}}\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\mathbf{E}-\iota_{\mathbf{d}\mathbf{\psi}\circ\mathbf{\psi}^{-1}}\mathbf{E}. \tag{248}\]
#### 4.2.1 Noether charges for field-dependent gauge parameters
We emulate the analysis of section 4.1.1 replacing \(X\in\mathfrak{diff}(M)\ \to\ \mathbf{X}\in C^{\infty}(\Phi,\mathfrak{diff}(M))\). From (232) we have:
\[\mathbf{L}_{\mathbf{X}^{\prime}}L^{\prime} =\mathfrak{L}_{\mathbf{X}}L^{\prime}=\alpha(\mathbf{X};\phi)=d\beta(\mathbf{X} ;\phi), \tag{254}\] \[=d(\iota_{\mathbf{X}}L+\mathfrak{L}_{\mathbf{X}}\ell)=:d(\iota_{\mathbf{X}}L +\beta_{\ell}(\mathbf{X},\phi)).\]
This implies that the Noether current,
\[J(X;\phi):= \iota_{\mathbf{X}^{\prime}}\theta^{\prime}-\beta(\mathbf{X},\phi)-d\gamma (\mathbf{X};\phi), \tag{255}\] \[= \iota_{\mathbf{X}^{\prime}}\theta-\iota_{\mathbf{X}}L-d\gamma(\mathbf{X};\phi),\]
satisfies \(-\iota_{\mathbf{X}^{\prime}}\mathbf{E}=dJ(\mathbf{X};\phi)\), and is thus \(d\)-closed on-shell, as (203). For a codimension 1 submanifold \(\Sigma\in\mathbf{U}(M)\), the associated Noether charge is defined on \(\Phi\times U(M)\) as:
\[Q_{\Sigma}(\mathbf{X};\phi):=\langle J(\mathbf{X};\phi),\Sigma\rangle=\int_{\Sigma}J (X;\phi)=\int_{\Sigma}\iota_{\mathbf{X}^{\prime}}\theta-\iota_{\mathbf{X}}L-d\gamma( \mathbf{X};\phi). \tag{256}\]
in exact analogy to (204). It is a map,
\[Q_{\Sigma}(X;\ ):\Phi \to C^{\infty}(\Phi,\mathsf{bifif}(M))^{*}\simeq\mathsf{bifif}_{v}( \Phi)^{*}, \tag{257}\] \[\phi \mapsto Q_{\Sigma}(X;\phi).\]
with \(C^{\infty}(\Phi,\mathfrak{diff}(M))^{*}\) the dual of \(C^{\infty}(\Phi,\mathfrak{diff}(M))\), as indeed \(Q_{\Sigma}(\ ;\phi):C^{\infty}(\Phi,\mathfrak{diff}(M))\to\mathbb{R}\), \(\mathbf{X}\mapsto Q_{\Sigma}(\mathbf{X};\phi)\), is a linear map. Now, from (314) derived in A.3, we get the results in terms of the field equations, similar to (206)-(207):
\[J(\mathbf{X};\phi) =d(\theta(\iota_{\mathbf{X}}\phi;\phi)-\gamma(\mathbf{X};\phi))\ -\ E(\iota_{\mathbf{X}}\phi;\phi), \tag{258}\] \[Q_{\Sigma}(\mathbf{X};\phi) =\int_{\partial\Sigma}\theta(\iota_{\mathbf{X}}\phi;\phi)-\gamma(\mathbf{ X};\phi)\ -\int_{\Sigma}E(\iota_{\mathbf{X}}\phi;\phi). \tag{259}\]
From which it follows that \(\iota_{\mathbf{X}^{v}}\mathbf{E}=dE(\iota_{\mathbf{X}}\phi;\phi)\). As expected, the charge is a boundary/corner term on-shell. Compare this e.g. to eq.(2.11)-(2.12) of [26].
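As a quick one-line consistency check (using only (255) and (258)): applying \(d\) to (258) gives
\[dJ(\mathbf{X};\phi)=-\,dE(\iota_{\mathbf{X}}\phi;\phi),\]
so \(-\iota_{\mathbf{X}^{v}}\mathbf{E}=dJ(\mathbf{X};\phi)\) is indeed equivalent to \(\iota_{\mathbf{X}^{v}}\mathbf{E}=dE(\iota_{\mathbf{X}}\phi;\phi)\).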
We now need to relate this current to the symplectic form current \(\mathbf{\Theta}=\mathbf{d}\mathbf{\theta}\). Naturally, one should start with the relation (239), special case of (60):
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\theta}^{\prime}=\mathfrak{L}_{\mathbf{X}}\mathbf{\theta}^{\prime}+\iota_{\{\mathbf{dX}\}^{v}}\mathbf{\theta}^{\prime}, \tag{260}\] \[\hookrightarrow\ \iota_{\mathbf{X}^{v}}\mathbf{\Theta}=-\mathbf{d}\iota_{\mathbf{X}^{v}}\mathbf{\theta}^{\prime}+\iota_{\{\mathbf{dX}\}^{v}}\mathbf{\theta}^{\prime}+\mathfrak{L}_{\mathbf{X}}\mathbf{\theta}^{\prime},\] \[\phantom{\hookrightarrow\ }\iota_{\mathbf{X}^{v}}\mathbf{\Theta}=-\mathbf{d}\iota_{\underline{\mathbf{X}}^{v}}\mathbf{\theta}^{\prime}+\mathfrak{L}_{\mathbf{X}}\mathbf{\theta}^{\prime},\]
The notation \(\underline{\mathbf{X}}\) here is meant to indicate that the object is considered \(\phi\)-constant (from the field space viewpoint), and we have by linearity of the inner product: \(-\mathbf{d}\iota_{\mathbf{X}^{v}}\mathbf{\theta}^{\prime}=-\mathbf{d}\iota_{\underline{\mathbf{X}}^{v}}\mathbf{\theta}^{\prime}-\iota_{\{\mathbf{dX}\}^{v}}\mathbf{\theta}^{\prime}\). The result (260) is formally the same as in the \(\phi\)-independent case (208), so in an analogous way we get:
\[\iota_{\mathbf{X}^{v}}\mathbf{\Theta} =-\mathbf{d}J(\underline{\mathbf{X}};\phi)+d\left(\iota_{\mathbf{X}}\mathbf{\theta}\ -\mathbf{d}\gamma(\underline{\mathbf{X}};\phi)\right)-\iota_{\mathbf{X}}\mathbf{E}, \tag{261}\] \[\phantom{\iota_{\mathbf{X}^{v}}\mathbf{\Theta}}=-\mathbf{d}J(\mathbf{X};\phi)+J(\mathbf{dX};\phi)+d\left(\iota_{\mathbf{X}}\mathbf{\theta}\ -\mathbf{d}\gamma(\mathbf{X};\phi)+\gamma(\mathbf{dX};\phi)\right)-\iota_{\mathbf{X}}\mathbf{E},\]
by linearity of \(J(X;\phi)\) and \(\gamma(X;\phi)\) in their first argument. So, we get the relation of the Noether charge to the symplectic 2-form:
\[\iota_{\mathbf{X}^{v}}\mathbf{\Theta}_{\Sigma} =-\mathbf{d}Q_{\Sigma}(\underline{\mathbf{X}};\phi)+\int_{\partial\Sigma}\iota_{\mathbf{X}}\mathbf{\theta}-\mathbf{d}\gamma(\underline{\mathbf{X}};\phi)\ -\int_{\Sigma}\iota_{\mathbf{X}}\mathbf{E}, \tag{262}\] \[\iota_{\mathbf{X}^{v}}\mathbf{\Theta}_{\Sigma} =-\mathbf{d}Q_{\Sigma}(\mathbf{X};\phi)+Q_{\Sigma}(\mathbf{dX};\phi)+\int_{\partial\Sigma}\iota_{\mathbf{X}}\mathbf{\theta}-\mathbf{d}\gamma(\mathbf{X};\phi)+\gamma(\mathbf{dX};\phi)\ -\int_{\Sigma}\iota_{\mathbf{X}}\mathbf{E},\]
which manifestly reproduces (210) in the limit \(\mathbf{X}\to X\). This result can be compared to eq.(2.18), (2.19) and (3.31) in [26] (where the last three terms on the right-hand side, noted \(\mathcal{F}_{\mathbf{X}}\), are referred to as the "Noetherian flux"). The discussion on restoring charge integrability, around (211), applies here too. In particular, upon imposing the b.c. \(\mathbf{\theta}+\mathbf{d}\ell=0_{|\partial U\supset\partial\Sigma}\) and \(\mathbf{dX}=0_{|\partial U\supset\partial\Sigma}\), one can set \(\gamma(\mathbf{X};\phi)=-\iota_{\mathbf{X}}\ell\) so that,
\[Q_{\Sigma}(\mathbf{X};\phi)=\int_{\partial\Sigma}\theta(\iota_{\mathbf{X}}\phi;\phi)+ \iota_{\mathbf{X}}\ell_{|\mathcal{S}}\qquad\text{and}\qquad\iota_{\mathbf{X}}\mathbf{ \Theta}_{\Sigma}=-\mathbf{d}Q_{\Sigma}(\mathbf{X};\phi)_{|\mathcal{S}}, \tag{263}\]
which applies e.g. to GR with \(\Lambda=0\), with the York-Gibbons-Hawking boundary Lagrangian \(\ell=\ell_{\text{YGH}}\).
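For concreteness, a minimal sketch of how these conditions kill the flux term in (262): assuming \(\mathbf{dX}=0_{|\partial U\supset\partial\Sigma}\), one has \(\mathbf{d}\gamma(\mathbf{X};\phi)=-\mathbf{d}(\iota_{\mathbf{X}}\ell)=-\iota_{\mathbf{X}}\mathbf{d}\ell\) there, so that
\[\int_{\partial\Sigma}\iota_{\mathbf{X}}\mathbf{\theta}-\mathbf{d}\gamma(\mathbf{X};\phi)=\int_{\partial\Sigma}\iota_{\mathbf{X}}\left(\mathbf{\theta}+\mathbf{d}\ell\right)=0,\]
by the boundary condition \(\mathbf{\theta}+\mathbf{d}\ell=0_{|\partial U\supset\partial\Sigma}\); the remaining bulk term \(-\int_{\Sigma}\iota_{\mathbf{X}}\mathbf{E}\) vanishes on-shell, leaving (263).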
Now, it remains to address the question of the bracket of the charges (259) induced by \(\mathbf{\Theta}_{\Sigma}\) via the usual prescription \([Q_{\Sigma}(\mathbf{X};\phi),Q_{\Sigma}(\mathbf{Y};\phi)]:=\mathbf{\Theta}_{\Sigma}(\mathbf{ X}^{v},\mathbf{Y}^{v})\). As is natural, we start back from (260):
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\theta}^{\prime}=\mathfrak{L}_{\mathbf{X}}\mathbf{\theta}^{\prime}+\iota_{\{\mathbf{dX}\}^{v}}\mathbf{\theta}^{\prime}=\mathbf{\alpha}(\mathbf{X})+\iota_{\{\mathbf{dX}\}^{v}}\mathbf{\theta}^{\prime}=:\tilde{\mathbf{\alpha}}(\mathbf{X}),\] \[\hookrightarrow\quad\iota_{\mathbf{X}^{v}}\mathbf{\Theta}=-\mathbf{d}\iota_{\mathbf{X}^{v}}\mathbf{\theta}^{\prime}+\tilde{\mathbf{\alpha}}(\mathbf{X}),\] \[\text{so}\quad\iota_{\mathbf{Y}^{v}}\iota_{\mathbf{X}^{v}}\mathbf{\Theta}=-\iota_{\mathbf{Y}^{v}}\mathbf{d}\iota_{\mathbf{X}^{v}}\mathbf{\theta}^{\prime}+\iota_{\mathbf{Y}^{v}}\tilde{\mathbf{\alpha}}(\mathbf{X}). \tag{264}\]
And by using the definition and properties (32)-(35) of the Nijenhuis-Richardson bracket, we find:
\[\iota_{\mathbf{Y}^{v}}\iota_{\mathbf{X}^{v}}\mathbf{\Theta} =-\mathbf{L}_{\mathbf{Y}^{v}}\iota_{\mathbf{X}^{v}}\mathbf{\theta}^{\prime}+\iota_{\mathbf{Y}^{v}}\tilde{\mathbf{\alpha}}(\mathbf{X}),\] \[=-\iota_{\mathbf{X}^{v}}\mathbf{L}_{\mathbf{Y}^{v}}\mathbf{\theta}^{\prime}+\iota_{[\mathbf{X}^{v},\mathbf{Y}^{v}]}\mathbf{\theta}^{\prime}+\iota_{\mathbf{Y}^{v}}\tilde{\mathbf{\alpha}}(\mathbf{X}),\] \[=\iota_{\{\mathbf{X},\mathbf{Y}\}^{v}}\mathbf{\theta}^{\prime}-\iota_{\mathbf{X}^{v}}\tilde{\mathbf{\alpha}}(\mathbf{Y})+\iota_{\mathbf{Y}^{v}}\tilde{\mathbf{\alpha}}(\mathbf{X}). \tag{265}\]
Remark that, using the definition of \(\tilde{\mathbf{\alpha}}(\mathbf{X})\) and the properties (29)-(36) of the Nijenhuis-Richardson bracket, we also find that the above could be written as: \(\iota_{\mathbf{Y}^{v}}\iota_{\mathbf{X}^{v}}\mathbf{\Theta}=\iota_{\{\mathbf{X},\mathbf{Y}\}^{v}}\mathbf{\theta}^{\prime}-\iota_{\mathbf{X}^{v}}\mathbf{\alpha}(\mathbf{Y})+\iota_{\mathbf{Y}^{v}}\mathbf{\alpha}(\mathbf{X})\). A result formally identical to the one (213) for \(\phi\)-independent parameters.
Our goal is to reconstruct the current (258), which contains the exact Lagrangian anomaly (254). The last two terms on the r.h.s. of (265) can indeed be related to the Lagrangian anomaly \(\alpha(\mathbf{X};\phi)\), which satisfies the abelian version of the 1-cocycle relation (62):
\[\iota_{\mathbf{X}^{v}}\mathbf{d}(\alpha(\mathbf{Y};\phi))-\iota_{\mathbf{Y}^{v}}\mathbf{d}(\alpha( \mathbf{X};\phi))-\alpha(\{\mathbf{X},\mathbf{Y}\};\phi)=0. \tag{266}\]
To work out this relation, generalising (201), we observe that \([\mathbf{L}_{\mathbf{X}^{v}},\mathbf{d}]L^{\prime}=0\) and that, on the one hand, \(\mathbf{d}\mathbf{L}_{\mathbf{X}^{v}}L^{\prime}=\mathbf{d}\alpha(\mathbf{X};\phi)\), while on the other hand, by (234) (or (60)):
\[\mathbf{L}_{\mathbf{X}^{v}}\mathbf{d}L^{\prime} =\mathfrak{L}_{\mathbf{X}}\mathbf{d}L^{\prime}+\alpha(\mathbf{dX};\phi),\] \[=\mathfrak{L}_{\mathbf{X}}(\mathbf{E}+d\mathbf{\theta}^{\prime})+\alpha(\mathbf{ dX};\phi), \tag{267}\] \[=\mathfrak{L}_{\mathbf{X}}\mathbf{E}+d\mathbf{\alpha}(\mathbf{X})+\alpha(\mathbf{dX}; \phi),\] \[=d\iota_{\mathbf{X}}\mathbf{E}+d(\tilde{\mathbf{\alpha}}(\mathbf{X})-\iota_{\mathbf{dX} ^{v}}\mathbf{\theta}^{\prime})+d\beta(\mathbf{dX};\phi).\]
This gives us the relation:
\[\mathbf{d}\alpha(\mathbf{X};\phi) =d\left(\tilde{\mathbf{\alpha}}(\mathbf{X})-\iota_{\mathbf{AX}^{v}}\mathbf{\theta }^{\prime}+\beta(\mathbf{dX};\phi)+\iota_{\mathbf{X}}\mathbf{E}\right), \tag{268}\] \[=d\left(\tilde{\mathbf{\alpha}}(\mathbf{X})-J(\mathbf{dX};\phi)-d\gamma(\mathbf{ dX};\phi)+\iota_{\mathbf{X}}\mathbf{E}\right).\]
It is clear that the first line reduces to (201) when \(\mathbf{X}\to X\). We used the definition (255) of the current to obtain the second line. Combining (268) and (266) we get:
\[\iota_{\mathbf{X}^{v}}\tilde{\mathbf{\alpha}}(\mathbf{Y})-\iota_{\mathbf{Y}^{v}} \tilde{\mathbf{\alpha}}(\mathbf{X}) -\beta(\{\mathbf{X},\mathbf{Y}\};\phi)\ +\ \iota_{\mathbf{X}^{v}}\iota_{\mathbf{Y}}\mathbf{E}-\iota_{\mathbf{Y}^{v}}\iota_{\mathbf{X}}\mathbf{E} \tag{269}\] \[-J(\mathbf{X}^{v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X});\phi)-d\gamma(\mathbf{X}^{v}( \mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X});\phi)=:-d\mathcal{A}(\{\mathbf{X},\mathbf{Y}\};\phi).\]
This expression reduces to (215) when \(\mathbf{X}\to X\) (it can actually be rewritten in exactly the same way). From this,
(265) is:
\[\begin{split}\iota_{\mathbf{Y}^{v}}\iota_{\mathbf{X}^{v}}\mathbf{\Theta}&=\iota_{\{\mathbf{X},\mathbf{Y}\}^{v}}\mathbf{\theta}^{\prime}-\beta(\{\mathbf{X},\mathbf{Y}\};\phi)+d\mathcal{A}(\{\mathbf{X},\mathbf{Y}\};\phi)-J(\mathbf{X}^{v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X});\phi)-d\gamma(\mathbf{X}^{v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X});\phi)\\ &\quad+\iota_{\mathbf{X}^{v}}\iota_{\mathbf{Y}}\mathbf{E}-\iota_{\mathbf{Y}^{v}}\iota_{\mathbf{X}}\mathbf{E},\\ &=J(\{\mathbf{X},\mathbf{Y}\};\phi)+d\gamma(\{\mathbf{X},\mathbf{Y}\};\phi)+d\mathcal{A}(\{\mathbf{X},\mathbf{Y}\};\phi)-J(\mathbf{X}^{v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X});\phi)-d\gamma(\mathbf{X}^{v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X});\phi)\\ &\quad+\iota_{\mathbf{X}^{v}}\iota_{\mathbf{Y}}\mathbf{E}-\iota_{\mathbf{Y}^{v}}\iota_{\mathbf{X}}\mathbf{E},\\ &=J(\{\mathbf{X},\mathbf{Y}\};\phi)+d\mathcal{A}(\{\mathbf{X},\mathbf{Y}\};\phi)+d\gamma([\mathbf{X},\mathbf{Y}]_{\mathfrak{diff}(M)};\phi)-J(\mathbf{X}^{v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X});\phi)\\ &\quad+\iota_{\mathbf{X}^{v}}\iota_{\mathbf{Y}}\mathbf{E}-\iota_{\mathbf{Y}^{v}}\iota_{\mathbf{X}}\mathbf{E}.\end{split} \tag{270}\]
We have again used the definition (255) of the Noether current. We have therefore the bracket of charges:
\[\{Q_{\Sigma}(\mathbf{X};\phi);Q_{\Sigma}(\mathbf{Y};\phi)\}=Q_{\Sigma}(\{\mathbf{X},\mathbf{Y}\};\phi)+\bar{\mathcal{C}}(\{\mathbf{X},\mathbf{Y}\};\phi)+\int_{\Sigma}\iota_{\mathbf{X}^{v}}\iota_{\mathbf{Y}}\mathbf{E}-\iota_{\mathbf{Y}^{v}}\iota_{\mathbf{X}}\mathbf{E} \tag{271}\]
where we define the map
\[\begin{split}\bar{\mathcal{C}}(\ ;\phi):C^{\infty}(\Phi,\mathfrak{diff}(M))\times C^{\infty}(\Phi,\mathfrak{diff}(M))&\to C^{\infty}(\Phi),\\ (\mathbf{X},\mathbf{Y})&\mapsto\bar{\mathcal{C}}(\{\mathbf{X},\mathbf{Y}\};\phi):=\int_{\partial\Sigma}\mathcal{A}(\{\mathbf{X},\mathbf{Y}\};\phi)+\gamma([\mathbf{X},\mathbf{Y}]_{\mathfrak{diff}(M)};\phi)\\ &\qquad\qquad\qquad\qquad\qquad\qquad-\int_{\Sigma}J(\mathbf{X}^{v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X});\phi)\,,\end{split} \tag{272}\]
which is clearly antisymmetric and bilinear. It is manifest that it reproduces the map \(\mathcal{C}\) (219) in the limit \(\mathbf{X}\to X\). Given the expressions of the current and charge (258)-(259), we can write it as:
\[\begin{split}\bar{\mathcal{C}}(\lfloor\mathbf{X},\mathbf{Y}\rfloor;\phi)& =\mathcal{C}(\lfloor\mathbf{X},\mathbf{Y}\rfloor;\phi)+Q_{\Sigma}(\mathbf{X}^{ v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X});\phi),\\ &=\int_{\partial\Sigma}\mathcal{A}(\lfloor\mathbf{X},\mathbf{Y}\rfloor; \phi)+\gamma(\lfloor\mathbf{X},\mathbf{Y}\rfloor;\phi)-\theta(\iota_{\mathbf{X}^{v}(\mathbf{Y} )-\mathbf{Y}^{v}(\mathbf{X})};\phi)\ +\ \int_{\Sigma}E(\iota_{\mathbf{X}^{v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X})};\phi;\phi).\end{split} \tag{273}\]
Using the result (224) for \(\mathcal{C}\) in the \(\phi\)-independent case, we have the concrete expression:
\[\begin{split}\bar{\mathcal{C}}(\lfloor\mathbf{X},\mathbf{Y}\rfloor;\phi)& =\int_{\partial\Sigma}\mathfrak{L}_{\mathbf{X}}\,\theta(\iota_{\mathbf{Y}}\phi; \phi)-\mathfrak{L}_{\mathbf{Y}}\,\theta(\iota_{\mathbf{X}}\phi;\phi)+\iota_{\mathbf{X}} \iota_{\mathbf{Y}}L-\theta(\iota_{\mathbf{X}^{v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X})};\phi)+ \gamma(\{\mathbf{X},\mathbf{Y}\};\phi)\\ &\qquad\qquad\qquad\qquad\qquad+\int_{\partial\Sigma}\iota_{\mathbf{Y} }E(\iota_{\mathbf{X}}\phi;\phi)-\iota_{\mathbf{X}}E(\iota_{\mathbf{Y}}\phi;\phi)+\int_{ \Sigma}E(\iota_{\mathbf{X}^{v}(\mathbf{Y})-\mathbf{Y}^{v}(\mathbf{X})};\phi;\phi).\end{split} \tag{274}\]
On-shell, the bracket is:
\[\{Q_{\Sigma}(\mathbf{X};\phi);Q_{\Sigma}(\mathbf{Y};\phi)\}=Q_{\Sigma}(\{\mathbf{X},\mathbf{Y }\};\phi)+\bar{\mathcal{C}}(\lfloor\mathbf{X},\mathbf{Y}\rfloor;\phi)_{|\mathbf{S}} \tag{275}\]
and by (273)-(274) the map \(\bar{\mathcal{C}}\) is manifestly a boundary term. We can further ascertain the conditions under which it is a 2-cocycle (as we did in Appendix A.4). Writing that on-shell:
\[\begin{split}\bar{\mathcal{C}}(\lfloor\mathbf{X},\mathbf{Y}\rfloor;\phi)& :=\{Q_{\Sigma}(\mathbf{X};\phi),Q_{\Sigma}(\mathbf{Y};\phi)\}-Q_{\Sigma}( \{\mathbf{X},\mathbf{Y}\};\phi),\\ &=\ \mathbf{\Theta}_{\Sigma}(\mathbf{X}^{v},\mathbf{Y}^{v})-Q_{\Sigma}(\{\mathbf{X},\mathbf{Y }\};\phi),\end{split} \tag{276}\]
we have that,
\[\begin{split}\bar{\mathcal{C}}(\lfloor\mathbf{X},\{\mathbf{Y},\mathbf{Z}\} \rfloor;\phi)+c.p.&=\mathbf{\Theta}_{\Sigma}(X^{v},(\lfloor\mathbf{X},\mathbf{Y }\rfloor)^{v})-Q_{\Sigma}(\{\mathbf{X},\{\mathbf{Y},\mathbf{Z}\}\};\phi)+c.p.\\ &=\mathbf{\Theta}_{\Sigma}(X^{v},[\mathbf{Y}^{v},\mathbf{Z}^{v}])-Q_{\Sigma}(\{ \mathbf{X},\{\mathbf{Y},\mathbf{Z}\}\};\phi)+c.p.\\ &=\int_{\Sigma}(\mathbf{L}_{\mathbf{X}^{v}}\mathbf{\Theta})(\mathbf{Y}^{v},\mathbf{Z}^ {v})+c.p.\\ &=\int_{\Sigma}(\mathbf{d}\iota_{\mathbf{X}^{v}}\mathbf{\Theta})(\mathbf{Y}^{v}, \mathbf{Z}^{v})+c.p.\end{split} \tag{277}\]
where we used the property of the Frölicher-Nijenhuis (extended) bracket (23)/(37), which satisfies Jacobi, and the identity (317). In view of (261), we see that \(\bar{\mathcal{C}}\) satisfies a 2-cocycle condition, and thus (275) is a central extension of \(C^{\infty}(\Phi,\mathfrak{diff}(M))\simeq\mathfrak{diff}_{v}(\Phi)\), when the charges are moment maps: i.e. when we are **1)**_on-shell_, as is already assumed, and **2)** given adequate boundary conditions, as indeed (277) is written as
\[\mathcal{C}(\{\mathbf{X},\{\mathbf{Y},\mathbf{Z}\}\};\phi)+c.p.=\mathbf{d}\left(\int_{\partial\Sigma}\theta(\iota_{\mathbf{dX}}\phi;\phi)-\gamma(\mathbf{dX};\phi)\ -\int_{\Sigma}E(\iota_{\mathbf{dX}}\phi;\phi)\right)(\mathbf{Y}^{v},\mathbf{Z}^{v})+c.p.\]
Starting from a theory specified by a Lagrangian \(L=L(\phi)\), and given a dressing field \(\mathbf{u}=\mathbf{u}(\phi)\), the Diff\((M)\)-invariant manifestly relational theory is, from (231):
\[L^{\mathbf{u}}=\mathbf{u}^{*}L=L(\phi^{\mathbf{u}}). \tag{282}\]
Recall that, since a dressing _is not a gauge-fixing_, (282) _is not_ a gauge-fixed version of \(L\). The dressed field equations are then, from (235):
\[\mathbf{E}^{\mathbf{u}}=\mathbf{u}^{*}\left(\mathbf{E}+dE(t_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}\phi;\ \phi)\right). \tag{283}\]
Let us show explicitly that \(\mathbf{E}^{\mathbf{u}}\) is invariant under \(\textbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\text{Diff}(M))\), i.e. field-dependent diffeomorphisms. We do so by checking its basicity. First, it is easily checked that it is horizontal, by \(\iota_{X^{v}}\mathbf{E}=dE(\iota_{X}\phi;\phi)\) and (158):
\[\mathbf{E}^{\mathbf{u}}(X^{\nu}) =\mathbf{u}^{*}\left(\mathbf{E}(X^{\nu})+dE(\iota_{\mathbf{d}\mathbf{u}\mathbf{u}^{- 1}(X^{\nu})}\phi;\ \phi)\right), \tag{284}\] \[=\mathbf{u}^{*}(dE(\iota_{X}\phi;\ \phi)+dE(\iota_{-(X)}\phi;\ \phi) \right)\equiv 0.\]
Then, one shows the triviality of its equivariance (i.e. its invariance). Using \(R_{\psi}^{\mathbf{\star}}\mathbf{d}\mathbf{u}\mathbf{u}^{-1}=\psi_{*}^{-1}(\mathbf{d}\mathbf{u}\mathbf{u}^{-1})\circ\psi\) (159), we get:
\[R_{\phi}^{\mathbf{\star}}\mathbf{E}^{\mathbf{u}} =(R_{\phi}^{\mathbf{\star}}\mathbf{u})^{*}\left(R_{\phi}^{\mathbf{\star}}E+R_ {\phi}^{\mathbf{\star}}dE(\iota_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}\phi;\ \phi)\right), \tag{285}\] \[=(\psi^{-1}\circ\mathbf{u})^{*}\left(\psi^{*}\mathbf{E}+dE(\iota_{\phi^{ \ast}(\iota_{\mathbf{d}\mathbf{u}^{-1}})\circ\phi}\,\psi^{*}\phi;\ \psi^{*}\phi)\right),\] \[=\mathbf{u}^{*}\psi^{-1^{\star}}\left(\psi^{*}\mathbf{E}+dE(\psi^{*}\iota _{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}\phi;\ \psi^{*}\phi)\right),\] \[=\mathbf{u}^{*}\psi^{-1^{\star}}\psi^{*}\left(\mathbf{E}+dE(\iota_{\mathbf{d }\mathbf{u}\mathbf{u}^{-1}}\phi;\ \phi)\right)=\mathbf{E}^{\mathbf{u}},\]
where a property analogous to lemma (8) is used. We remark that these two computations rely on the properties (158)-(159) of \(\mathbf{d}\mathbf{u}\circ\mathbf{u}^{-1}\) as an Ehresmann connection on \(\Phi\). By (284)-(285) it is established that \(\mathbf{E}^{\mathbf{u}}\in\Omega^{1}_{\text{basic}}(\Phi)\), and thus that it is indeed \(\textbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\text{Diff}(M))\)-invariant, as expected.
Now, the dressed presymplectic potential current is, from (238):
\[\mathbf{\theta}^{\mathbf{u}}=\mathbf{u}^{*}\left(\mathbf{\theta}+d\theta(\iota_{\mathbf{d}\mathbf{u} \mathbf{u}^{-1}}\phi;\ \phi)+\iota_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}L-\ E(\iota_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}\phi; \phi)\right). \tag{286}\]
So, by the general prescription (187) to dress integrals or from (240), the dressed presymplectic potential is:
\[\mathbf{\theta}_{\Sigma}^{\mathbf{u}} :=\int_{\Sigma^{\mathbf{u}}}\mathbf{\theta}^{\mathbf{u}}=\int_{\mathbf{u}^{-1}( \Sigma)}\mathbf{u}^{*}\left(\mathbf{\theta}+d\theta(\iota_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}} \phi;\ \phi)+\iota_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}L-\ E(\iota_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}\phi; \phi)\right), \tag{287}\] \[=\int_{\Sigma}\mathbf{\theta}+d\theta(\iota_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}} \phi;\ \phi)+\iota_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}L-\ E(\iota_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}\phi; \phi).\]
This result, we notice, immediately reproduces and generalises the so-called "extended potential" derived in the edge mode literature: compare e.g. to eq.(3.22) in [10], eq.(4.6)-(4.9) in [12], or eq.(2.13) in [13]. What is done in this literature may then be understood, reinterpreted, in light of the DFM conceptual framework - in particular, edge modes (a misnomer, given the bulk contribution in (287)) are special cases of _ad hoc_ dressing fields, see section 3.4.
The dressing of the variational 1-form \(\mathbf{d}L\) associated to \(L\) is, from (233):
\[\mathbf{d}\mathbf{L}^{\mathbf{u}} =\mathbf{u}^{*}\left(\mathbf{d}L+\alpha(\mathbf{d}\mathbf{u}\circ\mathbf{u}^{-1};\ \phi)\right), \tag{288}\] \[=\mathbf{u}^{*}\left(\mathbf{d}L+\mathfrak{L}_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}L \right),\]
consistent with the lemma (157). We then observe that:
\[\langle\mathbf{E}^{\mathbf{u}}+d\mathbf{\theta}^{\mathbf{u}},\mathbf{u}^{-1}(U)\rangle =\langle\mathbf{d}L+\mathfrak{L}_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}L,U\rangle =\langle\mathbf{d}L^{\mathbf{u}},\mathbf{u}^{-1}(U)\rangle, \tag{289}\] \[=\mathbf{d}S+\langle\mathfrak{L}_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}L,U\rangle =(\mathbf{d}S)^{\mathbf{u}},\]
which is also found from (242). This is in accord with the general result (190), giving for the dressed variational 1-form associated to the action \(S=\langle L^{\prime},U\rangle\) (191):
\[(\mathbf{d}S)^{\mathbf{u}} =\mathbf{d}S+\langle\mathfrak{L}_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}L,U\rangle \tag{290}\] \[=\mathbf{d}S+\langle\iota_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}L,U\rangle=\mathbf{d}S +\langle\iota_{\mathbf{d}\mathbf{u}\mathbf{u}^{-1}}L,\partial U\rangle.\]
As previewed at the end of section 3.3, this indicates, first, that \(\mathbf{E}^{u}\) are indeed the field equations associated to \(L^{u}\), and second, that these equations for the relational variables \(\phi^{u}\) are functionally the same as those, \(\mathbf{E}\), for the gauge variables \(\phi\) of the bare theory \(L^{\prime}\): i.e. the variational principle is stable under dressing, i.e. under relational reformulation (as one would expect). As is manifest from (283), the space of dressed/relational solutions is isomorphic to the space of bare solutions, \(\mathcal{S}^{u}\simeq\mathcal{S}\). Formula (283) also shows that, when one can neglect the contribution from the variation of the dressing field (i.e. the coordinatizing reference frame, or d.o.f.), the physical/relational field equations are, up to \(\mathbf{u}^{*}\), the bare field equations.
These facts, we argue, explain why it is possible to successfully test general relativistic theories before solving the issue of their observables. In the case e.g. of GR, some authors (notably Rovelli [48; 50; 71]) have argued that what is compared to experiments is a "gauge-fixed" version of the theory, tacitly presuming some privileged physical reference frame: We suggest that a more accurate statement would be that it is tacitly a dressed/relational version of GR that is compared to experiments, in the specified regime of neglecting variation of the chosen dressing field. It is worth pondering what happens when such variation cannot be neglected, in a "strongly relational regime". We will do so in a follow-up paper.
Turning our attention to the symplectic 2-form current, we find its dressing to be, from (247):
\[\begin{split}\mathbf{\Theta}^{\mathbf{u}}=\mathbf{u}^{*}\,\Big{[}&\mathbf{\Theta}+\mathbf{d}J(\mathbf{du}\circ\mathbf{u}^{-1};\phi)\\ &+\tfrac{1}{2}\left(d\mathcal{A}([\mathbf{du}\circ\mathbf{u}^{-1},\ \mathbf{du}\circ\mathbf{u}^{-1}];\phi)+d\gamma([\mathbf{du}\circ\mathbf{u}^{-1},\ \mathbf{du}\circ\mathbf{u}^{-1}]_{\mathfrak{diff}(M)};\phi)\right)\\ &+d\left(\iota_{\mathbf{du}\circ\mathbf{u}^{-1}}\mathbf{\theta}-\mathbf{d}\,\gamma(\mathbf{du}\circ\mathbf{u}^{-1};\ \phi)\right)\\ &+\iota_{\{\mathbf{du}\circ\mathbf{u}^{-1}\}^{v}}\iota_{\mathbf{du}\circ\mathbf{u}^{-1}}\mathbf{E}-\iota_{\mathbf{du}\circ\mathbf{u}^{-1}}\mathbf{E}\ \ \Big{]}.\end{split} \tag{291}\]
From (249), or using the expression for the current (206)-(258), we then find the dressed symplectic 2-form to be:
\[\begin{split}\mathbf{\Theta}_{\Sigma}^{\mathbf{u}}:=\int_{\mathbf{u}^{-1}(\Sigma)}\mathbf{\Theta}^{\mathbf{u}}=\int_{\Sigma}\;&\mathbf{\Theta}+\mathbf{d}J(\mathbf{du}\circ\mathbf{u}^{-1};\phi)+\tfrac{1}{2}\left(d\mathcal{A}([\mathbf{du}\circ\mathbf{u}^{-1},\ \mathbf{du}\circ\mathbf{u}^{-1}];\phi)+d\gamma([\mathbf{du}\circ\mathbf{u}^{-1},\ \mathbf{du}\circ\mathbf{u}^{-1}]_{\mathfrak{diff}(M)};\phi)\right)\\ &+d\left(\iota_{\mathbf{du}\circ\mathbf{u}^{-1}}\mathbf{\theta}-\mathbf{d}\,\gamma(\mathbf{du}\circ\mathbf{u}^{-1};\ \phi)\right)+\iota_{\{\mathbf{du}\circ\mathbf{u}^{-1}\}^{v}}\iota_{\mathbf{du}\circ\mathbf{u}^{-1}}\mathbf{E}-\iota_{\mathbf{du}\circ\mathbf{u}^{-1}}\mathbf{E}. \end{split} \tag{292}\]
As detailed in section 3.2.2, if the dressing field eliminates \(\mathrm{Diff}(M)\) completely, a priori there may be residual transformations of the second kind stemming from ambiguities (174) in choosing the dressing field: transformations under \(\mathrm{Diff}(N)\), the structure group of the space of dressed fields \(\Phi^{\mathbf{u}}\), and under \(\mathbf{Diff}_{v}(\Phi^{\mathbf{u}})\simeq C^{\infty}(\Phi^{\mathbf{u}},\mathrm{Diff}(N))\), its group of vertical diffeomorphisms. In this case, the results of section 4.1.1 again apply mutatis mutandis.
Noether charges associated to \(\mathbf{X}\in\mathfrak{diff}(N)\) are immediately found to be, as in (204)-(207):
\[Q_{\Sigma^{\mu}}(\mathbf{X};\phi^{\mu}):=\langle J(\mathbf{X}; \phi^{\mu}),\Sigma^{\mu}\rangle=\int_{\Sigma^{\mu}}l_{\mathbf{X}}\phi^{\mu}-l_ {\mathbf{X}}L^{\mu}-d\gamma(\mathbf{X};\phi^{\mu}), \tag{294}\] \[=\int_{\partial\Sigma^{\mu}}\theta(l_{\mathbf{X}}\phi^{\mu};\phi^ {\mu})-\gamma(\mathbf{X};\phi^{\mu})\ -\int_{\Sigma^{\mu}}E(l_{\mathbf{X}}\phi^{\mu};\phi^{\mu}),\]
and to satisfy, as in (210),
\[\iota_{\mathbf{X}^{\prime}}\mathbf{\Theta}_{\Sigma^{\mu}}=-\mathbf{d}Q_{\Sigma^{ \mu}}(\mathbf{X};\phi^{\mu})+\int_{\partial\Sigma^{\mu}}\iota_{\mathbf{X}} \mathbf{\theta}^{\mu}-\mathbf{d}\gamma(\mathbf{X};\phi^{\mu})\ -\int_{\Sigma^{\mu}}\iota_{\mathbf{X}}\mathbf{E}^{ \mu}, \tag{295}\]
so that their bracket is induced by \(\mathbf{\Theta}_{\Sigma^{\mathbf{u}}}\) as in (217)-(219):
\[\begin{split}[Q_{\Sigma^{\mu}}(\mathbf{X};\phi^{\mu}),Q_{\Sigma^{ \mu}}(\mathbf{Y};\phi^{\mu})]:=\mathbf{\Theta}_{\Sigma^{\mu}}(\mathbf{X}^{ \prime},\mathbf{Y}^{\prime})=Q_{\Sigma^{\mu}}([\mathbf{X},\mathbf{Y}]_{ \mathsf{bif}(N)};\phi^{\mu})+\int_{\partial\Sigma^{\mu}}\mathcal{A}([\mathbf{X },\mathbf{Y}];\phi^{\mu})+\gamma([\mathbf{X},\mathbf{Y}]_{\mathsf{bif}(N)}; \phi^{\mu})\\ +\int_{\Sigma^{\mu}}\iota_{\mathbf{X}^{\prime}\mathbf{Y}}\mathbf{E} ^{\mu}-\iota_{\mathbf{Y}^{\prime}\mathbf{X}}\mathbf{E}^{\mu},\end{split} \tag{296}\]
with \(\mathbf{X},\mathbf{Y}\in\mathfrak{diff}(N)\) and \(\mathbf{X}^{v},\mathbf{Y}^{v}\in\Gamma(V\Phi^{\mathbf{u}})\) the corresponding fundamental vertical vector fields on the space of dressed fields. On-shell this is,
\[\{Q_{\Sigma^{\mu}}(\mathbf{X};\phi^{\mu}),Q_{\Sigma^{\mu}}(\mathbf{Y};\phi^{ \mu})\}=Q_{\Sigma}([\mathbf{X},\mathbf{Y}]_{\mathsf{bif}(N)};\phi^{\mu})+ \mathcal{C}([\mathbf{X},\mathbf{Y}];\phi^{\mu})_{|S}\,, \tag{297}\]
defining the map
\[\begin{split}\mathcal{C}(\ ;\phi):\mathsf{bif}(N)\times\mathsf{bif}(N)& \to C^{\infty}(\Phi^{\mu}),\\ (X,Y)&\mapsto\mathcal{C}([\mathbf{X},\mathbf{Y}]; \phi):=\int_{\partial\Sigma}\mathcal{A}([\mathbf{X},\mathbf{Y}];\phi^{\mu})+ \gamma([\mathbf{X},\mathbf{Y}]_{\mathsf{bif}(N)};\phi^{\mu}),\end{split} \tag{298}\]
with concrete expression as in (224)-(225). The latter is a 2-cocycle on-shell and under adequate b.c., so that (297) is a central extension of \(\mathfrak{diff}(N)\).
Obviously, the results for charges \(Q_{\Sigma^{\mathbf{u}}}(\mathbf{X};\phi^{\mathbf{u}})\) associated to \(\phi^{\mathbf{u}}\)-dependent parameters \(\mathbf{X}\in C^{\infty}(\Phi^{\mathbf{u}},\mathfrak{diff}(N))\simeq\mathfrak{diff}_{v}(\Phi^{\mathbf{u}})\), and for their bracket, carry over verbatim from section 4.2.1.
Also, the vertical transformations of all relevant dressed objects under \(\mathbf{Diff}_{v}(\Phi^{\mathbf{u}})\simeq C^{\infty}(\Phi^{\mathbf{u}},\mathrm{Diff}(N))\), by (178), are the same as those of their bare counterparts under \(\mathbf{Diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathrm{Diff}(M))\), as found in section 4.2.
As argued in section 3.4, the existence of residual symmetries of the second kind due to the ambiguity in choosing a dressing field, thus the existence of the above corresponding formal algebra of charges, is a systematic feature of cases where _ad hoc_ dressing fields \(\mathbf{u}=u\) (a.k.a. gravitational Stueckelberg fields) are introduced: Then the residual symmetry \(\mathrm{Diff}(N)\) is exactly isomorphic to the original \(\mathrm{Diff}(M)\) symmetry, and neither contains more information nor enjoys more immediate physical interpretation.
Such is the case in the edge modes literature, where edge modes are indeed examples of _ad hoc_ dressing fields, and the notions of "surface symmetries" or "corner symmetries" are readily understood in terms of \(\mathrm{Diff}(N)\) and \(\mathbf{Diff}_{v}(\Phi^{\mathbf{u}})\simeq C^{\infty}(\Phi^{\mathbf{u}},\mathrm{Diff}(N))\), as described in section 3.2.2. To see this, one may compare (174) and the above results to e.g. eq.(3.44) and section 3.3 in [10], eq.(6.3) and section 6 in [12], or eq.(2.42) and section 2.3 in [13].
The above, or rather more generally section 3.2.2, may also be compared to sections 3.3.6 to 3.3.9 of [43], where notions such as "frame reorientations", "relational atlases", or "dynamical frame covariance" can clearly be understood in terms of \(\mathrm{Diff}(N)\) and \(\mathbf{Diff}_{v}(\Phi^{\mathbf{u}})\simeq C^{\infty}(\Phi^{\mathbf{u}},\mathrm{Diff}(N))\).
### Relational interpretation of the DFM
In case of a genuinely constructively built \(\phi\)-dependent dressing field \(\mathbf{u}=\mathbf{u}(\phi)\), the ambiguity parametrised by \(\mathrm{Diff}(N)\) may be greatly reduced, possibly (perhaps likely) even to a physically motivated discrete choice of reference field, or d.o.f., among the collection \(\phi\). This is the situation where the relational interpretation of the formalism is at its strongest: The dressed theory \(L^{\mathbf{u}}\), and derived basic objects \(\mathbf{E}^{\mathbf{u}}\), \(\mathbf{\theta}_{\mathbf{\Sigma}}^{\mathbf{u}}\) and \(\mathbf{\Theta}_{\mathbf{\Sigma}}^{\mathbf{u}}\), are manifestly relational quantities with a clear physical interpretation. Such is the case in all manners of "scalar coordinatisations" of GR, as proposed e.g. in [48, 49, 50, 51, 52]. There, the scalar fields, clock fields, or dust fields, are exactly dressing fields \(\mathbf{u}\), and the dressed metric \(g^{\mathbf{u}}=\mathbf{u}^{*}g\) (and its derived quantities \(\mathbf{\alpha}^{\mathbf{u}}=\alpha(\mathbf{dg}^{\mathbf{u}};g^{\mathbf{u}})\)) acquire a clear physical meaning as a relational observable - a "complete observable" in the terminology coined by Rovelli.
When this applies, as one may expect in modelling realistic situations, if conserved charges cannot be derived via a Noether analysis (all objects being \(\mathrm{Diff}(M)\)-invariant), they may be derived from the field equations \(\mathbf{E}^{\mathbf{u}}\).
## 6 Conclusion
The purpose of this paper was to give a detailed presentation of the dressing field method for \(\mathrm{Diff}(M)\)-theories, extending the original framework designed to deal with gauge theories with internal gauge symmetry groups - of either Yang-Mills type, or gauge gravity type based on Cartan geometry. The proper mathematics required for the formulation of this extension of the DFM is the bundle geometry of field space \(\Phi\). By carefully exposing it, we clarified a number of notions often encountered in the literature. Let us mention three especially noteworthy.
First, we have stressed the proper geometric understanding of field-independent and field-dependent diffeomorphisms: The former, \(\mathrm{Diff}(M)\), is the structure group of \(\Phi\) as a bundle. Its action by pullback on forms of \(\Phi\) defines their equivariance. The latter, \(C^{\infty}(\Phi,\mathrm{Diff}(M))\simeq\mathbf{Diff}_{v}(\Phi)\), is the group of vertical diffeomorphisms of \(\Phi\). It contains as a subgroup \(\mathbf{Aut}_{v}(\Phi)\), the group of vertical _automorphisms_ of \(\Phi\), isomorphic to its gauge group \(\mathbf{Diff}(M)\). Their actions by pullback on forms of \(\Phi\) geometrically define general vertical, and gauge transformations respectively.
Secondly, we elucidated the geometric origin of the "extended bracket" (24) for field-dependent vector fields of \(M\), often used in the covariant phase space, edge mode, and BMS literature - tracing back its introduction to [22] and [23], before its resurfacing in [24]. Some have attempted to interpret it as a Lie algebroid bracket [58]. Actually, it is but the degree 0 case of the Frölicher-Nijenhuis bracket for vector-valued forms on \(\Phi\), and is tied to the properties of the Nijenhuis-Lie derivative [29] on a principal bundle. The latter expresses, on \(\Phi\) and in degree 0, the action of \(\mathbf{diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathfrak{diff}(M))\). This extended bracket reduces to the standard \(\mathfrak{diff}(M)\) bracket for \(\mathbf{diff}(M)\simeq\mathbf{aut}_{v}(\Phi)\), as is standard in bundle geometry. Further details on this may be found in a companion technical note [57], treating the finite dimensional case.
Thirdly, we have stressed the non-standard notion of twisted equivariant/tensorial forms and of the corresponding twisted connection (necessary for their covariant differentiation): these are forms whose equivariance is controlled by \(\mathrm{Diff}(M)\) 1-cocycles, rather than \(\mathrm{Diff}(M)\) representations. These objects are key to understanding the geometry of classical and quantum \(\mathfrak{diff}(M)\)-anomalies. The Wess-Zumino consistency condition for such anomalies is e.g. encoded in the horizontality property of the twisted curvature.
As a natural part of the bundle geometry of \(\Phi\), we gave an account of integration on \(M\) as an invariant pairing on what we called the "associated bundle of regions", a bundle associated to \(\Phi\) via the defining representation of \(\mathrm{Diff}(M)\) - the field of open sets of \(M\). This allowed us to properly understand the action of \(C^{\infty}(\Phi,\mathrm{Diff}(M))\) in integrals, a technicality relevant to the variational principle.
Exposing the bundle structure of \(\Phi\) also provides the proper geometric understanding of several heuristic computations done in various works on covariant phase space. In so doing, we thus set the stage to give our own streamlined account of the covariant phase space methods for \(\mathrm{Diff}(M)\)-theories. Starting from a given Lagrangian, we derived concrete and conceptually transparent expressions for the Noether charges associated to both \(\mathfrak{diff}(M)\) and \(\mathfrak{diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathfrak{diff}(M))\), and for their brackets induced by the symplectic 2-form \(\mathbf{\Theta}_{\mathbf{\Sigma}}\). These allowed for an easy reading of the conditions for the brackets to be Lie, and for the algebra of charges to be a central extension of \(\mathfrak{diff}(M)\) and \(\mathfrak{diff}_{v}(\Phi)\simeq C^{\infty}(\Phi,\mathfrak{diff}(M))\): which are, unsurprisingly, **1)** being on-shell, **2)** being under b.c. killing the symplectic flux (closing the system). The expression we derive for the 2-cocycles reproduces or extends results from the covariant phase space literature.
It is reasonable to try and redefine the bracket so as to relax condition **2**): This would be interpretable as a Poisson bracket well-defined for open systems, i.e. where there is "flux" through the boundary of the region. Such systems are relevant when analysing electromagnetic and especially gravitational radiations. The latter subject in particular is closely related to BMS asymptotic symmetries (and its generalisations, like \(\Lambda\)-BMS). In that context, such modified prescriptions have been proposed: Notably the Barnich-Troessaert (BT) bracket [97], followed by attempts to rederive it or by other modifications, e.g. [26; 27; 35; 41; 92] and references therein. It was not our aim to make explicit contact with those attempts. But our first principles derivation may help to make it conceptually clearer how one may proceed in a geometrically natural way. We shall revisit this topic in future works.
We derived geometrically the \(C^{\infty}(\Phi,\mathrm{Diff}(M))\simeq\mathbf{Diff}_{v}(\Phi)\)-transformations of the various key objects discussed: the Lagrangian, field equations, presymplectic potential and 2-form. This showed in particular that the variational principle, thus the space of solutions, is stable under the action of field-dependent diffeomorphisms. This proves that \(\mathrm{Diff}(M)\)-theories have a much larger "covariance group", \(C^{\infty}(\Phi,\mathrm{Diff}(M))\simeq\mathbf{Diff}_{v}(\Phi)\), a fact hinted at first (to my knowledge) by Bergmann and Komar in the early 70s [22].
The derivation of these \(C^{\infty}(\Phi,\mathrm{Diff}(M))\simeq\mathbf{Diff}_{v}(\Phi)\)-transformations was also necessary to apply the rule of thumb of the DFM, thereby obtaining the dressed, or basic, counterparts of our key objects. These reproduce and generalise results from the edge mode literature, in particular the formulae for the "extended" presymplectic potential and 2-form. The DFM also immediately allows one to derive the formulas for Noether charges associated to residual symmetries arising from possible ambiguities in the choice of dressing field. These reproduce results of the edge mode literature pertaining to the notion of "surface/corner symmetries". We stressed that when dressing fields are _ad hoc_, \(\mathbf{u}=u\), residual symmetries are mere duplicates of \(\mathrm{Diff}(M)\), encoding no more information and enjoying no more immediate physical interpretation. Edge modes being _ad hoc_ dressing fields, this applies to surface/corner symmetries, contra the prevalent opinion.
Finally, we stressed that constructive dressing fields \(\mathbf{u}=\mathbf{u}(\phi)\) allow the most interesting application of the DFM. It turns out to be an explicit relational framework for general relativistic theories, i.e. it allows one to rewrite such theories in a \(\mathrm{Diff}(M)\)-invariant and manifestly relational way. The DFM can be seen as a systematic, if formal, implementation in field theory of Einstein's point coincidence argument: his answer to the hole argument [31; 32; 33], clarifying the physical meaning of \(\mathrm{Diff}(M)\)-invariance. To register this key physical insight, we tacitly adopt a rebranding of GR as "General Relationality".
The DFM also happens to make clear why "bare" versions of GR theories could be tested before solving the issue of observables: The relational version (often inaccurately referred to as a "gauge-fixed" version) is formally identical to the bare one, and the practical deployment of the theory tacitly presumes the use of its relational version (as material reference frames = dressing fields are used). In a follow-up work, we will show how the DFM naturally reproduces several works relating to the production of observables in GR, notably the so-called "scalar coordinatisation" [48; 49; 50; 51; 52]. We will also show how to apply the formalism to cosmological perturbation theory (encompassing the notion of Bardeen variables).
The definition of "dressed integrals" is part of the DFM framework, and there appears naturally the notion of "dressed regions": i.e. \(\mathrm{Diff}(M)\)-invariant relationally defined region of _spacetime_ - again in line with the insight of the point-coincidence argument. This in particular dispels the often repeated misconception that "boundaries break \(\mathrm{Diff}(M)\)-invariance", which is logically equivalent to the hole argument: Gauge symmetries are never broken, a relationally defined, physical, boundary is obviously \(\mathrm{Diff}(M)\)-invariant. We will investigate the implications for the definition of local subsystems, black hole physics, and to asymptotic symmetries in a follow-up paper.
The framework developed here extends naturally (with little adjustments) to relativistic _gauge_ field theories, replacing \(\mathrm{Diff}(M)\) with \(\mathrm{Aut}(P)\) as the structure group of \(\Phi\). This extension, developed in the forthcoming work [98], gives a relational account of "Einstein-Yang-Mills" theories. Indeed, given \(\mathcal{H}\simeq\mathrm{Aut}_{v}(P)\) the gauge group of a principal \(H\)-bundle \(P\), on account of the SES of groups \(\mathcal{H}\to\mathrm{Aut}(P)\to\mathrm{Diff}(M)\) canonically associated to \(P\),31 the group \(\mathrm{Aut}(P)\) locally splits as \(\mathrm{Diff}(M)\ltimes\mathcal{H}_{\mathrm{loc}}\), with \(\mathcal{H}_{\mathrm{loc}}\) the (local) gauge group of a gauge field theory.
Footnote 31: The corresponding SES of Lie algebras defines the Atiyah Lie algebroid canonically associated to \(P\). See (5)-(20) in our case \(P=\Phi\).
We will illustrate this extension of the DFM with the case of pure GR (with \(\mathrm{Diff}(M)\) and \(\mathcal{H}_{\mathrm{loc}}=\mathcal{SO}(1,3)\)), of GR + scalar EM (with \(\mathrm{Diff}(M)\) and \(\mathcal{H}_{\mathrm{loc}}=\mathcal{U}(1)\)), and of conformal gravity in the Cartan geometric framework (with \(\mathrm{Diff}(M)\) and \(\mathcal{H}_{\mathrm{loc}}=\mathcal{SO}(2,4)\)).
This framework has obvious implications for quantum gravity that we will also pursue.
## Acknowledgment
This work was funded by the OP J.A.C MSCA grant, number CZ.02.01.01/00/22_010/0003229, co-funded by the Czech government Ministry of Education, Youth & Sports and the EU. This research was also funded in part by the Austrian Science Fund (FWF), [P 36542]. Support from the service _Physics of the Universe, Fields and Gravitation_ at UMONS (BE) is also acknowledged.
The author wishes to thank L. Ravera for sustained and stimulating discussions on relationality in general relativistic physics, and for steering him into emphasizing it as a key feature of the extended DFM developed here. She is to be credited for the rebranding of GR as "General Relationality", adopted in this paper.
## Appendix A Appendix
### Lie algebra (anti)-isomorphisms
We prove here that the "verticality" map \(\left|{}^{v}:\mathfrak{diff}(M)\to\Gamma(V\Phi)\right.\), \(X\mapsto X^{v}\), is a morphism of Lie algebras. Let us write the flow through \(\phi\in\Phi\) of \(X^{v}\in\Gamma(V\Phi)\) as \(\tilde{\psi}_{\tau}(\phi):=R_{\psi_{\tau}}\phi:=\psi_{\tau}^{*}\phi\), with of course \(X=\frac{d}{d\tau}\psi_{\tau}\left|{}_{\tau=0}\in\Gamma(TM)\right.\). So that,
\[X^{v}_{|\phi}=\frac{d}{d\tau}\tilde{\psi}_{\tau}(\phi)\left|{}_{\tau=0}\quad \left({}=X(\phi)^{v}\frac{\delta}{\delta\phi},\text{ written as a derivation of }C^{\infty}(\Phi)\right).\right. \tag{299}\]
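As a concrete illustration (a standard example, not specific to the present framework): if the field is a metric, \(\phi=g\), then \(\tilde{\psi}_{\tau}(g)=\psi_{\tau}^{*}g\) and
\[X^{v}_{|g}=\frac{d}{d\tau}\,\psi_{\tau}^{*}g\,\Big{|}_{\tau=0}=\mathfrak{L}_{X}g,\qquad\text{i.e.}\quad(X^{v}_{|g})_{\mu\nu}=\nabla_{\mu}X_{\nu}+\nabla_{\nu}X_{\mu},\]
with \(\nabla\) the Levi-Civita connection of \(g\): the vertical vector at \(g\) induced by \(X\) is just the Lie derivative of the field along \(X\).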
Thus, the bracket of two vertical vector fields is,
\[[X^{v},Y^{v}]_{\phi}=\mathbf{L}_{X^{v}}Y^{v}_{|\phi} :=\frac{d}{d\tau}(\tilde{\psi}_{\tau}^{-1})_{\star}\,Y^{v}_{|\tilde{\psi}_{\tau}(\phi)}\,\Big{|}_{\tau=0},\] \[:=\frac{d}{d\tau}\frac{d}{ds}\left(\tilde{\psi}_{\tau}^{-1}\circ\tilde{\eta}_{s}\circ\tilde{\psi}_{\tau}\right)(\phi)\,\Big{|}_{s=0}\Big{|}_{\tau=0},\] \[:=\frac{d}{d\tau}\frac{d}{ds}\ R_{\psi_{\tau}^{-1}}\circ R_{\eta_{s}}\circ R_{\psi_{\tau}}\,\phi\,\Big{|}_{s=0}\Big{|}_{\tau=0},\] \[:=\frac{d}{d\tau}\frac{d}{ds}\ R_{\left(\psi_{\tau}\circ\eta_{s}\circ\psi_{\tau}^{-1}\right)}\,\phi\,\Big{|}_{s=0}\Big{|}_{\tau=0},\] \[:=\frac{d}{d\tau}\frac{d}{ds}\left(\psi_{\tau}\circ\eta_{s}\circ\psi_{\tau}^{-1}\right)^{*}\phi\,\Big{|}_{s=0}\Big{|}_{\tau=0},\] \[=:\ \left([X,Y]_{\mathfrak{diff}(M)}\right)^{v}_{|\phi}\,. \tag{300}\]
Finally,
\[[\mathbf{X}^{v},\mathbf{Y}^{v}]_{\phi} =\frac{d}{d\tau}\frac{d}{ds}\,R_{\mathbf{\psi}_{\tau}(\phi)^{-1}\circ\,\mathbf{\eta}_{s}(\phi)\,\circ\,\mathbf{\psi}_{\tau}(\phi)}\,\phi\,\Big{|}_{s=0}\Big{|}_{\tau=0},\] \[=\frac{d}{d\tau}\frac{d}{ds}\underbrace{(\mathbf{\psi}_{\tau}(\phi)^{-1}\circ\mathbf{\eta}_{s}(\phi)\circ\mathbf{\psi}_{\tau}(\phi))^{*}\phi}_{\text{flow of }[\mathbf{X},\mathbf{Y}]}\,\Big{|}_{s=0}\Big{|}_{\tau=0},\] \[=:\ \left([\mathbf{X},\mathbf{Y}]\right)^{v}_{|\phi}\,.\]
### Assumptions on the set of fields and the Lagrangian functional
First, we assume that \(L^{\prime}/L\) is a gauge theory, based on an \(H\)-bundle \(P\), so that the elementary fields under consideration are Ehresmann and/or Cartan connection 1-forms \(A\), depending on whether we are doing Yang-Mills or gravity gauge theories (or both), and tensorial 0-forms \(\varphi\) (sections of associated bundles) representing matter fields. Thus \(\phi=\{A,\varphi\}\). The curvature of \(A\) is the Ad-tensorial 2-form \(F=DA\), and the covariant derivative of \(\varphi\) is the \(\rho\)-tensorial 1-form \(D\varphi=d\varphi+\rho_{*}(A)\varphi\). The covariant derivative of a \(\rho\)-tensorial form \(b\) is \(Db=db+\rho_{*}(A)b\), and we have that \(DF=0\) (Bianchi identity) and \(DD\varphi=\rho_{*}(F)\varphi\). We write \(D\phi=\{F,D\varphi\}\). Since \(D^{2p}\varphi=\rho_{*}(F^{p})\varphi\) and \(D^{2p+1}\varphi=\rho_{*}(F^{p})D\varphi\), \(\{\phi,D\phi\}\) is an algebraically closed set of variables under the action of \(D\). Defining the "covariant Lie derivative" \(L^{D}_{X}:=[\iota_{X},D]\), it is easy to prove that \(\mathcal{L}_{X}b=\mathcal{L}^{D}_{X}b-\rho_{*}(\iota_{X}A)b\). On the other hand, it is also easily seen that \(\mathcal{L}_{X}A=D(\iota_{X}A)+\iota_{X}F\). So we can write the generic expression
\[\iota_{X}d\phi=\mathcal{L}_{X}\phi=\iota_{X}D\phi+D(t_{X}\phi)-\rho_{*}(t_{X}A )\phi. \tag{310}\]
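For instance, the identity \(\mathcal{L}_{X}A=D(\iota_{X}A)+\iota_{X}F\) quoted above can be checked in one line, assuming the standard conventions \(F=dA+\tfrac{1}{2}[A,A]\) and \(D(\iota_{X}A)=d(\iota_{X}A)+[A,\iota_{X}A]\):
\[\iota_{X}F+D(\iota_{X}A)=\iota_{X}dA+[\iota_{X}A,A]+d(\iota_{X}A)+[A,\iota_{X}A]=\iota_{X}dA+d(\iota_{X}A)=\mathcal{L}_{X}A,\]
since \([\iota_{X}A,A]+[A,\iota_{X}A]=0\) for the graded bracket of Lie-algebra valued forms.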
Now, we assume the Lagrangian is \(H\)-invariant, and of the form
\[L(\phi)=\bar{L}(\phi;D\phi), \tag{311}\]
where \(\bar{L}\) is the functional expression in terms of \(\phi\) and \(D\phi\). In other words, we assume that actually \(L\) is defined on \(J^{1}\Phi\), the \(1^{\text{st}}\) jet bundle of field space. Then we have,
\[\mathbf{dL}_{|\phi}=\bar{L}_{0}(\mathbf{d}\phi;\{\phi\})+\bar{L}_{1}(\mathbf{d}D\phi;\{ \phi\}), \tag{312}\]
where \(\{\phi\}\) means the collection of remaining \(\phi\) and \(D\phi\) in the respective functional expression \(\bar{L}_{0/1}\). Using \(\mathbf{d}D\phi=D(\mathbf{d}\phi)+\rho_{*}(\mathbf{dA})\phi\), and integrating by parts, we get the formal expression of the field equation and presymplectic potential:
\[\mathbf{dL}_{|\phi} =\bar{L}_{0}(\mathbf{d}\phi;\{\phi\})+\bar{L}_{1}(D(\mathbf{d}\phi)+\rho_{ *}(\mathbf{dA})\phi;\{\phi\}),\] \[=d\bar{L}_{1}(\mathbf{d}\phi;\{\phi\})-(-)^{r}\bar{L}_{1}(\mathbf{d}\phi;D [\phi])+\bar{L}_{1}(\rho_{*}(\mathbf{dA});\{\phi\})+\bar{L}_{0}(\mathbf{d}\phi;\{\phi\}),\] \[=:d\theta(\mathbf{d}\phi;\phi)+E(\mathbf{d}\phi;\phi),\] \[=d\theta_{|\phi}+\mathbf{E}_{|\phi}.\]
where \(r=0,1\) is the \(\Omega^{\bullet}(M)\) form degree of \(\mathbf{d}\phi\) over which the integration by parts is made, and we used the \(H\)-invariance of \(L/\bar{L}_{0/1}\) to write \(\bar{L}_{1}(\rho_{*}(A)\mathbf{d}\phi;\{\phi\})+(-)^{r}\bar{L}_{1}(\mathbf{d}\phi;\rho_{*}(A)\phi)=0\).
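As a minimal illustration of this splitting - taking, purely as an example, a Yang-Mills-type Lagrangian \(L(A)=\tfrac{1}{2}\operatorname{Tr}(F\wedge\star F)\), which is of the assumed form \(\bar{L}(\phi;D\phi)\) - one finds
\[\mathbf{d}L=\operatorname{Tr}(\mathbf{d}F\wedge\star F)=\operatorname{Tr}\big{(}D(\mathbf{d}A)\wedge\star F\big{)}=d\operatorname{Tr}(\mathbf{d}A\wedge\star F)+\operatorname{Tr}(\mathbf{d}A\wedge D\star F),\]
so that \(\theta(\mathbf{d}A;A)=\operatorname{Tr}(\mathbf{d}A\wedge\star F)\) and \(E(\mathbf{d}A;A)=\operatorname{Tr}(\mathbf{d}A\wedge D\star F)\), reproducing the Yang-Mills field equations \(D\star F=0\).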
\[\iota_{X^{\nu}}\mathbf{\theta}=\theta(\iota_{X^{\nu}}\mathbf{d}\phi;\phi)=\bar{L}_{1}( \iota_{X^{\nu}}d\phi;\phi) \tag{313}\]
On the other, using (310), hand we also get the expression of the evaluation of the Lagrangian \(n\)-form on a vector field \(X\in\Gamma(TM)\simeq\mathfrak{hif}(M)\), acting as a derivation (like \(\mathbf{d}\)):
\[\iota_{X}L(\phi)=\iota_{X}\bar{L}(\phi;D\phi) =\bar{L}_{0}(\iota_{X}\phi;\{\phi\})+\bar{L}_{1}(\iota_{X}D\phi; \{\phi\}),\] \[=\bar{L}_{0}(\iota_{X}\phi;\{\phi\})+\bar{L}_{1}(\iota_{X^{\nu}}d \phi-D(\iota_{X}\phi)+\rho_{*}(\iota_{X}A)\phi;\{\phi\}),\] \[=\bar{L}_{1}(\iota_{X^{\nu}}d\phi;\{\phi\})-d\bar{L}_{1}(\iota_{X }\phi;\{\phi\})+\bar{L}_{1}(\iota_{X}\phi;D[\phi]+\bar{L}_{1}(\rho_{*}(\iota_{X }A)\phi;\{\phi\})+\bar{L}_{0}(\iota_{X}\phi;\{\phi\}),\] \[=\iota_{X^{\nu}}\theta-d\theta(\iota_{X}\phi;\phi)+E(\iota_{X}\phi ;\phi).\]
Which give us the important identity,
\[\iota_{X^{\nu}}\mathbf{\theta}-\iota_{X}L=d\theta(\iota_{X}\phi;\phi)-E(\iota_{X} \phi;\phi). \tag{314}\]
It is used in section 4.1.1 to express the Noether current and charge, so as to make manifest that they are \(d\)-exact on-shell.
### Condition for the bracket (218) to be Lie
For any form \(\omega\in\Omega^{1}(N)\) and \(X,Y,Z\in\Gamma(N)\), we have by Koszul formula:
\[d\omega(X,Y)=X\cdot\omega(Y)-Y\cdot\omega(X)-\omega([X,Y]) =\iota_{X}d[\omega(Y)]-\iota_{Y}d[\omega(X)]-\iota_{\{X,Y\}}\omega,\] \[=L_{X}\iota_{Y}\omega-L_{Y}\iota_{X}\omega-\iota_{\{X,Y\}}\omega, \tag{315}\] \[=\iota_{Y}L_{X}\omega-L_{Y}\iota_{X}\omega,\] \[=\iota_{Y}L_{X}\omega-\iota_{X}L_{Y}\omega+\iota_{\{X,Y\}}\omega. \tag{316}\]
The two numbered equalities are useful ways to express the exterior derivative of a 1-form.
Now, for a \(d\)-exact 2-form \(\Omega=d\omega\in\Omega^{2}(N)\), we have by (316):
\[\Omega(X,[Y,Z])+c.p. =d\omega(X,[Y,Z])+c.p.\] \[=-\iota_{X}L_{[Y,Z]}\omega+\iota_{[Y,Z]}L_{X}\omega-\iota_{[X,[Y,Z]]}\omega\ +\ c.p.\] \[=-\iota_{X}[L_{Y},L_{Z}]\omega+\iota_{[Y,Z]}L_{X}\omega\ +\ c.p.\] \[=-\iota_{X}L_{Y}L_{Z}\omega+\iota_{X}L_{Z}L_{Y}\omega+\iota_{[Y,Z]}L_{X}\omega\ +\ c.p.\] \[=\left(-L_{Y}\iota_{X}+\iota_{[Y,X]}\right)L_{Z}\omega+\left(L_{Z}\iota_{X}-\iota_{[Z,X]}\right)L_{Y}\omega+\iota_{[Y,Z]}L_{X}\omega\ +\ c.p.\] \[=\left(-L_{Y}\iota_{X}+\iota_{[Y,X]}\right)L_{Z}\omega+\left(L_{Z}\iota_{X}-\iota_{[Z,X]}\right)L_{Y}\omega+\iota_{[Y,Z]}L_{X}\omega\] \[\quad+\left(-L_{Z}\iota_{Y}+\iota_{[Z,Y]}\right)L_{X}\omega+\left(L_{X}\iota_{Y}-\iota_{[X,Y]}\right)L_{Z}\omega+\iota_{[Z,X]}L_{Y}\omega\] \[\quad+\left(-L_{X}\iota_{Z}+\iota_{[X,Z]}\right)L_{Y}\omega+\left(L_{Y}\iota_{Z}-\iota_{[Y,Z]}\right)L_{X}\omega+\iota_{[X,Y]}L_{Z}\omega,\] \[=\left\{d(L_{X}\omega)\right\}(Y,Z)+\left\{d(L_{Y}\omega)\right\}(Z,X)+\left\{d(L_{Z}\omega)\right\}(X,Y),\] \[=[L_{X}\Omega](Y,Z)+c.p. \tag{317}\]
We used (315) to get the second-to-last line, and \([L_{X},d]=0\) to get the last. Finally, since \(d\Omega=0\):
\[\Omega(X,[Y,Z])+c.p.=\left\{d(\iota_{X}\Omega)\right\}(Y,Z)+c.p. \tag{318}\]
Applying this to the presymplectic 2-form current \(\mathbf{\Theta}\in\Omega^{2}(\Phi)\) and \(X^{\nu}\), \(Y^{\nu},Z^{\nu}\in\Gamma(V\Phi)\):
\[\mathbf{\Theta}(X^{\nu},[Y^{\nu},Z^{\nu}])+c.p.=\left(\mathbf{L}_{X^{\nu}}\mathbf{\Theta}\right)(Y^{\nu},Z^{\nu})+c.p.=\left(\mathbf{d}\iota_{X^{\nu}}\mathbf{\Theta}\right)(Y^{\nu},Z^{\nu})+c.p. \tag{319}\]
Now, we defined a formal bracket (217) by \(\{Q_{\Sigma}(X;\phi),Q_{\Sigma}(Y;\phi)\}:=\mathbf{\Theta}_{\Sigma}(X^{\nu},Y^{ \nu})\), and from its on-shell form (218) one has:
\[\mathcal{C}(\{X,Y\};\phi) :=\{Q_{\Sigma}(X;\phi),Q_{\Sigma}(Y;\phi)\}-Q_{\Sigma}([X,Y]_{ \operatorname{int}(\lambda)};\phi), \tag{320}\] \[=\ \mathbf{\Theta}_{\Sigma}(X^{\nu},Y^{\nu})-Q_{\Sigma}([X,Y]_{ \operatorname{int}(\lambda)};\phi).\]
So,
\[\mathcal{C}([X,[Y,Z]_{\operatorname{int}(\lambda)}];\phi)+c.p. =\mathbf{\Theta}_{\Sigma}(X^{\nu},([Y,Z]_{\operatorname{int}(\lambda)})^{\nu})-Q_{\Sigma}([X,[Y,Z]_{\operatorname{int}(\lambda)}]_{\operatorname{int}(\lambda)};\phi)+c.p.,\] \[=\mathbf{\Theta}_{\Sigma}(X^{\nu},[Y^{\nu},Z^{\nu}])-Q_{\Sigma}([X,[Y,Z]_{\operatorname{int}(\lambda)}]_{\operatorname{int}(\lambda)};\phi)+c.p., \tag{321}\] \[=(\mathbf{d}\iota_{X^{\nu}}\mathbf{\Theta}_{\Sigma})(Y^{\nu},Z^{\nu})+c.p.\]
where we used (300), the fact that \(Q_{\Sigma}(X;\phi)\) is a linear map in its first argument together with the Jacobi identity in \(\mathfrak{diff}(M)\), and (319). Thus, (321) tells us that the 2-cocycle condition on \(\mathcal{C}\) holds, and the bracket (218) is Lie, if the vertical vector fields are either symplectic or Hamiltonian for the charges, showing that the Poisson algebra of these charges is a central extension (221) of \(\mathfrak{diff}(M)\), as discussed in section 4.1.1.
### Concrete expression of the map (219)
Here we derive the result necessary to write the concrete expression (224) for the \(\mathcal{C}\)-map (219). We have
\[\mathcal{C}([X,Y];\phi):=\int_{\partial\Sigma}\mathcal{A}([X,Y];\phi)+\gamma([X,Y]_{\operatorname{int}(\lambda)};\phi), \tag{322}\]
with the definition (215) for the first term in the integrand:
\[-d\mathcal{A}([X,Y];\phi)=\iota_{X^{\prime}}\alpha(Y)-\iota_{Y^{\prime}}\alpha(X)+ \iota_{X^{\prime}}\iota_{Y}\mathbf{E}-\iota_{Y^{\prime}}\iota_{X}\mathbf{E}-\beta([X,Y]_ {\text{\tiny{\rm{[th/H]}}}};\phi) \tag{323}\]
To compute it explicitly, using (209), we start with the terms involving the symplectic anomaly:
\[\iota_{X^{\prime}}\alpha(Y) =\iota_{X^{\prime}}\Big{(}\iota_{Y}(\mathbf{d}L-\mathbf{E})+d\iota_{Y} \mathbf{\theta}+\mathbf{d}\beta_{\ell}(Y;\phi)\Big{)},\] \[=\iota_{Y}\mathfrak{E}_{X}L-\iota_{X^{\prime}}\iota_{Y}\mathbf{E}+d \iota_{Y}(\iota_{X^{\prime}}\mathbf{\theta})+\iota_{X^{\prime}}\mathfrak{E}_{Y}\ell,\] \[=\iota_{Y}d\iota_{X}L-\iota_{X^{\prime}}\iota_{Y}\mathbf{E}+d\iota_{ Y}\Big{(}\iota_{X}L+d\theta(\iota_{X}\phi;\phi)-E(\iota_{X}\phi;\phi)\Big{)}+ \mathfrak{E}_{Y}\mathfrak{E}_{X}\ell\] \[=\iota_{Y}d\iota_{X}L+d\iota_{Y}\iota_{X}L+\mathfrak{E}_{Y} \mathfrak{E}_{X}\ell-\iota_{X^{\prime}}\iota_{Y}\mathbf{E}+d\Big{(}\iota_{Y}d \theta(\iota_{X}\phi;\phi)-\iota_{Y}E(\iota_{X}\phi;\phi)\Big{)}. \tag{324}\]
where \(dL=0\) and (314) are used to get the third line. Then, using
\[[\mathfrak{E}_{X},\iota_{Y}]L =\iota_{[X,Y]_{[Y,M]}}L\] \[=\iota_{X}d\iota_{Y}L+d\iota_{X}\iota_{Y}L-\iota_{Y}\iota_{X}\mathbf{ d}L-\iota_{Y}d\iota_{X}L,\]
we have that:
\[\iota_{X^{\prime}}\mathbf{\alpha}(Y)-\iota_{Y^{\prime}}\alpha(x) =-\iota_{[X,Y]_{[Y,M]}}L-\mathfrak{E}_{[X,Y]_{[Y,M]}}\ell-\iota_{ X^{\prime}}\iota_{Y}\mathbf{E}+\iota_{Y^{\prime}}\iota_{X}\mathbf{E}\] \[\quad+d\Big{(}\iota_{Y}d\theta(\iota_{X}\phi;\phi)-\iota_{X}d \theta(\iota_{Y}\phi;\phi)+\iota_{Y}\iota_{X}L\Big{)}+d\Big{(}-\iota_{Y}E( \iota_{X}\phi;\phi)+\iota_{X}E(\iota_{Y}\phi;\phi)\Big{)},\] \[=-\beta([X,Y]_{[Y,M]};\phi)-\iota_{X^{\prime}}\iota_{Y}\mathbf{E}+ \iota_{Y^{\prime}}\iota_{X}\mathbf{E}\] \[\quad+d\Big{(}\mathfrak{E}_{Y}\theta(\iota_{X}\phi;\phi)-\mathfrak{ E}_{X}\theta(\iota_{Y}\phi;\phi)+\iota_{Y}\iota_{X}L\Big{)}+d\Big{(}-\iota_{Y}E( \iota_{X}\phi;\phi)+\iota_{X}E(\iota_{Y}\phi;\phi)\Big{)},\]
It follows indeed that the first term in the integrand of \(\mathcal{C}\) is
\[d\mathcal{A}([X,Y];\phi)=d\Big{(}\mathfrak{E}_{X}\theta(\iota_{Y}\phi;\phi)- \mathfrak{E}_{Y}\theta(\iota_{X}\phi;\phi)+\iota_{X}\iota_{Y}L\ +\ \iota_{Y}E(\iota_{X}\phi;\phi)-\iota_{X}E(\iota_{Y}\phi;\phi)\Big{)} \tag{325}\]
So that finally,
\[\mathcal{C}([X,Y];\phi)=\int_{\partial\Sigma}\mathfrak{E}_{X}\theta(\iota_{Y} \phi;\phi)-\mathfrak{E}_{Y}\theta(\iota_{X}\phi;\phi)+\iota_{X}\iota_{Y}L+ \gamma([X,Y]_{\text{\tiny{\rm{[th/H]}}}};\phi)\ +\ \iota_{Y}E(\iota_{X}\phi;\phi)-\iota_{X}E(\iota_{Y}\phi;\phi). \tag{326}\]
This is the result (224) displayed in the main text. It is a 2-cocycle on-shell and for adequate boundary conditions.
|
2303.00517
|
Analyzing Impact of Socio-Economic Factors on COVID-19 Mortality
Prediction Using SHAP Value
|
This paper applies multiple machine learning (ML) algorithms to a dataset of
de-identified COVID-19 patients provided by the COVID-19 Research Database. The
dataset consists of 20,878 COVID-positive patients, among which 9,177 patients
died in the year 2020. This paper aims to understand and interpret the
association of socio-economic characteristics of patients with their mortality
instead of maximizing prediction accuracy. According to our analysis, a
patient's household's annual and disposable income, age, education, and
employment status significantly impacts a machine learning model's prediction.
We also observe several individual patient data, which gives us insight into
how the feature values impact the prediction for that data point. This paper
analyzes the global and local interpretation of machine learning models on
socio-economic data of COVID patients.
|
Redoan Rahman, Jooyeong Kang, Justin F Rousseau, Ying Ding
|
2023-02-27T21:33:04Z
|
http://arxiv.org/abs/2303.00517v1
|
**Analyzing Impact of Socio-Economic Factors on COVID-19 Mortality Prediction Using SHAP Value**
## Abstract
_This paper applies multiple machine learning (ML) algorithms to a dataset of de-identified COVID-19 patients provided by the COVID-19 Research Database. The dataset consists of 20,878 COVID-positive patients, among which 9,177 patients died in the year 2020. This paper aims to understand and interpret the association of socio-economic characteristics of patients with their mortality instead of maximizing prediction accuracy. According to our analysis, a patient's household's annual and disposable income, age, education, and employment status significantly impacts a machine learning model's prediction. We also observe several individual patient data, which gives us insight into how the feature values impact the prediction for that data point. This paper analyzes the global and local interpretation of machine learning models on socio-economic data of COVID patients._
## Introduction
The COVID-19 pandemic has impeded and altered the ways of human life. Its impact has made COVID-19 one of the primary focuses of research since 2019. The disease is responsible for the death of 850,608 people in the USA alone [14]. In the face of a new danger, the knowledge of the deterministic factors of the COVID-19 pandemic has the potential to save lives. For this reason, it is vital to understand the socio-economic characteristics of COVID-19 afflicted patients and determine the correlation between a patient's social and economic condition and their mortality due to COVID-19. The first step to determining the existence of such a correlation is to observe and understand the relationship between a patient's socio-economic characteristics and their mortality due to COVID-19.
This paper examines a dataset containing information regarding patients suffering from the SARS-CoV-2 Coronavirus in 2020. The dataset was constructed by retrieving patients' COVID-19 records from a more extensive private database, Covid19ResearchDatabase [19]. The database is a public-private consortium organized by Datavant, Health Care Cost Institute, Medidata, Mirador Analytics, Veradigm, Change Healthcare, SAS, Snowflake, and many others. The filtered dataset contains 20,878 COVID-19 patients, among which 9,177 patients died in 2020. We apply multiple machine learning algorithms to the dataset to understand the impact of the features on the models' prediction. We use the Shapley score to determine each feature's importance in the learning process of Extreme Gradient Boosting (XGBoost) and Random Forest models. The Shapley score is a concept recently taken from cooperative game theory and applied to machine learning [18]. We also do individual patient-by-patient prediction analysis, which allows us to observe the impact of specific feature values on the predictions. Finally, we analyze multiple partial dependence plots of features with a higher effect on prediction to understand the trend of each feature's impact on a model's prediction and its interaction with other features.
The findings of this research can become a foundation for determining the governing socio-economic characteristics of COVID. If a feature impacts the prediction model heavily, further research on that feature's impact may discover a correlation between that feature and mortality due to COVID-19. As such, this process can serve as a hypothesis generation task, leading to further studies to establish the causality of specific features on the outcome of interest.
## Related Works
The impact of COVID-19 has inspired many works of literature in different disciplines. The field of computational learning has also participated in the research to understand and display the impact of COVID-19. In some pieces, the authors have proposed a hybrid machine learning method to predict the progress of the pandemic [1]. The authors utilized data from Hungary to exemplify the model's potential. In another piece of literature, the authors proposed a machine learning-based model that uses predictions of pandemic impacts and people's subsequent travel and fuel usage in the US to project gasoline demand, and studied the influence of government intervention [2]. In a different publication, the authors used a convolutional neural network (CNN) to identify the disease and predict outcomes in COVID-19 patients using X-rays and computed tomography (CT) [3]. There have also been projects studying possible applications of machine learning algorithms and methods to battle the COVID-19 pandemic [4]. These applications include analysis of unlabeled data as an input source for COVID-19 and predicting risk in healthcare during the COVID-19 crisis.
Interpretability in Artificial Intelligence (AI) shows promise in healthcare analysis since it allows humans to understand the functionality and process and boosts confidence in the applications. Thus, Shapley value analysis for machine learning model interpretation in healthcare has become more prevalent over the past few years. Using machine learning, researchers proposed a data mining classifier system to predict generalized anxiety disorder (GAD) among women[5]. The authors used Naive Bayes, Random Forest, and J48 models as classifiers. They demonstrated an apparent increase in model accuracy, sensitivity, and specificity by using Shapley analysis as a feature selection method. Researchers also analyzed a medical imaging dataset to observe the value of different data instances and showed that an approximation of the Shapley value based on a k-nearest neighbors (KNN) classifier could appraise large amounts of data within an acceptable amount of time[6]. They also demonstrated other applications of the Shapley value such as removing data based on their influence on model learning and detecting noisy labels. A Shapley regression framework has also been proposed as a reliable feature importance evaluator in non-linear models, demonstrated using a random forest model[7].
Previous research sought to predict mortality due to COVID-19. Researchers developed an XGBoost machine learning model to predict the mortality of COVID-infected patients based on biomarkers in blood samples and identified a clinical rule for COVID-19 prognostic prediction[8]. Another piece of literature compares six different models with XGBoost and uses Shapley values to determine feature importance in predicting mortality[9]. The authors apply a "what-if" analysis to determine the impact of marginal changes in the mortality factors on the prediction. However, both studies concentrated on the blood samples of COVID-infected patients and focused on biological features contributing to mortality. This paper focuses primarily on the socio-economic factors influencing mortality in COVID cases.
There have been several studies that attempt to shed light on the disparity in COVID-19 countermeasures. Several authors report that African American people and, to a lesser extent, Latino individuals bear a lopsided affliction of COVID-19-related results and present us with multiple research questions that can provide us with a clear answer to understand the disproportionality[15]. Another group combines the data emphasizing specific risks among marginalized and under-resourced communities, including people in prison or detention centers, immigrants and the undocumented, people with disabilities, and homeless people in USA[16]. Researchers also assess disparity in COVID-19 vaccine distribution in multiple countries across the world[17]. They fit a logistic model to report daily case numbers and estimated the vaccine roll-out index (VRI). Then they used a Random Forest model and analyzed the relation between predictors and model prediction. The authors found that median per capita income, human development index, the proportion of internet users in the population, and health expenditure per capita are the top four factors associated with VRI. These studies concentrate on analyzing and determining the socio-economic factors that impact patient care. In our work, we focus more on analyzing socio-economic factors and their association with mortality due to COVID-19.
### Methodology
### A. Extreme Gradient Boosting (XGBoost)
XGBoost is part of a family of boosting algorithms. It is an end-to-end tree boosting algorithm that is scalable, is sparsity-aware so it can be used on sparse data, and supports approximate tree learning [10]. It offers various tuning parameters that make it suitable for different scenarios.
The scalability of XGBoost is primarily due to the algorithmic optimization in multiple stages of the tree learning process. XGBoost uses an approximate algorithm that suggests possible splitting points based on feature distribution percentiles. The algorithm aggregates the statistics of the continuous features split by the candidates and finds the best solution based on the statistics.
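A minimal sketch of the percentile-based candidate-split idea described above (illustrative only; this is not XGBoost's internal implementation, which uses a weighted quantile sketch, and the function name and step count are assumptions):
```python
import numpy as np

def candidate_splits(feature_values, num_candidates=10):
    """Propose split thresholds at evenly spaced percentiles of one feature,
    mirroring the approximate split-finding idea sketched in the text."""
    qs = np.linspace(0, 100, num_candidates + 2)[1:-1]  # drop the 0th/100th percentiles
    return np.unique(np.percentile(feature_values, qs))

# Example: candidate thresholds for a skewed, income-like feature.
rng = np.random.default_rng(0)
income = rng.lognormal(mean=10, sigma=1, size=10_000)
print(candidate_splits(income, num_candidates=5))
```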
### Random Forests
Random forest is a machine learning technique that utilizes ensemble learning. The algorithm consists of many tree predictors. The outcome of a random forest model depends on the predictions of all the predictor trees[11]. Generally, the average (mean) of the trees' outputs is the algorithm's final prediction.
### SHAP (SHapley Additive exPlanations) Values
SHAP is an integrated framework that provides some tools to interpret predictions. It assigns a value to each feature in a computational learning model that represents its importance. [12] proposed SHAP values as a unified measure of feature importance. It connects LIME, DeepLIFT, and layer-wise relevance propagation with Shapley regression and Shapley sampling.
_1) Shapley Value_
The feature importance for linear models in the presence of multicollinearity is known as the Shapley regression value or Shapley value[13]. It signifies the effect of including that feature on the model prediction. If the feature impacts the model positively, then the assigned Shapley value to the feature is positive, and if the effect is negative, then the Shapley value reflects that impact.
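For reference, the standard cooperative-game-theory definition of the Shapley value of a feature \(j\) (added here for concreteness, not taken from the original text) is
\[\phi_{j}=\sum_{S\subseteq F\setminus\{j\}}\frac{|S|!\,(|F|-|S|-1)!}{|F|!}\Big{[}f_{S\cup\{j\}}\big{(}x_{S\cup\{j\}}\big{)}-f_{S}\big{(}x_{S}\big{)}\Big{]},\]
where \(F\) is the full feature set and \(f_{S}\) denotes the model's prediction using only the features in the subset \(S\); the weighted sum averages feature \(j\)'s marginal contribution over all possible subsets.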
_2) SHAP Feature Importance Plot_
The SHAP feature importance plot illustrates the relative importance of the features where large absolute Shapley values are globally important. The algorithm uses the average of absolute Shapley values per feature to demonstrate the level of impact of the features in model prediction. The calculated absolute feature importance is plotted in descending order to create the SHAP feature importance plot.
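A minimal sketch of how that ranking can be computed from a matrix of SHAP values (the array shape is an assumption; the shap library produces this plot directly):
```python
import numpy as np

def shap_feature_importance(shap_values, feature_names):
    """Rank features by mean absolute SHAP value, as in the
    SHAP feature importance plot described above."""
    importance = np.abs(shap_values).mean(axis=0)  # average |SHAP| over samples
    order = np.argsort(importance)[::-1]           # descending importance
    return [(feature_names[i], float(importance[i])) for i in order]

# `shap_values` is assumed to be an (n_samples, n_features) array from an explainer,
# and `feature_names` the matching list of column names.
```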
_3) SHAP Summary plot_
Each point in the SHAP summary plot is a Shapley value for a feature. The feature determines the vertical position of the point, and the Shapley value determines the horizontal position. The color of the point represents whether the value of the feature is high or low. Our experiment uses red and blue to represent high and low feature values, respectively. For example, for the feature Age, an older person would be drawn as a red or redder point, whereas a younger person would be drawn as a blue or bluer point. Overlapping points are jittered in the y-axis position. The SHAP summary plot indicates a possible relationship between feature value and the impact on model prediction. However, it does not prove any causal relationship.
_4) SHAP Partial Dependence Plot_
In a SHAP partial dependence plot, individual feature values are plotted on the X-axis, and the corresponding Shapley values are plotted on the Y-axis. This plot displays the following (a short usage sketch follows the list):
* The relationship between the feature and the target value, demonstrating whether the relation is linear, straightforward, monotonic, or more complex.
* The relation with another feature, based on the frequency of inter-feature interaction.
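A minimal usage sketch with the shap library, assuming the SHAP values and the feature table `X` (a pandas DataFrame) have already been computed; "AIQ_EMPLOYMENT" is one of the dataset columns introduced in the next section:
```python
import shap

# `shap_values` has shape (n_samples, n_features); `X` holds the raw feature values.
shap.dependence_plot(
    "AIQ_EMPLOYMENT",         # feature plotted on the x-axis
    shap_values,              # its SHAP values give the y-axis positions
    X,                        # feature table, also used for interaction coloring
    interaction_index="auto"  # color by the most strongly interacting feature
)
```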
### D. Dataset Statistics & Feature Description
We use a filtered dataset retrieved from the Covid19ResearchData database for our experiments. The database had three different schemas titled 'ANALYTICSIQ', 'OFFICE_ALLY', and 'MORTALITY'. OFFICE_ALLY provides claims and remittance data from a claims clearinghouse. MORTALITY or Death Index is a curated set of records from multiple obituary data sources. ANALYTICSIQ contains data at the name/geographic level. It is aggregated from large public data sources including census, econometric data from the US government, summarized credit data from 2 different credit bureaus, home sales information from county courthouses, occupation information from state licensing boards, and past purchase behavior from catalogs and retailers et cetera. Features from ANALYTICSIQ have inferred characteristics based on consumer and custom survey data.
The data from these schemas were joined based on common identifiers. The patients in the database had one or more diagnosis codes from claims from OFFICE_ALLY. To determine if a patient has tested positive for COVID-19, we looked through all diagnosis codes and selected all patients with a diagnosis code containing the string 'U07.1', 'U07.2', 'J12.82' or 'M35.81'. We then used the patient's birth date and the appointment date to calculate their age. We also removed any patient data if they did not have the necessary feature information. 20,878 patients had a diagnosis of COVID-19 in the dataset, among which 9,177 patients died in 2020. The others are presumed as alive for the experiments. For the experiments, we used the following features:
* INCOMEIQ_PLUS_V3: Indicates a household's predicted annual income (ANALYTICSIQ).
* INCOMEIQ_PLUS_DISP_V2: Represents a household's predicted disposable income (ANALYTICSIQ).
* ETHNICIQ_V2: This element identifies an individual's ethnicity, known and inferred (ANALYTICSIQ).
* AIQ_EMPLOYMENT: This element predicts an individual's employment status on a scale from 1-7, where 1-3 means a very low likelihood of employment, 4 means part-time job, and 5-7 means full-time employment (ANALYTICSIQ).
* AIQ_EDUCATION_V2: Denotes an individual's level of education from less than high school through a graduate degree (ANALYTICSIQ).
* HW_ER_VISITS_SC: Prediction of likelihood of having visited the emergency room (ER) in the last 12 months (7=most likely; 1=least likely) (ANALYTICSIQ).
* HW_PRIMARY_CARE_DOCTOR_SC: Prediction of likelihood of having a primary care doctor (7=most likely; 1=least likely) (ANALYTICSIQ).
* HW_MED_UTILIZATION: This element predicts if an individual is likely to exhibit high medical utilization by visiting three or more of the following in the last 12 months: emergency room, medical specialist, primary care doctor, urgent care. (Y/N) (ANALYTICSIQ).
* HW_STRESS_V2: Represents the predicted measure of stress levels (7=high stress; 1=low stress) (ANALYTICSIQ).
* AGE: Indicates an individual's age (OFFICE_ALLY).
* MORTALITY: This element is the target variable for the learning model. It represents whether the patient died due to COVID-19 or not (0=non-mortality or survived 1=mortality or death) (MORTALITY).
### Experiment Setup & Task Description
Records were only included if data were available in the MORTALITY dataset. This resulted in the final curated dataset having a 44% mortality rate. However, our focus for the experiment was to train well-performing machine learning models and identify the most impactful socio-economic factors for the trained models using explainable AI. While observing the prediction of deleterious and neutral phenotypes of human missense mutations in human genome data, [20] observed that balanced training data (50% neutral and 50% deleterious) results in the highest balanced accuracy (the average of True Positive Rate and True Negative Rate). For this reason, we attempted to achieve a more balanced dataset rather than replicating real-world scenarios. This approach also helped us avoid duplicating or oversampling any data class.
We used Snowflake to access the Covid19ResearchDatabase and used Python for data preprocessing, model training, and Shapley analysis. We trained a Random Forest and an XGBoost model using the abovementioned features. We split the data into 90% and 10% for training and testing, respectively. Also, since we are using categorical data, we used the OrdinalEncoder encoding technique provided by the Python sklearn library to prepare the data for model training.
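The following sketch illustrates this preprocessing under stated assumptions: `claims` and `data` are placeholder pandas DataFrames standing in for the joined OFFICE_ALLY/ANALYTICSIQ/MORTALITY tables, and the column name `DIAGNOSIS_CODE` is hypothetical.
```python
from sklearn.preprocessing import OrdinalEncoder
from sklearn.model_selection import train_test_split

COVID_CODES = ("U07.1", "U07.2", "J12.82", "M35.81")

def has_covid_code(code: str) -> bool:
    """Selection rule described earlier: keep claims whose diagnosis code
    contains one of the COVID-19-related strings."""
    return any(c in code for c in COVID_CODES)

covid_claims = claims[claims["DIAGNOSIS_CODE"].astype(str).map(has_covid_code)]

# Patient-level table with the features listed above plus the MORTALITY target.
X = data.drop(columns=["MORTALITY"])
y = data["MORTALITY"]

# Ordinal-encode the categorical features, then split 90% / 10%.
X_encoded = OrdinalEncoder().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(
    X_encoded, y, test_size=0.10, random_state=0
)
```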
For the XGBoost model, we used the XGBRegressor provided by the xgboost Python library. We used 'reg:logistic' as the objective since we are working on a classification problem. We also used 0.1 for learning_rate, 5 for max_depth, and 10 for n_estimators.
For the Random Forest model, we used the RandomForestRegressor provided by the Python sklearn library. We used 6 for max_depth and 10 for n_estimators while setting up the model for training.
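A minimal sketch of the two model setups with the hyperparameters reported above; `X_train` and `y_train` are assumed to come from the preprocessing sketch earlier.
```python
from xgboost import XGBRegressor
from sklearn.ensemble import RandomForestRegressor

xgb_model = XGBRegressor(
    objective="reg:logistic",  # keeps predictions in [0, 1] for the binary target
    learning_rate=0.1,
    max_depth=5,
    n_estimators=10,
)
rf_model = RandomForestRegressor(max_depth=6, n_estimators=10)

xgb_model.fit(X_train, y_train)
rf_model.fit(X_train, y_train)
```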
After the models had been trained, we used the shap library to create the feature importance plot and summary plot to understand the impact of the features on model prediction. We also made several partial dependence plots to understand a feature's effect on model prediction more clearly and to understand the feature's interaction with other features. Finally, we conducted individual case-by-case analyses to understand the local importance of features.
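A short sketch of the plotting calls with the shap library, assuming `xgb_model` is a trained model and `X` is a pandas DataFrame of the encoded features (so the plots can show column names):
```python
import shap

# Tree-based models can use the fast TreeExplainer.
explainer = shap.TreeExplainer(xgb_model)
shap_values = explainer.shap_values(X)

# Global interpretation: mean-|SHAP| bar chart and the beeswarm summary plot.
shap.summary_plot(shap_values, X, plot_type="bar")
shap.summary_plot(shap_values, X)

# Local interpretation: per-feature contributions to one patient's prediction.
i = 0
shap.force_plot(explainer.expected_value, shap_values[i], X.iloc[i], matplotlib=True)
```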
### Result & Observations
In this section, we present the results of the experiments and the observations we made from the results.
### Feature Importance
Our experiments produced SHAP feature importance plots and summary plots for both XGBoost and Random Forest models. These plots demonstrate the following information:
* Feature importance: The features are ranked in descending order. Therefore, the feature plotted at the top in the SHAP feature importance plot and the SHAP summary plot are the most impactful on the model's prediction.
* Impact: The horizontal location of the individual plotted point in the SHAP summary plot demonstrates feature importance or impact on model prediction.
* Feature Value: Each plotted point represents an instance or a data point. The point's color shows whether the feature value is in a higher (redder) or lower (bluer) value for that instance.
* Relationship: The preceding information can reveal a possible relationship between the target result, whether the patient died in 2020 or not, and the feature as detailed below. However, the inferred relationship does not act as evidence of real-world causality.
1) Feature Importance observation for the XGBoost model
Figure 1 displays the average impact of each feature on model output. According to the figure, the disposable income of households has the most impact on the prediction of the XGBoost model. The patient's age, employment, education status, and stress level also play a decisive role in the output. The likelihood of ER visits, the possibility of exhibiting heavy medical utilization, the chance of having a primary care doctor, and the patient's ethnicity have little effect on the output prediction of XGBoost.
Figure 2 displays the SHAP value for each feature instance and its impact on the model. From the color of the points and their horizontal positions, we can reach several conclusions.
Figure 1 shows that disposable income level, age, and employment are the top 3 ranked features based on the mean SHAP value. But Fig 1 B indicates that only the employment feature has an apparent decisive pattern: COVID patients with lower employment have a higher probability of dying. At the same time, disposable income level and age are unclear because COVID patients with high disposable income and high age are distributed across all ranges of the SHAP scores, covering both higher and lower probabilities of dying. Education is ranked No. 4, and its SHAP score distribution shows that COVID patients with low education have a high certainty of dying because all the blue dots are located at positive SHAP scores. Medical utilization is only ranked No. 9 but has an apparent decisive pattern: COVID patients with high medical utilization have a higher chance of not dying. In summary, income, age, employment, education, stress, ER visits, primary care, medical utilization, and ethnicity all contribute to the mortality prediction for COVID patients. Among these critical features, income, age, and employment are the top 3 factors that contribute most to the final mortality prediction. Employment, education, and medical utilization show clear decisive patterns.
2) Feature Importance observation for the Random Forest model
Fig. 3 displays the feature importance plot for the random forest model. According to the figure, unlike the XGBoost model, the annual income of the patient's household has the most impact on the output of the random forest model. The patient's employment, education status, age, and household's disposable income also play a decisive role in the prediction. Stress level, the likelihood of ER visits, the possibility of exhibiting heavy medical utilization, the likelihood of having a primary care doctor, and the patient's ethnicity have little effect on the output prediction of the random forest model.
Figure 1: SHAP Feature Importance Plot of XGBoost model
Figure 2: SHAP Summary Plot of XGBoost model
Fig. 4 displays the SHAP value for each feature value and their impact on the output of the random forest model. We can reach several conclusions from the color of the points and their horizontal positions.
According to the Random Forest model, Figure 3 shows that income level, employment, and education are the top 3 ranked features based on the mean SHAP value. But Fig 4 indicates that only the education feature has a roughly apparent decisive pattern, with COVID patients with lower education having a higher probability of dying: patients with low education are concentrated at positive SHAP scores. Different from Figure 1, age and disposable income level are ranked No. 5 and No. 4. Looking at Fig 4, almost all features have unclear decision patterns because each feature's high and low values are distributed across all ranges of the SHAP scores, covering both higher and lower probabilities of dying.
We can observe that the SHAP summary plot of the XGBoost model provided a clearer understanding of the feature impact on model prediction than the random forest summary plot. However, whether this is related to inherent characteristics of the random forest model or SHAP value analysis or whether this is related to the experiment's setup cannot yet be confirmed and requires further analysis.
By comparing the features from the XGBoost and Random Forest models (see Fig 1(A) and Fig 2(A)), we found that disposable income level, age, education, employment, stress, medical utilization, and ethnicity contribute significantly to the mortality prediction of COVID patients. From Fig 1b and Fig 2b, we can conclude that employment and education have contributed significantly to the mortality prediction of COVID patients. Both have the clear decisive patterns that COVID patients with low education and low employment have higher risks of dying.
Figure 4: SHAP Summary Plot of Random Forest model
Figure 3: SHAP Feature Importance Plot of Random Forest model
### Partial Dependence Plot of Features
We also produced several partial dependence plots of features to understand feature impact on model output.
1) Partial dependence plots for the XGBoost model
a) Education
Figure 5 displays the partial dependence plot between the patient's disposable household income and its assigned SHAP values. The relationship is a complex one. We can see that lower income values push to increase the prediction outcome. The increase in income decreases the SHAP value, indicating that higher income decreases the prediction outcome.
Figure 6 shows the partial dependence plot between patients' employment level and their mortality prediction. As the employment value increases, we can see that the SHAP value becomes more negative, implying that better employment status leads to a decreased mortality prediction. The employment feature most frequently interacts with disposable income, and the income is lowest when the employment level is lowest.
Figure 5: Partial Dependence plot for patient’s education level for XGBoost model
b) Employment
Figure 6: Partial Dependence plot for patient’s employment level for XGBoost model
2) Partial dependence plots for the Random Forest model
a) Age
Figure 7 displays the partial dependence plot for the patient's age. We can see that as age increases, the likelihood of having a primary care doctor increases. As for the impact on prediction, the figure does not display any clear relation between age value and SHAP value. We have observed a similar result in Fig. 4. However, the instances with an age value higher than 60 have a higher SHAP value. This phenomenon implies that for patients with higher age, age impacts the model prediction positively, which means higher age has more chance of outputting a death prediction.
b) Disposable Household Income:
Figure 8 shows the partial dependence plot for the patient's disposable household income. From the figure, we can say that when an increase in employment level occurs, a parallel increase in disposable income also occurs. However, similar to the impact of annual income, the patient's disposable household income does not have a clear impact on the model prediction.
### Local Interpretation
This section looks at several individual cases and how the feature values impacted the output through multiple individual SHAP value plots.
In Figure 9, we see an individual SHAP value plot. This individual case was processed through the XGBoost model. The lower education value, higher stress value, and ethnicity indicate a higher mortality risk, whereas the higher annual and disposable income of the household and the higher employment value contribute to a higher survival rate. We can see that ethnicity plays a significant role in this prediction: even though the SHAP summary plot and feature importance plot ranked ethnicity as a minor feature, in this individual case ethnicity is the top-ranked feature, contributing the most to the predicted mortality.
Figure 7: Partial Dependence plot for patient’s age for Random Forest model
Figure 8: Partial Dependence plot for patient’s Disposable Household Income for Random Forest model
In Figure 10, there is another individual SHAP value plot, but this case was processed through the random forest model. The higher stress value contributes significantly to the mortality risk. The higher annual and disposable income value and the higher education value are influencing the mortality prediction to be lower.
### Limitation & Future Work
This research is preliminary work toward understanding the impact of socio-economic factors in healthcare. The experiment conditions are controlled resulting in several limitations. These data should be interpreted in the context of the study design. Given a dataset and applying an ML prediction model to predict mortality, we sought to understand the socioeconomic drivers of the predictive models. The performance of the prediction model could only be applied in the context of the constraints of the dataset. Caution should be made to generalize our findings to the population. But the methods we employed could be applied to prediction models on more representative datasets.
Our work considers a dataset that has a 44% mortality rate. This indicates these data are not representative of the population and contain bias in their selections. This limits the generalizability of our findings, and we seek in future work to apply these methods to more representative data. We did not include non-Covid patient data for our experiment, further making the generalizability of our work unclear. We intend to extend our dataset to include the non-Covid patient as well so that we can identify the similarities and differences of the impactful characteristics between Covid and non-Covid patients.
We are currently aiming to introduce different explainable AI methods in our experiments and analyze the results. Our purpose is to determine if the results stay unchanged. If the results vary, then we also are aiming to understand the reasons behind the differences.
## Conclusion
This paper analyses the possible socio-economic factors that may impact mortality due to covid. There were 20,878 COVID patients in the dataset, among which 9,177 patients died. We trained an XGBoost and a Random Forest model and later used SHAP value analysis to understand the impact of the features on the output of the models.
In our research, we focused on interpretability and worked on the assumption that if a feature heavily impacts a model's learning, then it is a good indicator for deciding whether a patient is at high risk of mortality due to COVID. We found that a patient's household disposable and annual income, employment, education level, and age are possible indicators to determine whether a patient should be considered high-risk. We also found that a patient's stress level, ER visit frequency, and doctor visit frequency are adequate indicators. We observed these behaviors and trends through partial dependence plots and individual SHAP value plots. However, some characteristics, such as ethnicity, show impactful behavior in the local interpretation but are ranked low in the global interpretation. We will require further analysis to understand the implications of these findings in the context of other studies[21] showing the impact of these features while also considering the limitations of the data.
Figure 9: Partial Dependence plot for patient’s education for XGBoost model
Figure 10: Partial Dependence plot for patient’s education for Random Forest model
These findings should be verified through further data analysis and case studies before these can be used in decision-making in the real world. Further research on these findings may help us identify the population in danger of mortality due to COVID. It has the potential to assist in clinical decision support to understand the impact of features on risk prediction at the individual level and the population level thus also supporting public health policy. It may also help develop a quick triage method to determine priority for COVID-infected people.
|
2310.18260
|
Concepts and Paradigms for Neuromorphic Programming
|
The value of neuromorphic computers depends crucially on our ability to
program them for relevant tasks. Currently, neuromorphic computers are mostly
limited to machine learning methods adapted from deep learning. However,
neuromorphic computers have potential far beyond deep learning if we can only
make use of their computational properties to harness their full power.
Neuromorphic programming will necessarily be different from conventional
programming, requiring a paradigm shift in how we think about programming in
general. The contributions of this paper are 1) a conceptual analysis of what
"programming" means in the context of neuromorphic computers and 2) an
exploration of existing programming paradigms that are promising yet overlooked
in neuromorphic computing. The goal is to expand the horizon of neuromorphic
programming methods, thereby allowing researchers to move beyond the shackles
of current methods and explore novel directions.
|
Steven Abreu
|
2023-10-27T16:48:11Z
|
http://arxiv.org/abs/2310.18260v1
|
# Concepts and Paradigms for Neuromorphic Programming
###### Abstract.
The value of neuromorphic computers depends crucially on our ability to program them for relevant tasks. Currently, neuromorphic computers are mostly limited to machine learning methods adapted from deep learning. However, neuromorphic computers have potential far beyond deep learning if we can only make use of their computational properties to harness their full power. Neuromorphic programming will necessarily be different from conventional programming, requiring a paradigm shift in how we think about programming in general. The contributions of this paper are 1) a conceptual analysis of what 'programming' means in the context of neuromorphic computers and 2) an exploration of existing programming paradigms that are promising yet overlooked in neuromorphic computing. The goal is to expand the horizon of neuromorphic programming methods, thereby allowing researchers to move beyond the shackles of current methods and explore novel directions.
## 1. Introduction
Computing technology is steering toward impasses, with Dennard scaling ending and Moore's law slowing down (Moore, 1998). These impasses give rise to innovation opportunities for specialized hardware in computer architecture (Schafer et al., 2007; Bockock et al., 2010) as well as in software (Bock et al., 2010). This 'Golden Age' of innovation has led many researchers to investigate neuromorphic computers. Taking inspiration from how the brain computes has a rich history going back at least six decades (Schafer et al., 2007; Bock et al., 2010) and the recent success of deep learning has demonstrated the power of neural information processing convincingly (Schafer et al., 2007). The development of event-based sensors (Schafer et al., 2007; Bock et al., 2010), large-scale neuromorphic processors (Schafer et al., 2007), and brain-computer interfaces (Bock et al., 2010) indicates that neuromorphic computers will play an important role in the future of computing.
An increased diversity of specialized hardware can revive old research ideas or programming paradigms on novel hardware, similar to how the GPU revived research on neural networks for machine learning (Schafer et al., 2007; Schafer et al., 2007). In light of novel neuromorphic hardware, it is worth re-evaluating overlooked programming paradigms (Bock et al., 2010).
Neuromorphic computers take inspiration from the brain, both in the way that information is processed and in the fact that the physical dynamics of the underlying substrate are exploited for computation (Schafer et al., 2007). Research in neuromorphic computing is diverse and happening on many levels: different materials are investigated for basic components in novel computers (Schafer et al., 2007; Bock et al., 2010), different architectures for assembling these components into a computing system are investigated (Moore, 1998), different domains are considered to move beyond electronics into optical (Schafer et al., 2007) or chemical domains (Bock et al., 2010).
A neuromorphic computer is composed of neurons and synapses which model biological neural networks at some level of detail, and they are often implemented directly in the physics of the device (Schafer et al., 2007). Although artificial neural networks (ANNs) are also often considered neuromorphic, this paper focuses on spiking neural networks (SNNs) because they offer a radically different paradigm for computing (see Section 2.1), making them an interesting topic for research on programming methods.
All this requires new theories to describe the computations in such novel devices, along with new theories and methods of programming that can make these devices useful. The former has been outlined in a recent review (Schafer et al., 2007) whereas the latter is constrained to an as-yet limited set of neuromorphic algorithms (Schafer et al., 2007; Schafer et al., 2007).
In Section 2 of this paper, concepts for a more general way of programming neuromorphic computers are analyzed and clarified. To fully harness the potential of neuromorphic computers, algorithm design is not enough. Ultimately, general programming methods must be developed to enable a large group of 'neuromorphic programmers' to harness the power of neuromorphic computers for real-world problems beyond machine learning and research benchmarks (Bock et al., 2010).
Neuromorphic computers presently cannot be programmed in ways comparable to the rich programming methods of digital computers with instruction set architectures, high-level programming languages, and compilation hierarchies. Schuman _et al._ (Schuman et al., 2007) argue that progress on neuromorphic programming requires a paradigm shift in how to think about programming. Herein, it is assumed that there may not be a single paradigm for neuromorphic programming, just as there is no single paradigm for conventional programming. Section 3 lays out the landscape of programming paradigms to make this body of knowledge available to the neuromorphic community and to identify promising directions for future research.
## 2. Concepts
### Dimensions of Computing
The brain works quite differently from a digital computer (Bock et al., 2010). While these differences make it challenging to use conventional programming methods, they simultaneously provide opportunities for novel programming models that are not supported by conventional computers. In the following, key differences between conventional and neuromorphic computers are outlined.
_Stochasticity._ Neurons are unreliable and noisy (Schafer et al., 2007), with neural spike output changing from trial to trial in identical experiments (Schafer et al., 2007). Yet, the brain is able to generate reliable behavior from unreliable components. This has fascinated the research community for over six decades and led to models of computing with probabilistic logic (Moore, 1998), stochastic computing (Bock et al., 2010) where information
is represented and processed in probability distributions, and hyperdimensional computing where high-dimensional random vectors are used for distributed data representation and computation [71, 134].
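As a minimal illustration of the hyperdimensional computing idea mentioned above (the dimensionality and the bipolar bind/bundle operations are the standard textbook choices, not taken from this paper):
```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000  # near-orthogonality of random vectors requires a large dimension

def random_hv():
    """Random bipolar hypervector: the basic symbol of hyperdimensional computing."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding associates two hypervectors; the result is dissimilar to both."""
    return a * b

def bundle(*hvs):
    """Bundling (superposition) stays similar to each input; ties are left as 0."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    return float(a @ b) / D  # ~0 for unrelated random vectors, ~1 for identical ones

# Encode a tiny record {colour: red, shape: round} as a single hypervector.
colour, shape, red, round_ = (random_hv() for _ in range(4))
record = bundle(bind(colour, red), bind(shape, round_))
# Unbinding with the 'colour' key recovers something similar to 'red' but not 'round'.
print(similarity(bind(record, colour), red), similarity(bind(record, colour), round_))
```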
_Robustness._ The theory of digital computation can be realized robustly in physics through the strong dynamical robustness of bi-stable switching dynamics. The physics of digital computing is very robust, but the theory is brittle in that a single bit flip can lead to catastrophic failure. In contrast, the brain works reliably despite the ongoing death and re-generation of neurons. Natural systems like the brain use robust adaptive procedures to work well in unknown and changing environments [128]. Mechanisms that provide the physical and functional robustness that natural systems exhibit are only beginning to be understood [75].
_Distributedness._ In neuromorphic systems, information representation and processing are distributed spatially and possibly also temporally. This is a classical property of neural networks [115] which stands in contrast to the localized information in binary transistor states and the sequential execution of elementary instructions in digital hardware.
_Unobservability._ While in digital computers every bit of information can, in principle, be addressed, the same is not true in many neuromorphic systems which can only be configured and observed through a limited interface. This prevents the implementation of algorithms that require information which is simply not accessible in neuromorphic systems.
_Physical time._ In many neuromorphic computers time represents itself. In contrast, classical theories of symbolic computation are decoupled from real physical time and simulated through a discrete global clock signal. Such decoupling may not be possible (nor desirable) in neuromorphic computers, thus current theories of computation are unsuited for describing neural computation [66].
_Multi-scale dynamics._ Neuromorphic computers operate on multiple temporal scales with no global synchronization, and are often described at multiple different spatial scales: from local learning rules to neural circuits all the way to the global behavior of the network as a whole. Often, the only way to decide what network-level behavior emerges from a local learning rule is to let the network run. This undecidability of global behavior from local rules may be a fundamental property of physical systems that can act as computers [147]. The difficulty of reasoning about global behavior from elementary operations is solved in digital computing by designing software systems as decomposable hierarchical structures [18, 127] but this is not presently possible in neuromorphic programming.
_Analog._ The merits of analog computation in terms of energy efficiency and inherent parallelism are well-known [118, 16]. But analog computing is more sensitive to device mismatch and noise which limits the computational depth (number of operations performed in series) [143]. Analog computers may also be susceptible to parameter drift, aging effects, and changes in temperature.
_No hardware/software separation._ When programming digital computers, one may neglect physical properties of the underlying hardware. In neuromorphic computers, such hardware-agnostic programming is not generally possible, as these devices are designed to exploit their underlying _physical_ properties and dynamics. The connection of physical systems and computers has been investigated for decades in the field of unconventional computing [2], though a general theory of such computation is still missing [66].
### Physical Computing
Although classical theories of computing are non-physical, all computations must ultimately be physically instantiated [34]. Digital computing was first developed as an abstract model which was later physically realized. Neuromorphic computers do not follow the same pattern. There is no universally accepted model of neuromorphic computation and many different physical instantiations are explored [123]. As such, abstract models of computation are co-developed with physical implementations. From a physical perspective, the key difference between conventional computing and neuromorphic computing lies in the set of physical phenomena that are harnessed for computation. While digital computing only uses bi-stable switching dynamics, neuromorphic computers use stochasticity, real-valued states in continuous time, and more [66].
Horsman _et al._[63] provide a general framework for computation with arbitrary physical systems which was further refined by Jaeger and Catthoor [67]. Therein, a computer is a physical machine \(\mathcal{M}\) which can be stimulated by an input signal \(u_{\mathcal{M}}\) and from which an output signal \(y_{\mathcal{M}}\) can be read out. The computation \(C\) is specified by an abstract function from input \(u\) to output \(y\). The machine \(\mathcal{M}\) then implements the computation \(C\) if an encoding procedure \(E\) and decoding procedure \(D\) is known such that the machine \(\mathcal{M}\) will produce \(y_{\mathcal{M}}\) with \(D(y_{\mathcal{M}})\approx y\) when stimulated with the input signal \(E(u)=u_{\mathcal{M}}\). This leads to the general form of the abstract computer model shown in Figure 1 (right): the physical machine \(\mathcal{M}\) receives input \(u_{\mathcal{M}}\) and produces output \(y_{\mathcal{M}}\), thereby implementing the abstract computation \(C\) from input \(u\) to output \(y\).
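A schematic rendering of this picture, with toy placeholder functions (the doubling "machine" and the function names are illustrative assumptions, not part of the cited framework):
```python
import numpy as np

def implements(machine, encode, decode, computation, inputs, tol=1e-3):
    """Check, on a set of test inputs, whether the physical machine together with
    the encoding E and decoding D realizes the abstract computation C,
    i.e. whether D(M(E(u))) is approximately C(u)."""
    return all(
        np.allclose(decode(machine(encode(u))), computation(u), atol=tol)
        for u in inputs
    )

# Toy example: an 'analog' device that doubles a voltage, used to compute y = u + u.
machine     = lambda v: 2.0 * v      # physical machine M (placeholder dynamics)
encode      = lambda u: float(u)     # E: abstract input  -> stimulation signal
decode      = lambda y_m: y_m        # D: measured signal -> abstract output
computation = lambda u: u + u        # C: the abstract function to be implemented

print(implements(machine, encode, decode, computation, inputs=[0.0, 1.5, -3.0]))
```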
_Hardware and Software._ Using physics for computation in neuromorphic computers makes it difficult to separate hardware and software in the same way as in digital computers. This separation is practically useful because hardware and software are developed on different timescales; it takes many months to design and manufacture a computer chip, while algorithms can be designed and tested within a single day. Hardware is generally considered to be anything that cannot be changed without significant effort, such as the numbers and types of physical components in the computer. The set of all possible computations that a machine \(\mathcal{M}\) can implement is fixed by the hardware. Considering the hardware to be fixed provides a programmer with firm, stable ground whereon rich, complex programs can be built. Software denotes malleable behavioral aspects of the computation \(C\) implemented by the machine \(\mathcal{M}\). Obviously, this behavior is ultimately manifested in the physical state and dynamics of the machine, but it is useful to think of the machine's behavior at an abstract level [95].
_Configuration._ In reconfigurable hardware, one must consider the role of a machine's configuration. A reconfiguration of the computer usually requires a reset, effectively breaking the operation of the program. Thus, _the computer's configuration is fixed over the lifetime of a program, but not necessarily fixed over the lifetime of the computer._ The configuration can be considered part of the hardware, whereby changing it effectively instantiates a different physical system. But it can also be considered part of the software, whereby changing it simply runs a different program on the same physical system. The chosen view is a design decision by the programmer.
### Computations and Programs
A computation \(\mathcal{C}\) specifies _what_ is being computed while a program \(\mathcal{P}\) specifies _how_ the computation is implemented. There may be many different programs \(\mathcal{P}_{1},\ldots,\mathcal{P}_{n}\) that implement the same computation \(\mathcal{C}\). As such, the computation gives a specification of what is being computed while the program gives a recipe, or mechanism, for how this computation is implemented. It is noted that the concept of a 'program' herein includes algorithms as Turing machines as well as programs that learn (Turing, 1990) and interactive programs, both of which cannot be implemented by Turing machines (Turing, 1991; Turing, 1992).
In classical computing, a function on natural numbers is implemented by a program which can be represented by a Turing machine. In neuromorphic computing, functions that operate on (real-valued) time series are computed. The computation is implemented by a program represented as a neural network, often with designated input and output neurons.
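To make this contrast concrete, the toy sketch below (an illustrative assumption, not an algorithm from the paper) shows a "program" in this sense: a small network of leaky integrate-and-fire neurons whose weight matrices and time constant determine how a real-valued input time series is transformed into spike trains.
```python
import numpy as np

def lif_network(u, W_in, W_rec, tau=20.0, v_th=1.0, dt=1.0):
    """Simulate leaky integrate-and-fire neurons driven by the input time series u.
    Here the 'program' is the pair (W_in, W_rec) plus the neuron parameters."""
    n_steps, _ = u.shape
    n = W_rec.shape[0]
    v = np.zeros(n)                      # membrane potentials
    spikes = np.zeros((n_steps, n))
    for t in range(n_steps):
        rec = W_rec @ spikes[t - 1] if t > 0 else 0.0
        drive = W_in @ u[t] + rec
        v = v + dt * (-v / tau + drive)  # leaky integration (Euler step)
        fired = v >= v_th
        spikes[t, fired] = 1.0
        v[fired] = 0.0                   # reset after a spike
    return spikes

rng = np.random.default_rng(0)
u = rng.normal(size=(100, 3))                   # 100 time steps, 3 input channels
W_in = 0.5 * rng.normal(size=(10, 3))           # input weights (data flow)
W_rec = 0.1 * rng.normal(size=(10, 10))         # recurrent weights
print(lif_network(u, W_in, W_rec).sum(axis=0))  # spike counts per neuron
```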
A computation \(\mathcal{C}\) is described by a formal specification which specifies the function that is being implemented. The specification formalizes the informal intention of the computation (Figure 1, left). The specification of a computation is expressed in some mathematical formalism. In digital computing, this can be done using formalisms from logic. In analog computing, there are a variety of formalisms that describe the computation, for example qualitative geometrical constructs like attractors and bifurcations (Turing, 1992).
A program \(\mathcal{P}\) is described in another formalism. In digital computing, programs are expressed in some programming language, see Section 2.5. In analog computing, one typically uses differential equations to describe the program. When programs interact with another, one may also speak of each individual program as a _process_ and the ensemble of all processes as the program, whose behavior emerges from the interaction of the interacting processes (see Section 3.1 on distributed programming).
Operationally, a program is defined by the data flow and control flow. The data flow specifies how signals that carry computationally relevant information are propagated through the machine. The control flow specifies what operations or transformations are done on these signals. For example, in a field-programmable gate array (FPGA) the data flow is configured through its routing elements while the control flow is defined by the function implemented in each logic block. In a CPU, data flows between registers and memory according to the program's data instructions while the control flow is defined by its logic instructions. In a neuromorphic chip, the data flow is defined by the connectivity of the neural network while the control flow is defined by the synapse and neuron models, learning rules, synaptic weights, time constants, thresholds, and more.
### Programming
Treating hardware as fixed and software as malleable helps to separate the different timescales on which hardware and software are designed. Programming is a software matter and therefore assumes that a physical system already exists which allows to be programmed or configured. This does not, however, prevent the programmer from thinking about what properties a physical system _should_ have in order to be effectively programmable for some task. On the contrary, this separation of concerns allows clear communication of what hardware constraints (there will be constraints!) are more or less desirable from a programming perspective, thereby contributing to successful hardware-software co-design.
It has already been mentioned that the physical computing machine is _designed_ and _configured_ before it can be _programmed_. In the following, some processes which have been called 'programming' are delineated, their meanings clarified and a general programming framework is outlined.
_Designing._ Every computing machine must be designed and manufactured before it can be used. Such machines can be programmable to varying extents. An application-specific computer is not programmable in any way - it physically implements a single program. A reconfigurable computer is configurable but may not be extensibly programmable. A programmable computer is fully programmable. The difference between the latter two depends on their usage and a clear _a priori_ separation may not be possible.
Figure 1. Diagram of the programming process, see text for explanation. Left: instantiation of a computer program, adapted from Refs. (Turing, 1992; Turing, 1992). Right: organization of a computer.
_Configuring._ Many computing machines can be configured in a way that was defined in the design of the computing machine. A configuration can modify the interconnects of an FPGA (Vaswani et al., 2017), the time constants and gains in a spiking neuromorphic chip (Vaswani et al., 2017), or the tunable beam couplers in a photonic circuit (Krishnan et al., 2017). As defined in Section 2.2, the configuration is constant for the lifetime of the program. Configuring is close to the hardware and amounts to selecting a configuration from a pre-defined set of configurations that were designed into the hardware, and is analogous to setting the control systems in a dynamical system. This limits the expressivity and creativity of a programmer constrained to configuration space. The configuring is often done through trial-and-error, or automatically through a search procedure if a well-defined objective exists.
_Programming._ As opposed to configuring, programming is not strictly constrained by the machine's physical design. The set of all possible programs is typically infinite, providing programmers with an unbounded creative medium for realizing their ideas. This infinitude originates in the compositionality of programs. Moreover, programs have a temporal component; while a configuration is fixed for the program's entire lifetime, a program can change its behavior over time. The key to this expressivity is a programming language in which programs are expressed (see Section 2.5).
_Optimizing / Training / Learning._ Programs need not be written manually, but can also be searched for automatically. Such a search often has a desired program as its target and can therefore be viewed as an optimization problem in which the 'distance' between the implemented program and the desired program is minimized. The optimization can be done on-device or off-device with a (digital) computer and it can be done either offline in a training phase when the computer is not being used or online while the computer is being used.
Training and learning often use optimization methods. In neuromorphic computing, one can _train_ a neural network to approximate a desired program through some optimization procedure. A neural network can also _learn_ autonomously how to adapt its weights to achieve some global objective, in a self-supervised or unsupervised way. Or it can simply mechanistically apply a learning rule with no clear global objective, like cellular automata (Krishnan et al., 2017). Furthermore, using evolutionary algorithms, one may _evolve_ a neural network, or a neuromorphic device, to implement some computation. These approaches are further detailed in Sections 3.1 and 3.3.
_Instructing._ An existing learning algorithm can be further 'programmed' through curated interactions with the environment or the user. This interactive training is common for personalized AI systems. For example, every Twitter user has a personalized Twitter feed which is learned from the user's behavior but can also be explicitly shaped by hiding or liking certain content.
_Self-organization._ A popular paradigm for on-chip learning is self-organization. Local learning and adaptation mechanisms lead the neural network to self-organize and thereby implement a desired computation, for example with self-organized maps or plasticity rules in SNNs (Krizhevsky et al., 2014). As is common with multi-scale dynamics (Section 2.1), it may be undecidable which local rules yield a particular global behavior. Thus, programming with self-organization can be exploratory to investigate what behavior emerges from different local rules, or it can be goal-driven when local rules are optimized to satisfy some behavioral constraints. Self-organization can also take place directly in physics to grow computing devices where the device is not explicitly designed (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017).
Figure 1 illustrates the general process of programming. Programming begins with some informal intention of what computation the program should implement. This intention can be formalized into a specification, or the programmer may directly come up with an idea for a program that implements the intended computation, expressed in some formalism. This program is then communicated to the physical computer through a pre-defined programming interface. Finally, the system executing this program can be controlled or instructed to remain within the specification.
### Languages and Paradigms
Conventionally, programming amounts to _coding_ (writing source code) in some formal language. Herein, 'programming language' is used in an unconventionally wide sense to include any formal language that can be communicated to a physical system. This includes programming languages like Python but also extends to other formalisms like differential equations describing dynamical systems, or block diagrams describing signal processing systems. In any case, the 'programming language' must be compatible with the elementary instructions that the computer's programming interface provides. Given this compatibility, the programmer is free to explore the infinite space of programs. Work on elementary instruction sets for non-digital computers goes back at least to the 1940s and continues to the present day (Sundhi et al., 2017; Vaswani et al., 2017; Vaswani et al., 2017) but there is still no universally accepted model (Vaswani et al., 2017). Consequently, it is not clear what a neuromorphic programming language may look like (Vaswani et al., 2017); will it require new syntax such as visual representations, or will a program be represented by a string of symbols in some formal language? Since the goal is to improve the "general practice of programming" neuromorphic computers, Floyd (Flayd, 2017) argued that it is more effective to turn to programming paradigms rather than to languages. A programming paradigm is an approach to programming "based on a mathematical theory or a coherent set of principles" (Krishnan et al., 2017) and a programming language implements one or more programming paradigms. Centering the discussion on programming paradigms shifts the focus away from syntactical issues to the way programs are conceived and designed.
### Programming Trends
Beyond languages and paradigms, there is a set of well-developed tools for computer programming without which most modern software systems would not exist. Such tools will be necessary if neuromorphic programming is to develop into a mature discipline.
_Efficiency._ Modern programming is done on computers running a powerful integrated development environment (IDE). This is essentially an _interface_ between the computer and the programmer, which enables a fast feedback loop between designing and testing programs. The keyboard-and-mouse interface to modern computers now seems trivial, but its success confirms its efficiency. Programming is interactive, with many intermittent compilation runs to check the program's semantics and syntax, where the syntax is often checked by the IDE directly without compiling.
_Teamwork._ Much has been invested into the coordination of large software teams (Han et al., 2017), resulting in some of the most complex and valuable computing systems in the world (Han et al., 2018). Collaborative version control systems are used by corporations, organizations, and open-source communities alike, enabling collaboration on large codebases with multiple programmers working on overlapping parts. Agile development and management are commonly used to efficiently coordinate large software projects (Srivastava et al., 2017).
_Automation._ High-level programming languages elevate the level of abstraction and automate much of the work that was previously done explicitly (Han et al., 2017). Furthermore, automated programming techniques are in full force, with inductive programming and machine learning leading the way toward programs that are automatically generated from data (see Section 3.1).
_Robustness._ As software systems increase in complexity, much work has been invested to make them robust to failures. Automated testing, continuous integration, and containerization all contribute to making large-scale software development more robust to different kinds of failures (Han et al., 2017). Modularization and structured programming have been key to managing large, interactive, distributed software systems. But despite significant advances in programming tools, software complexity remains an obstacle for achieving robust systems with no silver bullet in sight (Han et al., 2017; Dwork et al., 2017; Dwork et al., 2017; Dwork et al., 2017).
_Software engineering._ Everything above has focused on only one aspect of programming, namely the design of programs. Software engineering can be thought of as "programming integrated over time" (Han et al., 2017) in that it goes beyond the design of programs to also include maintenance, testing, validation, integration, and organization of large software-intensive systems (Han et al., 2018).
## 3. Programming Paradigms
### Conventional Programming
_Instruction-based._ The most common way of writing sequential, instruction-based programs uses the **imperative** paradigm, as implemented in C. Imperative programming was augmented with objects, which can contain instructions as well as data, to yield the **object-oriented** paradigm, as implemented in C++ or Java.
With the advent of multi-core microprocessors came the need to use resources on different cores simultaneously. This led to the development of **parallel programming** techniques, in which multiple processes are carried out simultaneously on different cores (Han et al., 2017). This is not to be confused with **concurrent programming** where the lifetimes of multiple computing processes overlap and may interact with one another (Han et al., 2017). Concurrency introduces issues of synchronization such as deadlocks and race conditions. **Distributed programming** deals with programs that are executed on multiple networked computers which interact to achieve a common goal.
**Emergent programming** uses multiple interacting sub-programs whose collective behavior constitutes the desired program. The individual instructions are typically not explicitly informed of the program to be created (Han et al., 2017). This approach has been used to design programs that exhibit some creativity (Han et al., 2017). This is reminiscent of local learning rules in neuromorphic computers (see Section 3.3).
_Declarative._ Instead of describing the control flow of a program, **declarative** programs describe the logic of the program. A declarative program describes _what_ the program does rather than _how_ it does it. This makes reasoning about programs easier and simplifies parallel programming (Bauer et al., 2017). Declarative programming is done in database query languages like SQL, functional programming languages like Haskell, or logic programming languages like Prolog. In **dataflow programming**, a program is modeled as a graph of data flowing between operations. This is a natural model for neuromorphic computers where data flows between neurons, and has been used for neuromorphic compilation (Han et al., 2017) (see Section 3.3). **Spatial programming** can be used to program reconfigurable hardware into dataflow engines (Han et al., 2017).
_Automated programming._ In **meta-programming**, it is possible for a program to write or modify programs, by simply treating the program as data. In **reflective programming**, a program modifies its own behavior whereas in **automatic programming**, a program generates another program. If a formal specification of the desired program is given, **program synthesis** can be used to generate a program that provably satisfies this specification (Han et al., 2017). If exact adherence to a formal specification is not required, but only the satisfaction of given constraints, **constraint programming** may be used (Han et al., 2017). If an incomplete specification is available, such as input-output examples, then **inductive programming** can be used to generate a suitable candidate program (Han et al., 2017). An inductive programming approach coupled with probabilistic programs has been proposed as a model for human-level concept learning (Han et al., 2017). Recently, deep learning (see below) has been used for inductive programming, under the name of neural program synthesis (Han et al., 2017). As already mentioned in Section 2.4, it is possible to instruct an interactive program and direct it to implement a desired computation. **End-user programming** allows users to obtain programs from a small set of examples, like the flashfill feature in spread-sheet programs which infers a formula from table manipulations done by the user (Han et al., 2017; Dwork et al., 2017).
_Probabilistic._ While classical programs are deterministic, the execution of a probabilistic program depends on random numbers, for example by calling a (pseudo) random number generator. Such a program can be viewed as sampling from a probability distribution. In **probabilistic programming**, the program itself is considered to be a distribution, and the programmer can analyze this distribution and condition the distribution on observations (Han et al., 2017). Indeed, the goal of probabilistic programming is not simply the execution of a program, but also the analysis thereof. By expressing a statistical model as a probabilistic program, statistical inference on such a model can be done automatically by the compiler through general-purpose inference schemes. Probabilistic programming has
been used for state-of-the-art generative vision models with very compact programs of only about 50 lines (Kumar et al., 2017).
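As a toy illustration of this idea (a generic sketch, not tied to any particular probabilistic programming system; the model and all numbers are invented for the example), the following Python program treats a generative model as a distribution and conditions it on an observation by rejection sampling:

```python
import random

def coin_model():
    # generative program: draw a latent coin bias, then flip the coin ten times
    bias = random.random()                      # prior: uniform on [0, 1]
    flips = [random.random() < bias for _ in range(10)]
    return bias, flips

# condition on an observation: exactly 8 of the 10 flips came up heads
posterior_samples = []
while len(posterior_samples) < 1000:
    bias, flips = coin_model()
    if sum(flips) == 8:                         # rejection step
        posterior_samples.append(bias)

print('posterior mean of bias:',
      sum(posterior_samples) / len(posterior_samples))
```

Real probabilistic programming systems replace the naive rejection loop with general-purpose inference engines, but the programming model is the same: the program is the distribution.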
_Learning._ In classical programming, a human programmer defines the program that specifies how input data is processed. **Machine learning** constructs programs that learn from the input data, in ways that may not have been anticipated by any human. Machine learning has deep roots in probability theory and overlaps significantly with probabilistic programming (Kumar et al., 2017; Kumar et al., 2017). In supervised machine learning, a mapping from inputs to outputs is learned from a set of examples. In reinforcement learning, a policy of how to act in some environment is learned from rewards and punishments. Both the learned mapping in supervised learning and the learned policy in reinforcement learning can be used as programs. This makes machine learning a new paradigm for (automated) programming (Kumar et al., 2017). Machine learning uses tools from optimization theory; the learning task is often partly framed as an optimization problem where some surrogate of the true performance metric is optimized, for example the average error over a set of input-output examples.
In **reservoir computing**, a neural network consists of an input layer which feeds into the high-dimensional recurrently-connected reservoir network from which the output is obtained through a readout layer. Only this final readout layer is trained while the reservoir is randomly initialized and typically not modified (see Section 3.3). **Deep learning** uses multi-layered ANNs for machine learning. The connectivity of such an ANN is usually fixed and then the weights are learned, typically in a supervised fashion using gradient descent to minimize the error on given input-output examples. In **differentiable programming**, programs are written in a way that they are fully differentiable with respect to some loss function, thereby allowing the use of gradient-based optimization methods to find better-performing programs. Deep learning is a special case of this, where programs are artificial neural networks that are differentiated using backpropagation. These techniques have also been adapted for spiking neural networks (Krizhevsky et al., 2014). Differentiable programming has been employed to merge deep learning with physics engines in robotics (Srivastava et al., 2014), it has been applied to scientific computing (Srivastava et al., 2014), and even towards a fully differentiable Neural Turing Machine (Krizhevsky et al., 2014).
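As a minimal illustration of the reservoir computing idea (a toy numpy sketch; the sizes, scaling factors, and recall task are arbitrary choices made for this example, not the setup of any particular device), only the linear readout of a fixed random recurrent network is trained:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, T = 200, 1000
u = np.sin(np.arange(T) * 0.1)              # toy input signal
target = np.roll(u, 3)                      # toy task: recall the input from 3 steps ago

W_in = rng.uniform(-0.5, 0.5, size=n_res)   # fixed random input weights
W = rng.normal(0, 1, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep the spectral radius below 1

x = np.zeros(n_res)
states = np.zeros((T, n_res))
for t in range(T):                          # run the fixed reservoir
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# train only the linear readout (ridge regression)
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ target)
pred = states @ W_out
print('readout MSE:', np.mean((pred[100:] - target[100:]) ** 2))
```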
_Optimization._ As already mentioned, machine learning relies heavily on tools from optimization theory. In pure optimization, the minimization of some cost function \(J\) is a goal in itself. In machine learning, a core goal is good generalization to unseen examples. This is expressed as some performance measure \(P\) which is intractable and therefore one minimizes some cost function \(J\) which will in turn also increase the performance measure \(P\). As such, if generalization is not needed then one may use optimization as a programming paradigm in which the result of the optimization is the desired program or the optimization process itself. **Evolutionary programming** uses population-based evolutionary optimization algorithms to find programs. One way to find a program that solves some problem is to define a fitness function that is maximized by a program solving it. Evolutionary algorithms have been used to generate rules for a cellular automaton to solve computational problems that are difficult to solve by manually designing a learning rule (Srivastava et al., 2014). Evolutionary optimization is also a popular approach for neuromorphic devices, see Section 3.3.
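A minimal sketch of evolutionary search, here reduced to a generic genetic algorithm on a toy bit-string fitness function (population size, mutation rate, and the objective are arbitrary illustrations, not taken from any neuromorphic system):

```python
import random

def fitness(genome):
    # toy objective: maximize the number of 1s in the bit string
    return sum(genome)

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

pop = [[random.randint(0, 1) for _ in range(40)] for _ in range(30)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                         # selection
    offspring = []
    while len(offspring) < 20:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 40)
        child = a[:cut] + b[cut:]              # recombination
        offspring.append(mutate(child))        # mutation
    pop = parents + offspring

print('best fitness after evolution:', fitness(max(pop, key=fitness)))
```

In evolutionary programming the genome would encode a program (e.g. a network architecture and its weights) rather than a bit string, but the loop of selection, recombination, and mutation is the same.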
Some dimensions of neuromorphic computing (Section 2.1) are exploited by paradigms in this section. Dataflow programming, distributed programming and deep learning harness distributedness in computation. Probabilistic programming uses stochasticity, as do optimization methods and machine learning methods. Emergent programming works with at least two different spatiotemporal scales, as do learning and optimization, where the optimization loop operates on a slower timescale than the actual program. In some machine learning and optimization methods like reservoir computing or evolutionary optimization, a complete description of the program is not necessary, potentially accommodating some unobservability.
### Unconventional Programming
The present section investigates paradigms for programming physical systems. Computational models, and therefore programming methods, must ultimately be based in physics and resulting hardware constraints (Srivastava et al., 2014). Current programming methods are adapted to clocked digital hardware but with the forthcoming diversity of computer hardware and architectures (Srivastava et al., 2014) it is time to widen the set of hardware constraints that can be programmed with.
_Cellular programming._ As mentioned previously, cellular automata (CA) are a standard model of massively parallel computation. A CA is programmed by choosing its update rule and the program is executed on some initial configuration of the CA's lattice. Inspired by CAs, cellular architectures of neuromorphic devices have been proposed (Kumar et al., 2017; Krizhevsky et al., 2014). For over two decades, **amorphous computing** has been developing programming techniques inspired by the cellular cooperation in biological organisms (Srivastava et al., 2014). An amorphous computer is a system of irregularly placed, asynchronous, locally interacting computing elements that are possibly faulty, sensitive to the environment, and may generate actions (Beng et al., 2016; Krizhevsky et al., 2014). This line of research brought space-time programming (Krizhevsky et al., 2014) as a way of programming to control large networks of spatially embedded computers. Although not directly focused on neuromorphic computing, amorphous programming methods can provide a good starting point for robust programming methods in cellular architectures.
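As a concrete toy example of a cellular 'program' being nothing but a local update rule applied to an initial lattice configuration, the following Python sketch runs an elementary cellular automaton (rule 110 and the lattice size are arbitrary choices for illustration):

```python
def step(cells, rule=110):
    # apply the elementary CA rule to every cell, using only its local neighborhood
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                       # initial configuration: a single live cell
for _ in range(15):                 # execute the 'program' for 15 time steps
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```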
_Analog programming._ Neuromorphic hardware often contains analog components, which are difficult to work with because programming methods for analog computers are not at the same level of maturity as those for digital computers. Ulmann (Ulmann, 2016) argues that the development of reconfigurable analog computers will advance the state of analog computer programming and efforts to develop such hardware are in progress (Kumar et al., 2017). Nevertheless, methods from control engineering, signal processing and cybernetics have been developed and used for decades and can be adapted for neuromorphic systems. While digital computing was originally formulated as computing functions on the integers (Srivastava et al., 2014), **signal processing** can be seen as computing functions on temporal signals. For analog neuromorphic computers, signal processing provides a rich framework for computing with temporal signals (Kumar et al., 2017; Krizhevsky et al., 2014).
**Control theory** has developed a rich repertoire of methods to drive a dynamical system into a mode of operation that is robust, stable, and implements some desired dynamics. These methods can be used to keep analog computers within a desired regime of operation to implement a desired computation. It can be expected that analog computers can benefit from cross-fertilization between computer science and control theory [88]. A promising direction is data-driven control where a model of the system to be controlled is learned from experimental data using machine learning techniques [22]. Historically rooted in ideas from cybernetics and ultrastable systems [6], **autonomic computing** aims to design systems that are able to adapt themselves in order to stay within a high-level description of desired behavior [104]. The field takes inspiration from the autonomic nervous system, which is able to stay within a stable 'dynamic equilibrium' without global top-down control.
_Programming physical systems._ Building on evolutionary optimization, **evolution _in materio_**[91] was proposed to harness material properties for computation. It is argued that natural evolution excels in exploiting the physical properties of materials, and artificial evolution emulates this. Evolution has been applied widely in unconventional computing [2], for example with a disordered dopant-atom network for digit classification [25]. As already mentioned in the preceding section, **physical reservoir computing** can be used to harness the dynamics of physical systems for computation by modeling the physical system as a high-dimensional reservoir on top of which an output map is trained [133].
### Neuromorphic Programming
_Neuromorphic co-design._ As neuromorphic computers exploit physical phenomena of their underlying hardware, manually designed neuromorphic programs will necessarily be close to physics. Therefore, although not strictly a paradigm for 'programming', it is instructive to consider **neuromorphic co-design** as a paradigm for designing neuromorphic systems. The field is rooted in the original vision of neuromorphic computing [86] and designs application-specific [85] as well as reconfigurable [97] mixed-signal neuromorphic chips in sub-threshold CMOS technology which may also include on-chip learning. This approach uses tools from signal processing and computational neuroscience to implement a desired behavior in networks of silicon neurons [64]. Similar to analog computing, the field may benefit from a set of computational primitives to simplify the design of neuromorphic systems.
_Compilation._ Given a neural network, it is necessary to communicate this network to the hardware. **Neuromorphic compilation**[148] was proposed as a general framework to (approximately) compile neural networks into different hardware systems, automatically adapting to physical constraints. Such compilation can be done statically to exactly implement the specified network architecture [50; 126], or adaptively to further optimize the network after compilation [23]. In any case, it is important to consider the hardware constraints in this compilation [68].
To compile a neural network into hardware, it is necessary to first design the neural network's architecture. Deep learning has accumulated a plethora of well-performing network architectures for ANNs which can rapidly be converted into equivalent spiking neural networks (SNNs) through ANN-to-SNN **conversion**[113; 35]. The conversion to SNNs offers significant advantages in energy efficiency while often maintaining similar levels of performance. However, this conversion is not optimal because it typically does not leverage the computational power of spiking neurons and instead limits the richer dynamics of SNNs to the same less powerful domain of ANNs [121].
Compilation and conversion are promising directions, though descriptions at the level of neural network architectures may not provide a high enough abstraction for implementing programs that realize arbitrary computations.
_Learning._ Given the success of deep learning, learning is a natural paradigm for neuromorphic computers. While it would be naive to ignore the deep learning literature, it is also unrealistic to expect deep learning methods to work for SNNs as well as they do for ANNs since these methods were optimized for ANNs [31].
**Backpropagation,** the workhorse of deep learning, can be implemented directly in SNNs using surrogate gradients [102] or other neuromorphic adaptations. Simplifications of the backpropagation algorithm such as the random backpropagation algorithm [8] were also demonstrated in neuromorphic systems [101]. It is also possible to create a **surrogate model** of the physical device, then optimize the surrogate model in simulation with deep learning methods and transfer the optimized model back to the device [59; 42].
For recurrent neural networks, **reservoir computing** avoids the need to backpropagate information through the network to compute gradients. Instead, the reservoir is kept fixed and only a readout map from the reservoir to the output is trained. This training procedure requires the reservoir states to be read out and stored, which may not be possible given limited observability of some devices or limited data storage. Reservoir computing is a popular paradigm for neuromorphic computing, with dedicated frameworks for hardware implementation [87; 121].
Neural network training is often done off-device with external hardware. Frequent re-training creates a large overhead, limiting the performance and applicability of neuromorphic computers. As a result, **on-device learning** methods are an active topic of research [11]. **Plasticity** is a popular paradigm for on-device learning where local learning rules are used to modify the connectivity (structural plasticity) and connection strengths (synaptic plasticity) of a SNN. Parallels to emergent programming may be drawn here as the resulting behavior of the SNN emerges from the interaction of local rules. It is not clear what local rules will yield a particular network-level behavior, but evolutionary search [70] and meta-learning [27] have been used to (re-)discover desirable plasticity rules.
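As a cartoon of on-device learning with a local rule (a plain Hebbian update with weight decay written in numpy; the constants and the threshold neuron model are arbitrary illustrations, not the learning rule of any particular chip), each synapse is modified using only locally available pre- and post-synaptic activity:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 20, 5
W = rng.uniform(0, 0.1, size=(n_post, n_pre))      # synaptic weights
eta, decay = 0.01, 0.001

for _ in range(500):
    pre = (rng.random(n_pre) < 0.2).astype(float)   # presynaptic spikes
    post = (W @ pre > 0.5).astype(float)            # simple threshold neurons
    # local Hebbian update: each synapse sees only its own pre/post activity
    W += eta * np.outer(post, pre) - decay * W
    W = np.clip(W, 0.0, 1.0)

print('mean weight after local learning:', W.mean())
```

The network-level behavior that emerges from such a rule is not specified anywhere in the code, which is exactly the programming challenge described above.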
_Evolution._ A key advantage of evolutionary approaches is that they can jointly optimize the network's architecture and weights, thus simultaneously designing and training the network. Moreover, evolutionary methods do not require differentiability of activation functions, nor do they place any constraints on the network's architecture. Evolutionary approaches can find a SNN by
randomly choosing an initial population of candidate SNNs, selecting the highest-performing candidates according to some performance metric, and then creating new candidates through recombining and mutating the selected candidates [119, 122]. However, evolutionary approaches can be slower to converge than other training methods [121] and the resulting architectures are not easily understandable or reusable for different tasks [28].
_Neuromorphic algorithms._ With the increased availability of neuromorphic hardware, a number of handcrafted spiking neuromorphic algorithms (SNA) have been proposed. SNAs implement computations using temporal information processing with spikes, often to implement well-defined computations such as functions on sets of numbers [141], functions on graphs [56], solving constraint satisfaction problems or solving a steady-state partial differential equation using random walks [129]. SNAs are being actively developed and many application domains are yet to be explored [3].
_Neurocomputational primitives._ A variety of neurocomputational primitives have been proposed in the neuromorphic community. Such primitives can be useful for simple tasks and typically allow for composability to create more complex neuromorphic systems at a higher level of abstraction [10, 84]. **Dynamic neural fields** (DNFs) are a modern framework for neural attractor networks [124]. The stable states provided by attractor dynamics help with the intrinsic variability of analog neuromorphic circuits and have been shown to be a promising abstraction for neuromorphic programming [31]. Each DNF is a network of neurons that is, under some constraints, computationally equivalent to a **winner-take-all** (WTA) network [116]. The WTA is a common circuit motif in the neocortex [38]. **The neural state machine** (NSM) [82, 100] also builds on WTA networks to implement finite state machines in SNNs, and has been shown to run robustly on mixed-signal neuromorphic hardware. The **spiking phase-locked loop** (SPLL) [85] was designed for frequency detection as part of a neuromorphic tactile sensor. The **temporal difference encoder** (TDE) [55, 89] is a spiking model that was designed to compute the time difference between two consecutive input spikes. The number of output spikes and the time between them are inversely proportional to the time difference. This has been used for motion estimation and obstacle avoidance [90]. **Neural oscillators** generate rhythmic activity that can be used for feature binding and motor coordination, for example as a central pattern generator [77]. Other primitives are scattered around the literature and shared libraries of neurocomputational primitives are only starting to be assembled [10]. **Neuromorphic synthesis** [100] may provide a systematic way of programming complex high-level behavior into neuromorphic chips. This was demonstrated for functions that can be described by finite state machines, but it may be promising to extend this work to a larger set of computational primitives for higher abstractions in neuromorphic programming.
_Higher abstractions._ The **neural engineering framework**[41] raises the level of abstraction beyond the level of neural networks. This allows dynamical systems to be distilled automatically into networks of spiking neurons that can then be compiled down to mixed-signal spiking neuromorphic accelerators like Braindrop [99] using the Nengo programming environment [14].
Intel recently launched **Lava1**, an open-source neuromorphic programming framework for the Loihi chips. Support for other neuromorphic hardware is planned. Lava is a multi-paradigm framework and includes libraries of neuromorphic algorithms for optimization, attractor networks, deep learning methods for SNNs, VSAs, and plans to include more paradigms.
Footnote 1: [https://github.com/lava-nc/lava](https://github.com/lava-nc/lava)
Aimone _et al._[4] proposed **Fugu**, a hardware-independent mechanism for composing different SNAs. In Fugu, a program is specified as a computational graph reminiscent of dataflow programming, where nodes represent SNAs and connections represent dataflow between the SNAs. This program can then be compiled into different hardware-specific configurations. The focus of this work is on digital neuromorphic processors and support for mixed-signal hardware is not discussed.
## 4. Outlook
Without a guiding theory that unites physics with computation, it is difficult to program computers that harness their underlying physical dynamics for computation. Building on decades of research in neuromorphic computing and engineering, initial features of neuromorphic programming methods can be identified.
As the field is moving toward general programming methods, it is important to clarify concepts and establish an efficient separation of concerns to allow effective cross-disciplinary collaboration and communication. Shared benchmarks and user-friendly tools will further boost progress [10]. Moreover, for neuromorphic systems to scale in the landscape of large heterogeneous computing systems, community-wide standards and protocols must be defined for the communication between neuromorphic systems.
The structure of large-scale neuromorphic programs is yet to be explored. It is assumed that a digital computer has a clearer architecture with fewer modules whereas the brain has a larger breadth of ongoing computations [69]. It remains to be seen if neuromorphic programs allow for the kinds of 'crisp abstractions' [18] that enable the deep hierarchies in digital programming as observed in compilation hierarchies, function call hierarchies, and class inheritance hierarchies. If such abstractions are not possible, hierarchies in neuromorphic programs will necessarily be wide and shallow, leading to many interacting components and only a few different levels of abstractions.
It is hoped that neuromorphic programmers can leverage the work outlined in this paper to build large-scale neuromorphic programs to tackle real-world tasks, and to further develop guiding principles and paradigms for neuromorphic programming.
###### Acknowledgements.
I wish to thank Herbert Jaeger for helpful comments. This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement No. 860360 (POST DIGITAL).
|
2307.05198
|
An Inversion Statistic on the Hyperoctahedral Group
|
In this paper, we introduce an inversion statistic on the hyperoctahedral
group $B_n$ by using a decomposition of a positive root system of this
reflection group. Then we prove some combinatorial properties for the inversion
statistic. We establish an enumeration system on the group $B_n$ and give an
efficient method to uniquely derive any group element from its enumeration
order with the help of the inversion table. In addition, we prove that the
\textit{flag-major index} is equi-distributed with this inversion statistic on
$B_n$.
|
Hasan Arslan, Alnour Altoum, Hilal Karakus Arslan
|
2023-07-11T12:08:47Z
|
http://arxiv.org/abs/2307.05198v2
|
###### Abstract
In this paper, we introduce an inversion statistic on the hyperoctahedral group \(B_{n}\) by using a decomposition of a positive root system of this reflection group. Then we prove some combinatorial properties for the inversion statistic. We establish an enumeration system on the group \(B_{n}\) and give an efficient method to uniquely derive any group element from its enumeration order with the help of the inversion table. In addition, we prove that the _flag-major index_ is equi-distributed with this inversion statistic on \(B_{n}\).
**An Inversion Statistic on the Hyperoctahedral Group**
Hasan Arslan\({}^{a,1}\), Alnour Altoum\({}^{a,2}\), Hilal Karakus Arslan\({}^{b,3}\)
\({}^{a}\)_Department of Mathematics, Faculty of Science, Erciyes University, 38039, Kayseri, Turkey_
\({}^{b}\)_Hikmet Kozan Secondary School, Republic of Turkey Ministry of National Education, 38070, Kayseri, Turkey_
\({}^{1}\)[email protected] \({}^{2}\) [email protected]_
\({}^{3}\)[email protected]_
**Keywords**: Permutation statistic, hyperoctahedral group, inversion number, flag-major index, Mahonian statistics.
**2020 Mathematics Subject Classification**: 05A05, 05A15, 05A19, 20F55.
## 1 Introduction
The main aim of this paper is to introduce a new inversion statistic, which can be considered as a partition of the length function on the hyperoctahedral group \(B_{n}\). This inversion number is compatible with the length function on \(B_{n}\), just as the inversion number on \(S_{n}\). If a statistic is equi-distributed with the length function (i.e., the number of inversions) on a Coxeter group, then it is called _Mahonian_. We give a bijective proof of the equi-distribution of this inversion statistic and the _flag-major index_ on \(B_{n}\) by defining a map \(\phi:B_{n}\to B_{n}\) such that
\[inv(w)=fmaj(\phi(w))\]
for each \(w\in B_{n}\). We illustrate this fact for the group \(B_{3}\). The inversion table of any element in the group \(B_{n}\) shares the same properties with a number in the
\(B_{n}\)-type number system. Therefore, we also supply an enumeration system on the hyperoctahedral group and provide an approach to uniquely obtain any group element from its enumeration order by means of its inversion table. Any positive integer written in the \(B_{n}\)-type number system can be uniquely converted to an element of the group \(B_{n}\) using the concept of the inversion table. This provides convenience in algebraic and combinatorial studies related to group structure, as well as in studies in the field of cryptology.
## 2 Preliminaries and Notation
In this section, we recall the definition of a real reflection group \(B_{n}\), which is also called a hyperoctahedral group. Throughout this paper, for any \(m,n\in\mathbb{Z}\) such that \(m\leq n\), we assume that \([m,n]:=\{m,m+1,\cdots,n\}\). Let \(\mathbb{R}^{n}\) be an Euclidean space. Let \(\{e_{1},\cdots,e_{n}\}\) be the set of standard basis vectors of \(\mathbb{R}^{n}\). In fact, a finite real reflection group \(B_{n}\subset GL_{n}(\mathbb{R})\) is generated by the reflections \(s_{1},\cdots,s_{n-1}\) of order \(2\) associated with the roots \(e_{2}-e_{1},\cdots,e_{n}-e_{n-1}\), respectively, and an exceptional reflection \(t_{1}\) of order \(2\) with root \(e_{1}\). The set \(S=\{t_{1},s_{1},\cdots,s_{n-1}\}\) is the canonical set of generators for the group \(B_{n}\). It is well-known that \(B_{n}\) is a semi-direct product of \(S_{n}\) and \(\mathcal{T}_{n}\), where \(S_{n}\) is the symmetric group generated by \(\{s_{1},\cdots,s_{n-1}\}\) and \(\mathcal{T}_{n}\) is a reflection subgroup of \(B_{n}\) generated by \(\{t_{1},\cdots,t_{n}\}\), where \(t_{i+1}:=s_{i}t_{i}s_{i}\) for each \(1\leq i\leq n-1\). Any element \(w\in B_{n}\) can be uniquely written in the form
\[w=\left(\begin{smallmatrix}1&2&\cdots&n\\ (-1)^{r_{1}}\beta_{1}&(-1)^{r_{2}}\beta_{2}&\cdots&(-1)^{r_{n}}\beta_{n} \end{smallmatrix}\right)=\beta\prod_{k=1}^{n}t_{k}^{r_{k}}\in B_{n},\]
where \(r_{i}\in\{0,1\}\) and we write \(\beta=\left(\begin{smallmatrix}1&2&\cdots&n\\ \beta_{1}&\beta_{2}&\cdots&\beta_{n}\end{smallmatrix}\right)\in S_{n}\) to mean that \(\beta_{i}=\beta(i)\) for all \(i=1,\cdots,n\). If we take into account the group \(B_{n}\) as a real reflection group with the following root system
\[\Psi=\{\pm e_{l},\ \ \pm e_{j}\pm e_{i}\ \ :\ l\in[1,n],\ 1\leq i\neq j\leq n\},\]
then we have the sets of positive and negative roots associated with \(\Psi\), which are, respectively, defined as follows:
\[\Psi^{+}=\{e_{l},\ \ e_{j}-e_{i},\ \ e_{j}+e_{i}\ \ :\ \ l\in[1,n],\ \ 1\leq i<j\leq n\},\]
and \(\Psi^{-}=-\Psi^{+}\). As is well-known from [1], \(\Psi\) can be written as the disjoint union \(\Psi=\Psi^{+}\bigsqcup\Psi^{-}\). The length function \(L\) on \(B_{n}\) associated with the root system \(\Psi\) is defined as
\[L\ :\ B_{n}\rightarrow\mathbb{N},\ \ \ L(w)=\mid w(\Psi^{+})\cap\Psi^{-}\mid. \tag{1}\]
Furthermore, the length \(L(w)\) of \(w\) is equal to the length of the minimal expression for \(w\) in terms of the elements of \(S\). The longest element of \(B_{n}\) is \(w_{0}=t_{1}\cdots t_{n}\) and it is also central. It is well-known that the longest element \(w_{0}\) of the group \(B_{n}\) can be expressed as a signed permutation in the following form:
\[w_{0}=\begin{pmatrix}1&2&3&\cdots&n-1&n\\ -1&-2&-3&\cdots&-(n-1)&-n\end{pmatrix}.\]
We note here that the length of any reduced expression in \(B_{n}\) takes value at most \(n^{2}\), which is the length of \(w_{0}\).
Let \(\sigma=\sigma_{1}\cdots\sigma_{n}\) be a one-line presentation of \(\sigma=\left(\begin{smallmatrix}1&2&\cdots&n\\ \sigma_{1}&\sigma_{2}&\cdots&\sigma_{n}\end{smallmatrix}\right)\in S_{n}\). As is well-known from [2], the inversion number, the descent set and the major index of \(\sigma\) are respectively defined in the following way:
\[inv(\sigma)= |\{(i,j)\in[1,n]\times[1,n]\ :\ i<j\ and\ \sigma_{i}>\sigma_{j}\}|\] \[Des(\sigma)= \{i\in[1,n-1]\ :\ \sigma_{i}>\sigma_{i+1}\}\] \[maj(\sigma)= \sum_{i\in Des(\sigma)}i.\]
MacMahon proved in [3] that the number of inversions inv is equi-distributed with the major index maj over the symmetric group \(S_{n}\), that is,
\[\sum_{\sigma\in S_{n}}q^{inv(\sigma)}=\sum_{\sigma\in S_{n}}q^{maj(\sigma)}.\]
Following [4] we let, \(\sigma_{0}:=t_{1}\) and for all \(i\in[1,n-1]\), \(\sigma_{i}:=s_{i}s_{i-1}\cdots s_{1}t_{1}\in B_{n}\). Thus, the collection \(\{\sigma_{0},\sigma_{1},\cdots,\sigma_{n-1}\}\) is a different set of generators for \(B_{n}\) and any \(w\in B_{n}\) has a unique expression
\[w=\sigma_{n-1}^{k_{n-1}}\cdots\sigma_{2}^{k_{2}}\sigma_{1}^{k_{1}}\sigma_{0}^ {k_{0}} \tag{2}\]
with \(0\leq k_{i}\leq 2i+1\) for all \(0\leq i\leq n-1\). _Flag-major index_ was defined for the group \(B_{n}\) as follows (see [4]): Let \(w\in B_{n}\). Then
\[fmaj(w)=\sum_{i=0}^{n-1}k_{i}. \tag{3}\]
It is well-known from [4] that the flag-major index is Mahonian, that is,
\[\sum_{w\in B_{n}}q^{fmaj(w)}=\prod_{i=1}^{n}[2i]_{q}=\sum_{w\in B_{n}}q^{L(w)} \tag{4}\]
where \(q\) is an indeterminate and \([2i]_{q}=\frac{1-q^{2i}}{1-q}\) for every \(i=1,\cdots,n\).
**Theorem 2.1** (Adin-Roichman [4]).: _Let \(w\in B_{n}\). Then_
\[fmaj(w)=2maj(w)+neg(w). \tag{5}\]
_where \(neg(w)=|\{i\in[1,n]:w(i)<0\}|\) and maj is computed by using the following order on \(\mathbb{Z}\):_
\[-1<-2<\cdots<-n<1<2<\cdots<n\]
**Example 2.2**.: _If \(w=[2,-5,-3,-1,4]\in B_{5}\), then we have \(maj(w)=1+2+3=6\), \(neg(w)=3\) and so \(fmaj(w)=15\), where the order we use when calculating the maj index is \(-1<-2<-3<-4<-5<1<2<3<4<5\)._
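Theorem 2.1 gives a direct way to compute the flag-major index. The short Python sketch below (written here only for illustration) implements it and reproduces Example 2.2.

```python
def fmaj(w):
    n = len(w)
    def order(v):
        # position of v in the order -1 < -2 < ... < -n < 1 < 2 < ... < n
        return -v - 1 if v < 0 else n + v - 1
    Des = [i + 1 for i in range(n - 1) if order(w[i]) > order(w[i + 1])]
    neg = sum(1 for v in w if v < 0)
    return 2 * sum(Des) + neg          # fmaj = 2 maj + neg

print(fmaj([2, -5, -3, -1, 4]))        # prints 15, as in Example 2.2
```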
In [5], Raharinirina constructed a \(B_{n}\)-type number system and showed that any positive integer can be uniquely expressed in the \(B_{n}\)-type number system.
**Definition 2.3** ([5]).: _The \(B_{n}\)-type number system is a radix base system that every positive integer \(x\) can be uniquely expressed in the following form:_
\[x=\sum_{i=0}^{n-1}d_{i}B_{i} \tag{6}\]
_where \(n\in\mathbb{Z}^{+}\), \(d_{i}\in\{0,1,2,\cdots,2i+1\}\) and \(B_{i}=2^{i}i!\)._
Then, for any positive integer \(x\) in the \(B_{n}\)-type number system, we will use the notation
\[x=(d_{n-1}:d_{n-2}:\cdots:d_{1}:d_{0}).\]
To write any positive integer \(x\) in the \(B_{n}\)-type number system following [5], one proceeds with the following steps. In the first step, \(x\) is divided by \(2\) and the remainder \(r_{0}\) is set to \(d_{0}\) in the division process
\[x=2q_{0}+r_{0}.\]
Then \(q_{0}\) is divided by \(4\) and the remainder \(r_{1}\) is set to \(d_{1}\) in the following division process
\[q_{0}=4q_{1}+r_{1}.\]
We continue these operations by dividing \(q_{i-1}\) by \(2(i+1)\) and taking \(r_{i}=d_{i}\) in the expression
\[q_{i-1}=2(i+1)q_{i}+r_{i}\]
until the quotient \(q_{n-1}\) is zero for some integer \(n\). Thus, at the last step, we get
\[q_{n-2}=2nq_{n-1}+r_{n-1}\]
and set \(r_{n-1}\) as \(d_{n-1}\). Eventually, the number \(x\) is written in the form
\[x=(d_{n-1}:d_{n-2}:\cdots:d_{1}:d_{0}) \tag{7}\]
in the \(B_{n}\)-type base system.
Therefore, we have developed two algorithms that will facilitate the process of obtaining the representation in \(B_{n}\)-type number system of any positive integer and vice versa. Any positive integer can be easily written in a \(B_{n}\)-type number system in a unique way when using the Python algorithm provided below:
**Algorithm 1:**
x = int(input('Enter any positive integer: '))
digits = []                # remainders d_0, d_1, ... (least significant first)
m = 2                      # divisors 2, 4, 6, ... = 2(i+1)
while x > 0:
    digits.append(x % m)   # d_i is the remainder of the division by 2(i+1)
    x = x // m
    m = m + 2
# print the digits in the order (d_{n-1} : ... : d_1 : d_0)
print(':'.join(str(d) for d in reversed(digits)))
The following example shows how this algorithm works:
**Example 2.4**.: _Pick the integer \(x=163\). The expression of \(x\) in \(B_{4}\)-type number system is \(x=(3:2:1:1)\)._
Alternatively, a positive integer can also be created from any number in the \(B_{n}\)-type number system with the help of the following Python algorithm:
**Algorithm 2:**
n = int(input('Enter the index n of the B_n base: '))
f = 1                      # f holds i!
x = 0
for i in range(0, n):
    d = int(input('Enter the digit d_' + str(i) + ': '))
    if i > 1:
        f = f * i          # update the factorial i!
    t = 2 ** i * f         # B_i = 2^i * i!
    x += d * t
print('The decimal number is:', x)
**Example 2.5**.: _Take \(x=(10:12:3:9:10:5:1:1:0:1)\) as a number in the \(B_{10}\)-type number system. It is the representation in the \(B_{10}\)-type number system of the positive integer \(1984199097\)._
## 3 Revisited inversion statistic on group \(B_{n}\)
In this section, we will give a different approach from that of [6] to the concept of the inversion statistic defined on the group \(B_{n}\). We will show that any element of the group \(B_{n}\) can be uniquely represented in the \(B_{n}\)-type number system by means of the inversion statistic method.
Now we define the set
\[\Psi_{i}=\{e_{n+1-i},\ \ e_{n+1-i}-e_{j},\ \ e_{n+1-i}+e_{j}\ :\ j<n+1-i\leq n\},\]
and \(inv_{i}(w)=\mid w(\Psi_{i})\cap\Psi^{-}\mid\) for each \(i=1,\cdots,n\). The sequence \(I(w)=(inv_{1}(w):\cdots:inv_{n}(w))\) is called the _inversion table_ of the element \(w\in B_{n}\). It should be noted here that an inversion table can be thought of as a number in the \(B_{n}\)-type number system. If \(x\) is the positive integer corresponding to the inversion table \(I(w)\), then \(x+1\) is said to be the _rank_ of \(w\). This enables us to enumerate all the elements of the group \(B_{n}\). The inversion table of \(w\) can be essentially created by applying the rule given in the following theorem:
**Theorem 3.1**.: _For \(w=\beta\prod_{k=1}^{n}t_{k}^{r_{k}}\in B_{n}\), we have_
\[inv_{i}(w)=r_{n+1-i}+2.\mid\{(j,n+1-i):j<n+1-i,\ \ \beta_{j}<\beta_{n+1-i},r_{n+1-i} \neq 0\}\mid+inv_{i}(\beta) \tag{8}\]
_for all \(i=1,\cdots,n\), where \(inv_{i}(\beta)=\mid\{(j,n+1-i):j<n+1-i,\ \ \beta_{j}>\beta_{n+1-i}\}\mid\) in \(S_{n}\) and \(r_{n+1-i}\in\{0,\ 1\}\). More precisely, \(inv_{i}(w)=1+2.\mid\{(j,n+1-i):j<n+1-i,\ \ \beta_{j}<\beta_{n+1-i}\}\mid+inv_{i}(\beta)\) when \(r_{n+1-i}=1\) and \(inv_{i}(w)=inv_{i}(\beta)\) when \(r_{n+1-i}=0\)._
Proof.: Let \(e_{n+1-i}\in\Psi_{i}\). Then \(w(e_{n+1-i})=(-1)^{r_{n+1-i}}e_{\beta_{n+1-i}}\in\Psi^{-}\) if and only if \(r_{n+1-i}=1\). Now let \(e_{n+1-i}\pm e_{j}\in\Psi_{i}\). We denote \(e_{n+1-i}\pm e_{j}\) by \(e_{n+1-i}-(-1)^{k}e_{j}\), where \(k\) is \(0\) or \(1\). Then we have \(w(e_{n+1-i}\pm e_{j})=(-1)^{r_{n+1-i}}e_{\beta_{n+1-i}}-(-1)^{k+r_{j}}e_{\beta_{j}}\), which lies in \(\Psi^{-}\) if and only if either \(r_{n+1-i}=1\) and \(\beta_{j}<\beta_{n+1-i}\) (where \(k\) takes exactly two values) or \(k+r_{j}=2\) and \(\beta_{j}>\beta_{n+1-i}\). Clearly, \(inv_{i}(w)=inv_{i}(\beta)\) if \(r_{n+1-i}=0\). This completes the proof.
The formula given in (8) is nothing else but a special case of the formula expressed in Theorem 4.5 in [7] (the case \(m=2\)). For all \(i=1,\cdots,n\), we get \(inv_{i}(w)\in[0,2(n-i)+1]\).
Let \(inv(w)\) denote the sum of the \(i\)-inversions of the permutation \(w\in B_{n}\). It is clear that \(L(w)=inv(w)\). Consequently, one can practically determine the length of \(w\) with the help of its inversion table.
**Example 3.2**.: _Let \(w=\left(\begin{smallmatrix}1&2&3&4&5&6&7&8\\ 2&4&1&-3&6&7&-5&8\end{smallmatrix}\right)\in B_{8}.\) Taking into account the equation (8) we obtain the inversion table of \(w\) as \(I(w)=(0:11:0:0:6:2:0:0)\), and so we conclude that the length of \(w\) is \(L(w)=19\) and that the rank of \(w\) is \(507185\) using Algorithm 2. On the other hand, the reduced expression of \(w\) is \(s_{1}s_{2}s_{3}s_{1}t_{1}s_{1}s_{2}s_{3}s_{4}s_{3}s_{2}s_{1}t_{1}s_{1}s_{2}s_{3 }s_{4}s_{5}s_{6}\) according to the canonical generating set \(S=\{t_{1},s_{1},\cdots,s_{7}\}\), and so \(L(w)=19\) from another viewpoint._
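The rule of Theorem 3.1 translates directly into code. The following Python sketch (written here for illustration; the function names are ours) computes the inversion table, the length, and the rank of a signed permutation given in one-line notation, and reproduces Example 3.2.

```python
from math import factorial

def inversion_table(w):
    # w is a signed permutation in one-line notation, e.g. [2, 4, 1, -3, 6, 7, -5, 8]
    n = len(w)
    table = []
    for i in range(1, n + 1):          # inv_1, ..., inv_n
        p = n - i                      # 0-based index of position n + 1 - i
        beta_p = abs(w[p])
        greater = sum(1 for j in range(p) if abs(w[j]) > beta_p)
        if w[p] > 0:                   # r_{n+1-i} = 0
            table.append(greater)
        else:                          # r_{n+1-i} = 1
            smaller = sum(1 for j in range(p) if abs(w[j]) < beta_p)
            table.append(1 + 2 * smaller + greater)
    return table

def rank(table):
    # read (inv_1 : ... : inv_n) as digits (d_{n-1} : ... : d_0) in the B_n number system
    n = len(table)
    x = sum(d * 2 ** (n - 1 - i) * factorial(n - 1 - i) for i, d in enumerate(table))
    return x + 1

w = [2, 4, 1, -3, 6, 7, -5, 8]
t = inversion_table(w)
print(t, sum(t), rank(t))   # [0, 11, 0, 0, 6, 2, 0, 0], length 19, rank 507185
```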
Taking into account the equation (8), we can give the following result for the longest element \(w_{0}\) in \(B_{n}\).
**Corollary 3.3**.: _Let \(w_{0}\) be the longest element of the group \(B_{n}\). Then the inversion table of \(w_{0}\) is_
\[I(w_{0})=(d_{n-1}:d_{n-2}:\cdots:d_{1}:d_{0})=(2n-1:2n-3:\cdots:5:3:1).\]
_Therefore, it is clear that the order of group \(B_{n}\) is_
\[\mid B_{n}\mid=\prod_{i=0}^{n-1}(d_{i}+1)=2^{n}n!.\]
It is well-known from [1] that the exponents of the group \(B_{n}\) are \(1,\ 3,\cdots,2n-1\), respectively. Note that all components in the inversion table of \(w_{0}\) in Corollary 3.3 exactly coincide with the exponents of the group \(B_{n}\).
Now, conversely, we set up a technique for creating the signed permutation \(w\in B_{n}\) from a given rank value in the following way: First of all, consider a rank \(k\). Subsequently, turn \(k-1\) into a number in the \(B_{n}\)-type number system. Let us denote \(k-1\) by \((d_{n-1}:\cdots:d_{1}:d_{0})\). Essentially, we want to obtain a signed permutation \(w\) such that
\[inv_{i}(w)=d_{n-i}\text{ for all }i=1,\cdots,n.\]
We will build a signed permutation \(w=\left(\begin{smallmatrix}1&2&\cdots&n-1&n\\ w_{1}&w_{2}&\cdots&w_{n-1}&w_{n}\end{smallmatrix}\right)\) associated with \((d_{n-1}:\cdots:d_{1}:d_{0})\) by proceeding the following steps:
* Having listed all possible values that the desired signed permutation can take in the following order \[n>\cdots>1>-1>\cdots>-n\] (9) enumerate them up from \(0\) to \(2n-1\) by starting with the leftmost value in (9). Then find the value corresponding to the number \(d_{n-1}\) from (9) and set it as \(w_{n}\).
* Say \(w_{n}:=(-1)^{r_{i}}i\), where \(r_{i}\in\{0,1\}\). Extract the terms \(i,-i\) from (9). After that, reorder the remaining values as \[n>\cdots>i+1>i-1>\cdots>1>-1>\cdots>-(i-1)>-(i+1)>\cdots>-n\] (10) and renumber them from \(0\) to \(2(n-1)-1\) by starting with the leftmost term. Then determine the value corresponding to the number \(d_{n-2}\) from (10) and assign it to \(w_{n-1}\).
* Carry out the same procedure for (10) and determine in this manner \(w_{n-2}\).
* Proceed these iterations until you determine all \(w_{i}\) values for each \(1\leq i\leq n\).
Let us consider the following example to make this method clear.
**Example 3.4**.: _We will find the \(1464993^{rd}\) group element of \(B_{8}\). If we apply Algorithm 1 to the positive integer \(1464992\), then we get the inversion table of the element we are looking for as \(I(w)=(2:3:9:5:0:4:0:0)\)._
| Step | P.P.V. | P.I.V. |
| :-: | :-- | :-- |
| 1 | 8, 7, 6, 5, 4, 3, 2, 1, -1, -2, -3, -4, -5, -6, -7, -8 | 0, 1, 2, ..., 15 |
| 2 | 8, 7, 5, 4, 3, 2, 1, -1, -2, -3, -4, -5, -7, -8 | 0, 1, 2, ..., 13 |
| 3 | 8, 7, 5, 3, 2, 1, -1, -2, -3, -5, -7, -8 | 0, 1, 2, ..., 11 |
| 4 | 8, 7, 3, 2, 1, -1, -2, -3, -7, -8 | 0, 1, 2, ..., 9 |
| 5 | 8, 7, 3, 2, -2, -3, -7, -8 | 0, 1, 2, ..., 7 |
| 6 | 7, 3, 2, -2, -3, -7 | 0, 1, 2, ..., 5 |
_In the above table, P.P.V. and P.I.V. stand for the possible values that the desired signed permutation can take and the possible inversion values, respectively. Considering the above table, \(w\in B_{8}\) is built up as_
\[w=\left(\begin{smallmatrix}1&2&3&4&5&6&7&8\\ 2&7&-3&8&-1&-5&4&6\end{smallmatrix}\right)\text{.}\]
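The procedure above is easy to mechanize. The following Python sketch (ours, for illustration only) converts a rank into the corresponding signed permutation and reproduces Example 3.4.

```python
def rank_to_permutation(k, n):
    x = k - 1
    # write x in the B_n-type number system: digits d_0, d_1, ..., d_{n-1}
    digits = []
    for i in range(n):
        digits.append(x % (2 * (i + 1)))
        x //= 2 * (i + 1)
    # determine w_n, w_{n-1}, ..., w_1 from the ordered list of remaining values
    values = list(range(n, 0, -1)) + list(range(-1, -n - 1, -1))  # n > ... > 1 > -1 > ... > -n
    w = [0] * n
    for pos in range(n, 0, -1):        # w_pos is the value at index d_{pos-1}
        v = values[digits[pos - 1]]
        w[pos - 1] = v
        values.remove(v)               # remove both v and -v before the next step
        values.remove(-v)
    return w

print(rank_to_permutation(1464993, 8))   # [2, 7, -3, 8, -1, -5, 4, 6], as in Example 3.4
```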
Based on the above facts, we may state the following result without proof.
**Proposition 3.5**.: _Let_
\[\mathcal{T}_{2,n} =\{(a_{1}:\cdots:a_{n})\ :\ 0\leq a_{i}\leq 2(n-i+1)-1,\ i=1, \cdots,n\}\] \[=[0,2n-1]\times[0,2n-3]\times\cdots\times[0,3]\times[0,1].\]
_The map \(I:B_{n}\rightarrow\mathcal{T}_{2,n}\) that assigns each permutation to its inversion table is a bijection._
Therefore, we conclude that the inversion table \(I(w)\) is basically another way to represent a permutation \(w\in B_{n}\). The fact that \(inv(w)=\sum_{i=1}^{n}inv_{i}(w)\) for any \(w\in B_{n}\) allows us to give a new approach to the proof of the Poincaré polynomial for \(B_{n}\), in the same spirit as for \(S_{n}\) (see [2]).
**Theorem 3.6**.: _The Poincaré polynomial for \(B_{n}\) is in the following form:_
\[\sum_{w\in B_{n}}q^{inv(w)}=\prod_{i=1}^{n}[2i]_{q}\]
_where \(q\) is an indeterminate and \([2i]_{q}=\frac{1-q^{2i}}{1-q}\) for every \(i=1,\cdots,n\)._
Proof.: If \(I(w)=(inv_{1}(w):\cdots:inv_{n}(w))=(a_{1}:\cdots:a_{n})\) then \(inv(w)=\sum_{i=1}^{n}a_{i}\). Hence
\[\sum_{w\in B_{n}}q^{L(w)}=\sum_{w\in B_{n}}q^{inv(w)}= \sum_{a_{1}=0}^{2n-1}\ \ \sum_{a_{2}=0}^{(2n-3)}\cdots\sum_{a_{n}=0}^{1}q^{a_{1}+a_{2}+ \cdots+a_{n}}\] \[= (\sum_{a_{1}=0}^{2n-1}q^{a_{1}})(\sum_{a_{2}=0}^{2n-3}q^{a_{2}}) \cdots(\sum_{a_{n}=0}^{1}q^{a_{n}})\] \[= \prod_{i=1}^{n}[2i]_{q}\]
as desired.
Let \(\pi=[\pi_{1},\cdots,\pi_{n-1}]\in B_{n-1}\). We want to observe how the insertion of \(n\) (resp. \(-n\)) into the permutation \(\pi\) affects the inversion statistic. There are clearly \(n\) places where we can put \(n\) (resp. \(-n\)) into the permutation \([\pi_{1},\cdots,\pi_{n-1}]\). More precisely, for each \(i=1,\cdots,n-1\) there is one place immediately after \(\pi_{i}\) which is called space \(i\) and there is one more place immediately before \(\pi_{1}\) which we call space \(0\). It is easy to see that the following insertion lemma holds. We denote by \(\pi_{n,i}\) (resp. \(\pi_{-n,i}\)) the permutation in \(B_{n}\) obtained by inserting \(n\) (resp. \(-n\)) into space \(i\) in \(\pi\).
**Lemma 3.7**.: _Suppose that \(\pi=[\pi_{1},\cdots,\pi_{n-1}]\) is a permutation in \(B_{n-1}\). Then we have_
1. \(inv\pi_{n,i}=n-i-1+inv\pi\)__
2. \(inv\pi_{-n,i}=n+i+inv\pi\)
**Example 3.8**.: _We consider \(\pi=[-3,1,2,-4,-5]\in B_{5}\). Then the inversion table of \(\pi\) is \(I(\pi)=(9:7:1:1:1)\) and \(inv\pi=19\). If \(\pi_{6,2}=[-3,1,6,2,-4,-5]\), then \(I(\pi_{6,2})=(10:8:2:0:1:1)\) and \(inv\pi_{6,2}=22\). If \(\pi_{-6,2}=[-3,1,-6,2,-4,-5]\), then \(I(\pi_{-6,2})=(10:8:2:5:1:1)\) and \(inv\pi_{-6,2}=27\)._
We immediately give the next corollary as a result of Lemma 3.7.
**Corollary 3.9**.: _Let \(\pi=[\pi_{1},\cdots,\pi_{n-1}]\in B_{n-1}\). Then we have_
1. \(\sum_{i=0}^{n-1}q^{inv\pi_{n,i}}=[n]_{q}q^{inv\pi}\)_,_
2. \(\sum_{i=0}^{n-1}q^{inv\pi_{-n,i}}=q^{n}[n]_{q}q^{inv\pi}\)_._
Hence for any \(\pi\in B_{n-1}\), we conclude that
\[\sum_{i=0}^{n-1}(q^{inv\pi_{n,i}}+q^{inv\pi_{-n,i}})=([n]_{q}+q^{n}[n]_{q})q^{ inv\pi}=[2n]_{q}q^{inv\pi}. \tag{11}\]
Hence, it is not hard to prove by induction that
\[\sum_{\pi\in B_{n}}q^{inv\pi}=[2n]_{q}\sum_{\tau\in B_{n-1}}q^{inv\tau}=\prod_{ i=1}^{n}[2i]_{q}. \tag{12}\]
Let \(w\in B_{n}\) have flag-major index \(fmaj(w)=k_{0}+k_{1}+\cdots+k_{n-1}\), where \(0\leq k_{i}\leq 2i+1\) for \(i=0,1,\cdots,n-1\). Then we have
\[\sum_{w\in B_{n}}q^{fmaj(w)} =\sum_{k_{n-1}=0}^{2n-1}\ \sum_{k_{n-2}=0}^{2n-3}\cdots\sum_{k_{0}=0}^{1}q^{k_{0}+k_{1}+ \cdots+k_{n-1}}\] \[=(\sum_{k_{n-1}=0}^{2n-1}q^{k_{n-1}})(\sum_{k_{n-2}=0}^{2n-3}q^{k _{n-2}})\cdots(\sum_{k_{0}=0}^{1}q^{k_{0}})\] \[=\prod_{i=1}^{n}[2i]_{q}.\]
Considering Theorem 3.6 and the above result together, we conclude that the inversion statistic is equi-distributed with the flag-major index over \(B_{n}\). Moreover, we can define a map \(\phi:B_{n}\to B_{n}\) such that \(inv(w)=fmaj(\phi(w))\) for each \(w\in B_{n}\) as follows: Let the inversion table of \(w\) be \(I(w)=(a_{n-1}:\cdots:a_{1}:a_{0})\). If we define
\[\phi(w)=\sigma_{n-1}^{a_{n-1}}\cdots\sigma_{1}^{a_{1}}\sigma_{0}^{a_{0}}\]
then it is obvious that \(\phi\) is a bijection and \(inv(w)=\sum_{i=0}^{n-1}a_{i}=fmaj(\phi(w))\), where \(\sigma_{i}\) is defined as in (2) for each \(i,\ 0\leq i\leq n-1\).
**Example 3.10**.: _In Table 1, we respectively record the ranks and the inversion tables of the forty-eight elements of \(B_{3}\) and their images under the map \(\phi\). Note that \(inv(w)=fmaj(\phi(w))\) holds for each \(w\in B_{3}\). In the following table, we will denote any permutation \(w\) in \(B_{3}\) by \(w_{1}w_{2}w_{3}\)._
**Question:** Considering the inversion number and the flag-major index together, it is natural to ask how exactly the Haglund-Remmel-Wilson identity can be formulated for the group \(B_{n}\). In fact, the flag-major part of the Haglund-Remmel-Wilson identity for the group \(B_{n}\) was given in [8] in terms of \(q\)-Stirling numbers of the second kind in type B. In the case of the symmetric group, the Haglund-Remmel-Wilson identity,
\[\sum_{\sigma\in S_{n}}q^{inv(\sigma)}\prod_{j\in Des(\sigma)}(1+\frac{z}{q^{1+ inv_{j}(\sigma)}})=\sum_{\sigma\in S_{n}}q^{maj(\sigma)}\prod_{j=1}^{des( \sigma)}(1+\frac{z}{q^{j}})\]
where \(inv_{j}(\sigma)=|\{(i,j)\in[1,n]\times[1,n]\ :\ i<j\ and\ \sigma_{i}>\sigma_{j}\}|\) is defined just as in Theorem 3.1, was proved by Remmel and Wilson in [9].
|
2302.07151
|
On Zero-Sum Two Person Perfect Information Stochastic Games
|
A zero-sum two person Perfect Information Stochastic game (PISG) under
limiting average payoff has a value and both the maximiser and the minimiser
have optimal pure stationary strategies. Firstly we form the matrix of
undiscounted payoffs corresponding to each pair of pure stationary strategies
(for each initial state) of the two players and prove that this matrix has a
pure saddle point. Then by using the results by Derman [1] we prove the
existence of optimal pure stationary strategy pair of the players. A crude but
finite step algorithm is given to compute such an optimal pure stationary
strategy pair of the players.
|
K. G. Bakshi, S. Sinha
|
2023-02-14T15:59:56Z
|
http://arxiv.org/abs/2302.07151v1
|
# On Zero-Sum Two Person Perfect Information Stochastic Games
###### Abstract
A zero-sum two person Perfect Information Stochastic game (PISG) under limiting average payoff has a value and both the maximiser and the minimiser have optimal pure stationary strategies. Firstly we form the matrix of undiscounted payoffs corresponding to each pair of pure stationary strategies (for each initial state) of the two players and prove that this matrix has a pure saddle point. Then by using the results by Derman [1] we prove the existence of optimal pure stationary strategy pair of the players. A crude but finite step algorithm is given to compute such an optimal pure stationary strategy pair of the players.
**Keywords:** Stochastic games, Markov Decision Processes, Perfect Information, Stationary Strategies, Linear Programming.
**AMS subject classifications:** 90C40, 91A15, 90C05.
## 1 Introduction
Stochastic games are generalizations of Markov decision processes (MDPs) to the case of two or more players. Shapley (1953) [12] introduced 'Stochastic games' in his paper, which are known as Markov games these days. If two players play a matrix game repeatedly over the infinite time horizon and the limiting average payoff is considered, then the value of this infinitely repeated game coincides with the value of the one shot game (by the Folk Theorem [3]). Shapley [12] introduced the idea of not playing the same matrix game every day (i.e., in every stage of the game), but playing one among finitely many matrix games, with a motion among them governed by the present game and the actions chosen there in such a manner that the game is certain to stop in finite time. Then the payoffs of the players can be formulated as the ratio of two bilinear forms. Neumann [10] established the minimax theorem for such games and Loomis [9] gave an elementary proof of this theorem. The case of non-terminating limiting average stochastic games was studied by Gillette [4], Hoffman and Karp [5]. By undiscounted pay-off we mean limiting average pay-off in this paper. Liggett and Lippman [8] previously proved the existence of pure stationary optimal
strategy pair of the players in an undiscounted perfect information stochastic game. We propose an alternative proof (with less complexity) of the result by Liggett and Lippman [8]. By forming the matrix of undiscounted payoffs corresponding to each pair of pure stationary strategies (for each initial state) of the two players we prove that this matrix has a pure saddle point, which is essentially a pure semi-stationary strategy pair of the players. Then we prove the existence of optimal pure stationary strategy pair of the players by using the results by Derman [1]. We consider the policy-improvement algorithm to compute optimal pure stationary strategy pair of the players. This is a best response algorithm, in which each player looks for his own Blackwell optimal strategy. It is obvious that this is a finite step algorithm and it terminates in finite time by the conjecture 8.1 of Raghavan and Syed (2002) [11]. The paper is organized as follows. Section 2 contains definitions and properties of an undiscounted two person zero-sum Stochastic games considered under limiting average pay-off. Section 3 contains main result of this paper. In section 4 we propose a policy improvement algorithm to compute an optimal stationary strategy pair for the players of such perfect information undiscounted Stochastic games. Section 5 contains some numerical examples illustrating our theorem and proposed algorithm.
## 2 Preliminaries
### Finite two person zero-sum Stochastic games
A zero-sum two person finite stochastic game is described by a collection of five objects \(\Gamma=<S,\{A(s):s\in S\},\{B(s):s\in S\},q,r>\), where \(S=\{0,1,\cdots,z\}\) is the finite non-empty state space and \(A(s)=\{0,1,\cdots,m_{s}\},B(s)=\{0,1,\cdots,n_{s}\}\) are respectively the non-empty sets of admissible actions of the players I and II respectively in the state \(s\). Let us denote \(K=\{(s,i,j):s\in S,i\in A(s),j\in B(s)\}\) to be the set of admissible triplets. For each \((s,i,j)\in K\), we denote \(q(.\mid s,i,j)\) to be the transition law of the game. Finally \(r\) is the real valued functions on \(K\), which represents the immediate (expected) reward for the player-I (whereas -\(r\) is the reward for the player-II). Let us consider player I as the maximiser and player II as the minimiser in the zero-sum two person stochastic game.
The Stochastic game over infinite time is played as follows. At the 0th decision epoch, the game starts at \(s_{0}\in S\) and the players I and II simultaneously and independently choose actions \(i_{0}\in A(s_{0})\) and \(j_{0}\in B(s_{0})\) respectively. Consequently player I and II get immediate rewards \(r(s_{0},i_{0},j_{0})\) and \(-r(s_{0},i_{0},j_{0})\) respectively and the game moves to the state \(s_{1}\) with probability \(q(s_{1}\mid s_{0},i_{0},j_{0})\). After reaching the state \(s_{1}\) on the next decision epoch, the game is repeated over infinite time with the state \(s_{0}\) replaced by \(s_{1}\). Shapley extended the idea of defining SGs where \(\sum_{s^{{}^{\prime}}\in S}q(s^{{}^{\prime}}\mid s,i,j)<1\) for all \((s,i,j)\in K\) and the play terminates with probability \(1-\sum_{s^{{}^{\prime}}\in S}q(s^{{}^{\prime}}\mid s,i,j)\). Such games are called 'stopping SGs'. The 'non-stopping SGs' are those where \(\sum_{s^{{}^{\prime}}\in S}q(s^{{}^{\prime}}\mid s,i,j)=1\) for all \((s,i,j)\in K\), i.e., the play never terminates.
By a strategy (behavioural) \(\pi_{1}\) of the player I, we mean a sequence \(\{(\pi_{1})_{n}(.\mid hist_{n})\}_{n=1}^{\infty}\), where \((\pi_{1})_{n}\) specifies which action is to be chosen on the \(n\)-th decision epoch by associating with each history \(hist_{n}\) of the system up to the \(n\)-th decision epoch (where \(hist_{n}=(s_{0},a_{0},b_{0},s_{1},a_{1},b_{1},\cdots,s_{n-1},a_{n-1},b_{n-1},s_{n})\) for \(n\geq 2\), \(hist_{1}=(s_{0})\) and \((s_{k},a_{k},b_{k})\in K\) are respectively the state and actions of the players at the \(k\)-th decision epoch) a probability distribution \((\pi_{1})_{n}(.\mid hist_{n})\) on \(A(s_{n})\). A behavioural strategy \(\pi_{2}\) for player II can be defined analogously.
Generally by any unspecified strategy, we mean behavioural strategy here. We denote \(\Pi_{1}\) and \(\Pi_{2}\) to be the sets of strategy (behavioural) spaces of the players I and II respectively. A strategy \(f^{{}^{\prime}}=\{f^{{}^{\prime}}_{n}\}_{n=1}^{\infty}\) for the player I is called semi-Markov if for each \(n\), \(f^{{}^{\prime}}_{n}\) depends on \(s_{1},s_{n}\) and the decision epoch number \(n\). Similarly we can define a semi-Markov strategy \(g^{{}^{\prime}}=\{g^{{}^{\prime}}_{n}\}_{n=1}^{\infty}\) for the player II.
A strategy \(\pi_{1}=\{\pi_{1n}\}_{n=1}^{\infty}\) is called a stationary strategy if \(\exists\) a map \(f:S\rightarrow\mathbb{P}(A)=\{\mathbb{P}(A(s)):s\in S\}\), where \(\mathbb{P}(A(s))\) is the set of probability distributions on \(A(s)\), such that \(\pi_{1n}=f\) for all \(n\) and \(f(s)\in\mathbb{P}(A(s))\). A stationary strategy for player I is defined as a \(z\)-tuple \(f=(f(1),f(2),\cdots,f(z))\), where each \(f(s)\) is the probability distribution on \(A(s)\) given by \(f(s)=(f(s,1),f(s,2),\cdots,f(s,m_{s}))\). \(f(s,i)\) denotes the probability of choosing action \(i\) in the state \(s\) by player-I. In a similar manner, one can define a stationary strategy \(g\) for player II as \(g=(g(1),g(2),\cdots,g(z))\) where each \(g(s)\) is the probability distribution on \(B(s)\). Let us denote \(F_{1}^{s}\) and \(F_{2}^{s}\) to be the set of stationary strategies for player I and II respectively. A semi-stationary strategy is a semi-Markov strategy which is independent of the decision epoch \(n\), i.e., for an initial state \(s_{1}\) and present state \(s_{2}\), if a semi-Markov strategy \(f^{{}^{\prime}}(s_{1},s_{2},n)\) turns out to be independent of \(n\), then we call it a semi-stationary strategy. Let us denote \(\xi_{1}\) and \(\xi_{2}\) to be the set of semi-stationary strategies for player-I and II respectively.
A stationary strategy is called pure if any player selects a particular action with probability 1 while visiting a state \(s\). We denote \(F_{1}^{sp}\) and \(F_{2}^{sp}\) to be the set of pure stationary strategies of the players I and II respectively. Also \(\xi_{1}^{sp}\) and \(\xi_{2}^{sp}\) are denoted as the set of pure semi-stationary strategies for the player-I and II respectively.
**Definition 1** A zero-sum two person SG \(\Gamma=<S,\{A(s):s\in S\},\{B(s):s\in S\},q,r>\) is called a perfect information stochastic game (PISG) if the following properties hold
(i)\(S=S_{1}\cup S_{2},S_{1}\cap S_{2}=\phi\).
(ii)\(\mid B(s)\mid=1\), for all \(s\in S_{1}\), i.e., on \(S_{1}\) player-II is a dummy.
(iii)\(\mid A(s)\mid=1\), for all \(s\in S_{2}\), i.e., on \(S_{2}\) player-I is a dummy.
### Undiscounted zero-sum two person stochastic games
Let \((X_{1},A_{1},B_{1},X_{2},A_{2},B_{2}\cdots)\) be a co-ordinate sequence in \(S\times(A\times B\times S)^{\infty}\). Given behavioural strategy pair \((\pi_{1},\pi_{2})\in\Pi_{1}\times\Pi_{2}\), initial state \(s\in S\), there exists a unique probability measure \(P_{\pi_{1}\pi_{2}}(\cdot\mid X_{0}=s)\) (hence an expectation \(E_{\pi_{1}\pi_{2}}(\cdot\mid X_{0}=s)\)) on the product \(\sigma\)- field of \(S\times(A\times B\times S)^{\infty}\) by Kolmogorov's extension theorem. For a pair of strategies \((\pi_{1},\pi_{2})\in\Pi_{1}\times\Pi_{2}\) for the players I and II respectively, the limiting average (undiscounted) pay-off for player I, starting from a state \(s\in S\) is defined by:
\[\phi(s,\pi_{1},\pi_{2})=\liminf_{n\rightarrow\infty}\frac{1}{n}E_{\pi_{1}\pi_{ 2}}\sum_{m=1}^{n}[r(X_{m},A_{m},B_{m})\mid X_{0}=s] \tag{2.1}\]
Alternatively, for any pair of stationary strategies \((f_{1},f_{2})\in F_{1}^{s}\times F_{2}^{s}\) of player I and II, we write the undiscounted pay-off for player I as:
\[\phi(s,f_{1},f_{2})=\liminf_{n\rightarrow\infty}\frac{1}{n}\sum_{m=1}^{n}r^{m }(s,f_{1},f_{2}) \tag{2.2}\]
for all \(s\in S\), where \(r^{m}(s,f_{1},f_{2})\) is the expected reward for player I at the \(m\)-th decision epoch when player I chooses \(f_{1}\), player II chooses \(f_{2}\) and the initial state is \(s\).
**Definition 2** For a pair of strategies \((f_{1},f_{2})\in F_{1}^{s}\times F_{2}^{s}\), we define the transition probability matrix by:
\[Q(f_{1},f_{2})=[q(s^{{}^{\prime}}\mid s,f_{1}(s),f_{2}(s))]_{s,s^{{}^{\prime}} =1}^{z},\]
where \(q(s^{{}^{\prime}}\mid s,f_{1}(s),f_{2}(s))=\sum_{i\in A(s)}\sum_{j\in B(s)}q(s^{{}^{\prime}}\mid s,i,j)f_{1}(s,i)f_{2}(s,j)\) is the probability that the system jumps to the state \(s^{{}^{\prime}}\) from the given state \(s\) when the players play the stationary strategies \(f_{1}\) and \(f_{2}\).
**Lemma 1** (Kemeny and Snell, 1976, [7]) Let \(Q\) be any \(z\times z\) Markov matrix. Then the sequence \(\frac{1}{n+1}\sum_{m=0}^{n}Q^{m}\) converges as \(n\to\infty\) to a Markov matrix \(Q^{*}\) (the Cesaro limiting matrix) such that \(QQ^{*}=Q^{*}Q=Q^{*}Q^{*}=Q^{*}\).
For each \((f_{1},f_{2})\in F_{1}\times F_{2}\), we define \(r(f_{1},f_{2})=[r(s,f_{1},f_{2})]_{z\times 1}\) as the expected reward, where for each \(s\in S\),
\[r(s,f_{1},f_{2})=\sum_{i\in A(s)}\sum_{j\in B(s)}r(s,i,j)f_{1}(s,i)f_{2}(s,j).\]
Now we have the following result:
**Proposition 1** For each pair of pure stationary strategies \((f_{1},f_{2})\in F_{1}^{sp}\times F_{2}^{sp}\),
\[\phi(s,f_{1},f_{2})=[Q^{*}(f_{1},f_{2})r(f_{1},f_{2})](s)\forall s\in S.\]
where \(Q^{*}(f_{1},f_{2})\) is the Cesaro limiting matrix of \(Q(f_{1},f_{2})\).
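A minimal numerical sketch of Proposition 1 (the transition matrix and reward vector below are hypothetical, standing in for \(Q(f_{1},f_{2})\) and \(r(f_{1},f_{2})\) of some fixed pure stationary pair): the Cesaro limiting matrix is approximated by averaging powers of \(Q\), and the undiscounted payoff is then a single matrix-vector product.

```python
# Sketch: approximate the Cesaro limiting matrix Q* and compute phi = Q* r.
import numpy as np

Q = np.array([[0.5, 0.5, 0.0],     # hypothetical Q(f1, f2)
              [1/3, 0.0, 2/3],
              [0.0, 0.5, 0.5]])
r = np.array([5.0, 1.0, 3.0])      # hypothetical r(f1, f2)

def cesaro_limit(Q, n_terms=10000):
    """(1/(n+1)) * sum_{m=0}^{n} Q^m, which converges to Q* by Lemma 1."""
    power = np.eye(Q.shape[0])
    total = power.copy()
    for _ in range(n_terms):
        power = power @ Q
        total += power
    return total / (n_terms + 1)

Q_star = cesaro_limit(Q)
phi = Q_star @ r                   # Proposition 1: phi(s, f1, f2) = [Q*(f1, f2) r(f1, f2)](s)
print(np.round(Q_star, 4), np.round(phi, 4))
# Properties from Lemma 1, up to the truncation error of the finite average:
assert np.allclose(Q @ Q_star, Q_star, atol=1e-2)
assert np.allclose(Q_star @ Q_star, Q_star, atol=1e-2)
```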
**Definition 3** A zero-sum two person undiscounted stochastic game is said to have a value vector \(\phi=[\phi(s)]_{N\times 1}\) if \(\sup_{\pi_{1}\in\Pi_{1}}\inf_{\pi_{2}\in\Pi_{2}}\phi(s,\pi_{1},\pi_{2})=\phi( s)=\inf_{\pi_{2}\in\Pi_{2}}\sup_{\pi_{1}\in\Pi_{1}}\phi(s,\pi_{1},\pi_{2})\) for all \(s\in S\). A pair of strategies \((\pi_{1}^{*},\pi_{2}^{*})\in\Pi_{1},\times\Pi_{2}\) is said to be an optimal strategy pair for the players if \(\phi(s,\pi_{1}^{*},\pi_{2})\geq\phi(s)\geq\phi(s,\pi_{1},\pi_{2}^{*})\) for all \(s\in S\) and all \((\pi_{1},\pi_{2})\in\Pi_{1}\times\Pi_{2}\). A finite (state and action spaces) Markov decision process is defined by a collection of four objects \(\hat{\Gamma}=<S,\hat{A}=\{A(s):s\in S\},\hat{q},\hat{r}>\), where \(S=\{0,1,\cdots,z\}\) is the finite state space, \(\hat{A}(s)=\{1,2,\cdots,d\}\) is the finite set of admissible actions in the state \(s\). \(\hat{q}(s^{{}^{\prime}}\mid s,a)\) is the transition probabilty (i.e., \(\hat{q}(s^{{}^{\prime}}\mid s,a)\geq 0\) and \(\sum_{s^{{}^{\prime}}\in S}\hat{q}(s^{{}^{\prime}}\mid s,a)=1\)) that the next state is \(s^{{}^{\prime}}\), where \(s\) is the initial state and the decision maker chooses action \(a\) in the state \(s\). The decision process proceeds over infinite time just as stochastic game, where instead of two players we consider a single decision maker. The definition of strategy spaces for the decision maker is same as in the case of stochastic games. Let us denote \(\Pi\), \(F\), \(F_{s}\) as the set of behavioural, stationary, pure-stationary strategies respectively of the decision maker. Let \((X_{1},A_{1},X_{2},A_{2},\cdots)\) be a coordinate sequence in \(S\times(\hat{A}\times S)^{\infty}\). Given a behavioural strategy \(\pi\in\Pi\), initial state \(s\in S\), there exists a unique probability measure \(P_{\pi}(.\mid X_{0}=s)\) (hence an expectation \(E_{\pi}(.\mid X_{0}=s)\)) on the product \(\sigma\)- field of \(S\times(\hat{A}\times S)^{\infty}\) by Kolmogorov's extension theorem.
For a behavioural strategy \(\pi\in\Pi\), the expected limiting average pay-off is defined by
\[\hat{\phi}(s,\pi)=\liminf_{n\to\infty}\frac{1}{n}\sum_{m=1}^{n}E_{\pi}[\hat{r}( X_{m},A_{m})\mid X_{0}=s]. \tag{2.3}\]
for all \(s\in S\).
## 3 Main result
**Theorem 2** Any zero-sum two person undiscounted perfect information Stochastic game has a solution in pure stationary strategies.
Proof.: Let \(\Gamma=<S=S_{1}\cup S_{2},A=\{A(s):s\in S_{1}\},B=\{B(s):s\in S_{2}\},q,r>\) be a zero-sum two person perfect information Stochastic game under limiting average pay-off, where \(S=\{0,1,\cdots,z\}\) is the finite state space. We assume that player-II is a dummy in the first \(\mid S_{1}\mid\) states and that player-I is a dummy in the states \(\{\mid S_{1}\mid+1,\cdots,\mid S_{1}\mid+\mid S_{2}\mid\}\). We assume that in this perfect information game, each player has \(d\) pure actions in each state where he/she is non-dummy. Thus, player-I has \(\mid S_{1}\mid.d\) pure actions in total over the states where he/she is non-dummy, and player-II has \(\mid S_{2}\mid.d\) pure actions in total over the states where he/she is non-dummy in the PISG \(\Gamma\). Let us consider the pay-off matrix
\[A_{\mid S_{1}\mid.d\times\mid S_{2}\mid.d}=\left[\begin{array}{cccc}\phi(s,f_ {0},g_{0})&\phi(s,f_{0},g_{1})&\cdots&\phi(s,f_{0},g_{\mid S_{2}\mid.d})\\ \phi(s,f_{1},g_{0})&\phi(s,f_{1},g_{1})&\cdots&\phi(s,f_{1},g_{\mid S_{2}\mid.d })\\ \vdots&\vdots&\ddots&\vdots\\ \phi(s,f_{\mid S_{1}\mid.d},g_{0})&\phi(s,f_{\mid S_{1}\mid.d},g_{1})&\cdots& \phi(s,f_{\mid S_{1}\mid.d},g_{\mid S_{2}\mid.d})\end{array}\right]\]
where \((f_{0},f_{1},\cdots,f_{\mid S_{1}\mid.d})\) and \((g_{0},g_{1},\cdots,g_{\mid S_{2}\mid.d})\) are the pure stationary strategies chosen by player-I and II respectively. In order to prove the existence of a pure semi-stationary strategy, we have to prove that this matrix has a pure saddle point for each initial state \(s\in S\). Now by Shapley [2], if \(A\) is the matrix of a two-person zero-sum game and if every \(2\times 2\) submatrix of \(A\) has a saddle point, then \(A\) has a saddle point. So, we concentrate only on a \(2\times 2\) submatrix and check whether it has a saddle point or not. We consider the \(2\times 2\) submatrix:
\[\left[\begin{array}{cc}\phi(s,f_{i},g_{j})&\phi(s,f_{i},g_{j^{{}^{\prime}}}) \\ \phi(s,f_{i^{{}^{\prime}}},g_{j})&\phi(s,f_{i^{{}^{\prime}}},g_{j^{{}^{\prime}}} )\end{array}\right]\]
Where \(i^{{}^{\prime}},i\in\{0,1,\cdots,\mid S_{1}\mid.d\},(i\neq i^{{}^{\prime}})\) and \(j,j^{{}^{\prime}}\in\{0,1,\cdots,\mid S_{2}\mid.d\},(j\neq j^{{}^{\prime}})\). Now, by suitably renumbering the strategies, we can write the above sub-matrix as:
\[\left[\begin{array}{cc}\phi(s,f_{1},g_{1})&\phi(s,f_{1},g_{2})\\ \phi(s,f_{2},g_{1})&\phi(s,f_{2},g_{2})\end{array}\right]\]
Using the definition of \(\phi(s,f_{1},f_{2})\) in section 2, we get that
\[\begin{array}{c}\phi(s,f_{i},g_{j})=\sum_{s^{{}^{\prime}}\in S}q^{*}(s^{{}^ {\prime}}\mid s,f_{i},g_{j})r(s^{{}^{\prime}},f_{i},g_{j})\\ =\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{i}.)r(t,f_{i}.)]+\sum_{v=S_{1}+1}^{S_{1}+ S_{2}}[q^{*}(v\mid s,g_{j})r(v,g_{j})]\end{array}\]
Where
\[f_{i}(s,.)=\left\{\begin{array}{ll}f_{i}.(s,.)&s\in S_{1}\\ 1&s\in S_{2}\end{array}\right.\]
and
\[g_{j}(s,.)=\left\{\begin{array}{ll}1&s\in S_{1}\\ g_{j}(s,.)&s\in S_{2}\end{array}\right.\]
We replace \(\phi(s,f_{i},g_{j})\) by the expression above in the matrix \(A\). The \(2\times 2\) submatrix fails to have a pure saddle point only in the following two cases.
Case-1: \(\phi(s,f_{1},g_{1})\) is row minimum and column minimum, \(\phi(s,f_{1},g_{2})\) is row maximum and column maximum, \(\phi(s,f_{2},g_{1})\) is row-maximum and column maximum and \(\phi(s,f_{2},g_{2})\) is row-minimum and column-minimum. These four conditions can be written as: \(\phi(s,f_{1},g_{1})<\phi(s,f_{1},g_{2})\), \(\phi(s,f_{1},g_{1})<\phi(s,f_{2},g_{1})\), \(\phi(s,f_{2},g_{2})<\phi(s,f_{2},g_{1})\), \(\phi(s,f_{2},g_{2})<\phi(s,f_{1},g_{2})\). Thus we get the following inequalities:
\[\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1}+1}^{S_{1}+S_ {2}}[q^{*}(v\mid s,g_{1.})r(v,g_{1.})] \tag{3.1}\] \[<\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1 }+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{2.})r(v,g_{2.})]\] \[\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1 }+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{1.})r(v,g_{1.})]\] (3.2) \[<\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{1 }+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{1.})r(v,g_{1.})]\] \[\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{1 }+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{2.})r(v,g_{2.})]\] (3.3) \[<\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{1 }+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{1.})r(v,g_{1.})]\] \[\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{1 }+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{2.})r(v,g_{2.})]\] (3.4) \[<\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1 }+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{2.})r(v,g_{2.})]\]
Hence, (3.1) yields
\[\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{*}(v\mid s,g_{2})r(v,g_{2.})-q^{*}(v\mid s,g_ {1.})r(v,g_{1.}){>}0 \tag{3.5}\]
(3.3) yields
\[\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{*}(v\mid s,g_{1.})r(v,g_{1.})-q^{*}(v\mid s,g_ {2.})r(v,g_{2.}){>}0 \tag{3.6}\]
From (3.5) and (3.6) we clearly get a contradiction. Now we consider the next case:
Case-2: \(\phi(s,f_{1},g_{1})\) is row maximum and column maximum, \(\phi(s,f_{1},g_{2})\) is row minimum and column minimum, \(\phi(s,f_{2},g_{1})\) is row-minimum and column minimum and \(\phi(s,f_{2},g_{2})\) is row-maximum and column-maximum. These four conditions can be written as: \(\phi(s,f_{1},g_{1})>\phi(s,f_{1},g_{2})\), \(\phi(s,f_{1},g_{1})>\phi(s,f_{2},g_{1})\)
\(\phi(s,f_{2},g_{2})>\phi(s,f_{2},g_{1})\), \(\phi(s,f_{2},g_{2})>\phi(s,f_{1},g_{2})\). We can re-write them as follows:
\[\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1}+1 }^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{1.})r(v,g_{1.})] \tag{3.7}\] \[>\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1 }+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{2.})r(v,g_{2.})]\] \[\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{1 }+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{1.})r(v,g_{1.})]\] (3.8) \[>\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{ 1}+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{1.})r(v,g_{1.})]\] \[\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{ 1}+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{2.})r(v,g_{2.})]\] (3.9) \[>\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{ 1}+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{1.})r(v,g_{1.})]\] \[\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{2.})r(t,f_{2.})]+\sum_{v=S_{ 1}+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{2.})r(v,g_{2.})]\] (3.10) \[>\sum_{t=1}^{S_{1}}[q^{*}(t\mid s,f_{1.})r(t,f_{1.})]+\sum_{v=S_{ 1}+1}^{S_{1}+S_{2}}[q^{*}(v\mid s,g_{2.})r(v,g_{2.})].\]
Hence, (3.7) yields
\[\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{*}(v\mid s,g_{1.})r(v,g_{1.})-q^{*}(v\mid s, g_{2.})r(v,g_{2.}){>}0 \tag{3.11}\]
(3.9) yields
\[\sum_{v=S_{1}+1}^{S_{1}+S_{2}}q^{*}(v\mid s,g_{2.})r(v,g_{2.})-q^{*}(v\mid s, g_{1.})r(v,g_{1.}){>}0 \tag{3.12}\]
From (3.11) and (3.12) we clearly get a contradiction. Thus, every \(2\times 2\) submatrix has a pure saddle point and by Shapley [2], we claim that the matrix \(A\) has a pure saddle point, namely \((F^{*},G^{*})\). Now \(F^{*}=(f_{0},f_{1},\cdots,f_{t},\cdots,f_{z})\) and \(G^{*}=(g_{0},g_{1},\cdots,g_{t},\cdots,g_{z})\) where \(f_{t}\) and \(g_{t}\) are the pure stationary strategies for the initial state \(t\) chosen by player-I and II respectively. Now we prove the following lemma to prove the existence of pure stationary strategy pair which is optimal for the players:
Lemma 3.1: Let us fix an initial state \(t\in S\) in the PISG \(\Gamma\). Suppose \((f_{t},g_{t})\in F_{1}^{sp}\times F_{2}^{sp}\) be an optimal pure stationary strategy pair of the players satisfying:
\[\phi(t,f_{t},g_{t})\leq\phi(t,f_{t},g)\ \forall g\in F_{2}^{sp}\ \mbox{and for some initial state $t\in S$.}\]
Let us denote \(D_{t}\) to be the \(t\)-th row of the bi-matrix identifying the strategy pair \((f_{t},g_{t})\), i.e., \(D_{t}=((f_{t}(t,0),g_{t}(t,0)),\cdots,(f_{t}(t,d),g_{t}(t,d)))\), where \(d\) is the total number of pure actions in state \(t\) for both the players. Then \((f^{*},g^{*})\in F_{1}^{sp}\times F_{2}^{sp}\) is a pure stationary strategy pair of the players identified by the bi-matrix \(D^{*}\) having \(D_{t}\) as its \(t\)-th row. We can write the bi-matrix \(D^{*}\) as:
\[D^{*}_{(z+1)\times d}=\left[\begin{array}{cccc}(f_{0}(0,0),g_{0}(0,0))&(f_{ 0}(0,1),g_{0}(0,1))&\cdots&(f_{0}(0,d),g_{0}(0,d))\\ (f_{1}(1,0),g_{1}(1,0))&(f_{1}(1,1),g_{1}(1,1))&\cdots&(f_{1}(1,d),g_{1}(1,d) )\\ \vdots&\vdots&\ddots&\vdots\\ (f_{z}(z,0),g_{z}(z,0))&(f_{z}(z,1),g_{z}(z,1))&\cdots&(f_{z}(z,d),g_{z}(z,d) )\end{array}\right]\]
and the pair \((f^{*},g^{*})\) satisfies:
\[\phi(t,f^{*},g^{*})\leq\phi(t,f^{*},g)\forall g\in F_{2}^{sp},\forall t\in S. \tag{3.13}\]
Proof.: For an initial state \(t\in S(=\{0,1,\cdots,z\})\) and a pair of behavioural strategy \((\pi_{1},\pi_{2})\in\Pi_{1}\times\Pi_{2}\) of the players, we consider the \((z+1)d^{2}\) component vector:
\[\xi_{n}^{\pi_{1}\pi_{2}}=\{x_{n000}^{t},x_{n001}^{t},\cdots,x_{ns^{\prime}ab}^ {t},\cdots,x_{nzd^{2}}^{t}\}\]
where \(x_{ns^{\prime}ab}^{t}=\frac{1}{n}\sum_{m=1}^{n}P_{\pi_{1}\pi_{2}}(X_{m}=s^{{}^ {\prime}},A_{m}=a,B_{m}=b\mid X_{0}=t)\). Let \(\xi^{\pi_{1}\pi_{2}}(t)=\lim_{n\to\infty}\xi_{n}^{\pi_{1}\pi_{2}}(t)\), whenever the limit exists and \(\lim_{n\to\infty}x_{ns^{\prime}ab}^{t}=x_{s^{\prime}ab}^{t}\). Denote \(\Theta(\xi_{n}^{\pi_{1}\pi_{2}}(t))=\sum_{s^{{}^{\prime}}\in S}\sum_{a\in A(s ^{{}^{\prime}})}\sum_{b\in B(s^{{}^{\prime}})}x_{ns^{{}^{\prime}}ab}^{t}.r(s^ {{}^{\prime}},a,b)\). Then
\[\phi(t,\pi_{1},\pi_{2})=\liminf_{n\to\infty}\sum_{s^{{}^{\prime}} \in S}\sum_{a\in A(s^{{}^{\prime}})}\sum_{b\in B(s^{{}^{\prime}})}x_{ns^{{}^{ \prime}}ab}^{t}.r(s^{{}^{\prime}},a,b) = \liminf_{n\to\infty}[\xi_{n}^{\pi_{1}\pi_{2}}(t)].\bar{r}. \tag{3.14}\] \[= \liminf_{n\to\infty}\Theta(\xi_{n}^{\pi_{1}\pi_{2}})(t)\]
where \(\bar{r}\) is the reward vector of order \((z+1)d^{2}\). Define \(\Theta(\xi^{fg}(t))=\lim_{n\to\infty}\Theta(\xi_{n}^{fg}(t))\), assuming that the limit exists for every pure stationary strategy pair \((f,g)\in F_{1}^{sp}\times F_{2}^{sp}\).
Let \(p(s^{{}^{\prime}}\mid t,f^{*},g^{*})\) be the transition probability from the state \(t\) to \(s^{{}^{\prime}}\) defined for the strategy pair \((f^{*},g^{*})\). As this is a stochastic game, we can apply the Markov property that for any two states \(x,y\in S\) and \(m,n\in\mathbb{N}\),
\[p^{n+m}(x,y) = P(X_{n+m}=y\mid X_{0}=x) \tag{3.15}\] \[= \sum_{z\in S}P(X_{n}=z\mid X_{0}=x)P(X_{n+m}=y\mid X_{0}=x,X_{n}=z)\] \[= \sum_{z\in S}p^{n}(x,z)P(X_{n+m}=y\mid X_{0}=x,X_{n}=z)\] \[= \sum_{z\in S}p^{n}(x,z)p^{m}(z,y)\]
where \(p^{m}(z,y)\) is the \(m\)-th step transition probability from the state \(z\) to \(y\). Now using the above property and using the definition of \(\xi^{f_{t}g_{t}}(t)\) we have
\[\Theta(\xi^{f_{t}g_{t}}(t)) = \Theta(\sum_{s^{{}^{\prime}}\in S}p(s^{{}^{\prime}}\mid t,f^{*},g^{*})\xi^{f_{t}g_{t}}(s^{{}^{\prime}})) \tag{3.16}\] \[= \sum_{s^{{}^{\prime}}\in S}p(s^{{}^{\prime}}\mid t,f^{*},g^{*})\Theta(\xi^{f_{t}g_{t}}(s^{{}^{\prime}}))\ [\mbox{as $\Theta$ is a continuous function}]\]
Now, as \((f_{t},g_{t})\) is an optimal pure stationary strategy pair for the players when the initial state is t, we can write (3.16) as
\[\Theta(\xi^{f_{t}g_{t}}(t))=\sum_{s^{{}^{\prime}}\in S}p(s^{{}^{\prime}}\mid t,f^{ *},g^{*})\Theta(\xi^{f_{s^{{}^{\prime}}}g_{s^{{}^{\prime}}}}(s^{{}^{\prime}})) \tag{3.17}\]
Now iterating (3.17) \(l\) times we get
\[\Theta(\xi^{f_{t}g_{t}}(t)) = \sum_{s^{{}^{\prime}}\in S}p^{l}(s^{{}^{\prime}}\mid t,f^{*},g^{* })\Theta(\xi^{f_{s^{{}^{\prime}}}g_{s^{{}^{\prime}}}}(s^{{}^{\prime}})) \tag{3.18}\]
Expanding the right hand side of the above expression, we obtain:
\[p^{l}(0\mid t,f^{*},g^{*})[\sum_{s\in S}\sum_{a\in A(s)}\sum_{b\in B(s)}r_{0}( s,a,b)x_{sab}^{0}]+\cdots+p^{l}(z\mid t,f^{*},g^{*})[\sum_{s\in S}\sum_{a\in A(s)} \sum_{b\in B(s)}r_{z}(s,a,b)x_{sab}^{z}] \tag{3.19}\]
Let \(r^{{}^{\prime}}(s,a,b)=r_{0}(s,a,b)+r_{1}(s,a,b)+\cdots+r_{z}(s,a,b)\). Then we can write (3.19) as:
\[\Theta(\xi^{f_{t}g_{t}}(t)) = \sum_{s\in S}\sum_{a\in A(s)}\sum_{b\in B(s)}r^{{}^{\prime}}(s,a, b).x_{sab}^{t} \tag{3.20}\] \[= \Theta(\xi^{f^{*}g^{*}}(t))\]
Thus from (3.20) and (3.13) we get that
\[\phi(t,f^{*},g^{*})\leq\phi(t,f^{*},g)\forall t\in S\mbox{ and }\forall g\in F _{2}^{sp}.\]
In a similar manner we can show that \(\phi(t,f^{*},g^{*})\geq\phi(t,f,g^{*})\forall t\in S\) and \(\forall f\in F_{1}^{sp}\). Thus the pair \((f^{*},g^{*})\) is the optimal pure stationary strategy pair of the players in the PISG \(\Gamma\).
## 4 Algorithm to solve a zero-sum two person perfect information stochastic game
Let \(\Gamma\) be a zero-sum two person perfect information stochastic game. We consider the following policy-improvement algorithm to compute optimal stationary strategy of the players. This is a best response algorithm, in which each player looks for his own Blackwell optimal strategy. The algorithm is stated below:
**Step 1:** Choose a random pure strategy for player-II \(g_{0}\) and set \(k=0\).
**Step 2:** Find the Blackwell optimal strategy \(f_{k}\) for player-I in the MDP \(\Gamma(g_{k})\).
**Step 3:** **if** \(g_{k}\) is a Blackwell optimal strategy for player-II in \(\Gamma(f_{k})\), set \((f^{*},g^{*})=(f_{k},g_{k})\) and stop.
**Step 4:** **else** find the Blackwell optimal strategy \(g_{k+1}\) for player-II in the MDP \(\Gamma(f_{k})\), set \(k=k+1\) and go to Step 2. It is obvious that this is a finite step algorithm and it terminates in finite time by Conjecture 8.1 of Raghavan and Syed (2002) [11]. The process of finding a Blackwell optimal strategy for an undiscounted MDP was proposed by Hordijk et al. (1985) [6]. It consists of a linear programming problem with several parameters as given
below:
\[\max\sum_{s=1}^{z}\sum_{a\in A(s)}r(s,a)w_{sa}\]
subject to:
\[\begin{array}{c}\sum_{s=1}^{z}\sum_{a\in A(s)}(\delta(s,s^{{}^{\prime}})-q(s^{{}^{\prime}}\mid s,a))w_{sa}=0,\,s^{{}^{\prime}}\in S\\ \sum_{a\in A(s^{{}^{\prime}})}w_{s^{{}^{\prime}}a}+\sum_{s=1}^{z}\sum_{a\in A(s)}(\delta(s,s^{{}^{\prime}})-q(s^{{}^{\prime}}\mid s,a))y_{sa}=\beta_{s^{{}^{\prime}}},\,s^{{}^{\prime}}\in S\\ w_{sa}\geq 0,\ y_{sa}\geq 0\end{array}\]
where \(\beta_{s}>0\) are given numbers for each \(s\in S\), such that \(\sum_{s\in S}\beta_{s}=1\). The Blackwell optimal pure stationary strategy is computed as:
\[f^{*}(s,a)=\frac{w_{sa}^{*}}{\sum_{a^{{}^{\prime}}\in A(s)}w_{sa^{{}^{\prime}}}^{*}}\]
where \(w_{sa}^{*}\) is the optimal solution of the above LP. By Hordijk et al. [6], this pure stationary strategy is average optimal as well. We illustrate the above algorithm with the following examples.
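As a rough illustration (not the authors' implementation), the LP above can be handed to an off-the-shelf solver. The sketch below uses `scipy.optimize.linprog` on a small hypothetical MDP and recovers a pure strategy from \(w^{*}\); in the Step 1-4 loop, each player would solve such an LP for the MDP obtained by fixing the opponent's current strategy.

```python
# Sketch of the Hordijk-et-al.-style average-reward LP (hypothetical MDP data).
import numpy as np
from scipy.optimize import linprog

# Hypothetical MDP: 2 states, 2 actions per state.
r = np.array([[1.0, 4.0],                     # r[s][a]
              [2.0, 0.5]])
q = np.array([[[0.5, 0.5], [0.0, 1.0]],       # q[s][a] = distribution over next states
              [[1.0, 0.0], [0.5, 0.5]]])
S, A = r.shape
beta = np.full(S, 1.0 / S)
n = S * A                                     # variables: (w_{sa}) then (y_{sa})

A_eq = np.zeros((2 * S, 2 * n))
b_eq = np.zeros(2 * S)
for sp in range(S):
    for s in range(S):
        for a in range(A):
            coef = (1.0 if s == sp else 0.0) - q[s, a, sp]
            A_eq[sp, s * A + a] = coef               # sum (delta - q) w = 0
            A_eq[S + sp, n + s * A + a] = coef       # ... + sum (delta - q) y = beta
    for a in range(A):
        A_eq[S + sp, sp * A + a] += 1.0              # sum_a w_{sp,a}
    b_eq[S + sp] = beta[sp]

c = np.concatenate([-r.reshape(-1), np.zeros(n)])    # linprog minimizes, so negate rewards
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (2 * n))
assert res.success
w = res.x[:n].reshape(S, A)
# f*(s, a) = w*_{sa} / sum_a w*_{sa}; at an optimal basic solution this is pure.
# (States with sum_a w*_{sa} = 0 would have to be read off from the y part; not handled here.)
f_star = {s: int(np.argmax(w[s])) for s in range(S)}
print("optimal average reward:", -res.fun, "pure strategy:", f_star)
# For this toy MDP the optimum is the cycle 0 -> 1 -> 0 with average reward 3.
```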
## 5 Numerical examples
**Example 1**: Consider a PISG \(\Gamma\) with three states \(S=\{1,2,3\}\), \(A(1)=\{1,2\}=A(2)\), \(A(3)=\{1\}\), \(B(1)=\{1\}=B(2)\) and \(B(3)=\{1,2,3\}\). In this example, player-I is a dummy for state 3 and player-II is a dummy for states 1 and 2. Rewards and transition probabilities for the players are given below
\[\mbox{State-1: }\begin{array}{c}5\\ (\frac{1}{2},\,\frac{1}{2},\,0)\\ 7\end{array}\qquad\mbox{State-2: }\begin{array}{c}1\\ (\frac{1}{3},0,\,\frac{2}{3})\\ 0.5\\ (0,0,1)\end{array}\qquad\mbox{State-3: }\begin{array}{c}3\\ (0,\,\frac{1}{2},\frac{1}{2})\\ 4\\ (1,0,0)\end{array}\]
where a cell \(\begin{array}{c}r\\ (q_{1},\,q_{2},\,q_{3})\end{array}\) represents that \(r\) is the immediate reward and \((q_{1},q_{2},q_{3})\) are the transition probabilities that the next states are 1, 2 and 3 respectively if this cell is chosen at the present state. The pure strategies for player-I are: \(f_{0}=\{(1,0),(1,0),1\}\), \(f_{1}=\{(1,0),(0,1),1\}\), \(f_{2}=\{(0,1),(1,0),1\}\), \(f_{3}=\{(0,1),(0,1),1\}\). The pure strategies of player-II are: \(g_{0}=\{1,1,(1,0,0)\}\), \(g_{1}=\{1,1,(0,1,0)\}\), \(g_{2}=\{1,1,(0,0,1)\}\). First set \(k=0\) and fix the strategy \(g_{0}\) of player-II in \(\Gamma\). Thus we get the reduced MDP \(\Gamma(g_{0})\) given below:
\[\mbox{State-1: }\begin{array}{c}5\\ (\frac{1}{2},\,\frac{1}{2},\,0)\end{array}\qquad\mbox{State-2: }\begin{array}{c}1\\ (\frac{1}{3},0,\,\frac{2}{3})\end{array}\qquad\mbox{State-3: }\begin{array}{c}3\\ (0,\,\frac{1}{2},\,\frac{1}{2})\end{array}\]
Now we formulate the following linear programming problem in the variables \(x=(x_{11},x_{12},x_{21},x_{22},x_{31})\) and \(y=(y_{11},y_{12},y_{21},y_{22},y_{31})\) to obtain player-I's Blackwell optimal strategy:
\[\max R=5x_{11}+7x_{12}+x_{21}+0.5x_{22}+3x_{31}\]
subject to
\[3x_{11}+6x_{12}-2x_{21} =0 \tag{5.1}\] \[-3x_{11}-6x_{12}+6x_{21}+6x_{22}-3x_{31} =0\] (5.2) \[-8x_{21}-12x_{22}+6x_{31} =0\] (5.3) \[6x_{11}+6x_{12}+3y_{11}+6y_{12}-2y_{21} =6\beta_{1}\] (5.4) \[2x_{21}+2x_{22}-y_{11}-2y_{12}+2y_{21}+2.y_{22}-y_{31} =2\beta_{2}\] (5.5) \[12x_{31}-8y_{21}-12y_{22}+6y_{31} =12\beta_{3}\] (5.6) \[x,y\geq 0. \tag{5.7}\]
We fix \(\beta_{1}=\beta_{2}=\beta_{3}=\frac{1}{3}\). The solution of the above linear programming problem by the dual-simplex method is given below:
\(\max R=2.778\), \(x=(0.222,0,0.333,0,0.444)\), \(y=(0,0.111,0,0.111,0)\).
Now by the method for computing an optimal pure stationary strategy described in Section 4, we get that \(f_{0}=\{(1,0),(1,0),1\}\) is the optimal pure stationary strategy for player-I in \(\Gamma(g_{0})\). Now we fix this strategy for player-I. Thus we get a resultant MDP as follows:
\[\mbox{State-1: }\begin{array}{c}5\\ (\frac{1}{2},\,\frac{1}{2},\,0)\end{array}\qquad\mbox{State-2: }\begin{array}{c}1\\ (\frac{1}{3},0,\,\frac{2}{3})\end{array}\qquad\mbox{State-3: }\begin{array}{c}4\\ (1,0,0)\end{array}\]

We formulate the linear programming problem of the above MDP for the variables \(x=(x_{11},x_{21},x_{31},x_{32},x_{33})\) and \(y=(y_{11},y_{21},y_{31},y_{32},y_{33})\) as follows:
\[\min R^{{}^{\prime}}=5x_{11}+x_{21}+3x_{31}+4x_{32}+2x_{33}\]
subject to
\[3x_{11}-2x_{21}-6x_{32}-3x_{33} =0 \tag{5.8}\] \[-2x_{11}+4x_{21}-2x_{31}-x_{33} =0\] (5.9) \[-8x_{21}+6x_{31}+12x_{32}+9x_{33} =0\] (5.10) \[6x_{11}+3y_{11}-2y_{21}-6y_{32}-3y_{33} =6\beta_{1}\] (5.11) \[4x_{21}-2y_{11}+4y_{21}-2y_{31}-y_{33} =4\beta_{2}\] (5.12) \[12x_{31}+12x_{32}+12x_{33}-8y_{21}+6y_{31}+12y_{32}+9y_{33} =12\beta_{3}\] (5.13) \[x,y\geq 0. \tag{5.14}\]
The solution of the above LP by dual-simplex method is given below:
\(\min R^{{}^{\prime}}=2.778\), \(x=(0.222,0.333,0.444,0,0)\), \(y=(0.333,0.1667,0,0,0)\). So by the same method described in Section 4, we compute the optimal pure stationary strategy for player-II as \(g_{0}=\{1,1,(1,0,0)\}\). Thus the algorithm stops at this step and we get the optimal pure (limiting average) stationary strategy pair \((F^{*},G^{*})=(f_{0},g_{0})\).
**Example 2:** Consider a PISG \(\Gamma\) with four states \(S=\{1,2,3,4\}\), \(A(1)=\{1,2\}=A(2)\), \(A(3)=\{1\}=A(4)\), \(B(1)=\{1\}=B(2)\) and \(B(3)=\{1,2\}=B(4)\). In this example, player-I is a dummy for states 3 and 4 and player-II is a dummy for states 1 and 2. Rewards and transition probabilities for the players are given below
\[\mbox{State-1: }\begin{array}{c}2\\ (\frac{1}{2},\,\frac{1}{2},\,0,0)\end{array}\qquad\mbox{State-2: }\begin{array}{c}1\\ (\frac{1}{3},0,\,\frac{2}{3},0)\end{array}\qquad\mbox{State-3: }\begin{array}{c}5\\ (0,0,\frac{1}{2},\frac{1}{2})\\ 0\\ (0,0,1,0)\end{array}\qquad\mbox{State-4: }\begin{array}{c}11\\ (\frac{1}{2},\,0,\frac{1}{2},\,0)\\ 12\\ (1,0,0,0)\end{array}\]
where a cell \(\begin{array}{c}r\\ (q_{1},\,q_{2},\,q_{3},\,q_{4})\end{array}\) represents that \(r\) is the immediate reward and \((q_{1},q_{2},q_{3},q_{4})\) are the transition probabilities that the next states are 1, 2, 3 and 4 respectively if this cell is chosen at the present state. The pure strategies for player-I are: \(f_{0}=\{(1,0),(1,0),1,1\}\), \(f_{1}=\{(1,0),(0,1),1,1\}\), \(f_{2}=\{(0,1),(1,0),1,1\}\), \(f_{3}=\{(0,1),(0,1),1,1\}\). The pure strategies of player-II are: \(g_{0}=\{1,1,(1,0),(1,0)\}\), \(g_{1}=\{1,1,(1,0),(0,1)\}\), \(g_{2}=\{1,1,(0,1),(1,0)\}\) and \(g_{3}=\{1,1,(0,1),(0,1)\}\). First set \(k=0\) and fix the strategy \(g_{0}\) of player-II in \(\Gamma\). Thus we get the reduced MDP \(\Gamma(g_{0})\) given below:
\[\mbox{State-1: }\begin{array}{c}2\\ (\frac{1}{2},\,\frac{1}{2},\,0,0)\end{array}\qquad\mbox{State-2: }\begin{array}{c}1\\ (\frac{1}{3},0,\,\frac{2}{3},0)\end{array}\qquad\mbox{State-3: }\begin{array}{c}5\\ (0,0,\frac{1}{2},\frac{1}{2})\end{array}\qquad\mbox{State-4: }\begin{array}{c}11\\ (\frac{1}{2},\,0,\frac{1}{2},\,0)\end{array}\]
Now we formulate the following linear programming problem in the variables \(x=(x_{11},x_{12},x_{21},x_{22},x_{31},x_{41})\) and \(y=(y_{11},y_{12},y_{21},y_{22},y_{31},y_{41})\) to obtain player-I's Blackwell optimal strategy:
\[\max R=2x_{11}+3x_{12}+0.5x_{22}+5x_{31}+11x_{41}\]
subject to
\[3x_{11}+6x_{12}-2x_{21}-3x_{41}=0 \tag{5.15}\] \[-3x_{11}-6x_{12}+6x_{21}+6x_{22}=0\] (5.16) \[-4x_{21}-6x_{22}+3x_{31}-3x_{41}=0\] (5.17) \[-3x_{31}+6x_{41}=0\] (5.18) \[6x_{11}+6x_{12}+3y_{11}+6y_{12}-2y_{21}-3y_{41}=6\beta_{1}\] (5.19) \[2x_{21}+2x_{22}-3y_{11}-6y_{12}+6y_{21}+6y_{22}=2\beta_{2}\] (5.20) \[6x_{31}-4y_{21}-6y_{22}+3y_{31}-3y_{41}=12\beta_{3}\] (5.21) \[2x_{41}-y_{31}+2y_{41}=2\beta_{4}\] (5.22) \[x,y\geq 0. \tag{5.23}\]
We fix \(\beta_{1}=\beta_{2}=\beta_{3}=\beta_{4}=\frac{1}{4}\). The solution of the above linear programming problem by the dual-simplex method is given below:
\(\max R=5.6875\), \(x=(0,0.1250,0,0.1250,0.5000,0.2500)\), \(y=(0,0.1250,0,0.2500,0,0)\).
Now by the method for computing an optimal pure stationary strategy described in Section 4, we get that \(f_{3}=\{(0,1),(0,1),1,1\}\) is the optimal pure stationary strategy for player-I in \(\Gamma(g_{0})\). Now we fix this strategy for player-I. Thus we get a resultant MDP as follows:
\[\mbox{State-1: }\begin{array}{c}3\\ (0,1,0,0)\end{array}\qquad\mbox{State-2: }\begin{array}{c}0.5\\ (0,0,1,0)\end{array}\qquad\mbox{State-3: }\begin{array}{c}5\\ (0,0,\frac{1}{2},\frac{1}{2})\\ 0\\ (0,0,1,0)\end{array}\qquad\mbox{State-4: }\begin{array}{c}11\\ (\frac{1}{2},0,\frac{1}{2},0)\\ 12\\ (1,0,0,0)\\ 10\\ (1,0,0,0)\end{array}\]
Now we formulate the following linear programming problem in the variables \(x=(x_{11},x_{21},x_{31},x_{32},x_{41},x_{42})\) and \(y=(y_{11},y_{21},y_{31},y_{32},y_{41},y_{42})\) to obtain player-II's Blackwell optimal strategy:
\[\min R=2x_{11}+3x_{21}+0.5x_{31}+5x_{41}+11x_{42}\]
subject to
\[2x_{11}-x_{41}-2x_{42}=0 \tag{5.24}\] \[-x_{11}+x_{21}=0\] (5.25) \[-2x_{21}+x_{31}-x_{41}=0\] (5.26) \[-x_{31}+2x_{41}+2x_{42}=0\] (5.27) \[2x_{11}+2y_{11}-y_{41}-2y_{42}=2\beta_{1}\] (5.28) \[x_{21}-y_{11}+y_{21}=\beta_{2}\] (5.29) \[2x_{31}+2x_{32}-2y_{21}+y_{31}-y_{41}=2\beta_{3}\] (5.30) \[x_{41}+x_{42}-y_{31}+2y_{41}+y_{42}=2\beta_{4}\] (5.31) \[x,y\geq 0. \tag{5.32}\]
The solution of the above LP by dual-simplex method is given below:
\(\min R^{{}^{\prime}}=2.778\), \(x=(0.1250,0.1250,0.5,0,0.25,0)\), \(y=(0.1250,0.2500,0,0,0,0)\). So by the same method described in Section 4, we compute the optimal pure stationary strategy for player-II as \(g_{0}=\{1,1,(1,0),(1,0)\}\). Thus the algorithm stops at this step and we get the optimal pure (limiting average) stationary strategy pair \((F^{*},G^{*})=(f_{3},g_{0})\).
|
2308.12261
|
Prompt2Model: Generating Deployable Models from Natural Language
Instructions
|
Large language models (LLMs) enable system builders today to create competent
NLP systems through prompting, where they only need to describe the task in
natural language and provide a few examples. However, in other ways, LLMs are a
step backward from traditional special-purpose NLP models; they require
extensive computational resources for deployment and can be gated behind APIs.
In this paper, we propose Prompt2Model, a general-purpose method that takes a
natural language task description like the prompts provided to LLMs, and uses
it to train a special-purpose model that is conducive to deployment. This is
done through a multi-step process of retrieval of existing datasets and
pretrained models, dataset generation using LLMs, and supervised fine-tuning on
these retrieved and generated datasets. Over three tasks, we demonstrate that
given the same few-shot prompt as input, Prompt2Model trains models that
outperform the results of a strong LLM, gpt-3.5-turbo, by an average of 20%
while being up to 700 times smaller. We also show that this data can be used to
obtain reliable performance estimates of model performance, enabling model
developers to assess model reliability before deployment. Prompt2Model is
available open-source at https://github.com/neulab/prompt2model.
|
Vijay Viswanathan, Chenyang Zhao, Amanda Bertsch, Tongshuang Wu, Graham Neubig
|
2023-08-23T17:28:21Z
|
http://arxiv.org/abs/2308.12261v1
|
# Prompt2Model: Generating Deployable Models from Natural Language Instructions
###### Abstract
Large language models (LLMs) enable system builders today to create competent NLP systems through prompting, where they only need to describe the task in natural language and provide a few examples. However, in other ways, LLMs are a step backward from traditional special-purpose NLP models; they require extensive computational resources for deployment and can be gated behind APIs. In this paper, we propose \(\mathrm{Prompt2Model}\), a general-purpose method that takes a natural language task description like the prompts provided to LLMs, and uses it to train a special-purpose model that is conducive to deployment. This is done through a multi-step process of retrieval of existing datasets and pre-trained models, dataset generation using LLMs, and supervised fine-tuning on these retrieved and generated datasets. Over three tasks, we demonstrate that given the same few-shot prompt as input, \(\mathrm{Prompt2Model}\) trains models that outperform the results of a strong LLM, \(\mathrm{gpt-3.5-turbo}\), by an average of 20% while being up to 700 times smaller. We also show that this data can be used to obtain reliable _performance estimates_ of model performance, enabling model developers to assess model reliability before deployment. \(\mathrm{Prompt2Model}\) is available open-source at [https://github.com/neulab/prompt2model.1](https://github.com/neulab/prompt2model.1)
Footnote 1: Our demo video is posted at youtu.be/LYQ_EhG-Q.
## 1 Introduction
Traditionally, building an NLP model from scratch has been a substantial undertaking. An NLP practitioner seeking to solve a new problem would need to define their task scope, find or create data that specifies the intended system behavior, choose a suitable model architecture, train the model, assess its performance through evaluation, and then deploy it for real-world usage Paleyes et al. (2022).
LLMs like GPT-3 Brown et al. (2020); Liu et al. (2023) offer a lighter-weight paradigm for NLP system construction through "prompting" Reynolds and McDonell (2021). Practitioners can now write a prompt specifying the intended system behavior (optionally with a few demonstrations), and ask an LLM to generate a desired output via text completion. This makes it possible to prototype NLP systems rapidly for a variety of applications without writing a single line of code Floridi and Chiriatti (2020).
However, there is still a gap between proof-of-concept prototyping -- showing LLMs can be prompted for a particular task -- and practical deployment. Prompting LLMs can be expensive as they require either a significant amount of computing or access to commercial APIs, and their reliance on the input prompt quality makes them unstable compared to trained models Min et al. (2022); Bubeck et al. (2023). Because practitioners usually do not have enough annotated validation data to measure their system performance, it is also more challenging for them to debug their systems before deployment Jiang et al. (2022). Additionally,
Figure 1: \(\mathrm{Prompt2Model}\) is a framework for generating a small yet accurate model from a prompt.
LLM-prompted systems pose usability challenges. Practitioners have expressed concerns about the high serving cost and slow prediction time associated with using LLMs (Park et al., 2022), and those working in high-stakes domains cannot rely on commercial LLM APIs due to privacy concerns. For instance, sharing user data with LLM service providers is illegal for many applications in the US (Sezgin et al., 2022).
In this work, we present \(\mathrm{Prompt2Model}\), a system that retains the ability to specify system behavior in a light-weight way through _prompting_, while still resulting in a _deployable special-purpose model_, maintaining all the advantages thereof. \(\mathrm{Prompt2Model}\) is designed as an automated pipeline that extracts essential task information from users' prompts and then automatically collects and synthesizes task-specific knowledge through three channels:
* _Dataset retrieval_: Whenever possible, we collect training data by retrieving task-relevant annotated data (Farber and Leisinger, 2021; Viswanathan et al., 2023).
* _Dataset generation_: We distill knowledge from an LLM ("teacher model") by employing it to generate a pseudo-labeled dataset. Prior work has demonstrated that such a dataset can be used to train a smaller "student" model to emulate the behavior of the teacher model (Wang et al., 2021; He et al., 2023; Gudibande et al., 2023).
* _Model retrieval_: Based on the prompt, we identify a pretrained language model whose parametric knowledge is appropriate for the user's intent. This chosen model serves as the student model and is further fine-tuned and evaluated using the generated and retrieved data. \(\mathrm{Prompt2Model}\) is designed to support different instantiations of each of these components. We provide a reference implementation where we demonstrate its utility with a gpt-3.5-turbo-based dataset generator, a dataset retriever based on DataFinder (Viswanathan et al., 2023), and a model retriever using BM25. We evaluate on three tasks covering both traditional NLP benchmarks and novel applications and find that, empirically, \(\mathrm{Prompt2Model}\) sometimes produces small models that outperform gpt-3.5-turbo when using the same prompt as input. On 2 of these 3 tasks, we observe >20 point improvements over the gpt-3.5-turbo baseline, despite the final model produced by \(\mathrm{Prompt2Model}\) being up to 700 times smaller. We also find that we can generate effective evaluation datasets; performance improvements on these synthetic clones of real benchmarks also hold on their real counterparts. We believe that \(\mathrm{Prompt2Model}\) can serve the following purposes for the community:
1. **A tool for quickly building small and competent NLP systems**: \(\mathrm{Prompt2Model}\) can be directly used to produce task-specific models that outperform LLMs in a few hours without any manual data annotation or architecture design. The method bridges the gap between the proof-of-concept LLM prototyping and the practical deployment of the model.
2. **A testbed for end-to-end, prompt-based model training**: Given \(\mathrm{Prompt2Model}\)'s extensible design, it can offer a platform for exploring new techniques in model distillation, dataset generation, synthetic evaluation, dataset retrieval, and model retrieval. Our platform allows studying these components using extrinsic downstream metrics, enabling empirical progress on these research areas.
## 2 Prompt2Model Framework
Our system, \(\mathrm{Prompt2Model}\), provides a platform to automate the components of a machine learning pipeline: data collection, model training, evaluation, and deployment. We illustrate our automated pipeline in Figure 2. At the core is our automatic data collection system, which leverages dataset retrieval and LLM-based dataset generation to obtain labeled data relevant to the user's needs. We then retrieve pretrained models which we finetune on the training splits of the collected datasets. Finally, we evaluate our trained models on the test splits of the same datasets and optionally create a web UI that can be used to interact with the model.
Our general-purpose method is designed to be modular and extensible; each component can be implemented differently or disabled by a practitioner. We give an overview of our framework, then in section 3 we describe our reference implementation.
Prompt ParserAs the primary input to our system, users provide prompts similar to those used for LLMs. These prompts comprise an instruction and, optionally, a few demonstrations of the anticipated behavior. While this open-ended interface is convenient for users, end-to-end ML pipelines may benefit from a _Prompt Parser_ that processes this input, such as segmenting the prompt into an
instruction and individual demonstrations or translating instructions into English.
Dataset RetrieverGiven a prompt, we first try to discover existing manually-annotated data that can support a user's task description. There are several design decisions for the _Dataset Retriever_:
1. What datasets to search against?
2. How to index datasets for search?
3. Which dataset columns are needed for the user's task, and which columns should be ignored?
Prior works by Farber and Leisinger (2021) and Viswanathan et al. (2023) introduced systems for dataset search. We use the latter, called _DataFinder_, in our implementation, as described in SS3.2.
Dataset GeneratorNot all conceivable tasks have any existing annotated data, and many tasks are only somewhat relevant to an existing dataset. To support a wide range of tasks, we introduce a _Dataset Generator_ to produce synthetic training data as per the user-specific requirements parsed by the _Prompt Parser_. This component presents challenges related to cost efficiency, generation speed, example diversity, and quality control. We discuss our suggested solution to these challenges in SS3.3.
Model RetrieverBesides training data, we must identify an appropriate model to finetune. We cast this as a retrieval problem, where each model is represented by user-generated descriptions and metadata such as popularity or tasks supported. The reference implementation of our _Model Retriever_, described in SS3.4, searches against pretrained models on Hugging Face Wolf et al. (2020), but this could instead cover other model repositories such as Model Zoo Koh (2020).
TrainingGiven retrieved and generated datasets and a pretrained model, we use a _Model Trainer_ to finetune the model on a subset of the data. We currently train models by treating all tasks as text-to-text generation Raffel et al. (2020), as described in SS3.5, but emphasize that this component can be extended in the future to support new approaches.
EvaluationAfter training models on a portion of the retrieved and generated datasets, we give the remaining data to an _Model Evaluator_ module. We aim to support a variety of tasks, and selecting the correct task-specific metrics for an arbitrary task is a difficult problem. We describe our suggested strategies for task-agnostic evaluation in SS3.6.
Web App CreationTo enable developers to expose a model to collaborators or users, we include an optional component called the _Demo Creator_ to create a graphical interface to interact with the model. We briefly describe our implementation of this component in SS3.7.
## 3 Reference Implementation
\(\mathrm{Prompt2Model}\) is designed modularly to support customization of each component in our framework (described in SS2), but we have provided a reference implementation to enable immediate adoption.
### Prompt Parser
We parse the prompt into instruction and demonstrations fields (shown in Figure 2), where
Figure 2: The \(\mathrm{Prompt2Model}\) architecture seeks to automate the core machine learning development pipeline, allowing us to train a small yet accurate model from just a prompt.
the instruction represents the primary task or objective and the demonstrations exemplify the desired behavior. To achieve this, we utilize an LLM with in-context learning to segment user prompts, employing the OpenAI gpt-3.5-turbo-0613 in our experiments. If the instruction provided is identified to be in a language other than English, we translate it to English using the DeepL API.2
Footnote 2: [https://www.deepl.com/en/docs-api](https://www.deepl.com/en/docs-api)
### Dataset Retriever
To retrieve datasets for a prompt, we adapt the _DataFinder_ system introduced by Viswanathan et al. (2023). By extracting user-generated dataset descriptions for each dataset in Hugging Face Datasets (Lhoest et al., 2021), we utilize DataFinder's trained bi-encoder retriever to rank the most relevant datasets. Once a relevant dataset is identified, the next step is to determine which columns of the dataset correspond to the input and the desired output specified by the user. As automatically inducing the correct schema for any dataset can be challenging, we adopt a human-in-the-loop approach. We present the top-\(k\) datasets, where \(k=25\) by default, to the user and allow them to either select the most relevant dataset or to state that none are a good fit for their task. We then ask the user to identify the appropriate columns for input and output from the dataset's schema.
### Dataset Generator
We carefully engineered our dataset generator to enable speed-optimized generation at a low cost while creating diverse and high-quality examples. Our strategy comprises the following components:
**High-Diversity Few-Shot Prompting** We use automated prompt engineering to generate a diverse dataset. We augment the user-provided demonstration examples with a random sample of previously generated examples to promote diversity and avoid generating duplicate examples. Without this strategy, 120 out of 200 generated QA examples were duplicates; with it, only 25 were duplicates.
**Temperature Annealing** We adjust the sampling temperature from low (favoring deterministic outputs) to high (encouraging diverse exploration) proportionally to the number of examples already generated. This modulation helps preserve output quality while gradually encouraging diversity.
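A rough sketch of these two mechanisms (the prompt format, sample size, and annealing schedule below are illustrative assumptions, not the exact implementation):

```python
import random

def build_generation_prompt(instruction, user_demos, generated_pool, k_prev=3):
    """Few-shot prompt that mixes user demonstrations with a random sample of
    previously generated examples to discourage duplicates (illustrative format)."""
    sampled = random.sample(generated_pool, min(k_prev, len(generated_pool)))
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in user_demos + sampled)
    return f"{instruction}\n\n{shots}\n\nInput:"

def annealed_temperature(n_generated, n_target, t_low=0.2, t_high=1.4):
    """Raise the sampling temperature as the dataset grows (illustrative linear schedule)."""
    frac = min(n_generated / max(n_target, 1), 1.0)
    return t_low + frac * (t_high - t_low)

print(annealed_temperature(0, 1000), annealed_temperature(500, 1000), annealed_temperature(1000, 1000))
```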
**Self-Consistency Decoding** Given that an LLM may generate non-unique or incorrect outputs for the same inputs, we use _self-consistency_ filtering (Wang et al., 2022) to select pseudo-labels. Specifically, we create a consensus output for each unique input by selecting the most frequent answer; in the case of ties, we heuristically select the shortest answer. This promotes accuracy of the generated dataset while ensuring unique examples.
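A minimal sketch of this filtering step (toy data; the tie-breaking rule follows the description above):

```python
from collections import Counter, defaultdict

def self_consistency_filter(examples):
    """Keep one (input, output) pair per unique input: the most frequent output,
    breaking ties by the shortest answer."""
    by_input = defaultdict(list)
    for inp, out in examples:
        by_input[inp].append(out)
    dataset = []
    for inp, outs in by_input.items():
        counts = Counter(outs)
        best = min(counts.items(), key=lambda kv: (-kv[1], len(kv[0])))[0]
        dataset.append((inp, best))
    return dataset

print(self_consistency_filter([
    ("What is 2+2?", "4"), ("What is 2+2?", "4"), ("What is 2+2?", "four"),
    ("Capital of France?", "Paris, France"), ("Capital of France?", "Paris"),
]))
```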
**Asynchronous Batching** API requests are parallelized using _zeno-build_ (Neubig and He, 2023). We use additional mechanisms, such as dynamic batch size and throttling, to optimize API usage.
### Model Retriever
We need to select an appropriate model to finetune. To support many tasks with a unified model interface, we presently limit ourselves to encoder-decoder architectures on Hugging Face (Wolf et al., 2020), following recent work that shows that encoder-decoder models are more data-efficient for model distillation (Calderon et al., 2023). This restriction still leaves a large set of pretrained models to choose from, e.g. Salesforce/codet5-base for coding-related tasks (Wang et al., 2021) or MaryaAI/opus-mt-ar-en-finetuned-ar-to-en for Arabic-to-English translation (Tiedemann and Thottingal, 2020). We frame the problem of selecting a pretrained model as a search problem. Using the user's instruction as a query, we search against all textual descriptions of models on Hugging Face.
This search task is challenging because Hugging Face model descriptions are sparse and contain lots of templatic text, often with only a few words that signify the content of the model. To address this, we follow the HyDE framework (Gao et al., 2023) and first use gpt-3.5-turbo to create a _hypothetical model description_ given the user's instructions. We show an example of a hypothetical document generated for a question-answering instruction in Figure 3. Using this description as an expanded query, we then apply the BM25 algorithm to compute query-model similarity scores (Robertson et al., 1995). To ensure the ease of deployment of the resulting model, we filter out models whose size (in bytes) exceeds a user-specified threshold (set to 3GB by default). Using the intuition that highly-downloaded models are more likely to be high in quality, we choose the top model after ranking by:
\[BM25(\text{query},\text{model})\cdot\log(\text{\# of Downloads}+1).\]
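As an illustration of this ranking rule (using the open-source `rank_bm25` package as a stand-in backend, with a simplified whitespace tokenizer and an assumed metadata layout):

```python
import math
from rank_bm25 import BM25Okapi

def rank_models(expanded_query, models, size_limit_bytes=3_000_000_000):
    """Score model descriptions by BM25 similarity to the hypothetical
    description, weighted by log(downloads + 1), after a size filter."""
    candidates = [m for m in models if m["size_bytes"] <= size_limit_bytes]
    corpus = [m["description"].lower().split() for m in candidates]
    bm25 = BM25Okapi(corpus)
    scores = bm25.get_scores(expanded_query.lower().split())
    weighted = [s * math.log(m["downloads"] + 1)
                for s, m in zip(scores, candidates)]
    return max(zip(weighted, candidates), key=lambda p: p[0])[1]
```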
### Training
**Dataset Processing** We train the model by leveraging two datasets: one generated and one retrieved. To sidestep the challenge of making schema-specific modeling decisions (e.g. constructing specialized architectures for classification or generation tasks), we treat all datasets as "text-to-text" problems (Raffel et al., 2020). We textualize the input columns of each dataset and prepend the user's instructions to the input to guide the model.
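A simplified sketch of this textualization step; the column and field names are placeholders:

```python
def to_text_to_text(example, instruction, input_cols, output_col):
    """Serialize the chosen input columns into a single string, prepend the
    user's instruction, and keep the output column as the target text."""
    serialized_input = "\n".join(f"{col}: {example[col]}" for col in input_cols)
    return {
        "model_input": f"{instruction}\n\n{serialized_input}",
        "model_output": str(example[output_col]),
    }
```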
**Finetuning** We concatenate the retrieved and generated datasets and shuffle them before training the student model. We use the same default hyperparameters for all tasks.3 We train with the AdamW optimizer with \(\text{lr}=\text{5e-5}\) for 3 epochs, which takes roughly one hour for all tasks.
Footnote 3: We empirically find that these default hyperparameters are effective, but we plan on implementing hyperparameter selection using generated validation data in the future.
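A minimal sketch of this finetuning step under the stated defaults (AdamW at lr = 5e-5 for 3 epochs); the `model_input`/`model_output` column names follow the textualization sketch above and are our own naming, not the toolkit's exact code:

```python
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

def finetune_student(model_name, train_dataset):
    """Finetune the retrieved encoder-decoder model on the shuffled mix of
    retrieved and generated text-to-text examples."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    def tokenize(batch):
        enc = tokenizer(batch["model_input"], truncation=True, max_length=512)
        enc["labels"] = tokenizer(batch["model_output"], truncation=True,
                                  max_length=128)["input_ids"]
        return enc

    tokenized = train_dataset.map(tokenize, batched=True)
    args = Seq2SeqTrainingArguments(output_dir="trained_model",
                                    learning_rate=5e-5,
                                    num_train_epochs=3)
    trainer = Seq2SeqTrainer(model=model, args=args,
                             train_dataset=tokenized,
                             data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
                             tokenizer=tokenizer)
    trainer.train()
    return model, tokenizer
```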
### Evaluation
Our _Model Evaluator_ automatically evaluates models for all tasks using three general-purpose metrics: Exact Match, ChrF++ (Popovic, 2015), and BERTScore (Zhang et al., 2019). Exact Match measures how often the model output perfectly matches the exact reference. ChrF++ balances precision and recall to assess text generation quality. BERTScore captures semantic similarities despite different wordings or phrasings by comparing the model output and reference in the embedding space. We use XLM-R (Conneau et al., 2020) as the encoder for BERTScore to support multilingual evaluation.
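These metrics can be computed with the Hugging Face `evaluate` package roughly as follows (we assume its standard metric names; the toolkit's exact wrapper may differ):

```python
import evaluate

def score_predictions(predictions, references):
    """Compute Exact Match, ChrF++, and multilingual BERTScore."""
    exact = evaluate.load("exact_match").compute(
        predictions=predictions, references=references)["exact_match"]
    chrf = evaluate.load("chrf").compute(
        predictions=predictions, references=[[r] for r in references],
        word_order=2)["score"]  # word_order=2 gives ChrF++
    bert = evaluate.load("bertscore").compute(
        predictions=predictions, references=references,
        model_type="xlm-roberta-large")  # XLM-R encoder for multilingual support
    return {"exact_match": exact, "chrf++": chrf,
            "bertscore_f1": sum(bert["f1"]) / len(bert["f1"])}
```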
### Web App Creation
We finally provide an optional step in \(\mathrm{Prompt2Model}\) to automatically create a graphical user interface that allows downstream users to interact with the trained model. This web application, built using Gradio (Abid et al., 2019), can then be easily deployed publicly on a server.
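As an illustration, a minimal version of the auto-generated interface might look like the following, where `predict` is a placeholder wrapper around the finetuned model:

```python
import gradio as gr

def predict(text: str) -> str:
    """Placeholder: the real app would call the finetuned model's generate()."""
    return "model output goes here"

demo = gr.Interface(fn=predict, inputs="text", outputs="text",
                    title="Prompt2Model demo")
demo.launch()  # demo.launch(share=True) exposes a temporary public link
```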
## 4 Experimental Setup
**Tasks** As a proof of concept, we test our system's ability to learn a model for three tasks:
* _Machine Reading Question Answering_: We first consider a common use case where pretrained models and training datasets are plentiful. We use SQuAD (Rajpurkar et al., 2016) as ground truth to evaluate this setting.
* _Japanese NL-to-Code_: Code generation from Japanese-language queries is a challenging scenario where prior work exists but no annotated data or pretrained models are available. We use MCoNaLa (Wang et al., 2023) for evaluation.
* _Temporal Expression Normalization_: We finally consider a task where there are no pretrained models or training datasets of any kind available. Here we use the Temporal dataset of Wu et al. (2023) as ground truth for evaluation.
Though \(\mathrm{Prompt2Model}\) offers automated model evaluation (on generated and retrieved datasets), we use real benchmark datasets here to measure our pipeline's ability to train accurate models.
**LLM Baseline** A primary goal of our work is to train small models that can match or outperform LLMs. To measure success towards this goal, we report the performance of gpt-3.5-turbo on each benchmark. For a fair comparison, we provide gpt-3.5-turbo4 the same instruction and demonstrations provided to \(\mathrm{Prompt2Model}\).
Footnote 4: We used gpt-3.5-turbo-0613, accessed between July 26 and August 6, 2023.
## 5 Experiment Results
### Downstream performance
How effective is \(\mathrm{Prompt2Model}\) at producing a high-quality model? In Table 1, we evaluate models produced by \(\mathrm{Prompt2Model}\), as well as our baseline LLM gpt-3.5-turbo, on real benchmark datasets for each task -- SQuAD, MCoNaLa, and Temporal. We further examine the effect of removing two specific elements of the \(\mathrm{Prompt2Model}\) pipeline -- model retrieval and dataset retrieval.

Figure 3: For our model retriever, we first construct a hypothetical model description for a query, then compute similarity scores between that hypothetical model description and the descriptions of real models.
On 2 of 3 datasets, we find that \(\mathrm{Prompt2Model}\) produces models that are considerably more accurate than gpt-3.5-turbo. This is remarkable because the retrieved model for SQuAD and Temporal is Flan-T5, which, at 250M parameters, is up to 700 times smaller than gpt-3.5-turbo (which is believed to contain 175B parameters).
We observe that \(\mathrm{Prompt2Model}\)'s performance on MCoNaLa's Japanese-to-Python task is significantly worse than that of gpt-3.5-turbo. One explanation for this is the relatively low diversity in the generated dataset of Japanese queries; 45 of 5000 examples are different ways of saying "find the maximum value in a list of numbers". We do not observe this level of redundancy in our other datasets, suggesting that gpt-3.5-turbo may struggle to generate diverse text for non-English languages. Another reason is the lack of an appropriate student model -- the models found by the model retriever were trained either on multiple languages or on code, but not both. The resulting pretrained models may lack the parametric knowledge to represent the Japanese inputs, Python outputs, or both.
### Combining retrieved and generated datasets is powerful
Ideally, generated and retrieved data should be as close to the target domain as possible. In our experimental setting, where we deliberately choose prompts that mimic existing datasets, we can evaluate how well the model performs relative to a model trained on the same amount of data from the true dataset. We use SQuAD as a running example.5 As our prompt is a description of the SQuAD passage-level question answering task (Figure 1), we exclude SQuAD from our retrieved datasets list. Instead, we evaluate models finetuned on:
Footnote 5: We focus only on SQuAD here because our other two tasks have fewer real training examples than the datasets we generate, making comparison impractical.
1. 3k examples from the closest retrieved dataset6
2. 3k examples generated by \(\mathrm{Prompt2Model}\)
3. The union of the above, which is what the full \(\mathrm{Prompt2Model}\) pipeline uses
4. 3k examples from SQuAD (analogous to the user custom-annotating data for a task).
Footnote 6: The closest dataset retrieved by the dataset retriever for our SQuAD-inspired prompt is The Children’s Book Test Dataset (Hill et al., 2016).
Table 2 shows the results across these four settings. While using retrieved or generated data alone causes a reduction in performance due to domain shift, the combination of the two achieves performance similar to using the true dataset. For this machine reading comprehension task, where the user would otherwise need to custom-annotate data, \(\mathrm{Prompt2Model}\) allows for _similar performance at less than 1% of the cost_.
### Our generated evaluation data can identify real modeling improvements
High-quality generated data should also allow us to _discriminate_ between multiple candidate models to select a model that will perform well downstream. We finetune various models on a generated dataset and rank their performance according to the generated test data and the test data from the target (real) dataset. We evaluate the Kendall's rank correlation (Kendall, 1938) between the two rankings to assess whether our generated data can effectively determine which models are likely to perform well downstream. This is closely related to the concept of concurrence between benchmarks (Liu et al., 2023); however, we are evaluating whether the generated and real data rank _specific models_ in the same ordering, rather than _modeling approaches_.

| **Method** | **SQuAD** (EM) | **MCoNaLa** (ChrF++) | **Temporal** (ChrF++) |
| --- | --- | --- | --- |
| Prompt2Model | 61.5 | 13.1 | 55.2 |
| w/o Model Ret. | 61.5 | 15.8 | 55.2 |
| w/o Data Ret. | 50.2 | 16.6 | N/A |
| gpt-3.5-turbo | 42.1 | 37.3 | 30.7 |

Table 1: We evaluate the model produced by \(\mathrm{Prompt2Model}\) on real benchmarks for each test set, compared to gpt-3.5-turbo, which we used to power our dataset generator. We also examine the effect of removing specific parts of our pipeline -- model retrieval and dataset retrieval. There are no relevant datasets available for the Temporal task, so we did not use retrieved data for \(\mathrm{Prompt2Model}\) there.

| **Method** | **#Train** | **Performance** | **Anno. Cost** |
| --- | --- | --- | --- |
| Retrieval only | 3,000 | 56.79 | ≈ $0 |
| Generation only | 3,000 | 44.20 | ≈ $5 |
| **Retrieval+generation** | 6,000 | 61.46 | ≈ $5 |
| Custom annotation | 3,000 | 61.64 | ≈ $540 |

Table 2: We compare model performance on SQuAD on an annotation-cost basis, using datasets produced by different modules of \(\mathrm{Prompt2Model}\), along with fully-manual annotation. Performance reported for all models is the exact match on the test set,7 which reflects _the true task performance_. Cost of custom annotation is estimated from Rajpurkar et al. (2016) using their reported annotator pay rate of $9/hour and keeping 1,000 validation examples.
Table 3 shows the Kendall's \(\tau\) for each task, computed over a set of reasonable models.8 The generated data shows strong correlation to the true performance on two of the three datasets.
Footnote 8: This set of models consisted of 5 T5-family models, 2 BART-family models, and 1-5 additional retrieved models from the _Model Retriever_, depending on task.
## 6 Discussion and Conclusion
We propose \(\mathrm{Prompt2Model}\), a framework that automatically constructs task-specific models using only natural language prompts. Our proof-of-concept experiments show that, despite offering the same easy-to-use interface as LLMs, \(\mathrm{Prompt2Model}\) delivers small yet accurate models, and its generated datasets can be used to estimate real-world performance. Beyond our reference implementation providing a ready-to-use tool, \(\mathrm{Prompt2Model}\)'s extensible design and modular implementation make it a platform for advancing model distillation, dataset generation, synthetic evaluation, dataset retrieval, and model retrieval.
We believe our \(\mathrm{Prompt2Model}\) framework can inspire various novel research questions. We hope that our platform enables future work that looks more deeply into quality assurance for the generated data and the resulting model. Interesting questions include: how much data should we generate for downstream model training, and how diverse should it be? How do we effectively mix the retrieved and generated datasets to achieve complementary strengths (e.g. using dataset generation to focus on expected model inputs that the retrieved dataset fails to cover)? Since users often struggle to articulate their needs up front, future extensions should also address the challenge of human-in-the-loop correction, either by offering strategies to help humans iteratively refine prompts, or by allowing humans to perform post-hoc fixes when the task metadata extraction and generated data do not align with their intentions. We hope to propose explicit challenges and invite the community to contribute novel implementations of various components in our framework.
## Limitations
One of the primary limitations of our system is that our current experiments have all been conducted using the gpt-3.5-turbo API (used for prompt parsing, dataset generation, and model retrieval). This LLM is paid and closed-source, which makes this problematic as a scientific artifact (Rogers et al., 2023). Furthermore, the service provider of this LLM, OpenAI, prohibits the use of their API to create models that may compete with OpenAI, creating potential legal concerns with the use of \(\mathrm{Prompt2Model}\) in commercial applications. We are exploring the integration of open-source LLMs to avoid our reliance on proprietary APIs.
Another limitation of our work is the limited ability of \(\mathrm{Prompt2Model}\) to support tasks that require processing languages other than English. We have shown the limitations of our system in supporting code generation from Japanese natural-language queries, and it is likely to struggle even more with lower-resource languages. We use the unpublished gpt-3.5-turbo model for our Dataset Generator in our reference implementation. This model is believed to be similar to GPT-3 (Brown et al., 2020), which was trained on 93% English documents, 1% German documents, 1% French documents, and <5% documents in any other language. Our use of this model may exacerbate existing disparities in language technologies between high-resource and low-resource languages.
One potential limitation is that we have only tested our approach on 3 tasks, each with a single dataset and a single evaluation metric. We justify this decision because our focus is on providing an extensible software system rather than establishing state-of-the-art results on many datasets, but we believe that our results suggest broader applicability.
| **Dataset** | **Metric** | \(\tau\) | **\(p\)-value** |
| --- | --- | --- | --- |
| SQuAD | EM | 64.3 | 0.03* |
| Temporal | ChrF++ | 24.2 | 0.31 |
| MCoNaLa (JP) | ChrF++ | 70.9 | 0.00** |

Table 3: We evaluate 10 different models on real test sets and their corresponding generated clones. We compute Kendall's Tau on the ranked lists of models and find statistically significant correlations for 2 of 3 datasets.
### Ethics Statement
Any system which makes powerful technology more accessible to the public has ethical implications. Widder et al. (2022) discuss ethical issues with open-source packages in relation to software libraries for deepfaking, including the possibility of enabling malicious actors to use technology that they would otherwise not have the technical skills to leverage. This is also a risk for an AutoML system such as \(\mathrm{Prompt2Model}\); however, we believe this risk is outweighed by the benefits of greater accessibility, especially given that a low barrier to entry for generating harmful data already exists in the form of prompted, web-interface models.
While \(\mathrm{Prompt2Model}\) could, if given harmful inputs, generate toxic, offensive, or inaccurate synthetic data, this is no more of a risk with \(\mathrm{Prompt2Model}\) than it is with the underlying prompted model (Bender et al., 2021); indeed, the use of models and supplementary datasets retrieved from Hugging Face may lessen the likelihood of a downstream model replicating harms from the prompted model's outputs, though more investigation is needed. Like all ML models, the models that \(\mathrm{Prompt2Model}\) returns can make mistakes, and we aim to be transparent in our documentation about potential limitations of the system.
We hope that \(\mathrm{Prompt2Model}\) will be broadly useful. Our work is motivated by a desire to increase the accessibility of NLP models to people who are not in the NLP community but would benefit from the community's innovations; particularly, to people who would use NLP models downstream but may not have the domain-specific knowledge to design their own system. \(\mathrm{Prompt2Model}\) may also prove useful for early NLP researchers by providing a starting point for intuitions about baselines for various tasks and enabling the discovery of similarities between a described task and existing work. We open-source \(\mathrm{Prompt2Model}\) and welcome community contributions.
## Acknowledgements
This work was supported in part by a fellowship from NEC Research Laboratories. We are grateful to Alex Cabrera, Will Epperson, Nelson Liu, Arjun Ramani, Zirui Cheng, Zhiyuan Zeng, Tianci Xue, Yanchen Liu, Yi-Hsin Hung and Zhilin Yang for their feedback and guidance. We particularly appreciate Zirui Cheng's video production support for our demo.
|
2307.13922
|
Stability of Multi-Agent Learning: Convergence in Network Games with
Many Players
|
The behaviour of multi-agent learning in many player games has been shown to
display complex dynamics outside of restrictive examples such as network
zero-sum games. In addition, it has been shown that convergent behaviour is
less likely to occur as the number of players increase. To make progress in
resolving this problem, we study Q-Learning dynamics and determine a sufficient
condition for the dynamics to converge to a unique equilibrium in any network
game. We find that this condition depends on the nature of pairwise
interactions and on the network structure, but is explicitly independent of the
total number of agents in the game. We evaluate this result on a number of
representative network games and show that, under suitable network conditions,
stable learning dynamics can be achieved with an arbitrary number of agents.
|
Aamal Hussain, Dan Leonte, Francesco Belardinelli, Georgios Piliouras
|
2023-07-26T02:45:02Z
|
http://arxiv.org/abs/2307.13922v1
|
# Stability of Multi-Agent Learning: Convergence in Network Games with Many Players
###### Abstract
The behaviour of multi-agent learning in many player games has been shown to display complex dynamics outside of restrictive examples such as network zero-sum games. In addition, it has been shown that convergent behaviour is less likely to occur as the number of players increase. To make progress in resolving this problem, we study Q-Learning dynamics and determine a sufficient condition for the dynamics to converge to a unique equilibrium in any network game. We find that this condition depends on the nature of pairwise interactions and on the network structure, but is explicitly independent of the total number of agents in the game. We evaluate this result on a number of representative network games and show that, under suitable network conditions, stable learning dynamics can be achieved with an arbitrary number of agents.
Machine Learning, Multi-Agent Learning
**Model and Contributions** In light of this, we study learning in _network games_, where interactions between agents can be constrained. On this model, we study the _Q-Learning_ dynamic (Sato & Crutchfield, 2003; Tuyls et al., 2006), a well-studied learning dynamic that captures the balance struck by agents who explore their state space whilst maximising their reward.
Our main result tightens the requirement on exploration found by (Hussain et al., 2023) for achieving convergence to a unique equilibrium in any network game. In particular, we find that the amount of exploration required depends on the nature of the interaction between agents and, more importantly, on the structure of the network. We examine how our bound depends on the total number of agents in the system and find that, for certain networks, there is no explicit dependence. This enables a higher number of agents to be introduced into the system without compromising stability. In addition, our result applies to all network games, and not only network zero-sum games. In fact, we show how our results relate to existing statements in the literature. Finally, we validate our findings on a number of representative classes of games and networks.
**Related Work** The theory of evolutionary game dynamics models multi-agent interactions in which agents improve their actions through _online learning_ (Shalev-Shwartz, 2011). The premise is that popular learning algorithms such as _Hedge_ (Krichene et al., 2015), online gradient descent (Kadan & Fu, 2021) and Q-Learning (Sutton & Barto, 2018; Schwartz, 2014) can be approximated in continuous time by a dynamical system (Mertikopoulos & Sandholm, 2016; Krichene, 2016; Tuyls et al., 2006). This enables tools from the study of dynamical systems to be used to analyse the behaviour of the learning algorithm. This approach has yielded a number of successes, most notably in _potential games_ (Leonardos & Piliouras, 2022; Candogan et al., 2013; Monderer & Shapley, 1996), which model multi-agent cooperation, and _network zero-sum games_ (Cai et al., 2016; Abernethy et al., 2021), which model competition. In these settings, it is known that a number of learning dynamics converge to an equilibrium (Kadan & Fu, 2021; Ewerhart & Valkanova, 2020; Leonardos et al., 2021).
Outside of these classes, the behaviour of learning is less certain (Anagnostides et al., 2022). In particular, it is known that learning dynamics can exhibit complex behaviours such as cycles (Mertikopoulos et al., 2018; Imhof et al., 2005; Pangallo et al., 2019; Shapley, 2016) and chaos (van Strien & Sparrow, 2011; Mukhopadhyay & Chakraborty, 2020; Sato et al., 2002; Pangallo et al., 2022). Indeed, (Galla & Farmer, 2013) showed that the Experience Weighted Attraction (EWA) dynamic, which is closely related to Q-Learning (Leonardos et al., 2021), exhibits chaos in classes of two-player games. Advancing this result, (Sanders et al., 2018) showed that chaotic dynamics become more prevalent as the number of agents increases, regardless of the exploration rates. Similar to the work in this paper, (Hussain et al., 2023) determine a sufficient condition on the exploration rates for Q-Learning to converge in any game, yet they also find that this condition increases with the number of agents. This presents a strong barrier to placing guarantees on the behaviour of multi-agent systems with many agents, outside of restrictive settings.
Our work also employs a number of tools from the study of variational inequalities in game theory. This is a well-studied framework for analysing the structure of equilibrium sets in a game (Melo, 2018; Facchinei & Pang, 2004) and for studying the convergence of equilibrium-seeking algorithms (Tatarenko & Kamgarpour, 2019; Hadikhanloo et al., 2022; Mertikopoulos & Zhou, 2019; Sorin & Wan, 2016). Recent advances in this field begin to consider the properties of network games. Notably, (Parise & Ozdaglar, 2019; Melo, 2018) determine conditions under which the Nash Equilibrium of a network game is unique, and how these relate to properties of the network. Similarly, (Melo, 2021) shows the uniqueness of various formulations of the Quantal Response Equilibrium (QRE) under particular choices of payoff functions. Whilst our results use similar techniques, we do not make such assumptions on the nature of the payoffs, but rather parameterise our final condition on the nature of interactions between agents. In addition, we consider the stability of learning.
In our work, we aim to address the problem of convergence in many-agent systems by considering games which are played on a network (Cai et al., 2016). Extending the work of (Hussain et al., 2023), we are able to find a sufficient condition on exploration rates so that the Q-Learning dynamics converge to a unique equilibrium. Importantly, we show that this is independent of the total number of agents in the system. To our knowledge this is the first work which shows the convergence of Q-Learning in arbitrary network games.
## 2 Preliminaries
We begin in Section 2.1 by defining the network game model, which is the setting on which we study the Q-Learning dynamics, which we describe in Section 2.2.
### Game Model
In this work, we consider _network polymatrix games_ (Cai et al., 2016). A Network Game is described by the tuple \(\mathcal{G}=(\mathcal{N},\mathcal{E},(u_{k},\mathcal{A}_{k})_{k\in\mathcal{N}})\), where \(\mathcal{N}\) denotes a finite set of players indexed by \(k=1,\ldots,N\). Each agent can choose from a finite set of actions \(\mathcal{A}_{k}\) indexed by \(i=1,\ldots,n\). We denote the _strategy_ \(\mathbf{x}_{k}\) of an agent \(k\) as the probabilities with which they play their actions.
Then, the set of all strategies of agent \(k\) is \(\Delta(\mathcal{A}_{k}):=\{\mathbf{x}_{k}\in\mathbb{R}^{n}:\sum_{i}x_{ki}=1,\,x_{ ki}\geq 0\}\). Each agent is also given a payoff function \(u_{k}\,:\,\Delta(\mathcal{A}_{k})\times\Delta(\mathcal{A}_{-k})\rightarrow \mathbb{R}\) where \(\mathcal{A}_{-k}\) denotes the action set of all agents other than \(k\). Agents are connected via an underlying network defined by \(\mathcal{E}\). In particular, \(\mathcal{E}\) consists of pairs \((k,l)\in\mathcal{N}\times\mathcal{N}\) of connected agents \(k\) and \(l\). An equivalent way to define the network is through an _adjacency matrix_\(G\) so that
\[[G]_{k,l}=\begin{cases}1,\text{ if agents }k,l\text{ are connected}\\ 0,\text{ otherwise}\end{cases}.\]
It is assumed that the network is undirected, so that \(G\) is a symmetric matrix. Each edge \((k,l)\in\mathcal{E}\) corresponds to a pair of payoff matrices \(A^{kl},A^{lk}\). With these specifications, the payoff received by each agent \(k\) is given by
\[u_{k}(\mathbf{x}_{k},\mathbf{x}_{-k})=\sum_{(k,l)\in\mathcal{E}}\mathbf{x}_{k }\cdot A^{kl}\mathbf{x}_{l}. \tag{1}\]
For any \(\mathbf{x}\in\Delta:=\times_{k}\Delta(\mathcal{A}_{k})\), we can define the reward to agent \(k\) for playing action \(i\) as \(r_{ki}(\mathbf{x}_{-k})=\frac{\partial u_{k}(\mathbf{x})}{\partial x_{ki}}\). Under this notation, \(u_{k}(\mathbf{x}_{k},\mathbf{x}_{-k})=\langle\mathbf{x}_{k},r_{k}(\mathbf{x})\rangle\). With this in place, we can define an equilibrium solution for the game.
**Definition 2.1** (Quantal Response Equilibrium (QRE)).: A joint mixed strategy \(\mathbf{\bar{x}}\in\Delta\) is a _Quantal Response Equilibrium_ (QRE) if, for all agents \(k\) and all actions \(i\in\mathcal{A}_{k}\)
\[\mathbf{\bar{x}}_{ki}=\frac{\exp(r_{ki}(\mathbf{\bar{x}}_{-k})/T_{k})}{\sum_{ j\in\mathcal{A}_{k}}\exp(r_{kj}(\mathbf{\bar{x}}_{-k})/T_{k})}.\]
The QRE (Camerer et al., 2004) is the prototypical extension of the Nash Equilibrium to the case of agents with bounded rationality, parameterised by the _exploration rate_\(T_{k}\). In particular, the limit \(T_{k}\to 0\) corresponds exactly to the Nash Equilibrium, whereas the limit \(T_{k}\rightarrow\infty\) corresponds to a purely irrational case, where action \(i\in\mathcal{A}_{k}\) is played with the same probability regardless of its associated reward. The link between the QRE and the Nash Equilibrium is made stronger through the following result.
**Proposition 2.2** ((Melo, 2021)).: _Consider a game \(\mathcal{G}=(\mathcal{N},\mathcal{E},(u_{k},\mathcal{A}_{k})_{k\in\mathcal{N}})\) and let \(T_{1},\ldots,T_{N}>0\) be exploration rates. Define the perturbed game \(\mathcal{G}^{H}=(\mathcal{N},\mathcal{E},(u_{k}^{H},\mathcal{A}_{k})_{k\in \mathcal{N}})\) with the payoff functions_
\[u_{k}^{H}(\mathbf{x}_{k},\mathbf{x}_{-k})=u_{k}(\mathbf{x}_{k},\mathbf{x}_{-k} )-T_{k}\langle\mathbf{x}_{k},\ln\mathbf{x}_{k}\rangle.\]
_Then \(\mathbf{\bar{x}}\in\Delta\) is a QRE of \(\mathcal{G}\) if and only if it is a Nash Equilibrium of \(\mathcal{G}^{H}\)._
### Learning Model
In this work, we analyse the _Q-Learning dynamic_, a prototypical model for determining optimal policies by balancing exploration and exploitation. In this model, each agent \(k\in\mathcal{N}\) maintains a history of the past performance of each of their actions. This history is updated via the Q-update
\[Q_{ki}(\tau+1)=(1-\alpha_{k})Q_{ki}(\tau)+\alpha_{k}r_{ki}(\mathbf{x}_{-k}( \tau)),\]
where \(\tau\) denotes the current time step. \(Q_{ki}(\tau)\) denotes the _Q-value_ maintained by agent \(k\) about the performance of action \(i\in S_{k}\). In effect \(Q_{ki}\) gives a discounted history of the rewards received when \(i\) is played, with \(1-\alpha_{k}\) as the discount factor.
Given these Q-values, each agent updates their mixed strategies according to the Boltzmann distribution, given by
\[x_{ki}(\tau)=\frac{\exp(Q_{ki}(\tau)/T_{k})}{\sum_{j}\exp(Q_{kj}(\tau)/T_{k})},\]
in which \(T_{k}\in[0,\infty)\) is the _exploration rate_ of agent \(k\).
It was shown in (Tuyls et al., 2006; Sato and Crutchfield, 2003) that a continuous time approximation of the Q-Learning algorithm could be written as
\[\frac{\dot{x}_{ki}}{x_{ki}}=r_{ki}\left(\mathbf{x}_{-k}\right)-\langle\mathbf{x }_{k},r_{k}(\mathbf{x})\rangle+T_{k}\sum_{j\in S_{k}}x_{kj}\ln\frac{x_{kj}}{x_{ ki}},\] (QLD)
which we call the _Q-Learning dynamics_ (QLD). The fixed points of this dynamic coincide with the QRE of the game (Leonardos et al., 2021).
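For intuition, the discrete-time update from Section 2.2 can be simulated directly on a network polymatrix game. The numpy sketch below is a toy illustration of ours (not the paper's code); the payoff tensor layout and hyperparameters are assumptions:

```python
import numpy as np

def q_learning_network_game(A, G, T=1.0, alpha=0.1, steps=5000, seed=0):
    """Simulate the Q-Learning update with Boltzmann policies on a network
    polymatrix game. A has shape (N, N, n, n): A[k, l] is agent k's payoff
    matrix against agent l; G is the symmetric adjacency matrix with zero diagonal."""
    rng = np.random.default_rng(seed)
    N, n = G.shape[0], A.shape[-1]
    Q = rng.normal(size=(N, n))
    for _ in range(steps):
        # Boltzmann (softmax) strategies from the current Q-values.
        X = np.exp(Q / T)
        X /= X.sum(axis=1, keepdims=True)
        # Reward of each action: r_{ki} = sum over neighbours l of (A^{kl} x_l)_i.
        R = np.einsum("kl,klij,lj->ki", G, A, X)
        # Discounted Q-update with learning rate alpha.
        Q = (1 - alpha) * Q + alpha * R
    return X  # at a fixed point, the joint strategies approximate a QRE
```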
### Variational Inequalities and Game Theory
Our aim in this work is to analyse the Q-Learning dynamics in network games without invoking any particular structure on the payoffs (e.g. zero-sum). To do this, we employ the _Variational Inequality_ approach, which has been successfully applied towards the analysis of network games (Melo, 2018; Parise and Ozdaglar, 2019; Xu et al., 2019) as well as learning in games (Hadikhanloo et al., 2022; Sorin and Wan, 2016; Hussain et al., 2023). In this paper, we connect these areas of literature.
**Definition 2.3** (Variational Inequality).: Consider a set \(\mathcal{X}\subset\mathbb{R}^{d}\) and a map \(F:\mathcal{X}\rightarrow\mathbb{R}^{d}\). The Variational Inequality (VI) problem \(VI(\mathcal{X},F)\) is given as
\[\langle\mathbf{x}-\mathbf{\bar{x}},F(\mathbf{\bar{x}})\rangle\geq 0,\qquad\text{ for all }\mathbf{x}\in\mathcal{X}. \tag{2}\]
We say that \(\mathbf{\bar{x}}\in\mathcal{X}\) belongs to the set of solutions to a variational inequality problem \(VI(\mathcal{X},F)\) if it satisfies (2).
The premise of the variational approach to game theory (Facchinei and Pang, 2004; Rosen, 1965) is that the problem of finding equilibria of games can be reformulated as determining the set of solutions to a VI problem. This is done by associating the set \(\mathcal{X}\) with \(\Delta\) and the map \(F\) with the _pseudo-gradient_ of the game.
**Definition 2.4** (Pseudo-Gradient Map).: The pseudo-gradient map of a game \(\mathcal{G}=(\mathcal{N},\mathcal{E},(u_{k},\mathcal{A}_{k})_{k\in\mathcal{N}})\) is given by \(F(\mathbf{x})=(F_{k}(\mathbf{x}))_{k\in\mathcal{N}}=(-D_{\mathbf{x}_{k}}u_{k}( \mathbf{x}_{k},\mathbf{x}_{-k}))_{k\in\mathcal{N}}\).
The advantage of this formulation is that we can apply results from the study of Variational Inequalities to determine properties of the game. These results rely solely on the form of the pseudo-gradient map and so can generalise results which assume a potential or zero-sum structure of the game (Hussain et al., 2023; Kadan and Fu, 2021).
**Lemma 2.5** ((Melo, 2021)).: _Consider a game \(\mathcal{G}=(\mathcal{N},\mathcal{E},(u_{k},\mathcal{A}_{k})_{k\in\mathcal{N}})\) and for any \(T_{1},\dots,T_{N}>0\), let \(F\) be the pseudo-gradient map of \(\mathcal{G}^{H}\). Then \(\overline{\mathbf{x}}\in\Delta\) is a QRE of \(\mathcal{G}\) if and only if \(\overline{\mathbf{x}}\) is a solution to \(VI(\Delta,F)\)._
With this correspondence in place, we can analyse properties of the pseudo-gradient map and its relation to properties of the game and the learning dynamic. One important property is _monotonicity_.
**Definition 2.6**.: A map \(F\,:\mathcal{X}\rightarrow\mathbb{R}^{d}\) is
1. _Monotone_ if, for all \(\mathbf{x},\mathbf{y}\in\mathcal{X}\), \[\langle F(\mathbf{x})-F(\mathbf{y}),\mathbf{x}-\mathbf{y}\rangle\geq 0.\]
2. _Strongly Monotone_ with constant \(\alpha>0\) if, for all \(\mathbf{x},\mathbf{y}\in\mathcal{X}\), \[\langle F(\mathbf{x})-F(\mathbf{y}),\mathbf{x}-\mathbf{y}\rangle\geq\alpha|| \mathbf{x}-\mathbf{y}||_{2}^{2}.\]
**Definition 2.7** (Monotone Game).: A game \(\mathcal{G}\) is _monotone_ if its pseudo-gradient map is monotone.
A large part of our analysis will be in determining conditions under which the pseudo-gradient map is monotone. Upon doing so, we are able to employ the following results.
**Lemma 2.8** ((Melo, 2021)).: _Consider a game \(\mathcal{G}=(\mathcal{N},\mathcal{E},(u_{k},\mathcal{A}_{k})_{k\in\mathcal{N}})\) and for any \(T_{1},\dots,T_{N}>0\), let \(F\) be the pseudo-gradient map of \(\mathcal{G}^{H}\). \(\mathcal{G}\) has a unique QRE \(\overline{\mathbf{x}}\in\Delta\) if \(F\) is strongly monotone with any \(\alpha>0\)._
**Lemma 2.9** ((Hussain et al., 2023)).: _If the game \(G\) is monotone, then the Q-Learning Dynamics (QLD) converge to the unique QRE with any positive exploration rates \(T_{1},\dots,T_{N}>0\)._
## 3 Convergence of Q-Learning in Network Games
In this section we determine a sufficient condition under which Q-Learning converges to a unique QRE, which is given in terms of the exploration rate and the network game structure. To do this, we determine a sufficient condition on exploration rates \(T_{k}\) such that the perturbed game \(\mathcal{G}^{H}\) is strongly monotone. We find that this condition is dependent on the strength of pairwise interactions in the network, as well as its structure. We then compare our result to that of (Hussain et al., 2023) and show that, under suitable network structures, stability can be achieved with comparatively low exploration rates, even in the presence of many players. This also refines the result of (Sanders et al., 2018) which suggests that learning dynamics are increasingly unstable as the number of players increases, regardless of exploration rate.
To achieve our main result, we first parameterise pairwise interactions in a network game as follows.
**Definition 3.1** (Interaction Coefficient).: Let \(\mathcal{G}=(\mathcal{N},\mathcal{E},(u_{k},\mathcal{A}_{k})_{k\in\mathcal{N}})\) be a network game whose edgeset is associated with the payoff functions \((A^{kl},A^{lk})_{(k,l)\in\mathcal{E}}\). Then, the _interaction coefficient_\(\delta_{S}\) of \(\mathcal{G}\) is given as
\[\delta_{S}=\max_{(k,l)\in\mathcal{E}}\lVert A^{kl}+(A^{lk})^{\top}\rVert_{2}, \tag{3}\]
where \(\lVert M\rVert_{2}=\sup_{||\mathbf{x}||_{2}=1}\lVert M\mathbf{x}\rVert_{2}\) denotes the operator \(2\)-norm (Meiss, 2007).
**Theorem 3.2**.: _Consider a network game \(\mathcal{G}=(\mathcal{N},\mathcal{E},(u_{k},\mathcal{A}_{k})_{k\in\mathcal{N}})\) which has interaction coefficient \(\delta_{S}\) and adjacency matrix \(G\). The Q-Learning Dynamic converges to a unique QRE \(\overline{\mathbf{x}}\in\Delta\) if, for all agents \(k\in\mathcal{N}\),_
\[T_{k}>\frac{1}{2}\delta_{S}\left\lVert G\right\rVert_{\infty}, \tag{4}\]
_where \(\lVert M\rVert_{\infty}=\max_{i}\sum_{j}|[M]_{ij}|\) is the operator \(\infty\)-norm._
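The right-hand side of (4) is easy to evaluate numerically. The sketch below (our own, with an assumed data layout for the edge payoffs) computes \(\frac{1}{2}\delta_{S}\lVert G\rVert_{\infty}\):

```python
import numpy as np

def exploration_lower_bound(payoffs, G):
    """Compute (1/2) * delta_S * ||G||_inf from Theorem 3.2.
    payoffs maps each edge (k, l) to the pair (A_kl, A_lk); G is the adjacency matrix."""
    delta_S = max(np.linalg.norm(A_kl + A_lk.T, ord=2)   # operator 2-norm
                  for (A_kl, A_lk) in payoffs.values())
    G_inf = np.linalg.norm(G, ord=np.inf)                # max row sum = max degree
    return 0.5 * delta_S * G_inf
```

Any common exploration rate \(T\) strictly above the returned value satisfies (4) for every agent.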
We defer the full proof of Theorem 3.2 to the Appendix and illustrate the main ideas here. In order to apply Lemma 2.9, we must show that under (4), the perturbed game \(\mathcal{G}^{H}\) is monotone. To do this, we decompose \(\mathcal{G}^{H}\) into a term which is solely parameterised by exploration rates, and another term corresponding to the payoff matrices and graph structure. We then show that the second term can be bounded by \(\frac{1}{2}\delta_{S}\left\lVert G\right\rVert_{\infty}\), which allows us to separate terms involving the payoffs and the graph structure. We use the fact that the transformation between \(\mathcal{G}\) and \(\mathcal{G}^{H}\) is given by \(T_{k}\langle\mathbf{x}_{k},\ln\mathbf{x}_{k}\rangle\), which has a strongly monotone gradient (Melo, 2021) with constant \(T_{k}\). Then, if the exploration rates are high enough to offset \(\frac{1}{2}\delta_{S}\left\lVert G\right\rVert_{\infty}\), the resulting pseudo-gradient is monotone, and Lemma 2.9 can be applied.
The condition of Theorem 3.2 asserts that the Q-Learning dynamics converge in a network game given sufficient exploration. In a similar light to the result of (Hussain et al., 2023), the amount of exploration required depends on the strength of interaction. The main difference is that the condition includes a term \(\left\|G\right\|_{\infty}\) which encodes the network structure. This term has a natural interpretation as follows. Let \(\mathcal{N}_{k}=\{l\in\mathcal{N}:(k,l)\in\mathcal{E}\}\) be the _neighbours_ of agent \(k\), i.e. all the agents who interact with agent
\(k\) according to the network. Then \(\left\|G\right\|_{\infty}=\max_{k}\left|\mathcal{N}_{k}\right|\), which denotes the maximum number of neighbours across all agents.
A useful point about (4) is that it does not make any assumptions regarding the nature of the interaction between agents, but rather parameterises pairwise interactions by \(\delta_{S}\). As such, the result is not limited to restrictive settings such as _network zero-sum games_ (Leonardos et al., 2021). In fact, the convergence of Q-Learning dynamics in pairwise zero-sum games follows immediately from Theorem 3.2.
**Corollary 3.3**.: _If the network game \(\mathcal{G}\) is pairwise zero-sum, i.e., \(A^{kl}+(A^{lk})^{\top}=0\) for all \((k,l)\in\mathcal{E}\), then the \(Q\)-Learning dynamics converge to a unique QRE so long as the exploration rates \(T_{k}\) of all agents are strictly positive._
_Remark 3.4_.: Corollary 3.3 is supported by the result of (Leonardos et al., 2021; Hussain et al., 2023) in which it was shown that Q-Learning converges to a unique QRE in all network zero-sum games (even if they are not pairwise zero-sum) so long as all exploration rates \(T_{k}\) are positive.
**Discussion** The main takeaway from Theorem 3.2 is that the condition on sufficient exploration depends on \(\left\|G\right\|_{\infty}\), which is a measure of the network structure. In certain networks, such as the ring network depicted in Figure 1(a), \(\left\|G\right\|_{\infty}\) is independent of the number of agents in the system. Therefore, as the number of agents increases, the bound (4) does not increase. By contrast, in the fully connected network all agents are connected to each other, so \(\left\|G\right\|_{\infty}\) increases with the number of agents. This illustrates the main point that the variation in the stability boundary defined by (4) depends on the structure of the network rather than solely on the total number of agents, as previously found by (Hussain et al., 2023; Sanders et al., 2018). We illustrate this further in Figure 2, which plots the stability boundary defined in (Hussain et al., 2023) in various games (which we define in Section 4) as well as (4) for the ring and fully-connected networks. Here, it is clear that (4) is a tighter bound than that of (Hussain et al., 2023), particularly for the ring network in all games. The advantage of using (4) is most clear in the example of the Sato game, which in (Sato et al., 2002) was shown to display chaotic behaviour in the two-agent case when exploration rates are uniformly zero. In Figure 2 it can be seen that only a small amount of exploration is required to stabilise the system.
## 4 Experiments
In our experiments, we visualise and exemplify the implications of Theorem 3.2 on a number of games. In particular, we simulate the Q-Learning algorithm described in Section 2.2 and show that Q-Learning asymptotically approaches a unique QRE so long as the exploration rates are sufficiently large. We show, in particular, that the amount of exploration required depends on the structure of the network rather than the total number of agents.
_Remark 4.1_.: In our experiments, we take all agents \(k\) to have the same exploration rate \(T\) and so drop the \(k\) notation. As the bound (4) must hold for all agents \(k\), this assumption does not affect the generality of the results.
**Convergence of Q-Learning** We first illustrate the convergence of Q-Learning using two representative examples: the _Network Chakraborty Game_ and the _Mismatching Pennies Game_. The former was first analysed in (Pandit et al., 2018) to characterise chaos in learning dynamics. Formally, the payoff to each agent \(k\) is defined as
\[u_{k}(\mathbf{x}_{k},\mathbf{x}_{-k})=\mathbf{x}_{k}^{\top} \mathbf{Ax}_{l},\;l=k-1\mod N,\] \[A=\begin{pmatrix}1&\alpha\\ \beta&0\end{pmatrix},\;\alpha,\beta\in\mathbb{R}.\]
The latter was first analysed in (Kleinberg et al., 2011) in which it was shown that learning dynamics reach a cycle around the boundary of the simplex. Here, the payoffs to each agent are given by
\[u_{k}(\mathbf{x}_{k},\mathbf{x}_{-k})=\mathbf{x}_{k}^{\top} \mathbf{Ax}_{l},\;l=k-1\mod N,\] \[A=\begin{pmatrix}0&1\\ M&0\end{pmatrix},\;M\geq 1.\]
We visualise the trajectories generated by running Q-Learning in Figure 3 in both games for a three agent network and choosing \(\alpha=7,\beta=8.5,M=2\). It can be seen that, for low exploration rates, the dynamics reach a limit cycle around the boundary of the simplex. However, as exploration increases, the dynamics are eventually driven towards a fixed point for all initial conditions. The higher requirement on exploration in the Chakraborty Game as compared to the Mismatching Game can be seen as stemming from the higher \(\delta_{S}\approx 8.67\) in the former compared to \(\delta_{S}=2\) in the latter.
**Network Shapley Game** In the following example, each edge of the network game is associated with the same pair of matrices \(A,B\), where
\[A=\begin{pmatrix}1&0&\beta\\ \beta&1&0\\ 0&\beta&1\end{pmatrix},\;B=\begin{pmatrix}-\beta&1&0\\ 0&-\beta&1\\ 1&0&-\beta\end{pmatrix},\]
where \(\beta\in(0,1)\).
This has been analysed in the two-agent case in (Shapley, 2016), where it was shown that the _Fictitious Play_ learning dynamic does not converge to an equilibrium. (Hussain et al., 2023) analysed the network variant of this game for the case of a ring network and numerically showed that convergence can be achieved by Q-Learning through sufficient
exploration. In Figure 4 we examine both a fully connected network and a ring network with 15 agents. Figure 4 depicts the final 2500 iterations of learning for three agents and 35 initial conditions. It can be seen that, as exploration rates increase, Q-Learning is driven towards an equilibrium for all initial conditions. Importantly, the boundary at which equilibrium behaviour occurs is higher in the fully connected network, where \(\left\lVert G\right\rVert_{\infty}=14\), than in the ring network, where \(\left\lVert G\right\rVert_{\infty}=2\).
**Network Sato Game** We also analyse the behaviour of Q-Learning in a variant of the game introduced in (Sato et al., 2002), where it was shown that chaotic behaviour is exhibited by learning dynamics in the two-agent case. We extend this to a network game by associating each edge with the payoff matrices \(A,B\) given by
\[A=\begin{pmatrix}\epsilon_{X}&-1&1\\ 1&\epsilon_{X}&-1\\ -1&1&\epsilon_{X}\end{pmatrix},\,B=\begin{pmatrix}\epsilon_{Y}&-1&1\\ 1&\epsilon_{Y}&-1\\ -1&1&\epsilon_{Y}\end{pmatrix},\]
where \(\epsilon_{X},\epsilon_{Y}\in\mathbb{R}\). Notice that for \(\epsilon_{X}=\epsilon_{Y}=0\), this corresponds to the classic Rock-Paper-Scissors game, which is zero-sum, so that, by Corollary 3.3, Q-Learning will converge to an equilibrium with any positive exploration rates. We choose \(\epsilon_{X}=0.01,\epsilon_{Y}=-0.05\) in order to stay consistent with (Sato et al., 2002), which showed chaotic dynamics for this choice. The boxplot once again shows that sufficient exploration leads to convergence of all initial conditions. However, the amount of exploration required is significantly smaller than that of the Network Shapley Game. This can be seen as being due to the significantly lower interaction coefficient of the Sato game \(\delta_{S}=0.05\) as compared to the Shapley game \(\delta_{S}=2\).
**Stability Boundary** In these experiments we empirically determine the dependence of the stability boundary on the number of agents. For accurate comparison with Figure 2, we consider the Network Sato and Shapley Games in a fully-connected network, a star network and a ring network. We iterate Q-Learning for various values of \(T\) and determine whether the dynamics have converged. To evaluate convergence, we record the final 2500 iterations and check whether the relative difference between the maximum and minimum strategy components \(x_{ki}\) is less than some tolerance \(l\) for all agents \(k\), actions \(i\) and initial conditions. More formally, we aim to determine whether
\[\lim_{t\rightarrow\infty}\left(\frac{\max_{t}x_{ki}(t)-\min_{t}x_{ki}(t)}{\max _{t}x_{ki}(t)}\right)<l \tag{5}\]
holds for all \(k\in\mathcal{N}\) and all \(i\in\mathcal{A}_{k}\). In Figure 5 we plot the smallest exploration rate \(T\) for which (5) holds for varying choices of \(N\), using \(l=1\times 10^{-5}\). It can be seen that the prediction of (4) holds, in that the number of agents has no impact for the ring network, whereas the increase in the fully-connected network is linear in \(N\). In addition, it is clear that the stability boundary increases more slowly in the Sato game than in the Shapley game, owing to the smaller interaction coefficient.
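The convergence test in (5) can be evaluated directly on a recorded trajectory. The sketch below is our own illustration, assuming the strategies are stored as an array of shape (iterations, agents, actions); the function name and array layout are not from the paper.

```python
import numpy as np

def has_converged(trajectory, window=2500, tol=1e-5):
    """Check criterion (5) on the final `window` iterations of a Q-Learning run.

    trajectory: array of shape (iterations, agents, actions) with entries x_ki(t).
    Returns True if the relative gap between the largest and smallest value of
    every strategy component over the window is below `tol`.
    """
    tail = trajectory[-window:]
    hi = tail.max(axis=0)            # max_t x_ki(t) per agent k and action i
    lo = tail.min(axis=0)            # min_t x_ki(t)
    rel_gap = (hi - lo) / hi         # relative difference, as in (5)
    return bool(np.all(rel_gap < tol))
```

The smallest exploration rate \(T\) for which this returns True across all initial conditions is then the empirical stability boundary plotted in Figure 5.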
An additional point to note is that the stability boundary for the star network increases more slowly than for the fully-connected network in all games. We anticipate that this is due to the fact that the \(2\)-norm \(\|G\|_{2}\) in the star network is smaller than that of the fully-connected network (cf. Figure 1). We therefore conjecture that a tighter lower bound on exploration can be obtained using the \(2\)-norm, which we consider an important avenue for future work.
## 5 Conclusion
In this paper we show that the Q-Learning dynamics are guaranteed to converge in arbitrary network games, independent of any restrictive assumptions such as network zero-sum or potential structure. This allows us to make a general statement which applies across all network games.
In particular, our analysis shows that convergence of the Q-Learning dynamics can be achieved through sufficient exploration, where the bound depends on the pairwise interaction between agents and the structure of the network. Overall, compared to the literature, we are able to tighten the bound on sufficient exploration and show that, under certain network interactions, the bound does not increase with the total number of agents. This allows for stability to be guaranteed in network games with many players.

Figure 1: Examples of networks with five agents and associated \(\left\lVert G\right\rVert_{\infty}\) and \(\left\lVert G\right\rVert_{2}\).
A fruitful direction for future research would be to capture the effect of the payoffs through a tighter bound than the interaction coefficient and to explore further how properties of the network affect the bound. In addition, whilst there is still much to learn about the behaviour of Q-Learning in stateless games, the introduction of the state variable in the Q-update is a valuable next step.
## Acknowledgements
Aamal Hussain and Francesco Belardinelli are partly funded by the UKRI Centre for Doctoral Training in Safe and Trusted Artificial Intelligence (grant number EP/S023356/1). Dan Leonte acknowledges support from the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1). This research/project is supported in part by the National Research Foundation, Singapore and DSO National Laboratories under its AI Singapore Program (AISG Award No: AISG2-RP-2020-016), NRF 2018 Fellowship NRF-NRFF2018-07, NRF2019-NRF-ANR095 ALIAS grant, grant PIESGP-AI-2020-01, AME Programmatic Fund (Grant No.A20H6b0151) from the Agency for Science, Technology and Research (A*STAR) and Provost's Chair Professorship grant RGEPPV2101.
|
2303.07278
|
Adaptive Weight Assignment Scheme For Multi-task Learning
|
Deep learning based models are used regularly in every applications nowadays.
Generally we train a single model on a single task. However, we can train
multiple tasks on a single model under multi-task learning settings. This
provides us many benefits like lesser training time, training a single model
for multiple tasks, reducing overfitting, improving performances etc. To train
a model in multi-task learning settings we need to sum the loss values from
different tasks. In vanilla multi-task learning settings we assign equal
weights but since not all tasks are of similar difficulty we need to allocate
more weight to tasks which are more difficult. Also improper weight assignment
reduces the performance of the model. We propose a simple weight assignment
scheme in this paper which improves the performance of the model and puts more
emphasis on difficult tasks. We tested our methods performance on both image
and textual data and also compared performance against two popular weight
assignment methods. Empirical results suggest that our proposed method achieves
better results compared to other popular methods.
|
Aminul Huq, Mst Tasnim Pervin
|
2023-03-10T08:06:08Z
|
http://arxiv.org/abs/2303.07278v1
|
# Adaptive Weight Assignment Scheme For Multi-task Learning
###### Abstract
Deep learning based models are used regularly in almost every application nowadays. Generally we train a single model on a single task. However, we can train multiple tasks on a single model under a multi-task learning setting. This provides many benefits, such as shorter training time, a single model serving multiple tasks, reduced overfitting, and improved performance. To train a model in a multi-task learning setting we need to sum the loss values from the different tasks. In the vanilla multi-task learning setting we assign equal weights, but since not all tasks are of similar difficulty we need to allocate more weight to tasks which are more difficult. Improper weight assignment also reduces the performance of the model. We propose a simple weight assignment scheme in this paper which improves the performance of the model and puts more emphasis on difficult tasks. We tested our method's performance on both image and textual data and compared it against two popular weight assignment methods. Empirical results suggest that our proposed method achieves better results compared to other popular methods.
## 1 Introduction
Human beings have the capability to perform multiple tasks simultaneously without harming the performance of any of them. Humans do this regularly and are able to decide which tasks can be done at the same time. That is why in recent years a lot of focus has been put on multi-task learning using DNN methods. Generally, a single model is devoted to performing a single task. However, performing multiple tasks increases the performance of the model, reduces training time and overfitting [11]. Often we find small, insufficient datasets for individual tasks, but if the tasks are related somehow then we can use this shared information and build a large enough dataset, which reduces this problem. Current research in multi-task learning focuses on creating new DNN architectures for multi-task learning settings [12, 13], deciding which tasks should be learned together [14], and how to assign weights to the loss values [15, 16]. In this work we focus on a dynamic weight assignment technique which assigns different weights to the loss values in each epoch during training. We propose a new method for assigning weights to all loss values and test it on two datasets covering both the image and text domains. The contributions of our research work are listed below.
* We propose an intuitive loss weighting scheme for multi-task learning.
* We tested our method against both image and text domain by using two different dataset. We did this to ensure that our method performs well across all domains.
* We compared our method against two popular weight assigning schemes for comparing the performance of our method.
## 2 Research Method
In this section we first discuss previous research in this field and then present our proposed method.
### Literature Review
One of the earliest papers on multi-task learning is provided by R. Caruana [11]. In the manuscript, the author explored the idea of multi-task learning and showed its effectiveness on different datasets. The author also explained how multi-task learning works and how it can be used with backpropagation. To train a DNN in a multi-task learning setting we need to consider which layers of the network are shared among all the tasks and which layers are used for individual tasks. Previously, most of the research work has been focused on the concept of hard parameter sharing [17, 18, 19]. In this scenario, the user defines the shareable layers up to a particular point, after which all layers are assigned to each task individually. There is also the concept of soft parameter sharing, where a separate column exists for each task in the network and a special mechanism is designed to share parameters across the columns. Popular approaches for this method are Cross-stitch [13], Sluice [20] etc. A new approach named AdaShare has been proposed recently, where the model learns dynamically which layers to share across all tasks and which layers to use for single tasks [14]. The authors also proposed a new loss function which ensures the compactness of the model as well as its performance.
Weight assignment is a very crucial task in the field of multi-task learning. Previously, weights either had equal values or some hand-tuned values assigned by the researchers [18, 21, 22]. However, in scenarios where the multi-task learning model has to perform a large number of tasks, such approaches fall short. A method based on uncertainty was proposed by [15]. Later, a revised method of this approach was proposed by [12]. In this paper, the authors improved the previous uncertainty based method by adding a positive regularization term. The dynamic weight average method was proposed by [12]. In this method the authors calculated the relative change in loss values over the previous two epochs and used the softmax function on these values to get the weights. [23] performed a comparative study of different weight assignment schemes. However, they did not study these methods in any domain other than images. Also, the dataset they used had only 2 tasks.
### Adaptive Weight Assignment
Our proposed method is simple, and it takes into account the loss value of each task in each epoch. Compared to other methods, our method is easy to implement. Generally, to train a model in a multi-task learning setting we need to sum up all the loss values with their weights and then perform backpropagation to update the weights of the model. This summation of losses can be expressed as
\[\sum_{i=1,2,..n}W_{i}L_{i}=W_{1}L_{1}+W_{2}L_{2}+...+W_{n}L_{n}. \tag{1}\]
Here, \(W\) corresponds to the weight of the loss and \(L\) represents the loss for each task. In the vanilla multi-task learning setting all the weights are set to 1. However, we must keep in mind that not all tasks are the same: some are more difficult than others, so we need to assign more weight to difficult tasks to improve the performance of the overall multi-task learning system.
**Inputs:** Loss values \(L_{1},L_{2},\ldots,L_{n}\), total no. of tasks \(n\)
**Outputs:** Total loss
```
1: for \(t=1,2,\ldots,n\) do
2:   \(TemLoss\) += \(L_{t}\)
3: end for
4: for \(t=1,2,\ldots,n\) do
5:   \(weights_{t}\) = \(L_{t}/TemLoss\)
6:   \(TotalLoss\) += \(weights_{t}\times L_{t}\times n\)
7: end for
```
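The weighting rule above is straightforward to implement. The following is a minimal sketch in plain Python/numpy (our own illustration; in an actual training loop the losses would be framework tensors and the computed weights would typically be treated as constants when backpropagating):

```python
import numpy as np

def adaptive_total_loss(losses):
    """Adaptive weighting: each task's weight is its share of the summed loss,
    so currently harder tasks (larger losses) receive more emphasis.

    losses: per-task loss values L_1, ..., L_n for the current epoch.
    Returns the weighted total loss with w_t = L_t / sum(L), scaled by n.
    """
    losses = np.asarray(losses, dtype=float)
    n = len(losses)
    weights = losses / losses.sum()          # relative difficulty of each task
    return float(np.sum(weights * losses * n))

# Example: three tasks, the third currently hardest, so it dominates the total.
print(adaptive_total_loss([0.2, 0.5, 1.3]))
```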
### Dataset Description
We used two different datasets in our experiment: CIFAR-100 [24] and AGNews [25]. The former is image based and the latter is text based. Since these datasets are designed for single-task learning, we created artificial tasks for the multi-task learning setting. We created 5 different tasks from CIFAR-100 and 2 tasks from the AGNews dataset. All the tasks were created based on the original task labels, grouping different labels together to form multiple tasks. The tasks were created to ensure that no class imbalance exists for any task.
### Experimental Setup
We used two different DNN models for our experiment: wide resnet-28-10 (WRN) [26] for CIFAR-100 and a custom DNN for the AGNews dataset. We split the final layer of the WRN model into 5 output layers for CIFAR-100 and 2 output layers for AGNews. We trained the WRN model for 100 epochs using the SGD optimizer with a learning rate of 0.001, together with a one-cycle learning rate scheduler [27]. To train on the AGNews dataset we first tokenize the text and create a vocabulary dictionary based on it. We then embed the text, which becomes the input to the model. Our custom DNN consists of two fully connected layers and was also trained with the SGD optimizer. To ensure the effectiveness of our method, we compared it against two state-of-the-art methods, namely dynamic weight average (DWA) and the uncertainty method. We also compared against single-task learning and the vanilla multi-task setting.
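Splitting the final layer into per-task heads can be sketched as follows; this is our own illustrative PyTorch snippet, with layer sizes and names chosen for brevity rather than taken from the paper:

```python
import torch
import torch.nn as nn

class MultiHeadClassifier(nn.Module):
    """Shared trunk with one output head per task."""
    def __init__(self, in_dim, hidden_dim, task_classes):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # One linear head per task, e.g. task_classes = [2, 3, 4, 5, 100] for the CIFAR-100 tasks.
        self.heads = nn.ModuleList([nn.Linear(hidden_dim, c) for c in task_classes])

    def forward(self, x):
        h = self.trunk(x)
        return [head(h) for head in self.heads]

model = MultiHeadClassifier(in_dim=512, hidden_dim=256, task_classes=[2, 3, 4, 5, 100])
logits = model(torch.randn(8, 512))      # list of per-task logits for a batch of 8 inputs
print([tuple(t.shape) for t in logits])
```

The per-task losses computed from these heads are then combined using the adaptive weights described above.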
### Experimental Results
In this section we discuss the performance of our method on the two datasets. Tables 2 and 3 present the results of our overall experiment. We have plotted the testing loss curves for both the CIFAR-100 and AGNews datasets in Figure 2.
Table 2 shows the results of running experiments on CIFAR-100, which is an image dataset. At the beginning we have results for all five tasks in the single-task learning setting, that is, five different models were trained to obtain the results for these five tasks. Next, under the multi-task learning setting, we trained four methods for these tasks. In vanilla multi-task learning we assigned equal weights to each task in each epoch. The other methods, Uncertainty, DWA and ours, update the weights in each epoch. From this table we can see that our proposed method outperforms the other methods in three out of five tasks and achieves the second-best performance in the remaining two. We can also see that multi-task learning models performed better than STL models, while only a single model needed to be trained for all five of these tasks.
Figure 1: Flow diagram of our proposed method.
We also evaluate our method's performance on the AGNews dataset, which contains textual data. Here we have two tasks, and at the beginning we train two individual models for them. After that we train four multi-task learning models with different weight assignment schemes. We can observe from the table that our proposed method performs best on one task and achieves the second-best score on the other. Compared to the other popular methods, our proposed method performs much better. Looking closely at the values, we see that the other methods fail to achieve the best results; in some cases these approaches even fail to attain better performance than the single-task learning approach. We believe this is due to the fact that the model architecture has a big impact on the performance of multi-task learning settings. In our experiment we focused on a uniform DNN architecture for evaluation, but some tasks might need a few extra convolutional or fully connected layers. If we put further emphasis on the DNN architecture, the performance of our proposed method would be better in both tasks. We believe that a simpler approach should be taken when assigning weights: as this step is performed in each iteration, an overly parameterized and complex approach might hinder the performance of the model and increase time complexity.
## 4 Conclusion
Understanding and properly setting different hyper-parameters is extremely important for training a DNN model to the best results. Multi-task learning has the upper hand over single-task learning when it comes to the amount of data needed, the time to train the model, reducing overfitting and increasing model performance. In multi-task learning settings, since not all tasks are of equal difficulty, assigning weights to the loss values is important to put more emphasis on difficult tasks. In this paper, we propose a new weight assignment scheme which aids in improving the performance of the multi-task learning model. Our proposed method outperforms other state-of-the-art weight assignment schemes in both the image and text domains and boosts the performance of the model.

| Method | 2-Class | 3-Class | 4-Class | 5-Class | 100-Class |
| --- | --- | --- | --- | --- | --- |
| STL | 74.52 | 75.70 | 74.02 | **72.81** | **76.56** |
| MTL - Vanilla | 79.97 | 74.36 | 70.97 | 67.95 | 60.23 |
| MTL - Uncertainty | 69.47 | 59.52 | 55.42 | 50.21 | 34.91 |
| MTL - DWA | 80.33 | 74.57 | 71.37 | 68.41 | 60.40 |
| MTL - Ours | **81.68** | **77.01** | **74.41** | 72.07 | 66.81 |

Table 2: Accuracy (%) comparison of different methods on the CIFAR-100 classification tasks. Bold marks the best score.

Figure 2: Loss vs Epoch curve.
|
2305.03390
|
Evidence that PUBO outperforms QUBO when solving continuous optimization
problems with the QAOA
|
Quantum computing provides powerful algorithmic tools that have been shown to
outperform established classical solvers in specific optimization tasks. A core
step in solving optimization problems with known quantum algorithms such as the
Quantum Approximate Optimization Algorithm (QAOA) is the problem formulation.
While quantum optimization has historically centered around Quadratic
Unconstrained Optimization (QUBO) problems, recent studies show, that many
combinatorial problems such as the TSP can be solved more efficiently in their
native Polynomial Unconstrained Optimization (PUBO) forms. As many optimization
problems in practice also contain continuous variables, our contribution
investigates the performance of the QAOA in solving continuous optimization
problems when using PUBO and QUBO formulations. Our extensive evaluation on
suitable benchmark functions, shows that PUBO formulations generally yield
better results, while requiring less qubits. As the multi-qubit interactions
needed for the PUBO variant have to be decomposed using the hardware gates
available, i.e., currently single- and two-qubit gates, the circuit depth of
the PUBO approach outscales its QUBO alternative roughly linearly in the order
of the objective function. However, incorporating the planned addition of
native multi-qubit gates such as the global Molmer-Sorenson gate, our
experiments indicate that PUBO outperforms QUBO for higher order continuous
optimization problems in general.
|
Jonas Stein, Farbod Chamanian, Maximilian Zorn, Jonas Nüßlein, Sebastian Zielinski, Michael Kölle, Claudia Linnhoff-Popien
|
2023-05-05T09:37:48Z
|
http://arxiv.org/abs/2305.03390v1
|
# Evidence that PUBO outperforms QUBO when solving continuous optimization problems with the QAOA
###### Abstract.
Quantum computing provides powerful algorithmic tools that have been shown to outperform established classical solvers in specific optimization tasks. A core step in solving optimization problems with known quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) is the problem formulation. While quantum optimization has historically centered around Quadratic Unconstrained Optimization (QUBO) problems, recent studies show that many combinatorial problems such as the TSP can be solved more efficiently in their native Polynomial Unconstrained Optimization (PUBO) forms. As many optimization problems in practice also contain continuous variables, our contribution investigates the performance of the QAOA in solving continuous optimization problems when using PUBO and QUBO formulations. Our extensive evaluation on suitable benchmark functions shows that PUBO formulations generally yield better results, while requiring fewer qubits. As the multi-qubit interactions needed for the PUBO variant have to be decomposed using the hardware gates available, i.e., currently single- and two-qubit gates, the circuit depth of the PUBO approach outscales its QUBO alternative roughly linearly in the order of the objective function. However, incorporating the planned addition of native multi-qubit gates such as the global Mølmer-Sørensen gate, our experiments indicate that PUBO outperforms QUBO for higher order continuous optimization problems in general.
Quantum Computing, Continuous Optimization, QAOA, QUBO, PUBO
## 1. Introduction
Solving optimization problems is a central task in industries involving domains like production and logistics. Many of these problems concern scheduling, routing, packing and others, which are often NP-hard and thus demand for heuristic solvers. A particularly promising approach to solving such optimization problems is quantum computing, which has already shown results comparable to classical state-of-the-art methods for small problem sizes [1, 1, 2] despite current quantum hardware limitations. For a significant period of time, quantum optimization was driven by D-Wave System's Quantum Annealing devices, which are technically limited to solving problems written in Quadratic Unconstrained Binary Optimization (QUBO) form. This restriction was subsequently lifted in the Quantum Approximate Optimization Algorithm (QAOA) by Farhi et al., which essentially simulates the process of Quantum Annealing on a quantum gate computer and allows for additional generalization using the larger capabilities of a universal quantum computer [1].
One particularly powerful generalization of the QAOA is its ability to solve higher order polynomial problems, i.e., it can natively work with Polynomial Unconstrained Binary Optimization (PUBO) problems. Instead of having to quadratize a PUBO problem into QUBO form using ancillary qubits, as is necessary for D-Wave's Quantum Annealers, the needed multi-qubit interactions can be modelled using quantum gates (Nielsen and Chuang, 2010). While current quantum computers generally only support single- and two-qubit gates, trapped ion quantum computers, for example, are expected to implement multi-qubit gates such as the (global) Mølmer-Sørensen gate in the future1. Such gates will allow the execution of the qubit interactions necessary to model PUBO problems in constant time, without the currently needed decomposition into two- and single-qubit gates (Maslov and Nam, 2018), which scales linearly in the number of qubits involved.
Footnote 1: [https://ionq.com/docs/getting-started-with-native-gates](https://ionq.com/docs/getting-started-with-native-gates)
While some binary, combinatorial optimization problems like Max-Cut or Number Partitioning are formulated in terms of QUBO natively, modelling intrinsically non-binary problems like the TSP for QUBO requires special encoding techniques like the _one-hot encoding_, which increase the search space beyond what is necessary (Salehi et al., 2022). For problems like these, it has been shown that their PUBO versions generally outperform their QUBO analogues in terms of solution quality as well as the required number of optimization steps and QAOA iterations (Salehi et al., 2022; Tabi et al., 2020).
As many NP-hard problems such as scheduling or packing also involve continuous variables in higher order terms frequently in application (Floudas and Lin, 2005), we set out to compare the performance of PUBO and QUBO formulations for the QAOA on continuous optimization problems. Our two core contributions to this investigation are:
* an implementation of the QAOA capable of solving arbitrary polynomial optimization problems, that allows control over the used bit depth and the domains of the input variables, and
* an in-depth case-study evaluating the performance of PUBO and QUBO problem formulations on two established, continuous optimization benchmark functions.
This paper is structured into five sections. Following this introduction, we visit fundamental background knowledge necessary to comprehend our methodology in section 2. Section 3 subsequently contains a detailed description of the concept used to solve higher order continuous optimization problems with the QAOA. Finally, the established approach is applied to conduct the aspired evaluation in section 4 while concluding with a contextualization of the acquired results in section 5.
## 2. Background
In this section, we describe the overall functionality of the QAOA and its initial motivation to get an overview of all its components possibly influencing the evaluation results.
The QAOA is inspired by Adiabatic Quantum Computing (AQC), which is an alternative paradigm of quantum computing besides the omnipresent Quantum Gate Model (QGM). The main difference of AQC to the QGM resides in its time evolution being inherently continuous instead of iteratively applying discrete gates, as done in the QGM. Drawing upon the _adiabatic theorem_, which essentially states that a physical system stays in its instantaneous eigenstate whenever the time evolution applied to it happens slowly enough and if there is a gap between the corresponding eigenvalue and the rest of the Hamiltonian's spectrum (Born and Fock, 1928), an optimization algorithm can be formulated as:
1. Prepare an initial state \(\ket{\psi}\) that is the ground state of a known Hamiltonian \(\hat{H}_{M}\).
2. Identify a Hamiltonian \(\hat{H}_{C}\) modelling the objective function \(f:\{0,1\}^{n}\rightarrow\mathbb{R}\) where the eigenstates represent possible solutions to the input problem. The eigenvalues that correspond to the eigenstates embody the objective values of the respective solution.
3. Gradually evolve the initial state to the ground state of \(\hat{H}_{C}\) corresponding to the global optimum of \(f\) by applying the Hamiltonian \(\hat{H}(t)=(1-t)\,\hat{H}_{M}+t\hat{H}_{C}\).
The standard choice for the Hamiltonian \(\hat{H}_{M}\) is \(\hat{H}_{M}\coloneqq-\sum_{i=1}^{n}\sigma_{i}^{x}\), which has the easy to prepare ground state \(\ket{+}^{\otimes n}\), where \(\sigma_{i}^{x}\) denotes the tensor product of \(n-1\) identity matrices \(I\) with the Pauli operator \(\sigma_{x}\) at the \(i\)-th position. For \(\hat{H}_{C}\), a possible definition is \(\hat{H}_{C}\coloneqq\sum_{x\in\{0,1\}^{n}}f(x)\ket{x}\bra{x}\), as this trivially matches its requirements stated above.
While Quantum Annealers are built to execute the procedure described in item 3 for any given Ising Hamiltonian2 \(\hat{H}_{C}=\sum_{i}h_{i}\sigma_{i}^{z}+\sum_{i<j}J_{ij}\sigma_{i}^{z}\sigma_{j}^{z}\), discretization and Hamiltonian simulation techniques must be used to implement this time evolution in the QGM, which is the fundamental idea of the QAOA. The continuous time evolution of \(\hat{H}(t)\) is discretized by iteratively simulating the time evolution of the Hamiltonians \(\hat{H}(t_{k})\) with equidistant \(t_{k}\in[0,1]\) strictly increasing from \(0\) to \(1\) and \(k\in\{1,...,P\}\).
Footnote 2: Ising Hamiltonians represent the energy spectrum in a specific physical system. This system is described by an _Ising model_, which is a mathematical model of ferromagnetism in statistical mechanics. This Hamiltonian has the convenient property of being isomorphic to the NP-hard quadratic programming problem and hence naturally allows to model many interesting optimization problems with it.
To perfectly approximate the continuous time evolution in the limit \(P\rightarrow\infty\), each Hamiltonian \(\hat{H}(t_{k})\) is chosen to act for time \(\nicefrac{{1}}{{P}}\). However, especially for small \(P\), it is typically unclear how quickly the time evolution should progress at each intermediate Hamiltonian. In this context, it has proven useful to introduce parameters associated with the duration of their time evolution. These parameters can then be used to, i.a., satisfy the conditions of the adiabatic theorem, given that \(P\) is big enough. Notably, the concrete implementations proposed for this parameterization use independent parameters for both Hamiltonians: \(\gamma_{k}\in\mathbb{R}\) for the Hamiltonian \(\hat{H}_{C}\) and \(\beta_{k}\in\mathbb{R}\) for the Hamiltonian \(\hat{H}_{M}\). This allows for increased flexibility, especially in the regime of low \(P\). For the optimization of these parameters, many different approaches have been explored, foremost gradient based techniques like the parameter shift rule in combination with gradient descent (Mitarai et al., 2018), but also other heuristic approaches focused on yielding results very quickly, such as the COBYLA optimizer (Powell, 1994).
The QAOA algorithm can thus be understood as an algorithm that simulates the time evolution of the Hamiltonian \(\hat{H}(t)\) on gate-based quantum computers. It does so using parameters guiding the time evolution speed, as displayed in figure 1.
## 3. Concept
In this section, we show how the QAOA can be used to solve higher order continuous polynomial optimization problems. More specifically, we employ the following procedures:
1. Discretization of the objective function
2. Translating the objective function into a Hamiltonian
3. Implementing the Hamiltonian using quantum gates
### Discretization of the objective function
For discretizing a given objective function \(f:[a,b]\rightarrow\mathbb{R}\) with \(a<b\in\mathbb{R}\), we need to select a suitable bit encoding. For the sake of simplicity, we choose the _sign-magnitude_ representation which maps any integer to its native binary encoding while initially disregarding its sign, to then finally represent its sign using an extra bit at the start, e.g.: \(3_{10}\mapsto 0\,11_{2}\) and \(-3_{10}\mapsto 1\,11_{2}\). In addition to that simplification, we also restrict the possible domain spaces of each variable to be of the form \(]-2^{n},2^{n}[\) where \(n\in\mathbb{N}\), to alleviate needed precautions for intervals that are unbalanced or away from powers of two. This decision allows us to incorporate numbers beyond the whole numbers in a straightforward manner, i.e., by using standard floating point representation with a freely selectable bit resolution \(m\in\mathbb{N}\). The complete binary encoding of a given \(x\in]-2^{n},2^{n}[\) and bit resolution \(m\in\mathbb{N}\) can thus be described by the following approximation:
\[x\approx(2x_{0}-1)\left(\sum_{i=1}^{n}2^{n-i}x_{i}+\sum_{i=1}^{m}x_{n+i}2^{-i}\right) \tag{1}\]
As desired, this discretization leads to the bit string representation \(x\approx x_{0}\,x_{1}...x_{n},x_{n+1}...x_{n+m}\), so that, e.g., \(]-2^{2},2^{2}[\ni-2.75_{10}\mapsto 1\,10,110_{2}\) for a bit resolution of \(m=3\). Note however, that the borders of the domain space can only be approached by increasing the bit resolution \(m\), with every additional bit contributing an advancement of \(\nicefrac{{1}}{{2^{m+1}}}\). Using this bit encoding, we can also represent functions with higher dimensional input spaces by following the described substitution procedure for every dimension and then concatenating the resulting bit strings.
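As a concrete illustration, the following small Python sketch (our own, not from the paper) implements the discretization of Eq. (1); it follows the sign convention of Eq. (1), where \(x_{0}=1\) marks a non-negative value, and simply truncates values to the representable grid:

```python
def encode(x, n, m):
    """Sign-magnitude discretization per Eq. (1): one sign bit, n integer bits, m fractional bits."""
    sign = 1 if x >= 0 else 0
    mag = abs(x)
    bits = [sign]
    for i in range(1, n + 1):              # integer bits, most significant first
        bit = int(mag >= 2 ** (n - i))
        bits.append(bit)
        mag -= bit * 2 ** (n - i)
    for i in range(1, m + 1):              # fractional bits
        bit = int(mag >= 2 ** (-i))
        bits.append(bit)
        mag -= bit * 2 ** (-i)
    return bits

def decode(bits, n, m):
    """Inverse map of Eq. (1): x ≈ (2*x0 - 1) * magnitude."""
    magnitude = sum(b * 2 ** (n - i) for i, b in enumerate(bits[1:n + 1], start=1))
    magnitude += sum(b * 2 ** (-i) for i, b in enumerate(bits[n + 1:], start=1))
    return (2 * bits[0] - 1) * magnitude

print(decode(encode(-2.75, n=2, m=3), n=2, m=3))   # -> -2.75
```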
### Translating the objective function into a Hamiltonian
As described in section 2, there is a native mapping between binary functions \(f:\{0,1\}^{n}\rightarrow\mathbb{R}\) and Hamiltonians, i.e., \(\hat{H}_{\mathcal{C}}\coloneqq\sum_{x\in\{0,1\}^{n}}f(x)\ket{x}\bra{x}\). This method can be very inefficient however, if we only have access to \(f\) as a black box function, because the Hamiltonian can be comprised of exponentially many non-zero terms. Given that we have access to \(f\) in a white box manner, we can conduct this mapping much more efficiently, i.e., by substituting every \(x_{i}\in\{0,1\}\) with a \(s_{i}\in\{-1,1\}\) as in \(x_{i}\mapsto(s_{i}+1)/2\). In the case of \(f\) having interactions of degree higher than two in its input bits (e.g., a term like \(\alpha x_{0}x_{1}x_{2}\) with \(\alpha\in\mathbb{R}\)), inserting a suitable quadratization step is obligatory for the QUBO version. Typically this step is done before translating into the spin configuration domain \(\{-1,1\}\), by adding ancillary bits to the input space and a penalty term to the function \(f\), as exemplified in equation 2. For details on this quadratization step, we refer to the python package qubovert, which we used for this step in our implementation3. Notably, finding the optimal quadratization in terms of minimizing the number of needed ancillary qubits is NP-hard, as pointed out in (Boros and Hammer, 2002):
Footnote 3: [https://github.com/jtiosue/qubover](https://github.com/jtiosue/qubover)
\[f(x_{0},x_{1},x_{2}) =\alpha x_{0}x_{1}x_{2} \tag{2}\] \[\mapsto f(x_{0},x_{1},x_{2},z) =\alpha zx_{2}+2\alpha\left(x_{0}x_{1}-2\left(x_{0}+x_{1}\right)z+3z\right)\]
In order to translate the resulting function of spin configurations \(f^{\prime}:\{-1,1\}^{n}\rightarrow\mathbb{R}\) into a quantum mechanical Hamiltonian, we can simply substitute all spins \(s_{i}\) with Pauli operators using the trivial map \(s_{i}\mapsto\sigma_{i}^{z}\). (Farhi et al., 2014)
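The substitution in equation 2 can be checked by brute force over all binary assignments; the following small sketch (our own, with illustrative function names, since the paper itself delegates quadratization to qubovert) verifies that minimizing over the ancillary bit \(z\) recovers the original cubic term, assuming a positive coefficient \(\alpha\):

```python
from itertools import product

ALPHA = 1.0

def cubic(x0, x1, x2):
    """Original cubic term alpha * x0 * x1 * x2."""
    return ALPHA * x0 * x1 * x2

def quadratized(x0, x1, x2, z):
    """Quadratized form of equation 2: the ancilla z stands in for the product x0*x1,
    and the penalty term is minimal exactly when z = x0*x1."""
    return ALPHA * z * x2 + 2 * ALPHA * (x0 * x1 - 2 * (x0 + x1) * z + 3 * z)

for x0, x1, x2 in product((0, 1), repeat=3):
    assert min(quadratized(x0, x1, x2, z) for z in (0, 1)) == cubic(x0, x1, x2)
print("minimizing over z reproduces the cubic term on all 8 binary inputs")
```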
### Implementing the Hamiltonian using quantum gates
To implement the quantum circuit of the QAOA, we have to conduct Hamiltonian simulation of \(\hat{H}_{M}\) and \(\hat{H}_{C}\). While \(\hat{H}_{M}\) can easily be simulated using parameterized \(X\) gates, \(\hat{H}_{C}\) involves higher order terms (e.g., \(\alpha\sigma_{i}^{z}\sigma_{j}^{z}\sigma_{k}^{z}\) where \(\alpha\in\mathbb{R}\)) for the PUBO variant. As pointed out in (Glos et al., 2022), Hamiltonians of this form can be simulated using the generic architecture shown in figure 2, naturally expanding from the well-known quadratic case \(\alpha\sigma_{i}^{z}\sigma_{j}^{z}\). When having access to a suitable multi-qubit gate such as the (global) Mølmer-Sørensen gate, combining the information presented in figure 4.19 in (Nielsen and Chuang, 2010) and figure 5 from (Maslov and Nam, 2018), we can simulate arbitrary degrees of Pauli matrices using one extra ancillary qubit with an overhead of merely two extra circuit operations. As all terms in \(\hat{H}_{C}\) commute, the Hamiltonian simulation simplifies into a concatenation of the gates used to implement all terms in the sum notation of \(\hat{H}_{C}\), as exemplified in figure 3, concluding this section.
### Example
We now demonstrate how all described steps of transforming the objective function into the corresponding QAOA circuit can be done in practice using the following example:
\[f:\,]-2^{2},2^{2}[ \rightarrow\mathbb{R} \tag{3}\] \[x \mapsto x^{2}+2x \tag{4}\]
Choosing a zero bit resolution \(m=0\) for simplicity, the bit encoding is displayed in the following map:
\[x\mapsto\left(2x_{0}-1\right)\left(2^{1}x_{1}+2^{0}x_{2}\right). \tag{5}\]
Therefore, \(f\) can now be written in discretized form as follows:
\[f\left(x_{0},x_{1},x_{2}\right)= \left(\left(2x_{0}-1\right)\left(2^{1}x_{1}+2^{0}x_{2}\right) \right)^{2}\] \[+2\left(2x_{0}-1\right)\left(2^{1}x_{1}+2^{0}x_{2}\right)\] \[= 4\left(4x_{0}x_{1}+x_{0}x_{2}+x_{1}x_{2}\right)\]
This then translates to the spin configuration function \(f^{\prime}\) as described in equation 7 below.
\[f^{\prime}\left(s_{0},s_{1},s_{2}\right)= 4\left(4\frac{s_{0}+1}{2}\frac{s_{1}+1}{2}+\frac{s_{0}+1}{2}\frac{ s_{2}+1}{2}+\frac{s_{1}+1}{2}\frac{s_{2}+1}{2}\right)\] \[= 4\left(s_{0}s_{1}+s_{0}s_{2}+s_{1}s_{2}+2s_{0}+2s_{1}+2s_{2}+3\right) \tag{7}\]
Using the mapping from a spin configuration function to a quantum Hamiltonian as described in section 3.2, we get:
\[\hat{H}_{C}=4\left(\sigma_{0}^{z}\sigma_{1}^{z}+\sigma_{0}^{z}\sigma_{2}^{z}+\sigma_{1}^{z}\sigma_{2}^{z}+2\sigma_{0}^{z}+2\sigma_{1}^{z}+2\sigma_{2}^{z}+3I^{\otimes 3}\right) \tag{8}\]
Subsequently, we can use the combination of CNOT gates wrapping a parameterized rotation gate \(R_{z}(\theta)\) applied on the target qubit to construct the circuit simulating the Hamiltonian \(\hat{H}_{C}\), as indicated in figure 3.
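For readers who want to reproduce this construction, the following minimal Qiskit sketch (our own helper, not the authors' code) builds one QAOA iteration for the cost Hamiltonian of equation 8, dropping the global phase from the identity term; the parameter values are arbitrary placeholders:

```python
from qiskit import QuantumCircuit

def qaoa_layer(zz_terms, z_terms, gamma, beta, n_qubits=3):
    """One QAOA iteration (P=1) for a cost Hamiltonian built from Z and ZZ terms."""
    qc = QuantumCircuit(n_qubits)
    qc.h(range(n_qubits))                      # prepare |+>^n, the ground state of H_M
    for (i, j), coeff in zz_terms.items():     # simulate exp(-i * gamma * coeff * Z_i Z_j)
        qc.cx(i, j)
        qc.rz(2 * gamma * coeff, j)
        qc.cx(i, j)
    for i, coeff in z_terms.items():           # simulate exp(-i * gamma * coeff * Z_i)
        qc.rz(2 * gamma * coeff, i)
    for q in range(n_qubits):                  # mixer: exp(-i * beta * X_q) on every qubit
        qc.rx(2 * beta, q)
    return qc

# Coefficients taken from equation 8: ZZ terms with weight 4 and Z terms with weight 8.
circuit = qaoa_layer(zz_terms={(0, 1): 4, (0, 2): 4, (1, 2): 4},
                     z_terms={0: 8, 1: 8, 2: 8},
                     gamma=0.4, beta=0.7)
print(circuit.draw())
```

Higher order PUBO terms would extend the same pattern with longer CNOT ladders around the \(R_{z}\) rotation, as sketched in figure 2.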
## 4. Evaluation
To compare the performance of the QAOA for PUBO and QUBO formulations of higher order continuous optimization functions, we run experiments on two established benchmark functions (see figures 3(a) and 3(b)): the 1-Dimensional Styblinski-Tang function \(s(x)=(x^{4}-16x^{2}+5x)/2\) (denoted as 1D-ST) [21], and the 2-Dimensional Rosenbrock function \(r(x,y)=100\left(y-x^{2}\right)^{2}+(x-1)^{2}\) (denoted as 2D-Rb) [12]. These functions were chosen for their different requirements in terms of the number of qubits needed to model them (for details see figure 8) and their hardness4. Having to specify input domain spaces in which the search for the optimal value is to be conducted, we choose the interval \(]-4,4[\) for the 1D-ST function and \(]-4,4[^{2}\) for the 2D-Rb function. These domain spaces allow us to find the global optimum of each function and enable us to investigate many different bit resolutions while staying within reasonable simulation times of a couple of hours. More specifically, these input domains allow exploring bit resolutions of 0 to 3 for the 1D-ST function and 0 to 1 for the 2D-Rb function.
Footnote 4: According to the results from _Global Optimization Benchmarks_ and _AMPGO_ by Andrea Gavana, see [http://infinity77.net/global_optimization/index.html](http://infinity77.net/global_optimization/index.html)
In the following, we explore the performance differences between the PUBO and QUBO approaches in terms of three criteria:
1. The solution quality
2. The parameter training
3. The circuit width and depth
For all of the following experiments, we used Qiskit's qasm simulator, the COBYLA optimizer because of its short runtime, and 1024 shots as a standard for all circuit runs. In addition to that, we initialized all parameters using _ramp initialization_, as it consistently showed the best results in our experiments. Notably, the ramp initialization simply corresponds to choosing equidistantly spaced intervals for the discretized Hamiltonian simulation described in section 2. Furthermore, we conducted our studies for a very high number of QAOA iterations compared to related work, i.e., \(1\leq P\leq 40\), as this allows for a better performance estimation in terms of scaling.
### Solution quality
To evaluate the solution quality of both approaches (PUBO and QUBO), we now examine their performance at different bit resolutions and varying QAOA iterations \(P\) as exemplified in figure 5.
For figure 4(a), we chose to display a baseline result, i.e., the 1D-ST function at zero bit resolution, as this function is a QUBO problem by nature. With both plots showing very similar behavior, it becomes apparent that PUBO performs completely analogously to the QUBO for quadratic functions.
Examining figures 4(b) and 4(c), we can see that the PUBO approach consistently outperforms the QUBO approach for higher order functions, as the expected value, the median and the overall variance are significantly lower for PUBO. This becomes increasingly apparent for the harder 2D-Rb function displayed in figure 4(c), as we can see that the QUBO approach essentially plateaus for increasing \(P\), while the PUBO performance clearly benefits from higher \(P\). These results are especially promising if this trend does continue for higher \(P\), which is to be explored in future work.
While these plots merely display exemplified results, our full evaluation results clearly substantiate the trends visible in the selected plots.
Figure 1. The general form of the QAOA circuit.
Figure 2. Hamiltonian simulation of the components in the cost Hamiltonian \(\hat{H}_{C}\).
### Parameter Training
Following the recommendation of (Team, 2022), we select the COBYLA optimizer to train the QAOA parameters. This optimizer has a built-in stopping criterion, terminating the learning process when the last couple of optimization iterations did not increase the objective value above a specific threshold (in our case \(1e-4\)). To prevent this procedure from exceeding a reasonable execution time, the user can also specify a maximum number of iterations. We use this functionality by capping the number of optimization steps at 1000, relying on results from preliminary experiments that showed that almost no problem instances exceeded this number of optimization iterations. This allows us to compare the number of optimization steps between the PUBO and QUBO approaches unimpaired by this hyperparameter, as almost all parameter trainings run until completion.
Examining the number of optimization steps for different \(P\) shown in figure 6, it becomes clear that both approaches need roughly the same number of optimization steps. In general, we can also observe that the number of optimization steps for the Styblinski-Tang function is generally higher compared to the Rosenbrock function. We suspect this is caused by the flatter landscape of the Rosenbrock function leading to below-threshold training improvements sooner. In addition to that, we can observe that the QUBO approach has a tendency to decrease its ascent in training time earlier when the solution quality is worse than the PUBO (which is the case for the 1D-ST function at a bit resolution of three, as this function has a very similar plot to the one displayed in figure 5(b)).
When simulating quantum circuits using classical hardware, execution times play an important role, as they limit what can be learned about their properties, such as scaling behavior, using non-quantum hardware. As displayed in figure 7, the training time does not differ significantly if the function only involves a small number of higher order terms (see figure 6(a)), while execution time increases massively for the QUBO approach the more qubits are needed and the more higher order terms appear. Notably, that difference is mostly dominated by the number of qubits involved (13 for the QUBO formulation of 1D-ST at a bit resolution of three versus the 17 qubits needed for the QUBO formulation of the 2D-Rb function at a bit resolution of 1). This clearly demonstrates the advantage of PUBO for simulation on classical hardware, also allowing for deeper scaling analyses, which are very valuable in practice.

Figure 4. Visualizations of benchmark functions used for the evaluation.

Figure 3. QAOA circuit implementation using single-qubit and CNOT-gates for the example in section 3.4, showing \(P=1\) iterations.

Figure 5: Box plots showing the quality of the solutions found using the QAOA for the QUBO (blue) and PUBO (red) approaches for different numbers of QAOA iterations \(P\). The sought global minimum is \(0\) for the Rosenbrock function and \(-39.16599\) for the Styblinski-Tang function.
### Circuit width and depth
For the execution of the proposed approaches on real hardware, two criteria are essential: the circuit width (i.e., the number of qubits) and the circuit depth (i.e., the number of subsequent gate operations). Figure 8 exemplifies both using the 1D-ST function, as it allows for a bigger scaling analysis in terms of bit resolution.
Before comparing the number of needed qubits for both approaches, we recall that the number of required qubits is entirely determined by the bit depth and the dimensions of the input domain, according to our chosen discretization. For PUBO, we can easily calculate the number of required qubits by adding up the number of bits used to represent each dimension of the input domain. For QUBO, we can calculate this number by determining the number of ancillary qubits required as a result of the quadratization and adding it to the number of qubits required for the PUBO formulation. The number of ancillary qubits, however, relies heavily on the exact function and the techniques used for quadratization. We used a combination of different techniques based on the python package qubovert5 and boolean algebra simplifications. The resulting numbers of qubits for the 1D-ST function are displayed in figure 8. Comparing the PUBO and QUBO approaches, we can clearly see a higher number of needed qubits in the QUBO variant, which gradually increases with bit resolution, as cubic and quartic terms accumulate according to the chosen discretization, as described in section 3.1.
Continuing with the circuit depth (also displayed in figure 8), we can observe a clear disadvantage of the PUBO approach when executed on a device that does not offer a suitable gate set: while the QUBO's overall circuit depth at the highest complexities caps at less than 1400, the PUBO's overall depth reaches around 4000. The substantially higher circuit depth on current hardware raises an important potential drawback when deciding whether to use PUBO on current NISQ devices, where gate fidelity is a significant constraint. For future quantum computers implementing suitable multi-qubit gates, however, the scaling in terms of circuit depth would roughly equal that of the QUBO approach. Possibly, even fewer gates might be needed, as no interactions with ancillary qubits are required. Another promising observation is the easier use of classical circuit simulators for PUBO, as they can generally provide arbitrary gate sets and thus allow for even shorter circuits while also needing fewer qubits.
## 5. Conclusion
The conducted experiments clearly indicate that PUBO formulations achieve superior result quality over their quadratized QUBO analogues for continuous polynomial objective functions of a higher order. Until suitable multi-qubit gates become available, this manifests in a trade-off between the number of needed qubits (linearly higher for QUBO) and the circuit depth (linearly higher for PUBO). In terms of parameter training steps, both approaches performed equally. When using a quantum circuit simulator however, the wall-clock times for the PUBO formulations showed much better results, most probably because of the lower number of qubits that need to be simulated. For NISQ hardware, the performance difference is still mostly unclear and should be investigated in future work. We expect a strong dependence on the objective function, the input domain and bit resolution as well as their interplay with the error rates to be decisive. Finally, in the future we plan on exploring the combination of our findings with the existing positive results on using PUBO for combinatorial optimization problems to investigate the performance of PUBO formulations for NP-hard mixed integer problems.
###### Acknowledgements.
This work was partially funded by the German BMWK project _QCHALLenge_ (01MQ22008A). The authors want to thank Johannes Kolb for his contributions to this research.
Figure 6. Number of parameter training iterations for different numbers of QAOA iterations \(P\).
|
2303.09255
|
Security of discrete-modulated continuous-variable quantum key
distribution
|
Continuous variable quantum key distribution with discrete modulation has the
potential to provide information-theoretic security using widely available
optical elements and existing telecom infrastructure. While their
implementation is significantly simpler than that for protocols based on
Gaussian modulation, proving their finite-size security against coherent
attacks poses a challenge. In this work we prove finite-size security against
coherent attacks for a discrete-modulated quantum key distribution protocol
involving four coherent states and heterodyne detection. To do so, and contrary
to most of the existing schemes, we first discretize all the continuous
variables generated during the protocol. This allows us to use the entropy
accumulation theorem, a tool that has previously been used in the setting of
discrete variables, to construct the finite-size security proof. We then
compute the corresponding finite-key rates through semi-definite programming
and under a photon-number cutoff. Our analysis provides asymptotic rates in the
range of $0.1-10^{-4}$ bits per round for distances up to hundred kilometres,
while in the finite case and for realistic parameters, we get of the order of
$10$ Gbits of secret key after $n\sim10^{11}$ rounds and distances of few tens
of kilometres.
|
Stefan Bäuml, Carlos Pascual-García, Victoria Wright, Omar Fawzi, Antonio Acín
|
2023-03-16T12:14:07Z
|
http://arxiv.org/abs/2303.09255v4
|
# Security of discrete-modulated continuous-variable quantum key distribution
###### Abstract
Continuous variable quantum key distribution with discrete modulation has the potential to provide quantum physical security using widely available optical elements and existing telecom infrastructure. While their implementation is significantly simpler than that for protocols based on Gaussian modulation, proving their finite-size security against coherent attacks poses a challenge. In this work we apply the entropy accumulation theorem, a tool that has previously been used in the setting of discrete variables, to prove finite-size security against coherent attacks for a discrete-modulated quantum key distribution protocol involving four coherent states and heterodyne detection. To do so, and contrary to previous approaches, we consider a protocol in which all the information is discretized. We first bound its asymptotic rate under a realistic photon number cutoff assumption. This bound is then upgraded into a finite-size security proof using entropy accumulation. Our analysis provides non-trivial key rates for \(n=10^{12}\) rounds.
###### Contents
* I Introduction
* II Preliminaries
* II.1 Basic notations
* II.2 Security definition
* III The QKD protocol
* III.1 The hypothetical QKD protocol
* III.2 The physical QKD protocol
* IV Security of the QKD protocol
* IV.1 Soundness
* IV.1.1 Reduction to Collective Attacks via Entropy Accumulation
* IV.2 Completeness
* IV.3 The Min-Tradeoff Function
* IV.3.1 Removing the dependence on the \(\hat{E}\) subsystem
* IV.3.2 Finding an affine crossover min-tradeoff function
* IV.3.3 Optimisation of the crossover min-tradeoff function
* IV.4 Asymptotic Rates
* V Numerical implementation and results
* VI Discussion
* A Proof of Lemma 3
* B Proof of Lemma 4
* C Upper bounding the classical smooth max entropy
Introduction
Arguably one of the most technologically advanced applications of quantum information theory nowadays is quantum key distribution (QKD), which allows two honest parties, Alice and Bob, to obtain a cryptographic key, the security of which is guaranteed by the laws of quantum physics. Whereas QKD was originally conceived in a setting involving discrete variables [1; 2; 3], e.g. requiring the generation, or at least approximation, of states in a Hilbert space of finite dimension, there exists a number of protocols based on continuous variable systems, such as squeezed or coherent states [4; 5; 6; 7]. These protocols, known as continuous variable quantum key distribution (CVQKD), provide a number of advantages over discrete variable quantum key distribution (DVQKD) in terms of implementation using present day telecom infrastructure.
The security of DVQKD has been proven both in theory and in realistic implementations using diverse approaches, see for instance [8; 9; 10; 11; 12]. Different security proofs have also been provided for CVQKD, many of which make use of a particular feature of the protocol, namely that the quantum states sent from Alice to Bob are chosen according to a Gaussian distribution. Such protocols are also known as Gaussian modulated CVQKD protocols. An important ingredient when proving security of Gaussian modulated CVQKD against collective attacks is the extremality of Gaussian states [13]. Gaussian extremality implies that, for a given covariance matrix of Alice and Bob's system, the maximum over the Holevo quantity in the Devetak-Winter formula for the key rate [14], which involves an optimisation over Eve's full Fock space, is attained by the corresponding Gaussian state. Combining this with the fact that, in the case of Gaussian modulation, the covariance matrix of Alice and Bob's system can be directly computed from the observed statistics [15], security against collective attacks has been shown for Gaussian modulated coherent and squeezed states protocols involving both homodyne and heterodyne detection [16; 17; 18; 19]. Security against general attacks has been shown for protocols using coherent [19; 20; 21; 22] as well as squeezed states [23; 24]. The main tools that have been used are the de Finetti Theorem [20; 22], postselection techniques [21; 25] and entropic uncertainty relations [23; 24].
Unfortunately, the implementation of CVQKD protocols with Gaussian modulation faces a number of challenges: a Gaussian modulation is never achieved exactly in practice and is in fact typically approximated by a finite set of states. A discrete modulation therefore significantly simplifies not only the preparation of the states but also the error correction part, as much simpler reconciliation schemes can be used [26]. Discrete-modulated protocols involve Alice sending coherent states taken from a typically small set, e.g. containing two or four states, according to some distribution, to Bob, who then applies a homodyne or heterodyne measurement and discretises his outcome. Despite their simplicity, less is known about the security of such schemes.
The main challenge is that, unlike in the case of Gaussian modulation, the first and second moments of Alice and Bob's state are generally not sufficient to bound Eve's information, as one cannot invoke Gaussian extremality. Nevertheless, security has been shown in a number of scenarios [26; 27; 28; 29; 30; 31; 32; 33; 34; 35]. In [26], security was proven for a limited class of transmission channels. For a protocol using Gaussian modulation for parameter estimation and discrete modulation for key generation, which requires decoy states, security against collective attacks and general security in the asymptotic limit has been shown in [27]. The authors of [28] apply an optimisation over possible covariance matrices of Alice and Bob's state as well as a reduction to the Gaussian optimality method to show security against collective attacks in the asymptotic limit. Higher key rates, which are secure against collective attacks in the asymptotic limit, are obtained by [30], which uses an optimisation over all possible density matrices of Alice and Bob's state that are compatible with the observed statistics, without invoking the arguments of Gaussian optimality, but using a cutoff assumption that limits the number of photons in the state.
In the setting of collective Gaussian attacks, the security of discrete modulated coherent state protocols with heterodyne detection, for any number of coherent states, has also been proven in the finite-size regime [32]. Finally, finite-size security against general attacks has been shown for a protocol involving a discrete modulation using two coherent states, as well as a combination of homodyne and heterodyne detection, applied in signal and test rounds, respectively [33; 35].
In this work, we consider a protocol involving a discrete modulation using four coherent states, also known as 4-PSK protocol, and heterodyne detection, which is closely related to the protocol presented in [30]. The main difference with respect to previous approaches is that all the information generated by the protocol, for key generation and parameter estimation, is discretised. This allows us to prove security against general attacks, as well as finite block sizes, using the entropy accumulation theorem (EAT) [36; 37], which has previously been used to prove the security of device independent quantum key distribution against general attacks [38; 39; 40].
The EAT is a powerful tool that allows one to lower bound the conditional smooth min-entropy, a quantity that quantifies the amount of secret key obtainable from a (generally unstructured) classical-quantum (cq) state by means of privacy amplification using hash functions [11]. This is in fact the relevant situation in QKD protocols, since a cq-state is produced in which Alice and Bob hold classical information, resulting in our case from Alice's preparation and Bob's measurements, whereas Eve's system remains quantum. The EAT requires the cq-state to be the result of a sequence of maps, known as EAT channels, each of which provides classical outputs and side information, while also passing on a quantum system to the next map. The lower bound on the conditional smooth min-entropy is in terms of a so-called 'min-tradeoff function', mapping the observed statistics of classical outputs of the EAT channels to a real number which cannot exceed the single-round conditional von Neumann entropy of any of the EAT channels.
A major challenge when applying the EAT in security proofs for QKD is that the EAT channels need to fulfill a Markov condition, ensuring that in each round, given all past-side information, there are no new correlations between previous outcomes and the new side information. As information used for parameter estimation is obtained from measurements by Alice and Bob on systems which Eve could potentially have correlated in a way incompatible with the Markov condition, the EAT cannot be applied to the QKD protocol directly. Rather, a hypothetical EAT process is introduced which produces the same marginal states on the subsystems relevant to the security proof, and the smooth min-entropy of the QKD protocol is lower bounded using a combination of chain rules, as well as a min-tradeoff function corresponding to the EAT process [38; 39; 40].
As was recently pointed out by the authors of [41], another issue arises when applying the EAT in device-dependent prepare-and-measure protocols. Such protocols can be translated into entanglement-based protocols, where Alice, instead of randomly sending states, prepares an entangled state, part of which is sent to Bob via an insecure channel, while the remaining part is kept in Alice's lab. Alice and Bob then perform measurements on their respective parts. The issue which arises is that the statistics obtained from the measurements are not sufficient to certify that the state between Alice and Bob is entangled, requiring additional constraints on Alice's marginal in the final key rate optimisation, which are incompatible with the EAT. We overcome this issue by adding an additional tomography performed by Alice in randomly chosen rounds, thus ensuring that Alice and Bob's measurement statistics are sufficient to certify entanglement between Alice and Bob.
Having overcome these challenges, we are able to derive a min-tradeoff function using the numerical approach presented in [42], which, as mentioned above, requires a photon-number cutoff assumption. It involves a linearisation of the objective function and the use of duality, finally reducing the problem to a semi-definite programming optimisation, which can be efficiently handled numerically. Our numerical analysis also suggests that the values of the key rate do not significantly vary once the cutoff becomes large enough. Using this approach, we are able to obtain non-trivial key rates in the finite-size setting of \(n=10^{12}\) rounds.
After most of the work that went into this result was completed, a generalised version of the EAT was presented [43; 44], offering an alternative way of overcoming the challenges, mentioned in the previous two paragraphs, that arise when proving the security of device-dependent prepare-and-measure protocols. In another recent result, the authors of [31] have overcome the photon number cutoff assumption on Bob's state needed to compute the min-tradeoff function that defines the asymptotic rates by means of adding an additional energy test, as well as a dimension reduction technique presented in [45]. Their proof also works in the finite setting, albeit only against collective attacks.
## II Preliminaries
### Basic notations
In this section we introduce some definitions and concepts we use throughout the paper. For a Hilbert space \(\mathcal{H}_{A}\), we denote by \(\mathcal{D}(\mathcal{H}_{A})\) the set of density operators, i.e. positive semidefinite operators with unit trace, \(\rho_{A}\), acting on quantum system \(A\). Sometimes it will be convenient to consider subnormalised states, i.e. states with \(\mathrm{Tr}[\rho]\leq 1\), in which case we use the notation \(\mathcal{D}_{\leq}(\mathcal{H}_{A})\). The notation \(\mathcal{H}_{AB}\) denotes a tensor product Hilbert space \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), and \(\rho_{AB}\) the corresponding bipartite density operator. Classical random variables \(X\), taking values \(\{x\}\) according to the distribution \(\{p_{x}\}\), can be expressed as density operators as \(\rho_{X}=\sum_{x}p_{x}\left|x\right\rangle\left\langle x\right|_{X}\). By \(XY\) we denote the Cartesian product of random variables \(X\) and \(Y\). Further, we will be using the notation \(A_{1}^{n}=A_{1}A_{2}...A_{n}\) and \(X_{1}^{n}=X_{1}X_{2}...X_{n}\) for quantum and classical systems. We express \(\mathrm{cq}\) states using the notation \(\rho_{XA}=\sum_{x}p_{x}\left|x\right\rangle\left\langle x\right|_{X}\otimes \rho_{A}^{x}\). For a \(\mathrm{cq}\) state \(\rho_{CQ}=\sum_{c}p(c)\left|c\right\rangle\left\langle c\right|\otimes\rho_{c}\), an event \(\Omega\) is defined as a subset of the elements \(\{c\}\). The conditional state is then given by \(\rho_{CQ}|_{\Omega}=\frac{1}{\mathrm{Pr}_{\rho}[\Omega]}\sum_{c\in\Omega}p(c) \left|c\right\rangle\left\langle c\right|\otimes\rho_{c}\), where \(\mathrm{Pr}_{\rho}[\Omega]:=\sum_{c\in\Omega}p(c)\). When the state \(\rho\) is clear from the context, we use \(\mathrm{Pr}[\Omega]\) in place of \(\mathrm{Pr}_{\rho}[\Omega]\).
For two subnormalised states \(\rho,\sigma\in\mathcal{D}_{\leq}(\mathcal{H}_{A})\), we define the generalised fidelity as
\[F(\rho,\sigma)=\left(\mathrm{Tr}\left|\sqrt{\rho}\sqrt{\sigma}\right|+\sqrt{( 1-\mathrm{Tr}[\rho])(1-\mathrm{Tr}[\sigma])}\right)^{2}, \tag{1}\]
the generalised trace distance as
\[\Delta(\rho,\sigma)=\frac{1}{2}\|\rho-\sigma\|_{1}+\frac{1}{2}\left|\mathrm{ Tr}[\rho-\sigma]\right|, \tag{2}\]
as well as the purified distance
\[P(\rho,\sigma)=\sqrt{1-F(\rho,\sigma)}. \tag{3}\]
The generalised trace distance and the purified distance are metrics on \(\mathcal{D}_{\leq}(\mathcal{H}_{A})\). They are related by the Fuchs van de Graaf inequality
\[\Delta(\rho,\sigma)\leq P(\rho,\sigma)\leq\sqrt{2\Delta(\rho,\sigma)-\Delta( \rho,\sigma)^{2}}\leq\sqrt{2\Delta(\rho,\sigma)}. \tag{4}\]
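For concreteness, the three distance measures above can be evaluated numerically as in the following sketch, which assumes density matrices given as (possibly subnormalised) Hermitian NumPy arrays and uses SciPy for the matrix square root.

```python
import numpy as np
from scipy.linalg import sqrtm

def gen_fidelity(rho, sigma):
    """Generalised fidelity of two subnormalised states, eq. (1)."""
    overlap = np.trace(sqrtm(sqrtm(rho) @ sigma @ sqrtm(rho))).real  # Tr|sqrt(rho) sqrt(sigma)|
    slack = np.sqrt(max(0.0, (1 - np.trace(rho).real) * (1 - np.trace(sigma).real)))
    return (overlap + slack) ** 2

def gen_trace_distance(rho, sigma):
    """Generalised trace distance, eq. (2)."""
    diff = rho - sigma
    eigs = np.linalg.eigvalsh((diff + diff.conj().T) / 2)   # diff is Hermitian
    return 0.5 * np.abs(eigs).sum() + 0.5 * abs(np.trace(diff).real)

def purified_distance(rho, sigma):
    """Purified distance, eq. (3)."""
    return np.sqrt(max(0.0, 1.0 - gen_fidelity(rho, sigma)))
```

The Fuchs-van de Graaf relation of eq. (4) can be sanity-checked numerically with these three functions on random test states.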
In this work we make use of a number of entropic quantities. In addition to the well known von Neumann entropy \(H(\rho)=-\operatorname{Tr}[\rho\log\rho]\), the conditional von Neumann entropy \(H(A|B)_{\rho}=H(AB)_{\rho}-H(B)_{\rho}\), as well as the Umegaki relative entropy,
\[D(\rho||\sigma)=\begin{cases}\frac{1}{\operatorname{Tr}[\rho]} \operatorname{Tr}\left[\rho(\log\rho-\log\sigma)\right]&\text{ if }\operatorname{supp}(\rho)\subset\operatorname{supp}(\sigma)\\ \infty&\text{ otherwise,}\end{cases} \tag{5}\]
for positive semidefinite \(\rho\) and \(\sigma\), we make use of \(\min\) and \(\max\) conditional entropies, defined for a subnormalised quantum state \(\rho_{AB}\in\mathcal{D}_{\leq}(\mathcal{H}_{AB})\) by [46],
\[H_{\min}(A|B)_{\rho} =\sup_{\sigma_{B}\in\mathcal{D}_{\leq}(\mathcal{H}_{B})}\sup \left\{\lambda\in\mathbb{R}:\rho_{AB}\leq\exp(-\lambda)\mathbb{1}_{A}\otimes \sigma_{B}\right\}, \tag{6}\] \[H_{\max}(A|B)_{\rho} =\max_{\sigma_{B}\in\mathcal{D}_{\leq}(\mathcal{H}_{B})}\log F \left(\rho_{AB},\mathbb{1}_{A}\otimes\sigma_{B}\right). \tag{7}\]
For \(\epsilon\geq 0\), we can then define the smooth \(\min\) and \(\max\) entropies as [46],
\[H_{\min}^{\epsilon}(A|B)_{\rho} =\max_{\bar{\rho}\in\mathcal{B}^{\epsilon}(\rho_{AB})}H_{\min}(A| B)_{\bar{\rho}}, \tag{8}\] \[H_{\max}^{\epsilon}(A|B)_{\rho} =\min_{\bar{\rho}\in\mathcal{B}^{\epsilon}(\rho_{AB})}H_{\max}(A| B)_{\bar{\rho}}, \tag{9}\]
where \(\mathcal{B}^{\epsilon}(\rho_{A})\) is the \(\epsilon\)-ball around a state \(\rho_{A}\) in terms of purified distance, i.e. the set of subnormalised states \(\tau\in\mathcal{D}_{\leq}(\mathcal{H}_{A})\) such that \(P(\tau,\rho)\leq\epsilon\). For parameter \(a\in(1,2)\), let us further define the sandwiched Renyi divergence [47; 48] for a quantum state \(\rho\) and positive semidefinite \(\sigma\) as
\[D_{a}(\rho||\sigma)=\begin{cases}\frac{1}{a-1}\log\operatorname{Tr}\left[ \left(\sigma^{-\frac{a-1}{2a}}\rho\sigma^{-\frac{a-1}{2a}}\right)^{a}\right]& \text{ if }\operatorname{supp}(\rho)\subset\operatorname{supp}(\sigma)\\ \infty&\text{ otherwise}\end{cases} \tag{10}\]
and the conditional Renyi entropy as
\[H_{a}^{\dagger}(A|B)_{\rho}=\inf_{\sigma_{B}\in\mathcal{D}_{\leq}(\mathcal{H} _{B})}D_{a}(\rho_{AB}||\mathbb{1}_{A}\otimes\sigma_{B}). \tag{11}\]
### Security definition
When two parties, Alice and Bob, wish to communicate in perfect secrecy in the presence of a quantum eavesdropper Eve, they need to perform a QKD protocol, typically consisting of \(n\) rounds of quantum communication and local measurements, followed by classical post-processing steps involving parameter estimation, error correction and privacy amplification. An instance of a QKD protocol may be aborted if certain tests included in the protocol, such as parameter estimation, fail, or if a subprotocol, such as error correction, aborts. If the protocol does not abort, the goal is to obtain a state close to a so-called perfect ccq state of the form \(\rho_{K_{A}K_{B}E}^{\text{perfect ccq}}=\frac{1}{d}\sum_{x=0}^{d-1}\left|xx \right\rangle\left\langle xx\right|_{K_{A}K_{B}}\otimes\rho_{E}\), where Alice and Bob's systems are classical, whereas Eve's system may be quantum. Such a state corresponds to \(\log d\) bits of an ideal classical key between Alice and Bob which is secret in that it is completely uncorrelated from Eve, even if Eve is allowed to possess a quantum system, and correct in the sense that Alice and Bob's systems are perfectly classically correlated.
A proof of security of a QKD protocol then involves two parts: Firstly, it has to be shown that it results in a state that is secret and correct, i.e. close to a perfect ccq-state. Formally, for \(\epsilon^{\text{sou}}>0\), a QKD protocol is said to be \(\epsilon^{\text{sou}}\)_-sound_, if it results in a state \(\rho_{K_{A}K_{B}E}^{\text{QKD}}\), such that if we condition on the event \(\Omega_{\text{NonAbort}}\) of not aborting the protocol it holds
\[\operatorname{Pr}_{\rho^{\text{QKD}}}[\Omega_{\text{NonAbort}}] \frac{1}{2}\left\|\rho_{K_{A}K_{B}E}^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}- \rho_{K_{A}K_{B}E}^{\text{perfect ccq}}\right\|_{1}\leq\epsilon^{\text{sou}}. \tag{12}\]
As we wish to treat the error correction protocol separately from the remaining protocol, it is convenient to split the soundness property into a secrecy and a correctness part. Namely, let \(\epsilon^{\text{sec}}>0\) and \(\epsilon^{\text{cor}}>0\). A QKD protocol is said to be \(\epsilon^{\text{sec}}\)_-secret_ if
\[\Pr_{\rho^{\text{QKD}}}[\Omega_{\text{NonAbort}}]\frac{1}{2}\left\|\rho_{K_{A}E}^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}-\rho_{K_{A}E}^{\text{perfect ccq}}\right\|_{1}\leq\epsilon^{\text{sec}}. \tag{13}\]
The protocol is further said to be \(\epsilon^{\text{cor}}\)_-correct_ if
\[\Pr_{\rho^{\text{QKD}}}[K_{A}\neq K_{B}\wedge\Omega_{\text{NonAbort}}]\leq \epsilon^{\text{cor}}. \tag{14}\]
If the protocol is both \(\epsilon^{\text{sec}}\)-secret and \(\epsilon^{\text{cor}}\)-correct, it is \(\epsilon^{\text{sec}}+\epsilon^{\text{cor}}\)-sound. The second part of a security proof is to show completeness, meaning that there is an honest implementation, i.e. an implementation without the presence of Eve, that succeeds, i.e. does not abort, with high probability. Formally, for \(\epsilon^{\text{com}}>0\), we say that a QKD protocol is \(\epsilon^{\text{com}}\)_-complete_, if
\[1-\Pr_{\text{hon}}[\Omega_{\text{NonAbort}}]\leq\epsilon^{\text{com}}, \tag{15}\]
where the subscript hon refers to the fact that we compute the probability with respect to the honest implementation specified by the protocol.
## III The QKD protocol
The QKD protocol we consider is based on the 4-PSK protocol using heterodyne detection described in [30]. However, we perform a discretisation of Bob's measurement outputs in both key generation and parameter estimation rounds rather than just key rounds. Our protocol also differs from the one presented in [30] in that we do not include post-selection. In each round of the protocol, Alice prepares one of four coherent states \(\ket{\varphi_{x}}\), with \(\varphi_{x}\in\{\alpha,-\alpha,i\alpha,-i\alpha\}\) for some predetermined \(\alpha\in\mathbb{R}\), each with probability \(\frac{1}{4}\). The state is then sent to Bob via a noisy channel that is potentially compromised by Eve. Bob then performs a heterodyne measurement.
We will prove security using an equivalent entanglement-based QKD protocol. Such a protocol can be defined by the source replacement scheme [3, 15, 49, 50]. Namely, in each round \(i=1,...,n\), Alice prepares an independent copy of the pure state
\[\ket{\psi}_{AA^{\prime}}=\frac{1}{2}\sum_{x=0}^{3}\ket{x}_{A}\ket{\varphi_{x} }_{A^{\prime}}, \tag{16}\]
where \(\varphi_{x}\in\{\alpha,-\alpha,i\alpha,-i\alpha\}\). Alice sends the \(A^{\prime}\) subsystem to Bob via a noisy quantum channel, keeping the \(A\) subsystem. Alice and Bob then both perform measurements on their respective subsystems.
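As an aside, the entangled state of eq. (16) is straightforward to construct numerically once a photon-number cutoff is imposed, which is also how the later numerical analysis represents the optical mode. The sketch below assumes the ordering \(\varphi_{x}=i^{x}\alpha\) (the paper only fixes the set of amplitudes, not their ordering) and uses NumPy.

```python
import numpy as np
from math import factorial

def coherent_state(alpha, n_cut):
    """Coherent state |alpha> truncated to n_cut photons and renormalised."""
    amps = np.array([alpha**n / np.sqrt(factorial(n)) for n in range(n_cut + 1)],
                    dtype=complex) * np.exp(-abs(alpha)**2 / 2)
    return amps / np.linalg.norm(amps)

def source_replacement_state(alpha, n_cut):
    """|psi>_{AA'} of eq. (16) with phi_x = i^x alpha, on C^4 (x) C^(n_cut+1)."""
    basis_A = np.eye(4, dtype=complex)
    return sum(0.5 * np.kron(basis_A[x], coherent_state(1j**x * alpha, n_cut))
               for x in range(4))
```

For \(|\alpha|\) of order one, a cutoff of 10-20 photons already captures essentially all of the weight of each coherent state, so the truncation error is negligible in such a sketch.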
Whereas this kind of introduction of an entanglement-based protocol is commonly used when proving security of prepare-and-measure protocols, we face an additional challenge when combining this approach with the EAT [41]. Namely, unlike in a device-independent setting, the statistics obtained from Alice's and Bob's measurements, even in an honest implementation, are not sufficient to certify entanglement of the state in eq. (16). In fact, since in the entanglement-based version Alice only implements a single measurement, in the computational basis, the statistics produced by the protocol can equally be explained by the separable state
\[\rho_{AA^{\prime}}=\frac{1}{4}\sum_{x=0}^{3}\ket{x}\bra{x}_{A}\otimes\ket{ \varphi_{x}}\bra{\varphi_{x}}_{A^{\prime}}. \tag{17}\]
This is why to derive a positive secret-key rate, one includes a constraint on the marginal of Alice's state in the optimisation for the key rate [30]. Namely, the marginal is required to take the form
\[\rho_{A}=\frac{1}{4}\sum_{x,y=0}^{3}\bra{\varphi_{y}}\ket{\varphi_{x}}\ket{x} \bra{y}_{A}, \tag{18}\]
which is not satisfied by the separable state (17). The challenge then is to express such a constraint in terms of a distribution obtained by statistical analysis, which is required when applying the EAT. We overcome this challenge by considering a hypothetical version of our protocol, where, in randomly chosen rounds, Alice performs tomographic measurements of her marginal on \(A\) and, in the end, verifies whether the obtained statistics are compatible with her
marginal being equal to eq. (18). If this is not the case, the protocol is aborted. As the data obtained in the tomography rounds are not used for key generation, the key rate obtained in this hypothetical protocol is never larger than the key rate obtained in the physically implemented protocol, where Alice performs no tomography. Also, as we are in a device-dependent setting, where we can assume Alice's state preparation to be perfect, the only scenario under which the hypothetical protocol aborts after the tomography test is an imperfect tomography estimate due to finite statistics, the probability of which becomes negligible for large enough \(n\). In the following, we use the term 'hypothetical QKD protocol' when we consider the protocol including tomography and 'physical QKD protocol' when referring to the protocol that is actually performed by Alice and Bob, which does not include tomography.
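Continuing the sketch above, the target marginal of eq. (18), against which the tomography statistics are benchmarked, is simply the rescaled Gram matrix of the four coherent states and can be computed as follows (again under the illustrative ordering \(\varphi_{x}=i^{x}\alpha\)).

```python
def alice_marginal(alpha, n_cut):
    """rho_A of eq. (18): entries <phi_y|phi_x>/4 in the basis {|x>}_{x=0..3}."""
    kets = [coherent_state(1j**x * alpha, n_cut) for x in range(4)]
    return np.array([[np.vdot(kets[y], kets[x]) for y in range(4)]
                     for x in range(4)]) / 4
```

In contrast, the separable state of eq. (17) has the maximally mixed marginal \(\mathbb{1}/4\); it is precisely the non-zero off-diagonal overlaps \(\bra{\varphi_{y}}\ket{\varphi_{x}}\) that the tomography test certifies.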
When the state of eq. (16) is sent from Alice to Bob, we assume that Eve can attack the channel used to send the \(A^{\prime}\) subsystem coherently. This is equivalent to a scenario where Alice initially prepares all \(n\) independent and identically distributed (iid) copies of the state (16), which are then acted upon by a channel \(\mathcal{N}_{A^{\prime n}_{1}\to B^{n}_{1}}\). Let \(\mathcal{U}^{\mathcal{N}}_{A^{\prime n}\to B^{n}E}\) be an isometric extension of the channel and let us define
\[\ket{\Psi}_{A^{n}_{1}B^{n}_{1}E}=\mathrm{id}_{A^{n}_{1}}\otimes\mathcal{U}^{\mathcal{N}}_{A^{\prime n}_{1}\to B^{n}_{1}E}\ket{\psi}^{\otimes n}_{A^{n}_{1}A^{\prime n}_{1}}. \tag{19}\]
It has to be assumed that the \(E\) subsystem goes to Eve. Alice and Bob are left with the mixed state
\[\rho_{A^{n}_{1}B^{n}_{1}}=\mathrm{Tr}_{E}[\Psi_{A^{n}_{1}B^{n}_{1}E}]. \tag{20}\]
### The hypothetical QKD protocol
We now describe a round of the hypothetical QKD protocol in detail. Let \(0\leq p^{\mathrm{key}}\leq 1\), \(0\leq p^{\mathrm{PE}}\leq 1\) and \(0\leq p^{\mathrm{tom}}\leq 1\), where \(p^{\mathrm{key}}+p^{\mathrm{PE}}+p^{\mathrm{tom}}=1\), be the respective probabilities for a given round being used for key generation, parameter estimation and tomography of Alice's marginal. For each round \(i=1...n\), Alice and Bob perform the following steps:
(1) _Alice's Measurement:_ Alice uses a random number generator to create a random variable \(R_{i}\), taking values \(R_{i}=0,1,2\) with respective probabilities \(p^{\mathrm{key}}\), \(p^{\mathrm{PE}}\) and \(p^{\mathrm{tom}}\). If \(R_{i}=0\), the round is used for key generation. For \(R_{i}=1\), the round is employed for parameter estimation. In both cases Alice performs a projective measurement \(\{\ket{x}\bra{x}\}_{x=0}^{3}\) on subsystem \(A_{i}\). If \(R_{i}=2\), Alice performs a tomography, using an informationally complete (IC) measurement defined by a Positive-Operator-Valued-Measure (POVM) \(\{\Gamma_{x^{\prime}}\}_{x^{\prime}=0}^{15}\) on her subsystem. The outcome of Alice's measurement is described by a random variable \(X_{i}\), taking values \(x_{i}\). We define, for the sake of convenience, the random variables
\[\hat{X}_{i} =\begin{cases}x_{i}\text{ if }R_{i}=0,\\ \perp\text{ else.}\end{cases} \tag{21}\] \[\tilde{X}_{i} =\begin{cases}x_{i}\text{ if }R_{i}=1,\\ \perp\text{ else.}\end{cases} \tag{22}\] \[X^{\prime}_{i} =\begin{cases}x_{i}\text{ if }R_{i}=2,\\ \perp\text{ else.}\end{cases} \tag{23}\]
The random variable \(R_{i}\) is then sent to Bob via an authenticated channel.
(2) _Bob's Measurement:_ Bob performs a heterodyne measurement on subsystem \(B_{i}\). From the outcome, Bob obtains a continuous random variable \(Y_{i}\), taking values \(y_{i}\in\mathbb{C}\). Again, it will be convenient to define
\[\hat{Y}_{i} =\begin{cases}y_{i}\text{ if }R_{i}=0,\\ \perp\text{ else.}\end{cases} \tag{24}\] \[\tilde{Y}_{i} =\begin{cases}y_{i}\text{ if }R_{i}=1,\\ \perp\text{ else.}\end{cases} \tag{25}\]
(3) _Discretisation:_ Bob discretises his heterodyne outcomes. For key rounds, let \(\hat{y}_{i}=|\hat{y}_{i}|e^{i\hat{\theta}_{i}}\) for \(\hat{\theta}_{i}\in[-\frac{\pi}{4},\frac{7\pi}{4})\). Bob
then creates a random variable
\[\hat{Z}_{i}=\begin{cases}0\text{ if }\hat{\theta}_{i}\in[-\frac{\pi}{4},\frac{\pi}{4}) \\ 1\text{ if }\hat{\theta}_{i}\in[\frac{\pi}{4},\frac{3\pi}{4})\\ 2\text{ if }\hat{\theta}_{i}\in[\frac{3\pi}{4},\frac{5\pi}{4})\\ 3\text{ if }\hat{\theta}_{i}\in[\frac{5\pi}{4},\frac{7\pi}{4})\\ \perp\text{ else},\end{cases} \tag{26}\]
where \(\hat{Z}_{i}=\perp\) is taken for non-key rounds. For parameter estimation rounds, Bob defines a discretisation given by an amplitude \(\Delta\) and modules of length \(\delta\), such that \(\Delta/\delta\in\mathbb{N}\). Let \(j\in\{0,1,2,3\}\) and \(k\in\{0,\ldots,\frac{\Delta}{\delta}-1\}\) and let \(\tilde{y}_{i}=|\tilde{y}_{i}|e^{i\tilde{\theta}_{i}}\). Bob then creates a random variable \(\tilde{Z}_{i}\) according to
\[\tilde{Z}_{i}=\begin{cases}j+4k\text{ if }\tilde{\theta}_{i}\in[\frac{\pi}{4}(2 j-1),\frac{\pi}{4}(2j+1))\wedge|\tilde{y}_{i}|\in[\delta k,\delta(k+1)),\\ j+4\frac{\Delta}{\delta}\text{ if }\tilde{\theta}_{i}\in[\frac{\pi}{4}(2j-1), \frac{\pi}{4}(2j+1))\wedge|\tilde{y}_{i}|\in[\Delta,\infty),\\ \perp\text{ else}.\end{cases} \tag{27}\]
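The two discretisation maps of eqs. (26) and (27) amount to binning the heterodyne outcome by phase (key rounds) and by phase and amplitude (parameter-estimation rounds). A sketch in Python could look as follows; the symbol \(\perp\) assigned in rounds where a map does not apply is simply not produced here, since each function is only called in the relevant round type.

```python
import numpy as np

def key_bin(y):
    """Key-round discretisation of eq. (26): the angular sector of y,
    with sector 0 centred on the positive real axis."""
    theta = np.angle(y)                                   # in (-pi, pi]
    return int(np.floor((theta + np.pi / 4) / (np.pi / 2))) % 4

def pe_bin(y, delta, Delta):
    """Parameter-estimation discretisation of eq. (27): Delta/delta radial
    modules per angular sector plus one unbounded outer module."""
    m = round(Delta / delta)                              # Delta/delta must be an integer
    j = key_bin(y)                                        # same angular sectors as key rounds
    if abs(y) >= Delta:
        return j + 4 * m                                  # outermost module, |y| in [Delta, inf)
    return j + 4 * int(abs(y) // delta)                   # radial module k = floor(|y|/delta)
```

For instance, with \(\Delta/\delta=2\) as in Figure 1, `pe_bin` returns values in \(\{0,\dots,11\}\), i.e. \(S=4\Delta/\delta+4=12\) symbols per parameter-estimation round.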
Summarising steps (1) - (3), round \(i\) of the protocol has taken as input the quantum systems \(A_{i}B_{i}\) of the initial state (20) and has created discrete classical random variables \(\hat{X}_{i}\) and \(\hat{Z}_{i}\) for key generation, \(\tilde{X}_{i}\) and \(\tilde{Z}_{i}\) to be used for parameter estimation, as well as \(X_{i}^{\prime}\) to be used for tomography of Alice's marginal state. Let us define \(O_{i}:=\tilde{X}_{i}X_{i}^{\prime}\hat{Z}_{i}\tilde{Z}_{i}\) as the 'output' and \(S_{i}:=R_{i}\) as the 'side information'. Let us further define \(C_{i}=\tilde{X}_{i}\tilde{Z}_{i}X_{i}^{\prime}\) as all the information used in statistical analysis. The reason we define \(O_{i}\), \(S_{i}\) and \(C_{i}\) in this way is that we will later use these random variables when applying the EAT. The EAT requires the statistical analysis variable \(C_{i}\) to be obtainable from a simple read-out of the 'output' and 'side-information' variables \(O_{i}\) and \(S_{i}\). On the other hand, we cannot include \(\tilde{X}_{i}\), \(\hat{Z}_{i}\) or \(X_{i}^{\prime}\) into \(S_{i}\) because of the Markov condition, eq. (54). We therefore have to include them into \(O_{i}\), despite the fact that \(\tilde{X}_{i}\) has to be communicated classically, and treat \(\tilde{X}_{i}\) as additional side information when applying Proposition 1.
Any round \(i\) of the protocol can then be described by a channel
\[\mathcal{M}^{\text{QKD}}:A_{i}B_{i}\rightarrow\hat{X}_{i}O_{i}S_{i}C_{i}. \tag{28}\]
After \(n\) rounds, the relevant systems of Alice, Bob and Eve are in the state
\[\sigma^{\text{QKD}}_{\hat{X}_{1}^{n}O_{1}^{n}C_{1}^{n}S_{1}^{n}E}=\text{id}_{E}\otimes\mathcal{M}^{\text{QKD}\otimes n}\left(\Psi_{A_{1}^{n}B_{1}^{n}E}\right) \tag{29}\]
The next step is to perform parameter estimation. To that purpose, Alice sends \(\tilde{X}_{1}^{n}\) to Bob, who then performs the parameter estimation protocol, deciding whether the protocol gets aborted or not. Further, Alice uses \({X_{1}^{\prime}}^{n}\), obtained from her tomographic measurements, to reconstruct her marginal state. If the reconstructed state is not equal to the expected one up to a certain margin of confidence, Alice informs Bob of her decision and the protocol is aborted.

Figure 1: Discretisations of phase space by Bob for parameter estimation rounds (left) and key generation rounds (right). In this figure, the modulation for parameter estimation in phases and amplitudes is given by \(\Delta/\delta=2\), with the outermost modules extending to infinity.
In order to formalise the decision to abort, we need to introduce some notation. Let us denote by \(\mathcal{C}\) the alphabet of all possible values of \(c_{i}=(\tilde{x}_{i},\tilde{z}_{i},x^{\prime}_{i})\) that can occur in the protocol. Such values are \(c_{i}=(\bot,\bot,\bot)\) in key rounds, \((x,z,\bot)\) with \(x\in\{0,...,3\}\) and \(z\in\{0,...,S-1\}\), where \(S=4\Delta/\delta+4\), in parameter estimation rounds, as well as \((\bot,\bot,x^{\prime})\) with \(x^{\prime}\in\{0,...,15\}\) in tomography rounds. It will also be convenient to define by \(\tilde{\mathcal{C}}\) the alphabet of all possible values \(c_{i}\) can take in parameter estimation and tomography rounds, only. For a given string \(c_{1}^{n}\in\mathcal{C}^{n}\), we denote by \(\mathrm{freq}_{c_{1}^{n}}\in\mathcal{P}_{\mathcal{C}}\) the probability distribution corresponding to the frequency of symbols \(c\in\mathcal{C}\) in \(c_{1}^{n}\), defined by \(\mathrm{freq}_{c_{1}^{n}}(c)=|\{i:c_{i}=c\}|/n\).
In order to decide whether or not to abort, Alice and Bob need to benchmark their obtained statistics, given by a frequency distribution \(\mathrm{freq}_{c_{1}^{n}}\), against a distribution \(p_{0}\in\mathcal{P}_{\mathcal{C}}\), which can be obtained in an honest implementation of the protocol. Let \(p_{0}^{\mathrm{sim}}\) be the distribution of parameter estimation random variables \((\tilde{X},\tilde{Z})\) in the honest setting with no attack and \(p_{0}^{\mathrm{tom}}\) be the distribution of the tomography random variable \(X^{\prime}\). Now, we define
\[p_{0}(x,z,\bot)=p^{\mathrm{PE}}p_{0}^{\mathrm{sim}}(x,z), \tag{30}\] \[p_{0}(\bot,\bot,x^{\prime})=p^{\mathrm{tom}}p_{0}^{\mathrm{tom} }(x^{\prime}),\] (31) \[p_{0}(\bot,\bot,\bot)=1-\sum_{xz}p_{0}(x,z,\bot)-\sum_{x^{\prime }}p_{0}(\bot,\bot,x^{\prime}), \tag{32}\]
for \(x\in\{0,...,3\}\), \(z\in\{0,...,S-1\}\), and \(x^{\prime}\in\{0,...,15\}\). We will provide an explicit form of the \(p_{0}^{\mathrm{sim}}\) and \(p_{0}^{\mathrm{tom}}\) we use in Section V.
In order to compare the two distributions \(\mathrm{freq}_{c_{1}^{n}}\) and \(p_{0}\in\mathcal{P}_{\mathcal{C}}\), we need to introduce figures of merit, which quantify the suitability of the distributions for key generation. For now, let us only assume that these figures of merit are given by affine functions \(f^{\mathrm{PE}}:\mathcal{P}_{\mathcal{C}}\rightarrow\mathbb{R}\) and \(f^{\mathrm{tom}}:\mathcal{P}_{\mathcal{C}}\rightarrow\mathbb{R}\) of the form
\[f^{\mathrm{PE}}(p)=\sum_{x=0}^{3}\sum_{z=0}^{S-1}h_{x,z,\bot}p(x,z,\bot), \tag{33}\] \[f^{\mathrm{tom}}(p)=\sum_{x^{\prime}=0}^{15}h_{\bot,\bot,x^{ \prime}}p(\bot,\bot,x^{\prime}), \tag{34}\]
for some coefficients \(h_{x,z,\bot},h_{\bot,\bot,x^{\prime}}\in\mathbb{R}\). We will provide an explicit form of the functions later. We then define the respective sets of distributions for which we do not abort after parameter estimation or tomography as
\[\mathcal{P}_{\Omega_{\mathrm{PE}}}:=\left\{p\in\mathcal{P}_{ \mathcal{C}}:f^{\mathrm{PE}}(p)\geq f^{\mathrm{PE}}(p_{0})-\delta_{\mathrm{ PE}}^{\mathrm{tol}}\right\}, \tag{35}\] \[\mathcal{P}_{\Omega_{\mathrm{tom}}}:=\left\{p\in\mathcal{P}_{ \mathcal{C}}:f^{\mathrm{tom}}(p)\geq f^{\mathrm{tom}}(p_{0})-\delta_{\mathrm{ tom}}^{\mathrm{tol}}\right\}, \tag{36}\]
for some \(\delta_{\mathrm{PE}}^{\mathrm{tol}},\delta_{\mathrm{tom}}^{\mathrm{tol}}>0\). Let us also define \(\delta^{\mathrm{tol}}=\delta_{\mathrm{PE}}^{\mathrm{tol}}+\delta_{\mathrm{ tom}}^{\mathrm{tol}}\), as well as the events of passing the parameter estimation and the tomography test as
\[\Omega_{\mathrm{PE}}:=\left\{c_{1}^{n}\in\mathcal{C}:\mathrm{freq }_{c_{1}^{n}}\in\mathcal{P}_{\Omega_{\mathrm{PE}}}\right\}, \tag{37}\] \[\Omega_{\mathrm{tom}}:=\left\{c_{1}^{n}\in\mathcal{C}:\mathrm{freq }_{c_{1}^{n}}\in\mathcal{P}_{\Omega_{\mathrm{tom}}}\right\},\] (38) \[\Omega_{\mathrm{EA}}=\Omega_{\mathrm{PE}}\cap\Omega_{\mathrm{tom}}. \tag{39}\]
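In other words, the acceptance decision reduces to computing an empirical frequency vector and evaluating two affine functionals on it. A minimal sketch is given below, with outcomes encoded as tuples \((\tilde{x},\tilde{z},x^{\prime})\) and the coefficients \(h\) supplied as dictionaries; the explicit coefficients used in the protocol are only specified later in the paper.

```python
def frequency(outcomes):
    """freq_{c_1^n}(c) = |{i : c_i = c}| / n for a list of per-round outcomes c_i."""
    n = len(outcomes)
    freq = {}
    for c in outcomes:
        freq[c] = freq.get(c, 0.0) + 1.0 / n
    return freq

def affine_f(p, h):
    """Affine figure of merit sum_c h[c] * p(c), as in eqs. (33)-(34)."""
    return sum(coeff * p.get(c, 0.0) for c, coeff in h.items())

def accept(freq, p0, h, delta_tol):
    """Test of eqs. (35)-(36): accept iff f(freq) >= f(p0) - delta_tol."""
    return affine_f(freq, h) >= affine_f(p0, h) - delta_tol
```

The event \(\Omega_{\mathrm{EA}}\) of eq. (39) then corresponds to `accept` returning `True` for both the parameter-estimation and the tomography coefficients.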
Assuming Alice and Bob do not abort after parameter estimation or tomography, they perform an error correction protocol using reverse reconciliation. The information exchanged between Alice and Bob in this step is denoted \(L\), and at the end of this step Alice computes a string \(\bar{X}_{1}^{n}\). In order to check that the error correction was successful, Bob chooses a random hash function \(H\) and sends to Alice a description of \(H\) as well as the value \(H^{\prime}=H(\hat{Z}_{1}^{n})\). Whenever \(H(\hat{Z}_{1}^{n})\neq H(\bar{X}_{1}^{n})\), the protocol is aborted. Let us denote by \(H\) and \(H^{\prime}\) the registers containing the description and the value of the hash function, respectively. Formally, we define the event of passing the error correction step as
\[\Omega_{\mathrm{EC}}=\big{[}H(\hat{Z}_{1}^{n})=H(\bar{X}_{1}^{n})\big{]}. \tag{40}\]
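The protocol does not fix a particular hash family; one standard choice satisfying the \(\epsilon_{\mathrm{EC}}\)-collision property discussed below is a random binary Toeplitz matrix, sketched here for illustration (with \(\epsilon_{\mathrm{EC}}=2^{-t}\) for a \(t\)-bit hash).

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_hash(bits, t, seed):
    """t-bit 2-universal hash of a bit string via a random binary Toeplitz matrix."""
    rng = np.random.default_rng(seed)
    n = len(bits)
    col = rng.integers(0, 2, size=t)                                   # first column
    row = np.concatenate(([col[0]], rng.integers(0, 2, size=n - 1)))   # first row
    T = toeplitz(col, row)                                             # t x n binary matrix
    return tuple((T @ np.asarray(bits, dtype=int)) % 2)

# Bob publishes (seed, toeplitz_hash(Z_hat, t, seed)); Alice aborts if her corrected
# string X_bar hashes to a different value. Distinct strings collide with
# probability at most 2**-t over the choice of seed.
```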
We assume that there is a small probability, upper bounded by \(\epsilon_{\mathrm{EC}}>0\), of the error correction being passed by mistake. For any \(\hat{z}_{1}^{n}\neq\bar{x}_{1}^{n}\), \(\mathrm{Pr}[H(\hat{z}_{1}^{n})=H(\bar{x}_{1}^{n})]\leq\epsilon_{\mathrm{EC}}\), where \(\mathrm{Pr}\) here is over the choice of \(H\). Further we
assume that the probability of not passing the error correction in an honest implementation is upper bounded by \(\Pr_{\mathrm{hon}}\left[H(\hat{Z}_{1}^{n})\neq H(\bar{X}_{1}^{n})\right]\leq \epsilon_{\mathrm{EC}}^{\epsilon}\), for some \(\epsilon_{\mathrm{EC}}^{\epsilon}>0\). Finally, we define the event of not aborting the protocol after either parameter estimation, tomography or error correction as
\[\Omega_{\mathrm{NonAbort}}=\Omega_{\mathrm{EA}}\cap\Omega_{\mathrm{EC}}. \tag{41}\]
We note that \(\Omega_{\mathrm{EA}}\) only depends on the \(C_{1}^{n}\) registers, whereas \(\Omega_{\mathrm{EC}}\) depends on the \(HH^{\prime}\bar{X}_{1}^{n}\) registers. The description of the protocol together with an attack of Eve leads to a state
\[\sigma^{\mathrm{QKD}}_{\hat{X}_{1}^{n}O_{1}^{n}\bar{X}_{1}^{n}C_{1}^{n}HH^{\prime}LE}=\Pr[\Omega_{\mathrm{NonAbort}}]\,\sigma^{\mathrm{QKD}}_{\hat{X}_{1}^{n}O_{1}^{n}\bar{X}_{1}^{n}C_{1}^{n}HH^{\prime}LE}|_{\Omega_{\mathrm{NonAbort}}}+(1-\Pr[\Omega_{\mathrm{NonAbort}}])\,\sigma^{\mathrm{QKD}}_{\hat{X}_{1}^{n}O_{1}^{n}\bar{X}_{1}^{n}C_{1}^{n}HH^{\prime}LE}|_{\neg\Omega_{\mathrm{NonAbort}}}.\]
### The physical QKD protocol
Finally, let us describe the physical QKD protocol, which is actually performed in the lab. The physical QKD protocol is essentially equal to the hypothetical protocol except for the following two differences: Firstly, in step (1) of the protocol, if \(R_{i}=2\), Alice does not perform tomography. The round is simply discarded. Hence, the random variable \(X_{i}^{\prime}\) will be either \(\perp\) or undefined. Secondly, at the end of the protocol, abortion or non-abortion will only be determined by parameter estimation and error correction, i.e. the event \(\Omega_{\mathrm{NonAbort}}\) will be replaced by \(\Omega_{\mathrm{NonAbort}}^{\mathrm{phys}}=\Omega_{\mathrm{PE}}\cap\Omega_{ \mathrm{EC}}\).
In principle, a more efficient physical protocol could be obtained if no round were discarded, that is, if \(p^{\mathrm{tom}}=0\). We nevertheless use the same probabilities \(p^{\mathrm{key}}\), \(p^{\mathrm{PE}}\) and \(p^{\mathrm{tom}}\) in both protocols, because this makes their comparison much simpler. It is worth noting, however, that the value we choose for \(p^{\mathrm{tom}}\) below is very small, so the possible impact on the key rate is not significant.
## IV Security of the QKD protocol
In this section we show the security against coherent attacks of the physical QKD protocol. The proof consists of two parts: Firstly, in Subsection IV.1 we show the soundness of the protocol and provide a lower bound on the key rate. We first show the soundness of the hypothetical protocol, which we then show implies the soundness of the physical protocol. The soundness proof of the hypothetical protocol is based on the entropy accumulation theorem and depends on the choice of a min-tradeoff function of a particular form. Secondly, in Subsection IV.2 we show the completeness of the physical QKD protocol, i.e. that an honest implementation does not abort except with small probability. Finally, in Subsection IV.3, we show how a suitable min-tradeoff function can be derived from the numerical approach presented by [30; 42].
### Soundness
In this section we provide a lower bound on the achievable key rate \(r^{\mathrm{phys}}=\ell/n\), where \(\ell\) is the length of the key and \(n\) the number of rounds, conditioned on the event \(\Omega_{\mathrm{NonAbort}}^{\mathrm{phys}}\) of not aborting the physical QKD protocol. Such a lower bound can be obtained from the following Proposition.
**Proposition 1**: _[_11; 46_]_ _Let \(\epsilon^{\mathrm{phys}},\epsilon_{\mathrm{EC}}\geq 0\). Let further \(\mathrm{leak}_{\mathrm{EC}}\) be the amount of information lost to Eve during error correction. Then Alice and Bob are able to extract a key of length \(\ell\) satisfying_
\[\ell\leq H_{\mathrm{min}}^{\epsilon^{\mathrm{phys}}}(\hat{Z}_{1}^{n}|S_{1}^{n} \bar{X}_{1}^{n}E)_{\sigma^{\mathrm{phys},\mathrm{QKD}}|_{\Omega_{\mathrm{ NonAbort}}^{\mathrm{phys}}}}-\mathrm{leak}_{\mathrm{EC}}-2\log\frac{1}{ \epsilon^{\mathrm{phys}}}, \tag{42}\]
_which is \(3\epsilon^{\mathrm{phys}}+\epsilon_{\mathrm{EC}}\)-sound, in the sense that \(\Pr_{\sigma^{\mathrm{phys},\mathrm{QKD}}}[\Omega_{\mathrm{NonAbort}}^{\mathrm{ phys}}]\frac{1}{2}\|\sigma^{\mathrm{phys},\mathrm{QKD}}\|_{\Omega_{\mathrm{ NonAbort}}^{\mathrm{phys}}}-\sigma^{\mathrm{perfect~{}ccq}}\|_{1}\leq 3 \epsilon^{\mathrm{phys}}+\epsilon_{\mathrm{EC}}\)._
In order to apply Proposition 1, we need to lower bound the smooth min-entropy using the entropy accumulation theorem. However, as noted in the introduction, we are not able to apply the EAT directly to our physical QKD protocol, due to the need to characterise Alice's marginal system in a prepare-and-measure scenario. To that purpose we first consider the hypothetical QKD protocol that includes additional tomography measurements. However, due to issues with the Markov condition, we will not be able to directly apply the EAT to our hypothetical protocol, either. Instead,
we will make use of various chain rules for smooth entropies, in order to relate the output of our hypothetical QKD protocol to that of a series of \(n\) EAT channels, which we call the 'EAT process', and then apply the EAT to the EAT process, while dealing with the remaining terms separately.
We begin by considering the hypothetical QKD protocol. Let \(n\in\mathbb{N}\) and \(\epsilon>0\). Conditioned on not aborting, the hypothetical QKD protocol results in the state \(\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}\). By application of chain rules for smooth entropies (eq. (6.63) and eq. (6.56) in [46]), it holds
\[H^{\epsilon}_{\min}(\hat{Z}_{1}^{n}|S_{1}^{n}\tilde{X}_{1}^{n}E)_{\sigma^{ \text{QKD}}|_{\Omega_{\text{NonAbort}}}}\geq H^{\epsilon/4}_{\min}(\hat{Z}_{1 }^{n}\tilde{X}_{1}^{n}|S_{1}^{n}E)_{\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort }}}}-H^{\epsilon/4}_{\max}(\tilde{X}_{1}^{n}|S_{1}^{n}E)_{\sigma^{\text{QKD}}| _{\Omega_{\text{NonAbort}}}}-2\Gamma(\epsilon/4), \tag{43}\]
where \(\Gamma(x):=-\log\big{(}1-\sqrt{1-x^{2}}\big{)}\). By another application of a chain rule (eq. (6.57) in [46]), we obtain
\[H^{\epsilon/4}_{\min}(\hat{Z}_{1}^{n}\tilde{X}_{1}^{n}|S_{1}^{n}E)_{\sigma^{ \text{QKD}}|_{\Omega_{\text{NonAbort}}}}\geq H^{\epsilon/16}_{\min}(O_{1}^{n} |S_{1}^{n}E)_{\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}}-H^{\epsilon/16} _{\max}({X^{\prime}}_{1}^{n}\tilde{Z}_{1}^{n}|\hat{Z}_{1}^{n}\tilde{X}_{1}^{n} S_{1}^{n}E)_{\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}}-3\Gamma(\epsilon/16). \tag{44}\]
We can now apply the same argument as used in [36] to upper bound the max entropy terms in eqs. (43) and (44). We begin by upper bounding the term \(H^{\epsilon/4}_{\max}(\tilde{X}_{1}^{n}|R_{1}^{n}E)_{\sigma^{\text{QKD}}|_{ \Omega_{\text{NonAbort}}}}\) in eq. (43). We note that by the strong subadditivity of the smooth max entropy [46], it holds
\[H^{\epsilon/4}_{\max}(\tilde{X}_{1}^{n}|R_{1}^{n}E)_{\sigma^{\text{QKD}}|_{ \Omega_{\text{NonAbort}}}}\leq H^{\epsilon/4}_{\max}(\tilde{X}_{1}^{n}|R_{1}^ {n})_{\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}}, \tag{45}\]
where the r.h.s. only involves classical registers. We further note that \(\tilde{X}_{i}=\perp\), unless \(R_{i}=1\), which happens with probability \(p^{\text{PE}}\), in which case \(\tilde{X}_{i}\) takes a value in \(\{0,...,3\}\). Introducing a binary random variable \(\tilde{R}_{i}\) that takes value \(1\) when \(R_{i}=1\) and value \(0\) when \(R_{i}=0\) or \(R_{i}=2\), we can apply the data processing inequality and Lemma 6 in Appendix C, showing that
\[H^{\epsilon/4}_{\max}(\tilde{X}_{1}^{n}|R_{1}^{n})_{\sigma^{\text{QKD}}|_{ \Omega_{\text{NonAbort}}}}\leq H^{\epsilon/4}_{\max}(\tilde{X}_{1}^{n}|\tilde {R}_{1}^{n})_{\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}}\leq np^{\text{PE }}\log 5+\sqrt{\frac{n}{2}\ln\frac{32}{\epsilon^{2}\Pr_{\sigma^{\text{QKD}}}[ \Omega_{\text{NonAbort}}]}}\log 5. \tag{46}\]
In a similar way we can provide an upper bound on the term \(H^{\epsilon/16}_{\max}({X^{\prime}}_{1}^{n}\tilde{Z}_{1}^{n}|\hat{Z}_{1}^{n}\tilde{X}_{1}^{n}R_{1}^{n}E)_{\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}}\) in eq. (44). Again, it holds by strong subadditivity,
\[H^{\epsilon/16}_{\max}({X^{\prime}}_{1}^{n}\tilde{Z}_{1}^{n}|\hat{Z}_{1}^{n} \tilde{X}_{1}^{n}R_{1}^{n}E)_{\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}} \leq H^{\epsilon/16}_{\max}({X^{\prime}}_{1}^{n}\tilde{Z}_{1}^{n}|R_{1}^{n})_{ \sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}}, \tag{47}\]
where the r.h.s. is classical. Further, it holds that \({X^{\prime}}_{i}\tilde{Z}_{i}=\perp\perp\), unless \(R_{i}=1\) or \(R_{i}=2\), which happens with probability \(p^{\text{PE}}+p^{\text{tom}}=1-p^{\text{key}}\). In this case \({X^{\prime}}_{i}\tilde{Z}_{i}\) takes a value in \(\{0,...,15,\perp\}\times\{0,...,S-1,\perp\}\). Let us again introduce a binary random variable \(\tilde{R}_{i}\), taking value \(1\) when \(R_{i}=1\) or \(R_{i}=2\) and value \(0\) when \(R_{i}=0\). We can now apply Lemma 6, identifying the pair \({X^{\prime}}_{i}\tilde{Z}_{i}\) with \(X_{i}\) and the value \(\perp\perp\) with \(\perp\), and obtain
\[H^{\epsilon/16}_{\max}({X^{\prime}}_{1}^{n}\tilde{Z}_{1}^{n}|R_{1 }^{n})_{\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}} \leq H^{\epsilon/16}_{\max}({X^{\prime}}_{1}^{n}\tilde{Z}_{1}^{n}| \tilde{R}_{1}^{n})_{\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}} \tag{48}\] \[\leq n(1-p^{\text{key}})\log\left(17(S+1)\right)+\sqrt{\frac{n}{2} \ln\frac{512}{\epsilon^{2}\Pr_{\sigma^{\text{QKD}}}[\Omega_{\text{NonAbort}}]}} \log\left(17(S+1)\right). \tag{49}\]
What remains to be done is to lower bound the term \(H^{\epsilon/16}_{\min}(O_{1}^{n}|S_{1}^{n}E)_{\sigma^{\text{QKD}}|_{\Omega_{ \text{NonAbort}}}}\) in eq. (44) using the EAT.
#### iv.2.1 Reduction to Collective Attacks via Entropy Accumulation
In order to apply the EAT we wish to condition on an event that is only defined on \(C_{1}^{n}\). For that purpose, note that we can write \(\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}\) as \((\sigma^{\text{QKD}}|_{\Omega_{\text{EA}}})|_{\Omega_{\text{EC}}}\) where the probability of the event \(\Omega_{\text{EC}}\) with respect to the state \(\sigma^{\text{QKD}}|_{\Omega_{\text{EA}}}\) is given by \(\Pr_{\sigma^{\text{QKD}}}[\Omega_{\text{EC}}|\Omega_{\text{EA}}]\). Now we use Lemma B.5 of [36] to show that for \(a\in(1,\infty)\)
\[H^{\epsilon/16}_{\min}(O_{1}^{n}|S_{1}^{n}E)_{\sigma^{\text{QKD}}|_{ \Omega_{\text{NonAbort}}}} \geq H^{\uparrow}_{a}(O_{1}^{n}|S_{1}^{n}E)_{\sigma^{\text{QKD}}|_{ \Omega_{\text{NonAbort}}}}-\frac{\Gamma(\epsilon/16)}{a-1} \tag{50}\] \[\geq H^{\uparrow}_{a}(O_{1}^{n}|S_{1}^{n}E)_{\sigma^{\text{QKD}}|_{ \Omega_{\text{EA}}}}-\frac{\Gamma(\epsilon/16)}{a-1}-\frac{a}{a-1}\log\left( \frac{1}{\Pr_{\sigma^{\text{QKD}}}[\Omega_{\text{EC}}|\Omega_{\text{EA}}]} \right). \tag{51}\]
In order to apply the EAT, we now consider the EAT process, which results in the same marginal state \(\sigma^{\rm QKD}_{O_{1}^{n}S_{1}^{n}E}|_{\Omega_{\rm EA}}\), hence the same value for \(H_{a}^{\uparrow}(O_{1}^{n}|S_{1}^{n}E)_{\sigma^{\rm QKD}|_{\Omega_{\rm EA}}}\), as the hypothetical QKD protocol. The EAT process will be closely related to the hypothetical QKD protocol; however, it will not include the output \(\hat{X}_{i}\), which is not necessary in this context. Further, it will not include an error correction or privacy amplification protocol.
We begin by defining our EAT channels. To that purpose, we take the channel \(\mathcal{M}^{\rm QKD}\) defined in eq. (28); however we omit the output of \(\hat{X}_{i}\), resulting in a channel
\[\mathcal{M}^{\rm EAT}:A_{i}B_{i}\to O_{i}C_{i}S_{i}, \tag{52}\]
which performs steps (1) - (3) of the hypothetical QKD protocol, but in the end does not output Alice's key system \(\hat{X}_{i}\). It is easy to see that \(C_{i}\) can be obtained by readout of classical information contained in \(O_{i}\) and \(S_{i}\). As it only contains discretised information, \(O_{i}\) is finite dimensional. Further defining \(Q_{i}:=A_{i+1}^{n}B_{i+1}^{n}\), we can now define a channel \(\mathcal{M}_{i}^{\rm EAT}:Q_{i-1}\to Q_{i}O_{i}S_{i}C_{i}\) by
\[\mathcal{M}_{i}^{\rm EAT}:=\mathrm{id}_{Q_{i}}\otimes\mathcal{M}_{A_{i}B_{i} \to O_{i}S_{i}C_{i}}^{\rm EAT}. \tag{53}\]
In order to apply the EAT, we still have to show that the Markov condition
\[O_{1}^{i-1}\leftrightarrow S_{1}^{i-1}E\leftrightarrow S_{i} \tag{54}\]
or, equivalently \(I(O_{1}^{i-1}:S_{i}|S_{1}^{i-1}E)=0\) is fulfilled for all \(i\in\{1,..,n\}\). In our case this holds trivially as \(S_{i}=R_{i}\) is obtained by a local random number generator, which is used independently in each round.
We can now define our EAT process as a concatenation of EAT channels \(\mathcal{M}_{n}^{\rm EAT}\circ\cdots\circ\mathcal{M}_{1}^{\rm EAT}\), yielding the following state,
\[\sigma^{\rm EAT}_{O_{1}^{n}S_{1}^{n}C_{1}^{n}E} =\mathrm{id}_{E}\otimes\left(\mathcal{M}_{n}^{\rm EAT}\circ\cdots\circ\mathcal{M}_{1}^{\rm EAT}\right)\left(\Psi_{A_{1}^{n}B_{1}^{n}E}\right) \tag{55}\] \[=\mathrm{id}_{E}\otimes\mathcal{M}^{\rm EAT\,\otimes n}\left(\Psi_{A_{1}^{n}B_{1}^{n}E}\right) \tag{56}\] \[=\mathrm{Tr}_{\hat{X}_{1}^{n}}\left[\mathrm{id}_{E}\otimes\mathcal{M}^{\rm QKD\,\otimes n}\left(\Psi_{A_{1}^{n}B_{1}^{n}E}\right)\right]. \tag{57}\]
The EAT process then concludes with Alice and Bob using \(C_{1}^{n}\) to perform the tomography test as well as parameter estimation. This is done in the same way as in the hypothetical QKD protocol. Consequently it holds,
\[\sigma^{\rm QKD}_{O_{1}^{n}S_{1}^{n}E}|_{\Omega_{\rm EA}} =\sigma^{\rm EAT}_{O_{1}^{n}S_{1}^{n}E}|_{\Omega_{\rm EA}}, \tag{58}\] \[H_{a}^{\uparrow}(O_{1}^{n}|S_{1}^{n}E)_{\sigma^{\rm QKD}|_{\Omega _{\rm EA}}} =H_{a}^{\uparrow}(O_{1}^{n}|S_{1}^{n}E)_{\sigma^{\rm EAT}|_{\Omega _{\rm EA}}}. \tag{59}\]
Hence, it will be sufficient to lower bound the r.h.s. of eq. (59) using the EAT. We can now define a _min-tradeoff function_ as a function \(f:\mathcal{P}_{\mathcal{C}}\rightarrow\mathbb{R}\) such that for all \(i=1,...,n\) it holds
\[f(p)\leq\inf_{|\rho\rangle\in\Sigma_{i}(p)}H(O_{i}|S_{i}\tilde{E})_{\rho^{\rm EAT,i}}, \tag{60}\]
where \(\tilde{E}\) can be chosen isomorphic to \(Q_{i-1}\), and we have defined
\[\Sigma_{i}(p)=\left\{|\rho\rangle_{Q_{i-1}\tilde{E}}\in\mathcal{H}_{Q_{i-1} \tilde{E}}:\bra{c}\rho^{\rm EAT,i}_{C_{i}}\ket{c}\equiv p(c)\right\}, \tag{61}\]
for a state
\[\rho^{\rm EAT,i}_{O_{i}S_{i}C_{i}Q_{i}\tilde{E}}=\mathrm{id}_{\tilde{E}} \otimes\mathcal{M}_{i}^{\rm EAT}(\rho_{Q_{i-1}\tilde{E}})=\mathrm{id}_{Q_{i} \tilde{E}}\otimes\mathcal{M}_{A_{i}B_{i}\to O_{i}S_{i}C_{i}}^{\rm EAT} \left(\rho_{A_{i}B_{i}Q_{i}\tilde{E}}\right). \tag{62}\]
Here \(\equiv\) stands for equality for all \(c\in\mathcal{C}\). Note that \(|\rho\rangle\) can be chosen pure by strong subadditivity as remarked in [36].
In the following we will consider the case where \(f(p)=f^{\rm PE}(p)+f^{\rm tom}(p)+\rm const\), with affine functions \(f^{\rm PE}\) and \(f^{\rm tom}\) defined by eqs. (33,34), respectively. In that case, it holds for all \(c_{1}^{n}\in\Omega_{\rm EA}\) that \(f(\mathrm{freq}_{c_{1}^{n}})\geq f(p_{0})-\delta^{\rm tol}\). We can then formulate the entropy accumulation theorem, given by Proposition V.3 in [37], in the following way:
**Proposition 2**: _[_37_]_ _Let \(n\in\mathbb{N}\). Let \(p_{0}\) be given by eqs. (30,31). Let \(\Omega_{\rm EA}\) be the event defined by eqs. (35-39) for some \(\delta^{\rm tol}_{\rm PE},\delta^{\rm tol}_{\rm tom}>0\), \(\delta^{\rm tol}=\delta^{\rm tol}_{\rm PE}+\delta^{\rm tol}_{\rm tom}\), and an affine min-tradeoff function \(f\) such that \(f(p)=f^{\rm PE}(p)+f^{\rm tom}(p)+{\rm const}\). Then it holds for any \(a\in(1,2)\),_
\[H_{a}^{\uparrow}(O_{1}^{n}|S_{1}^{n}E)_{\sigma^{\rm EAT}|_{\Omega_{\rm EA}}} \geq nf(p_{0})-n\left(\delta^{\rm tol}+\frac{(a-1)\ln 2}{2}V^{2}\right)-\frac{a}{a-1} \log\frac{1}{{\rm Pr}_{\sigma^{\rm EAT}}[\Omega_{\rm EA}]}-n(a-1)^{2}K_{a}, \tag{63}\]
_where we have defined_
\[V =\sqrt{{\rm Var}(f)+2}+\log(2d_{O}^{2}+1), \tag{64}\] \[K_{a} =\frac{1}{6(2-a)^{3}\ln 2}2^{(a-1)(2\log d_{O}+\max(f)-\min(f))} \ln^{3}\left(2^{2\log d_{O}+\max(f)-\min(f)}+e^{2}\right), \tag{65}\]
_where \(\max(f)=\max_{p\in\mathcal{P}_{\mathcal{C}}}f(p)\) and \(\min_{\Sigma}(f)=\min_{p:\Sigma(p)\neq\emptyset}f(p)\) and \({\rm Var}(f)\) denotes the variance of \(f\)._
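To get a feeling for the finite-size cost, the following sketch evaluates the per-round penalty that eq. (63) subtracts from \(f(p_{0})\), excluding \(\delta^{\rm tol}\), from the quantities entering eqs. (64) and (65). Logarithms written as \(\log\) in the paper are taken base 2 here; this is an assumption of the sketch.

```python
import numpy as np

def eat_penalty_per_round(a, n, var_f, max_f, min_f, d_O, p_EA):
    """Per-round correction of Proposition 2: everything subtracted from f(p_0)
    in eq. (63), divided by n and excluding the tolerance delta^tol."""
    V = np.sqrt(var_f + 2) + np.log2(2 * d_O**2 + 1)                  # eq. (64)
    g = 2 * np.log2(d_O) + max_f - min_f
    K_a = (2**((a - 1) * g) * np.log(2**g + np.e**2)**3
           / (6 * (2 - a)**3 * np.log(2)))                            # eq. (65)
    return ((a - 1) * np.log(2) / 2 * V**2
            + a / ((a - 1) * n) * np.log2(1 / p_EA)
            + (a - 1)**2 * K_a)
```

In practice one numerically minimises this expression over \(a\in(1,2)\); the optimum typically scales as \(a-1\sim 1/\sqrt{n}\), so the leading finite-size correction vanishes as \(1/\sqrt{n}\).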
We will now use Proposition 2 to show the soundness of the hypothetical protocol. For that purpose, we need the following Lemma, which formalises the intuition that in order to upper bound the probability of aborting after tomography, we have to choose the corresponding tolerance parameter large enough.
**Lemma 1**: _Let \(n\in\mathbb{N}\) and \(\epsilon^{\rm tom}\in(0,1)\). Let us assume it holds_
\[\delta^{\rm tol}_{\rm tom}\geq 2\sqrt{\log\left(\frac{n}{\epsilon^{\rm tom}} \right)\sum_{i=1}^{16}\frac{\gamma_{i}^{\prime}c_{i}^{\prime 2}}{n}}+\frac{3D^{ \prime}}{n}\log\frac{2}{\epsilon^{\rm tom}}, \tag{66}\]
_where we have defined for, \(i\in\{1,...,16\}\),_
\[\gamma_{i}^{\prime}:=\frac{\pi_{i}^{\prime}(1-\sum_{j=1}^{i}\pi_{j}^{\prime}) }{1-\sum_{j=1}^{i-1}\pi_{j}^{\prime}},\;c_{i}^{\prime}:=h_{i}^{\prime}-\frac{ \sum_{j=i+1}^{17}h_{j}^{\prime}\pi_{j}^{\prime}}{1-\sum_{j=1}^{i}\pi_{j}^{ \prime}},\;D^{\prime}:=\max_{i,j\in\{1,...,17\}}|h_{i}^{\prime}-h_{j}^{\prime}|, \tag{67}\]
_for \(\pi_{x^{\prime}+1}^{\prime}=p_{0}(\bot,\bot,x^{\prime})\), and \(h_{x^{\prime}+1}^{\prime}=h_{\bot,\bot,x^{\prime}}\), for \(x^{\prime}=0,...,15\), as well as \(\pi_{17}^{\prime}=1-\sum_{i=1}^{16}\pi_{i}^{\prime}\), and \(h_{17}^{\prime}=0\). Then it holds_
\[{\rm Pr}_{\sigma^{\rm QKD}}[\neg\Omega_{\rm tom}]\leq\epsilon^{\rm tom}. \tag{68}\]
**Proof.** As the tomography is performed entirely within Alice's lab, with no influence of Eve or the noisy channel, we can restrict our attention to an honest implementation of the protocol, i.e. \({\rm Pr}_{\sigma^{\rm QKD}}[\neg\Omega_{\rm tom}]={\rm Pr}_{\rm hon}[\neg \Omega_{\rm tom}]\). Let us assume the honest implementation gives us the distribution
\[p_{0}(\bot,\bot,x^{\prime})=(1-p^{\rm key})\tilde{p}_{0}(\bot,\bot,x^{ \prime})=p^{\rm tom}p_{0}^{\rm tom}(x^{\prime}), \tag{69}\]
for \(x^{\prime}\in\{0,...,15\}\). Let us further assume that Alice and Bob observe some frequency distribution \({\rm freq}_{c_{1}^{n}}\). Recalling the definition of the event \(\Omega_{\rm tom}\), we note that it holds
\[{\rm Pr}_{\rm hon}[\Omega_{\rm tom}]\geq{\rm Pr}_{\rm hon}\left[\left|f^{\rm tom }({\rm freq}_{c_{1}^{n}})-f^{\rm tom}(p_{0})\right|\leq\delta^{\rm tol}_{\rm tom }\right]. \tag{70}\]
An honest implementation of the protocol corresponds to \(n\) independent multinoulli trials with parameter \(p_{0}\). In order to provide lower bounds we can therefore make use of a concentration result provided by Proposition 5.2 of [51]. Namely, it holds with probability \(1-\epsilon^{\rm tom}\) that
\[\left|f^{\rm tom}({\rm freq}_{c_{1}^{n}})-f^{\rm tom}(p_{0})\right|=\left|( \tilde{\pi}^{\prime}-\pi^{\prime})^{T}h^{\prime}\right|\leq 2\sqrt{\log\left( \frac{n}{\epsilon^{\rm tom}}\right)\sum_{i=1}^{16}\frac{\gamma_{i}^{\prime}c_{i }^{\prime 2}}{n}}+\frac{3D^{\prime}}{n}\log\frac{2}{\epsilon^{\rm tom}}, \tag{71}\]
where \(\tilde{\pi}_{x^{\prime}+1}^{\prime}={\rm freq}_{c_{1}^{n}}(\bot,\bot,x^{\prime})\) for \(x^{\prime}=0,...,15\), as well as \(\tilde{\pi}_{17}^{\prime}=1-\sum_{i=1}^{16}\tilde{\pi}_{i}^{\prime}\). Hence, if we choose the tolerance parameter \(\delta^{\rm tol}_{\rm tom}\) fulfilling eq. (66), we obtain the desired bound
\[{\rm Pr}_{\rm hon}\left[\left|f^{\rm tom}({\rm freq}_{c_{1}^{n}})-f^{\rm tom}(p _{0})\right|\leq\delta^{\rm tol}_{\rm tom}\right]\geq 1-\epsilon^{\rm tom}, \tag{72}\]
finishing the proof.
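For concreteness, the tolerance parameter of eqs. (66)-(67) can be evaluated numerically with a few lines of MATLAB. The snippet below is a minimal sketch of our own: the weights \(h'_{i}\), the tomography distribution and probability, the block length and \(\epsilon^{\rm tom}\) are placeholder values (in the actual protocol the weights come from the min-tradeoff function), and we take logarithms base 2.

```matlab
% Sketch of eqs. (66)-(67); all inputs below are placeholders of our own.
n = 1e12;  eps_tom = 1e-8;  p_tom_round = 1e-6;      % block length, eps^tom, p^tom
p_tom = ones(16,1)/16;                               % placeholder tomography distribution p_0^tom
h     = [linspace(-1, 1, 16).'; 0];                  % placeholder weights h'_1..h'_16, h'_17 = 0
pip   = [p_tom_round*p_tom; 1 - p_tom_round*sum(p_tom)];   % pi'_1,...,pi'_17
gam = zeros(16,1);  c = zeros(16,1);
for i = 1:16
    gam(i) = pip(i)*(1 - sum(pip(1:i)))/(1 - sum(pip(1:i-1)));
    c(i)   = h(i) - sum(h(i+1:17).*pip(i+1:17))/(1 - sum(pip(1:i)));
end
D = max(h) - min(h);                                 % D' = max_{i,j} |h'_i - h'_j|
delta_tom = 2*sqrt(log2(n/eps_tom)*sum(gam.*c.^2)/n) + 3*D/n*log2(2/eps_tom);
```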
In order to show the soundness of the physical QKD protocol, which does not include tomography of Alice's marginal system, we also need the following Lemma, which relates the smooth min entropies of the physical and hypothetical protocol.
**Lemma 2**: _Let \(\epsilon\in\left(0,1-\sqrt{2\Pr_{\sigma^{\rm QKD}}[\neg\Omega_{\rm tom}|\Omega_{\rm EC }\cap\Omega_{\rm PE}]}\right)\) and let \(\epsilon^{\rm phys}\in\left(\epsilon+\sqrt{2\Pr_{\sigma^{\rm QKD}}[\neg\Omega_{ \rm tom}|\Omega_{\rm EC}\cap\Omega_{\rm PE}]},1\right)\). Then it holds_
\[H_{\rm min}^{\epsilon^{\rm phys}}(\tilde{Z}_{1}^{n}|S_{1}^{n}\tilde{X}_{1}^{n}E)_{\sigma^{\rm phys,QKD}|_{\Omega^{\rm phys}_{\rm NonAbort}}}\geq H_{\rm min}^{\epsilon}(\tilde{Z}_{1}^{n}|S_{1}^{n}\tilde{X}_{1}^{n}E)_{\sigma^{\rm QKD}|_{\Omega_{\rm NonAbort}}}. \tag{73}\]
**Proof.** We begin by noting that Alice's tomography in the hypothetical protocol does not change the \(\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1}^{n}E\) subsystems of the final state, i.e.
\[\sigma^{\rm phys,QKD}_{\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1}^{n}HH^{\prime} \tilde{X}_{1}^{n}\tilde{Y}_{1}^{n}E}=\sigma^{\rm QKD}_{\tilde{Z}_{1}^{n}S_{1} ^{n}\tilde{X}_{1}^{n}HH^{\prime}\tilde{X}_{1}^{n}\tilde{Y}_{1}^{n}E}. \tag{74}\]
This is by non-signalling. As the events \(\Omega_{\rm EC}\) and \(\Omega_{\rm PE}\), defined by eqs. (38) and (40), respectively, depend only on the systems \(HH^{\prime}\tilde{Z}_{1}^{n}\tilde{X}_{1}^{n}\) and \(\tilde{Y}_{1}^{n}\tilde{X}_{1}^{n}\), it also holds
\[\sigma^{\rm phys,QKD}_{\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1}^{n}HH^{\prime} \tilde{X}_{1}^{n}\tilde{Y}_{1}^{n}E}\big{|}_{\Omega_{\rm EC}\cap\Omega_{\rm PE }}=\sigma^{\rm QKD}_{\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1}^{n}HH^{\prime} \tilde{X}_{1}^{n}\tilde{Y}_{1}^{n}E}\big{|}_{\Omega_{\rm EC}\cap\Omega_{\rm PE }}. \tag{75}\]
Using the triangle inequality, it follows that
\[\left\|\sigma^{\rm QKD}_{\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1}^ {n}E}|_{\Omega_{\rm EC}\cap\Omega_{\rm PE}\cap\Omega_{\rm tom}}-\sigma^{\rm phys,QKD}_{\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1}^{n}E}|_{\Omega_{\rm EC}\cap \Omega_{\rm PE}}\right\|_{1} \tag{76}\] \[\leq\left\|\sigma^{\rm QKD}_{\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1 }^{n}E}|_{\Omega_{\rm EC}\cap\Omega_{\rm PE}\cap\Omega_{\rm tom}}-\sigma^{\rm QKD }_{\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1}^{n}E}|_{\Omega_{\rm EC}\cap\Omega_{ \rm PE}}\right\|_{1}+\left\|\sigma^{\rm QKD}_{\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{ X}_{1}^{n}E}\big{|}_{\Omega_{\rm EC}\cap\Omega_{\rm PE}}-\sigma^{\rm phys,QKD}_{\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1}^{n}E}\big{|}_{\Omega_{\rm EC} \cap\Omega_{\rm PE}}\right\|_{1}\] (77) \[\leq 2\Pr_{\sigma^{\rm QKD}}[\neg\Omega_{\rm tom}|\Omega_{\rm EC }\cap\Omega_{\rm PE}]. \tag{78}\]
Hence by eq. (4) it holds
\[P\left(\sigma^{\rm QKD}_{\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1}^{n}E}|_{ \Omega_{\rm EC}\cap\Omega_{\rm PE}\cap\Omega_{\rm tom}},\sigma^{\rm phys,QKD}_ {\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1}^{n}E}|_{\Omega_{\rm EC}\cap\Omega_{ \rm PE}}\right)\leq\sqrt{2\Pr_{\sigma^{\rm QKD}}[\neg\Omega_{\rm tom}|\Omega_{ \rm EC}\cap\Omega_{\rm PE}]}. \tag{79}\]
Further, by the triangle inequality, the \(\epsilon\)-ball (in terms of purified distance) around \(\sigma^{\rm QKD}_{\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1}^{n}E}|_{\Omega_{\rm NonAbort}}\) is contained in the \(\epsilon^{\rm phys}\)-ball around \(\sigma^{\rm phys,QKD}_{\tilde{Z}_{1}^{n}S_{1}^{n}\tilde{X}_{1}^{n}E}|_{\Omega_{\rm NonAbort}^{\rm phys}}\), implying eq. (73).
We are now ready to prove the soundness of the physical QKD protocol, which is our main result.
**Theorem 1** (Soundness): _Let \(n\in\mathbb{N}\). Let \(\epsilon^{\rm phys}_{\rm NonAbort},\epsilon^{\rm tom},\epsilon_{EC}\in(0,1)\) such that \(\epsilon^{\rm tom}<\frac{1}{2}\epsilon^{\rm phys}_{\rm NonAbort}\). Let \(\epsilon\in\left(0,1-\sqrt{2\epsilon^{\rm tom}/\epsilon^{\rm phys}_{\rm NonAbort}}\right)\), and define \(\epsilon^{\rm phys}=\epsilon+\sqrt{2\epsilon^{\rm tom}/\epsilon^{\rm phys}_{ \rm NonAbort}}\). Let \(0\leq p^{\rm key},p^{\rm PE},p^{\rm tom}\leq 1\) such that \(p^{\rm key}+p^{\rm PE}+p^{\rm tom}=1\). Let \(f\) be an affine min-tradeoff function of the form \(f(p)=f^{\rm PE}(p)+f^{\rm tom}(p)+{\rm const}\). Let \(p_{0}\) be given by eqs. (30,31). Let \(\delta^{\rm tol}_{\rm PE}>0\) and define_
\[\delta^{\rm tol}_{\rm tom}=2\sqrt{\log\left(\frac{n}{\epsilon^{\rm tom}}\right)\sum_{i=1}^{16}\frac{\gamma_{i}^{\prime}c_{i}^{\prime 2}}{n}}+\frac{3D^{\prime}}{n}\log\frac{2}{\epsilon^{\rm tom}}. \tag{80}\]
_Let further \(\Omega_{\rm PE}\), \(\Omega_{\rm tom}\), \(\Omega_{\rm EC}\), and \(\Omega_{\rm NonAbort}\) be defined by eqs. (37-41). Let \(\text{leak}_{\rm EC}\) be the amount of information leaked during error correction. Then, if \(\Pr_{\sigma^{\rm phys,QKD}}[\Omega_{\rm PE}\cap\Omega_{\rm EC}]\geq\epsilon^{\rm phys}_{\rm NonAbort}\), for any \(a\in(1,2)\), the physical QKD protocol provides a \(3\epsilon^{\rm phys}+\epsilon_{\rm EC}\)-sound key at rate \(r^{\rm phys}=\ell/n\) with_
\[r^{\rm phys}|_{\Omega_{\rm NonAbort}^{\rm phys}} \geq f(p_{0})-\delta^{\rm tol}_{\rm PE}-\delta^{\rm tol}_{\rm tom}-\frac{(a-1)\ln 2}{2}V^{2}-(a-1)^{2}K_{a}-p^{\rm PE}\log 5-(1-p^{\rm key})\log(17(S+1))\] \[-\frac{1}{\sqrt{n}}\left[\sqrt{\frac{1}{2}}\ln\frac{32}{\epsilon^{2}(\epsilon^{\rm phys}_{\rm NonAbort}-\epsilon^{\rm tom})}\log 5+\sqrt{\frac{1}{2}}\ln\frac{512}{\epsilon^{2}(\epsilon^{\rm phys}_{\rm NonAbort}-\epsilon^{\rm tom})}\log(17(S+1))\right]\] \[-\frac{1}{n}\left[\frac{\Gamma(\epsilon/16)}{a-1}+\frac{a}{a-1}\log\frac{1}{\epsilon^{\rm phys}_{\rm NonAbort}-\epsilon^{\rm tom}}+\text{leak}_{\rm EC}+2\log\frac{1}{\epsilon^{\rm phys}}+2\Gamma(\epsilon/4)+3\Gamma(\epsilon/16)\right]. \tag{81}\]
**Proof.** We begin by lower bounding \(H^{\epsilon}_{\min}(\hat{Z}^{n}_{1}|S^{n}_{1}\tilde{X}^{n}_{1}E)_{\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}}\) using eqs. (43 - 51). We then note that by eq. (58) it holds \(H^{\tau}_{a}(O^{n}_{1}|S^{n}_{1}E)_{\sigma^{\text{QKD}}|_{\Omega_{\text{EA}}}}=H^{\tau}_{a}(O^{n}_{1}|S^{n}_{1}E)_{\sigma^{\text{QKD}}|_{\Omega_{\text{EA}}}}\), allowing us to apply Proposition 2. Since \(\Pr_{\sigma^{\text{QKD}}}[\Omega_{\text{EC}}|\Omega_{\text{EA}}]\Pr_{\sigma^{\text{QKD}}}[\Omega_{\text{EA}}]=\Pr_{\sigma^{\text{QKD}}}[\Omega_{\text{NonAbort}}]\), the terms \(\frac{a}{a-1}\log\frac{1}{\Pr_{\sigma^{\text{QKD}}}[\Omega_{\text{EC}}|\Omega_{\text{EA}}]}\) and \(\frac{a}{a-1}\log\frac{1}{\Pr_{\sigma^{\text{QKD}}}[\Omega_{\text{EA}}]}\) in eqs. (51) and (63) can be merged, resulting in a term that depends only on \(\Pr_{\sigma^{\text{QKD}}}[\Omega_{\text{NonAbort}}]\). Formally, we obtain
\[H^{\epsilon}_{\min}(\hat{Z}^{n}_{1}|S^{n}_{1}\tilde{X}^{n}_{1}E)_{\sigma^{\text{QKD}}|_{\Omega_{\text{NonAbort}}}} \geq\] \[\quad n\left[f(p_{0})-\delta^{\text{tol}}_{\text{PE}}-\delta^{\text{tol}}_{\text{tom}}-\frac{(a-1)\ln 2}{2}V^{2}-(a-1)^{2}K_{a}-p^{\text{PE}}\log 5-(1-p^{\text{key}})\log(17(S+1))\right]\] \[\quad-\sqrt{n}\left[\sqrt{\frac{1}{2}}\ln\frac{32}{\epsilon^{2}\Pr_{\sigma^{\text{QKD}}}[\Omega_{\text{NonAbort}}]}\log 5+\sqrt{\frac{1}{2}}\ln\frac{512}{\epsilon^{2}\Pr_{\sigma^{\text{QKD}}}[\Omega_{\text{NonAbort}}]}\log(17(S+1))\right]\] \[\quad-\frac{\Gamma(\epsilon/16)}{a-1}-\frac{a}{a-1}\log\frac{1}{\Pr_{\sigma^{\text{QKD}}}[\Omega_{\text{NonAbort}}]}-2\Gamma(\epsilon/4)-3\Gamma(\epsilon/16). \tag{82}\]
Further, it holds
\[\Pr_{\sigma^{\text{ QKD}}}[\Omega_{\text{ NonAbort}}] =\Pr_{\sigma^{\text{ QKD}}}[\Omega_{\text{EA}}\cap\Omega_{\text{EC}}] \tag{83}\] \[=\Pr_{\sigma^{\text{ QKD}}}[\Omega_{\text{PE}}\cap\Omega_{\text{EC}}]-\Pr_{ \sigma^{\text{ QKD}}}[\Omega_{\text{PE}}\cap\Omega_{\text{EC}}\cap\neg\Omega_{\text{ tom}}]\] (84) \[\geq\Pr_{\sigma^{\text{ QKD}}}[\Omega_{\text{PE}}\cap\Omega_{\text{EC}}]-\Pr_{ \sigma^{\text{ QKD}}}[\neg\Omega_{\text{tom}}]. \tag{85}\]
By eq. (80) and Lemma 1, we can bound \(\Pr_{\sigma^{\text{ QKD}}}[\neg\Omega_{\text{tom}}]\leq\epsilon^{\text{ tom}}\). Further, by eq. (74), it holds that \(\Pr_{\sigma^{\text{ phys. QKD}}}[\Omega_{\text{PE}}\cap\Omega_{\text{EC}}]=\Pr_{ \sigma^{\text{ QKD}}}[\Omega_{\text{PE}}\cap\Omega_{\text{EC}}]\). Hence, by assumption, it holds
\[\Pr_{\sigma^{\text{ QKD}}}[\Omega_{\text{ NonAbort}}]\geq\epsilon^{\text{ phys}}_{\text{ NonAbort}}-\epsilon^{\text{ tom}}. \tag{86}\]
Now we can apply Lemma 2 to show that for our choice of \(\epsilon\), and \(\epsilon^{\text{ phys}}=\epsilon+\sqrt{2\epsilon^{\text{ tom}}/\epsilon^{\text{ phys}}_{\text{ NonAbort}}}\), it holds
\[H^{\epsilon^{\text{ phys}}}_{\min}(\hat{Z}^{n}_{1}|S^{n}_{1}\tilde{X}^{n}_{1}E)_{\sigma^{\text{ phys. QKD}}|_{\Omega_{\text{NonAbort}}}}\geq H^{\epsilon}_{\min}(\hat{Z}^{n}_{1}|S^{n}_{1} \tilde{X}^{n}_{1}E)_{\sigma^{\text{ QKD}}|_{\Omega_{\text{ NonAbort}}}} \tag{87}\]
and apply Proposition 1, finishing the proof.
### Completeness
In this section we show that the physical QKD protocol is complete, i.e. we provide a lower bound on the probability \(\Pr_{\text{hon}}[\Omega_{\text{ NonAbort}}^{\text{ phys}}]\) of an honest application not aborting.
**Theorem 2** (Completeness): _Let \(\epsilon^{\text{c}}_{\text{PE}}\in(0,1)\). Let \(\Omega_{\text{PE}}\) be as defined in eq. (37), with_
\[\delta^{\text{tol}}_{\text{PE}}=2\sqrt{\log\left(\frac{n}{\epsilon^{\text{c}}_{ \text{PE}}}\right)\sum_{i=1}^{4S}\frac{\gamma_{i}c^{2}_{i}}{n}}+\frac{3D}{n} \log\frac{2}{\epsilon^{\text{c}}_{\text{PE}}}, \tag{88}\]
_where we have defined an ordering \((x,z,\perp)\to i\) by \((0,0,\perp)\to 1,(0,1,\perp)\to 2,...,(0,S-1,\perp)\to S,(1,0,\perp)\to S+1,(1,1,\perp)\to S+2,...,(3,S-1,\perp)\to 4S\). We then set \(\pi_{i}=p_{0}(x,z,\perp)\), \(\hat{\pi}_{i}=\text{freq}_{c_{1}^{n}}(x,z,\perp)\), \(h_{i}=h_{x,z,\perp}\) for \(i=1,...,4S\), as well as \(\pi_{4S+1}:=1-\sum_{i=1}^{4S}\pi_{i}\), \(\hat{\pi}_{4S+1}:=1-\sum_{i=1}^{4S}\hat{\pi}_{i}\) and \(h_{4S+1}=0\). Let us further define_
\[\gamma_{i} :=\frac{\pi_{i}\left(1-\sum_{j=1}^{i}\pi_{j}\right)}{1-\sum_{j=1}^{i-1}\pi_{j}}, \tag{89}\] \[c_{i} :=h_{i}-\frac{\sum_{j=i+1}^{4S+1}h_{j}\pi_{j}}{1-\sum_{j=1}^{i}\pi_{j}}, \tag{90}\]
_for \(i=1,...,4S\), as well as \(D:=\max_{i,j\in\{1,...,4S+1\}}|h_{i}-h_{j}|\). Let further \(\epsilon^{\text{c}}_{\text{EC}}\in(0,1)\) be a suitable completeness parameter for the error correction protocol used. Then the physical QKD protocol is \(\epsilon^{\text{c}}_{\text{PE}}+\epsilon^{\text{c}}_{\text{EC}}\)-complete, i.e. \(\Pr_{\text{hon}}[\Omega_{\text{NonAbort}}^{\text{phys}}]\geq 1-\epsilon^{\text{c}}_{\text{PE}}-\epsilon^{\text{c}}_{\text{EC}}\)._
**Proof.** We consider the following honest implementation: We apply the physical QKD protocol, as described in Section III, where the noisy channel \(\mathcal{N}_{\mathcal{A}^{n}_{1}\to B^{n}_{1}}\) is given by \(n\) iid uses of a phase-invariant Gaussian channel with transmittance \(\eta\) and excess noise \(\xi\), but without an attack by Eve. By simulating this channel we obtain a distribution \(p_{0}^{\rm sim}(x,z)\), which depends on \(\eta\) and \(\xi\) and is given by eq. (153), and set \(p_{0}(x,z,\bot)=p^{\rm PE}p_{0}^{\rm sim}(x,z)\) for \(x\in\{0,...,3\}\), \(z\in\{0,...,S-1\}\). We note that the protocol can abort after parameter estimation or error correction. By the union bound it holds
\[1-{\rm Pr}_{\rm hon}[\Omega_{\rm NonAbort}^{\rm phys}]\leq 1-{\rm Pr}_{\rm hon }[\Omega_{\rm PE}]+1-{\rm Pr}_{\rm hon}[\Omega_{\rm EC}]. \tag{91}\]
We begin by considering abortion after parameter estimation. Let us assume an honest application gives us \(p_{0}(x,z,\bot)\), for \(x\in\{0,...,3\}\), \(z\in\{0,...,S-1\}\) according to eq. (30). Let us further assume that Alice and Bob observe some frequency distribution \({\rm freq}_{c^{n}_{1}}\). Recalling the definition of the event \(\Omega_{\rm PE}\), we note that it holds
\[{\rm Pr}_{\rm hon}[\Omega_{\rm PE}]\geq{\rm Pr}_{\rm hon}\left[ \left|f^{\rm PE}({\rm freq}_{c^{n}_{1}})-f^{\rm PE}(p_{0})\right|\leq\delta_{ \rm PE}^{\rm tol}\right]. \tag{92}\]
An honest implementation of the protocol corresponds to \(n\) independent multinoulli trials with parameter \(p_{0}\). In order to provide lower bounds we can again make use of the concentration result provided by Proposition 5.2 of [51].
Let now \(\epsilon_{\rm PE}^{\rm c}\in(0,1)\). By Proposition 5.2 of [51], it then holds with probability \(1-\epsilon_{\rm PE}^{\rm c}\) that
\[\left|f^{\rm PE}({\rm freq}_{c^{n}_{1}})-f^{\rm PE}(p_{0})\right| =\left|(\hat{\pi}-\pi)^{T}h\right|\leq 2\sqrt{\log\left(\frac{n}{ \epsilon_{\rm PE}^{\rm c}}\right)\sum_{i=1}^{4S}\frac{\gamma_{i}c^{2}_{i}}{n} }+\frac{3D}{n}\log\frac{2}{\epsilon_{\rm PE}^{\rm c}}. \tag{93}\]
Hence, if we choose the tolerance parameter \(\delta_{\rm PE}^{\rm tol}\) as in eq. (88), we obtain the desired completeness bound
\[{\rm Pr}_{\rm hon}\left[\left|f^{\rm PE}({\rm freq}_{c^{n}_{1}})-f ^{\rm PE}(p_{0})\right|\leq\delta_{\rm PE}^{\rm tol}\right]\geq 1-\epsilon_{ \rm PE}^{\rm c}. \tag{94}\]
Finally, we have to consider the error correction. Let \(\epsilon_{\rm EC}^{\rm c}\in(0,1)\), such that \(1-{\rm Pr}_{\rm hon}[\Omega_{\rm EC}]\leq\epsilon_{\rm EC}^{\rm c}\), i.e. the error correction is assumed to abort with probability at most \(\epsilon_{\rm EC}^{\rm c}\), finishing the proof.
For a given empirical distribution \(p_{0}\) and a suitable choice of a min-tradeoff function, Theorems 1 and 2 combined show the security of the finite round physical QKD protocol. In order to obtain the best finite size key rate, for given \(n\in\mathbb{N}\), as well as some choice for the parameters \(\epsilon_{\rm NonAbort}^{\rm phys},\epsilon^{\rm tom},\epsilon_{\rm EC},\epsilon_{\rm PE}^{\rm c},\epsilon_{\rm EC}^{\rm c}\in(0,1)\), such that \(\epsilon^{\rm tom}\ll\frac{1}{2}\epsilon_{\rm NonAbort}^{\rm phys}\), as well as \(\epsilon\in\left(0,1-\sqrt{2\epsilon^{\rm tom}/\epsilon_{\rm NonAbort}^{\rm phys}}\right)\) and \(\epsilon^{\rm phys}=\epsilon+\sqrt{2\epsilon^{\rm tom}/\epsilon_{\rm NonAbort}^{\rm phys}}\), we can set the tolerance parameters as in eqs. (80,88), and maximise the key rate given by (81) over probabilities \(0\leq p^{\rm key},p^{\rm PE},p^{\rm tom}\leq 1\) such that \(p^{\rm key}+p^{\rm PE}+p^{\rm tom}=1\), and over \(a\in(1,2)\). We note that in order to get a non-trivial result, we will have to choose \(\epsilon_{\rm PE}^{\rm c}\) and \(\epsilon_{\rm EC}^{\rm c}\) such that the success probability of the honest implementation meets the threshold \(\epsilon_{\rm NonAbort}^{\rm phys}\) used in Theorem 1, i.e. we need \(1-\epsilon_{\rm PE}^{\rm c}-\epsilon_{\rm EC}^{\rm c}>\epsilon_{\rm NonAbort}^{\rm phys}\).
### The Min-Tradeoff Function
The main task now is to find a min-tradeoff function \(f\) that provides a non-trivial bound for our protocol. As we will choose the number of key rounds to be significantly larger than the number of test rounds (i.e. rounds used for parameter estimation or tomography), it will be convenient to use the infrequent sampling framework introduced in [37], in which the statistical analysis only includes outputs in test rounds. For that purpose, we divide \(\mathcal{M}_{i}^{\rm EAT}\) into a key part, incorporating its action in key rounds, and a test part, incorporating its action in parameter estimation and tomography rounds, \(\mathcal{M}_{i}^{\rm EAT,key}:Q_{i-1}\to Q_{i}O_{i}S_{i}\) and \(\mathcal{M}_{i}^{\rm EAT,test}:Q_{i-1}\to Q_{i}O_{i}S_{i}C_{i}\), such that
\[\mathcal{M}_{i}^{\rm EAT}(\cdot)=p^{\rm key}\mathcal{M}_{i}^{ \rm EAT,key}(\cdot)\otimes\left|\bot\right\rangle\left\langle\bot\right|_{C_{ i}}+(1-p^{\rm key})\mathcal{M}_{i}^{\rm EAT,test}(\cdot). \tag{95}\]
Let us now define a _crossover min-tradeoff function_[37] as a function \(g:\mathcal{P}_{\mathcal{C}}\rightarrow\mathbb{R}\) such that for all \(i=1,...,n\) and \(\tilde{p}\in\mathcal{P}_{\mathcal{C}}\) it holds
\[g(\tilde{p})\leq\inf_{|\tilde{p}\rangle\in\tilde{\Sigma}_{i}( \tilde{p})}H(O_{i}|S_{i}\tilde{E})_{\rho^{\rm EAT,i}}, \tag{96}\]
where \(\tilde{E}\) can be chosen isomorphic to \(Q_{i-1}\), and we have defined
\[\tilde{\Sigma}_{i}(\tilde{p})=\left\{\left|\rho\right\rangle_{Q_{i-1}\tilde{E}}\in\mathcal{H}_{Q_{i-1}\tilde{E}}:\left\langle c\right|\rho_{C_{i}}^{\text{EAT,test},i}\left|c\right\rangle\equiv\tilde{p}(c)\right\}, \tag{97}\]
for states
\[\rho_{O_{i}S_{i}C_{i}Q_{i}\tilde{E}}^{\text{\text{\rm{EAT}},test},i}=\text{id}_ {\tilde{E}}\otimes\mathcal{M}_{i}^{\text{\rm{EAT}},\text{test}}(\rho_{Q_{i-1} \tilde{E}})=\text{id}_{Q_{i}\tilde{E}}\otimes\mathcal{M}_{A_{i}B_{i}\to O_{i}S _{i}C_{i}}^{\text{\rm{EAT}},\text{test}}\left(\rho_{A_{i}B_{i}Q_{i}\tilde{E}} \right). \tag{98}\]
Further, it holds for all \(i=1,...,n\),
\[\inf_{\left|\rho\right\rangle\in\tilde{\Sigma}_{i}(\tilde{p})}H(O_{i}|S_{i}\tilde{E})_{\rho^{\text{EAT},i}}\geq\inf_{\begin{subarray}{c}\mathcal{H}_{\tilde{E}}\cong\mathcal{H}_{Q_{1}}\\ \left|\rho\right\rangle\in\tilde{\Sigma}_{\tilde{E}}(\tilde{p})\end{subarray}}H(O|S\tilde{E})_{\rho^{\text{EAT}}}, \tag{99}\]
where we have defined the states \(\rho_{OSC\tilde{E}}^{\text{EAT}}=\text{id}_{\tilde{E}}\otimes\mathcal{M}_{AB\to OSC}^{\text{EAT}}\left(\rho_{AB\tilde{E}}\right)\) and \(\rho_{OSC\tilde{E}}^{\text{EAT},\text{test}}=\text{id}_{\tilde{E}}\otimes\mathcal{M}_{AB\to OSC}^{\text{EAT},\text{test}}\left(\rho_{AB\tilde{E}}\right)\), as well as the set \(\tilde{\Sigma}_{\tilde{E}}(\tilde{p})=\left\{\left|\rho\right\rangle_{AB\tilde{E}}\in\mathcal{H}_{AB\tilde{E}}:\left\langle c\right|\rho_{C}^{\text{EAT},\text{test}}\left|c\right\rangle\equiv\tilde{p}(c)\right\}\). We can therefore relax the problem to finding a function \(g:\mathcal{P}_{\tilde{\mathcal{C}}}\rightarrow\mathbb{R}\) such that
\[g(\tilde{p})\leq\inf_{\begin{subarray}{c}\mathcal{H}_{\tilde{E}}\cong\mathcal{H}_{Q_{1}}\\ \left|\rho\right\rangle\in\tilde{\Sigma}_{\tilde{E}}(\tilde{p})\end{subarray}}H(O|S\tilde{E})_{\rho^{\text{EAT}}}. \tag{100}\]
According to Lemma V.5 of [37], we translate our crossover min-tradeoff function \(g\) into a min-tradeoff function \(f\) via the definition
\[f(\delta_{c}) =\max(g)+\frac{1}{1-p^{\text{\rm{key}}}}\left(g(\delta_{c})-\max (g)\right)\;\forall c\in\tilde{\mathcal{C}}, \tag{101}\] \[f(\delta_{(\bot,\bot,\bot)}) =\max(g), \tag{102}\]
where \(\delta_{c}\) denotes the distribution that equals \(1\) for \(c\) and \(0\) everywhere else. Further, \(\max(g)=\max_{\tilde{p}\in\mathcal{P}_{\tilde{\mathcal{C}}}}g(\tilde{p})\) and \(\min(g)=\min_{\tilde{p}\in\mathcal{P}_{\tilde{\mathcal{C}}}}g(\tilde{p})\). If \(p\) is of the form \(p(c)=(1-p^{\text{\rm{key}}})\tilde{p}(c)\) for \(c\in\tilde{\mathcal{C}}\) and \(p(\bot,\bot,\bot)=p^{\text{\rm{key}}}\), it holds \(f((1-p^{\text{\rm{key}}})\tilde{p})=g(\tilde{p})\) for all \(\tilde{p}\in\mathcal{P}_{\tilde{\mathcal{C}}}\). Further it holds
\[\max(f) =\max(g), \tag{103}\] \[\min_{\Sigma}(f) \geq\min(g),\] (104) \[0\leq \text{Var}(f) \leq\frac{1}{1-p^{\text{\rm{key}}}}\left(\max(g)-\min(g)\right)^{2}. \tag{105}\]
Hence we can upper bound the expressions in eqs. (64,65) by
\[V\leq\tilde{V} =\sqrt{\frac{1}{1-p^{\text{\rm{key}}}}\left(\max(g)-\min(g) \right)^{2}+2}+\log(2d_{O}^{2}+1), \tag{106}\] \[K_{a}\leq\tilde{K}_{a} =\frac{1}{6(2-a)^{3}\ln 2}2^{(a-1)(2\log d_{O}+\max(g)-\min(g))}\ln^{3} \left(2^{2\log d_{O}+\max(g)-\min(g)}+e^{2}\right). \tag{107}\]
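Once \(\max(g)\), \(\min(g)\), the output dimension \(d_{O}\), \(p^{\rm key}\) and \(a\) are fixed, the bounds \(\tilde{V}\) and \(\tilde{K}_{a}\) can be evaluated directly. The following MATLAB lines are a hedged sketch with placeholder values of our own for all inputs; logarithms with base 2 are written as log2 and natural logarithms as log.

```matlab
% Evaluation of eqs. (106)-(107); g_max, g_min, dO, pkey and a are placeholders.
g_max = 2.0;  g_min = -1.0;  dO = 4;  pkey = 1 - 1e-6;  a = 1 + 1e-4;
spread = g_max - g_min;
V_t = sqrt(spread^2/(1 - pkey) + 2) + log2(2*dO^2 + 1);           % eq. (106)
K_t = 1/(6*(2 - a)^3*log(2)) * 2^((a - 1)*(2*log2(dO) + spread)) ...
      * log(2^(2*log2(dO) + spread) + exp(2))^3;                  % eq. (107)
```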
In what follows, we will provide a crossover minimum tradeoff function for our choice of EAT channels \(\{\mathcal{M}_{i}^{\text{\rm{EAT}}}\}_{i=1}^{n}\), lower bounding the r.h.s. of (96). We begin by noting that, by the chain rule for the von Neumann entropy [52], it holds
\[\inf_{\begin{subarray}{c}\mathcal{H}_{\tilde{E}}\cong\mathcal{H}_{Q_{1}}\\ \left|\rho\right\rangle\in\Sigma_{\tilde{E}}(\tilde{p})\end{subarray}}H(O|S \tilde{E})_{\rho^{\text{\rm{EAT}}}} \geq\inf_{\begin{subarray}{c}\mathcal{H}_{\tilde{E}}\cong\mathcal{H}_{Q_{1 }}\\ \left|\rho\right\rangle\in\Sigma_{\tilde{E}}(\tilde{p})\end{subarray}}\left(H( \hat{Z}|S\hat{E})_{\rho^{\text{\rm{EAT}}}}+H(\tilde{Z}\tilde{X}X^{\prime}| \hat{Z}S\hat{E})_{\rho^{\text{\rm{EAT}}}}\right) \tag{108}\] \[\geq\inf_{\begin{subarray}{c}\mathcal{H}_{\tilde{E}}\cong\mathcal{ H}_{Q_{1}}\\ \left|\rho\right\rangle\in\Sigma_{\tilde{E}}(\tilde{p})\end{subarray}}H(\hat{Z}|S \hat{E})_{\rho^{\text{\rm{EAT}}}}, \tag{109}\]
where we have used that, as \(\tilde{Z}\tilde{X}X^{\prime}\) is classical, there cannot be any entanglement across the \(\tilde{Z}\tilde{X}X^{\prime}:\hat{Z}S\hat{E}\) partition, hence the second term in (108) has to be non-negative. Let us now define \(g:\mathcal{P}_{\tilde{\mathcal{C}}}\rightarrow\mathbb{R}\),
\[g(\tilde{p}):=\inf_{\begin{subarray}{c}\mathcal{H}_{\tilde{E}}\cong\mathcal{H}_{ Q_{1}}\\ \left|\rho\right\rangle\in\Sigma_{\tilde{E}}(\tilde{p})\end{subarray}}H(\hat{Z}|S\hat{E})_{ \rho^{\text{\rm{EAT}}}}, \tag{110}\]
which can serve as a crossover min-tradeoff function for the EAT channels \(\{\mathcal{M}_{i}^{\rm EAT}\}\). In order to obtain an efficiently numerically computable crossover min-tradeoff function, we now make use of the framework presented in [42] to remove the dependency on Eve's subsystem.
#### Removing the dependence on the \(\hat{E}\) subsystem
The idea is to consider a coherent version of a round of the protocol leading to Bob's raw key \(\hat{Z}\). Namely, Alice and Bob's measurements are performed in a coherent fashion, i.e. by means of isometries acting on the system to be measured and adding a quantum register containing the quantum information which, once dephased, will provide the measurement result, but not yet dephasing it. Alice and Bob then publicly announce partial information about their measurement outcomes, while keeping part of the information stored coherently. From that information they decide whether they use the round for key generation, parameter estimation or tomography of Alice's part. If they use the round for key generation, in the case of reverse reconciliation, Bob applies a key map to his coherently stored measurement outcomes, which provides a coherent key register. The key can then be obtained by means of a so-called pinching operation, i.e. a measurement that dephases the key register.
As all steps of the protocol before the pinching are performed coherently, we can express the outcome as a pure state, allowing us to apply Theorem 1 in [53], which removes the dependence on the \(\hat{E}\) subsystem. In order to formulate our result, we need to introduce the CP map \(\mathcal{G}:AB\to AB\hat{Z}\) that describes the coherent version of the protocol. This map is given by a single Kraus operator
\[G=\mathbb{1}_{A}\otimes\sum_{z=0}^{3}\sqrt{R_{B}^{z}}\otimes\left|z\right>_{ \hat{Z}}, \tag{111}\]
where we have defined the region operators
\[R_{B}^{z}=\frac{1}{\pi}\int_{0}^{\infty}\int_{\frac{\pi}{4}(2z-1)}^{\frac{\pi }{4}(2z+1)}\gamma\left|\gamma e^{i\theta}\right>\left<\gamma e^{i\theta}\right| d\theta d\gamma, \tag{112}\]
for \(z\in\{0,1,2,3\}\). Furthermore, we define the pinching operation \(\mathcal{Z}:\hat{Z}\rightarrow\hat{Z}\), defined by Kraus operators
\[Z_{j}=\left|j\right>\left<j\right|_{\hat{Z}}\otimes\mathbb{1}, \tag{113}\]
for \(j\in\{0,1,2,3\}\), and the identity is extended to all registers other than \(\hat{Z}\). It then holds
**Lemma 3**: _The crossover min-tradeoff function defined by eq. (110) can be reformulated as follows_
\[g(\tilde{p})=\inf_{\begin{subarray}{c}\mathcal{H}_{\tilde{E}}\cong\mathcal{H}_{Q_{1}}\\ \left|\rho\right\rangle\in\Sigma_{\tilde{E}}(\tilde{p})\end{subarray}}H(\hat{Z}|S\hat{E})_{\rho^{\text{EAT}}}=p^{\text{key}}\inf_{\rho\in\Sigma(\tilde{p})}D(\mathcal{G}(\rho_{AB})||\mathcal{Z}(\mathcal{G}(\rho_{AB}))), \tag{114}\]
_where we have defined the set \(\Sigma(\tilde{p})=\left\{\rho_{AB}\in\mathcal{D}(\mathcal{H}_{AB}):\left<c \right|\mathcal{M}^{\text{\rm EAT,test}}(\rho)_{C}\left|c\right>\equiv\tilde{ p}(c)\right\}\), which is independent of the reference system._
The proof of Lemma 3 goes along the lines of the discussion in [30] and can be found in Appendix A. Let us note that, by definition of the protocol, it holds \(\dim(A)=4\), but the dimension of Bob's space is infinite. As in [30], to circumvent this problem we will truncate Bob's Hilbert space \(B\) by introducing a cutoff \(N_{c}\in\mathbb{N}\), so that \(\dim(B)\) becomes finite, \(\dim(B)=N_{c}\). As explained below, the cutoff will be increased until the min-tradeoff function hardly depends on it. Under the cutoff assumption, the set \(\Sigma(\tilde{p})\) is compact and, as the objective is continuous [42], a minimum is attained in eq. (114). Following [54] we can show
**Lemma 4**: _For a given \(0<p^{\text{\rm key}}\leq 1\), under the photon number cutoff assumption,_
\[g(\tilde{p})=p^{\text{\rm key}}\min_{\rho\in\Sigma(\tilde{p})}D(\mathcal{G}( \rho_{AB})||\mathcal{Z}(\mathcal{G}(\rho_{AB}))) \tag{115}\]
_is a convex function on \(\mathcal{P}_{\hat{\mathcal{C}}}\)._
The proof can be found in Appendix B.
#### Finding an affine crossover min-tradeoff function
We note that, for any given distribution \(\tilde{p}\in\mathcal{P}_{\mathcal{C}}\), eq. (115) is a convex optimisation problem with semidefinite constraints. As the objective is not affine, however, it is not a semidefinite program (SDP). Also, the dependence of \(g\) on the distribution \(\tilde{p}\) is hidden in the constraints. We will now follow the steps taken in [42] and perform a first order Taylor expansion (around some state \(\tilde{\rho}_{AB}\in\mathcal{D}(\mathcal{H}_{AB})\)) providing a lower bound on the optimisation problem in eq. (115). The resulting expression contains an SDP with a linear objective.
We then consider the dual of the SDP, which for any dual feasible point provides an affine lower bound on the original SDP. By the nature of duality, which roughly speaking incorporates the constraints into the objective, the objective of the dual problem will explicitly depend on \(\tilde{p}\) in an affine way, as will the entire expression lower bounding the optimisation problem in eq. (115). Thus, for any given state \(\tilde{\rho}_{AB}\in\mathcal{D}(\mathcal{H}_{AB})\), as well as any dual feasible point, we can obtain an affine crossover min-tradeoff function.
To begin with, let us explicitly consider the optimisation problem in eq. (115). Let \(S=4\Delta/\delta+4\) represent the total number of modules in Bob's discretisation. For a probability distribution \(\tilde{p}\in\mathcal{P}_{\mathcal{C}}\), the optimisation takes the form
\[\min_{\rho_{AB}}\;D(\mathcal{G}(\rho_{AB})||\mathcal{Z}(\mathcal{ G}(\rho_{AB}))) \tag{116}\] \[\text{s.t.}\;\rho_{AB}\geq 0,\;\text{Tr}[\rho_{AB}]=1,\] \[\forall x\in\{0,1,2,3\},\;\forall z\in\{0,...,S-1\}:\] \[\tilde{p}^{\text{PE}}\,\text{Tr}\left[\left(\left|x\right\rangle \left\langle x\right|_{A}\otimes\tilde{R}_{B}^{z}\right)\rho_{AB}\right]= \tilde{p}(x,z,\bot),\] \[\forall x^{\prime}\in\{0,...,15\}:\] \[\tilde{p}^{\text{tom}}\,\text{Tr}\left[\Gamma_{x^{\prime}}\rho_{ A}\right]=\tilde{p}(\bot,\bot,x^{\prime}),\]
where we have defined \(\tilde{p}^{\text{PE}}=\frac{p^{\text{PE}}}{1-p^{\text{key}}}\) and \(\tilde{p}^{\text{tom}}=1-\tilde{p}^{\text{PE}}\). Further, \(\tilde{R}^{z}\) are region operators defined in analogy to (112), but with the discretisation used for parameter estimation given by eq. (27). Regarding the constraints, the region operators related to parameter estimation add up to the identity matrix, so that there is no need to impose the constraint \(\text{Tr}[\rho_{AB}]=1\). We now closely follow [42] to lower bound eq. (116). For brevity, let us define \(r(\rho):=D(\mathcal{G}(\rho)||\mathcal{Z}(\mathcal{G}(\rho)))\). By the properties of the pinching quantum channel, this expression can be rewritten without loss of generality in terms of von Neumann entropies
\[r(\rho)=H(\mathcal{Z}(\mathcal{G}(\rho)))-H(\mathcal{G}(\rho)). \tag{117}\]
Using the methodology of [55], it is possible to apply here a facial reduction to reformulate the maps \(\mathcal{Z}\) and \(\mathcal{G}\) into maps which are strictly positive definite; this not only ensures that the new objective function is differentiable for any \(\rho>0\), but also reduces the dimension of both maps, which simplifies the subsequent numerical analysis. This process can be seen as a unitary transformation, such that [56]
\[\mathcal{G}(\rho)=\begin{bmatrix}U&V\end{bmatrix}\begin{bmatrix}\tilde{ \mathcal{G}}(\rho)&0\\ 0&0\end{bmatrix}\begin{bmatrix}U^{\dagger}\\ V^{\dagger}\end{bmatrix}, \tag{118}\]
where \(\tilde{\mathcal{G}}(\rho)>0\) for \(\rho>0\). A similar procedure follows for \(\mathcal{Z}(\mathcal{G}(\rho))\), resulting in a new map \(\tilde{\mathcal{Z}}(\rho)>0\). Hence, by taking advantage of the fact that the von Neumann entropy is invariant under unitary transformations, we arrive at a simpler objective function
\[r(\rho)=H(\tilde{\mathcal{Z}}(\tilde{\mathcal{G}}(\rho)))-H(\tilde{\mathcal{G}}(\rho)). \tag{119}\]
With the maps \(\tilde{\mathcal{Z}}\), \(\tilde{\mathcal{G}}\) the matrix gradient \(\nabla r(\rho)\) is now given by
\[\nabla r(\rho)^{T}=[\tilde{\mathcal{G}}^{\dagger}(\log\tilde{\mathcal{G}}( \rho))+\tilde{\mathcal{G}}^{\dagger}(\mathbb{1})]-[\tilde{\mathcal{Z}}^{ \dagger}(\log\tilde{\mathcal{Z}}(\rho))+\tilde{\mathcal{Z}}^{\dagger}( \mathbb{1})]. \tag{120}\]
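To make the objective concrete, the MATLAB helper below is our own illustration of \(r\) in the form of eqs. (117)/(119): it evaluates the difference of von Neumann entropies for a given (already truncated) state \(\sigma=\mathcal{G}(\rho)\). The ordering convention — the key register as the first tensor factor of dimension four — and the base-2 entropies are our own assumptions, not a statement about the paper's implementation.

```matlab
% Sketch: r = H(Z(sigma)) - H(sigma), where the pinching Z keeps only the
% diagonal blocks of the key register (assumed here to be the first factor).
function r = rel_ent_key_pinching(sigma, dZ)
d  = size(sigma, 1);  dB = d/dZ;
sigmaZ = zeros(d);
for j = 0:dZ-1
    idx = j*dB + (1:dB);
    sigmaZ(idx, idx) = sigma(idx, idx);   % diagonal key blocks survive the pinching
end
r = vnent(sigmaZ) - vnent(sigma);
end

function H = vnent(rho)                   % von Neumann entropy in bits
lam = real(eig((rho + rho')/2));          % hermitise against round-off
lam = lam(lam > 1e-12);
H   = -sum(lam.*log2(lam));
end
```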
Let now \(\tilde{p}\in\mathcal{P}_{\tilde{\mathcal{C}}}\) and \(\rho_{\tilde{p}}^{*}\in\Sigma(\tilde{p})\) be the minimiser of (116). For any \(\tilde{\rho}\in\mathcal{D}(\mathcal{H}_{AB})\), it then holds
\[\frac{g(\tilde{p})}{p^{\text{key}}}=r(\rho_{\tilde{p}}^{*}) \geq r(\tilde{\rho})+\text{Tr}\left[(\rho_{\tilde{p}}^{*}-\tilde{ \rho})^{T}\nabla r(\tilde{\rho})\right] \tag{121}\] \[\geq r(\tilde{\rho})-\text{Tr}\left[\tilde{\rho}^{T}\nabla r( \tilde{\rho})\right]+\min_{\sigma\in\Sigma(\tilde{p})}\text{Tr}\left[\sigma^{ T}\nabla r(\tilde{\rho})\right], \tag{122}\]
where the first inequality is due to the fact that \(r\) is a convex, differentiable function over the convex set \(\mathcal{D}(\mathcal{H}_{AB})\), hence it can be lower bounded by its first order Taylor expansion at \(\tilde{\rho}\) (see e.g. [57] p.69), and the second inequality is due to the fact that \(\rho_{\tilde{p}}^{*}\in\Sigma(\tilde{p})\). For any \(\tilde{\rho}\in\mathcal{D}(\mathcal{H}_{AB})\) and \(\tilde{p}\), the optimisation problem in eq. (122) is an SDP in standard form, explicitly given by
\[\min_{\sigma_{AB}}\;\operatorname{Tr}\left[\sigma^{T}\nabla r(\tilde{\rho})\right] \tag{123}\] \[\text{s.t.}\;\sigma_{AB}\geq 0,\] \[\forall x\in\{0,1,2,3\},\;\forall z\in\{0,...,S-1\}:\] \[\tilde{p}^{\mathrm{PE}}\operatorname{Tr}\left[\left(\left|x\right\rangle\left\langle x\right|_{A}\otimes\tilde{R}_{B}^{z}\right)\sigma_{AB}\right]=\tilde{p}(x,z,\bot),\] \[\forall x^{\prime}\in\{0,...,15\}:\] \[\tilde{p}^{\mathrm{tom}}\operatorname{Tr}\left[\Gamma_{x^{\prime}}\sigma_{A}\right]=\tilde{p}(\bot,\bot,x^{\prime}).\]
The dual problem of the SDP (123) takes the form
\[\max_{\vec{\nu}\in\Sigma_{\tilde{\rho}}^{*}}\ell_{\tilde{p}}(\vec{\nu}), \tag{124}\]
where the dual objective is given by
\[\ell_{\tilde{p}}(\vec{\nu})=\sum_{x=0}^{3}\sum_{z=0}^{S-1}\nu_{xz}\frac{\tilde {p}(x,z,\bot)}{\tilde{p}^{\mathrm{PE}}}+\sum_{x^{\prime}=0}^{15}\nu_{x^{\prime }}^{\prime}\frac{\tilde{p}(\bot,\bot,x^{\prime})}{\tilde{p}^{\mathrm{tom}}}, \tag{125}\]
which is affine with respect to \(\tilde{p}\). Further, the set \(\Sigma_{\tilde{\rho}}^{*}\) is defined as
\[\Sigma_{\tilde{\rho}}^{*}=\left\{\vec{\nu}\in\mathbb{R}^{4S+16}:\nabla r(\rho )-\sum_{x=0}^{3}\sum_{z=0}^{S-1}\nu_{xz}\left(\left|x\right>\left<x\right|_{A} \otimes\tilde{R}_{B}^{z}\right)^{T}-\sum_{x^{\prime}=0}^{15}\nu_{x^{\prime}} ^{\prime}\Gamma_{x^{\prime}}^{T}\geq 0\right\}, \tag{126}\]
which is independent of \(\tilde{p}\). By weak duality it then holds
\[g(\tilde{p})=p^{\mathrm{key}}r(\rho_{\tilde{p}}^{*}) \geq p^{\mathrm{key}}\left(r(\tilde{\rho})-\operatorname{Tr}\left[\tilde{\rho}^{T}\nabla r(\tilde{\rho})\right]+\max_{\vec{\nu}\in\Sigma_{\tilde{\rho}}^{*}}\ell_{\tilde{p}}(\vec{\nu})\right) \tag{127}\] \[\geq p^{\mathrm{key}}\left(r(\tilde{\rho})-\operatorname{Tr}\left[\tilde{\rho}^{T}\nabla r(\tilde{\rho})\right]+\ell_{\tilde{p}}(\vec{\nu})\right)\] (128) \[=:\tilde{g}_{\vec{\nu},\tilde{\rho}}(\tilde{p}) \tag{129}\]
for any \(\tilde{\rho}\in\mathcal{D}(\mathcal{H}_{AB})\) and any \(\vec{\nu}\in\Sigma_{\tilde{\rho}}^{*}\). We note that for any such choice of \(\tilde{\rho},\vec{\nu}\), the function \(\tilde{g}_{\vec{\nu},\tilde{\rho}}:\mathcal{P}_{\mathcal{C}}\to\mathbb{R}\) is an affine crossover min-tradeoff function.
#### vi.2.3 Optimisation of the crossover min-tradeoff function
In this section we describe how we can numerically obtain almost optimal, i.e. optimal up to numerical imprecision, choices for our parameters \(\tilde{\rho}\) and \(\vec{\nu}\) in the crossover min-tradeoff function (129), for a given distribution \(\tilde{p}_{0}\in\mathcal{P}_{\tilde{\mathcal{C}}}\). The distribution will be of the form \(\tilde{p}_{0}(x,z,\bot)=\tilde{p}^{\mathrm{PE}}p_{0}^{\mathrm{sim}}(x,z)\), for all \(x\in\{0,1,2,3\}\) and \(z\in\{0,...,S-1\}\), where \(p_{0}^{\mathrm{sim}}(x,z)\) is a distribution obtained by simulating an honest implementation of the physical QKD protocol. Similarly, \(\tilde{p}_{0}(\bot,\bot,x^{\prime})=\tilde{p}^{\mathrm{tom}}p_{0}^{\mathrm{tom}}(x^{\prime})\) for all \(x^{\prime}\in\{0,...,15\}\), where \(p_{0}^{\mathrm{tom}}(x^{\prime})\) is the distribution obtained in the hypothetical tomography. For the explicit form of \(p_{0}^{\mathrm{sim}}(x,z)\) and \(p_{0}^{\mathrm{tom}}(x^{\prime})\), given by a simulation of the hypothetical QKD protocol, see Section V.
We note that whereas the choices for \(\tilde{\rho}\) and \(\vec{\nu}\) will only be optimal up to numerical imprecision, it is possible to analytically confirm their feasibility, i.e. that \(\tilde{\rho}\in\mathcal{D}(\mathcal{H}_{AB})\) and \(\vec{\nu}\in\Sigma_{\tilde{\rho}}^{*}\). Thus we can analytically verify that the corresponding function \(g_{\vec{\nu},\tilde{\rho}}\) is indeed a valid crossover min-tradeoff function.
Our numerical method now works as follows: We begin with some \(\tilde{\rho}^{(0)}\in\mathcal{D}(\mathcal{H}_{AB})\) and, for \(i=1,...,n^{\mathrm{iter}}\), where \(n^{\mathrm{iter}}\in\mathbb{N}\), iteratively compute
\[\Delta\tilde{\rho}^{(i)}=\arg\min_{\sigma_{AB}}\ \operatorname{Tr} \left[\sigma^{T}\nabla r(\tilde{\rho}^{(i-1)})\right] \tag{130}\] \[\text{s.t.}\ \sigma_{AB}\geq 0,\] \[\forall x\in\{0,1,2,3\},\ \forall z\in\{0,...,S-1\}:\] \[\operatorname{Tr}\left[\left(\left|x\right\rangle\left\langle x \right|_{A}\otimes\tilde{R}_{B}^{z}\right)\sigma_{AB}\right]=p_{0}^{\text{sim} }(x,z),\] \[\forall x^{\prime}\in\{0,...,15\}:\] \[\operatorname{Tr}\left[\Gamma_{x^{\prime}}\sigma_{A}\right]=p_{0 }^{\text{tom}}(x^{\prime}).\]
Once this SDP is solved and \(\Delta\tilde{\rho}^{(i)}\) is known, the value of the relative entropy is minimized according to
\[\min_{\kappa\in(0,1)}r(\tilde{\rho}^{(i-1)}+\kappa\Delta\tilde{\rho}^{(i)}). \tag{131}\]
Such minimization can be approximately computed in MATLAB with the function fminbnd. Then, we set a new density matrix \(\tilde{\rho}^{(i)}=\tilde{\rho}^{(i-1)}+\kappa^{*}\Delta\tilde{\rho}^{(i)}\), with the optimal coefficient \(\kappa^{*}\), and repeat the optimisation (130). After \(n^{\text{iter}}\) iterations we set \(\tilde{\rho}_{0}=\tilde{\rho}^{(n^{\text{iter}})}\).
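A single iteration of this line search might look as follows. This is a minimal sketch: r_obj, rho_prev and Delta_rho are placeholders of our own (in the actual optimisation r_obj evaluates eq. (119) and Delta_rho is the update direction returned by the SDP (130)); only the fminbnd call reflects the step described above.

```matlab
% One Frank-Wolfe line-search step, eq. (131), with placeholder inputs.
rho_prev  = eye(4)/4;                                   % placeholder iterate
Delta_rho = diag([0.4 0.3 0.2 0.1]) - rho_prev;         % placeholder direction
r_obj     = @(rho) real(trace(rho*rho));                % placeholder objective
phi       = @(kappa) r_obj(rho_prev + kappa*Delta_rho); % objective along the segment
kappa_opt = fminbnd(phi, 0, 1);                         % bounded scalar minimisation
rho_next  = rho_prev + kappa_opt*Delta_rho;             % next iterate
```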
The numerical computation of the dual of (123) requires taking into account the difference between the numerical representation of the states and operators and their analytical values, which can lead to a violation of the constraints due to the finite precision of the computation. According to Theorem 3 of [42], this error may be taken into account by introducing a new parameter \(\varepsilon^{\prime}\), set to the absolute value of the maximal such error, and expanding the feasible set, so as to provide a lower bound while preserving the reliability of the approach. With this methodology, the dual takes the form [57],
\[\max_{(\vec{\nu},\vec{\mu})\in\tilde{\Sigma}^{*}_{\tilde{\rho}_{0}}}\ell_{ \tilde{\rho}_{0},\varepsilon^{\prime}}^{0}(\vec{\nu},\vec{\mu}), \tag{132}\]
where the dual objective is given by
\[\ell_{\tilde{\rho}_{0},\varepsilon^{\prime}}^{0}(\vec{\nu},\vec{\mu})=\sum_{ x=0}^{3}\sum_{z=0}^{S-1}\nu_{xz}p_{0}^{\text{sim}}(x,z)+\sum_{x^{\prime}=0}^{15} \nu_{x^{\prime}}^{\prime}p_{0}^{\text{tom}}(x^{\prime})-\varepsilon^{\prime} \sum_{z^{\prime}=1}^{4S+16}\mu_{z^{\prime}}, \tag{133}\]
with the set \(\tilde{\Sigma}^{*}_{\tilde{\rho}_{0}}\) defined as
\[\tilde{\Sigma}^{*}_{\tilde{\rho}_{0}}=\left\{(\vec{\nu},\vec{\mu})\in(\mathbb{ R}^{4S+16},\mathbb{R}^{4S+16}):-\vec{\mu}\leq\vec{\nu}\leq\vec{\mu},\nabla r (\tilde{\rho}_{0})-\sum_{x=0}^{3}\sum_{z=0}^{S-1}\nu_{xz}\left(\left|x\right\rangle \left\langle x\right|_{A}\otimes\tilde{R}_{B}^{z}\right)^{T}-\sum_{x^{\prime} =0}^{15}\nu_{x^{\prime}}^{\prime}\Gamma_{x^{\prime}}^{T}\geq 0\right\}.\]
From this maximization, as well as a fixed value \(\varepsilon^{\prime}_{0}\) taken according to the maximal numerical error at the constraints, we obtain \(\vec{\nu}_{0}\), and note that \(\vec{\nu}_{0}\in\Sigma^{*}_{\tilde{\rho}_{0}}\). This allows us to define our crossover min-tradeoff function as
\[\tilde{g}_{0}(\tilde{p}) :=\tilde{g}_{\tilde{\nu}_{0},\tilde{\rho}_{0}}(\tilde{p}) \tag{134}\] \[=p^{\text{key}}\left(r(\tilde{\rho}_{0})-\operatorname{Tr}\left[ \tilde{\rho}_{0}^{T}\nabla r(\tilde{\rho}_{0})\right]+\ell_{\tilde{p}}(\tilde{ \nu}_{0})\right)\] \[=p^{\text{key}}\left(G_{0}+\sum_{x=0}^{3}\sum_{z=0}^{S-1}\nu_{0, xz}\frac{\tilde{p}(x,z,\bot)}{\tilde{p}^{\text{PE}}}+\sum_{x^{\prime}=0}^{15} \nu_{0,x^{\prime}}^{\prime}\frac{\tilde{p}(\bot,\bot,x^{\prime})}{\tilde{p}^ {\text{tom}}}\right), \tag{135}\]
with a constant
\[G_{0}:=r(\tilde{\rho}_{0})-\operatorname{Tr}\left[\tilde{\rho}_{0}^{T}\nabla r (\tilde{\rho}_{0})\right]. \tag{136}\]
In order to compute the higher order terms of the EAT, we need to find \(\max(\tilde{g}_{0})=\max_{\tilde{p}\in\mathcal{P}_{\mathcal{C}}}\tilde{g}_{0}( \tilde{p})\) and \(\min(\tilde{g}_{0})=\min_{\tilde{p}\in\mathcal{P}_{\mathcal{C}}}\tilde{g}_{0}( \tilde{p})\). We note that, as \(\mathcal{P}_{\mathcal{C}}\) is convex and \(\tilde{g}_{0}\) is affine, we can restrict to the extreme points of \(\mathcal{P}_{\mathcal{C}}\). Namely we get
\[\max(\tilde{g}_{0})=p^{\text{key}}G_{0}+p^{\text{key}}\underbrace{\max \left\{\frac{\nu_{0,xz}}{\tilde{p}^{\text{PE}}}:x=0,\dots,3,z=0,\dots,S-1 \right\}\cup\left\{\frac{\nu_{0,x^{\prime}}^{\prime}}{\tilde{p}^{\text{tom}}}: x^{\prime}=0,\dots 15,\right\}}_{=:\max(\nu_{0})}, \tag{137}\]
\[\min(\tilde{g}_{0})=p^{\text{key}}G_{0}+p^{\text{key}}\underbrace{\min\left\{ \frac{\nu_{0,xz}}{\tilde{p}^{\text{PE}}}:x=0,\dots,3,z=0,\dots,S-1\right\}\cup \left\{\frac{\nu_{0,x^{\prime}}^{\prime}}{\tilde{p}^{\text{tom}}}:x^{\prime}=0, \dots,15\right\}}_{=:\min(\nu_{0})}. \tag{138}\]
In the case where the minimisers are non-positive and the maximisers are non-negative, we can upper bound
\[\max(\tilde{g}_{0})-\min(\tilde{g}_{0})\leq\max(\nu_{0})-\min(\nu_{0}), \tag{139}\]
which is independent of \(p^{\rm key}\). Finally, we can introduce the min-tradeoff function induced by our crossover min-tradeoff function \(\tilde{g}_{0}\), given by eq. (135), via eqs. (101,102).
\[f(p) =\sum_{c\in\tilde{\mathcal{C}}}p(c)\left(\max(\tilde{g}_{0})+\frac{1}{1-p^{\rm key}}\left(\tilde{g}_{0}(\delta_{c})-\max(\tilde{g}_{0})\right)\right)+p(\bot,\bot,\bot)\max(\tilde{g}_{0}) \tag{140}\] \[=\max(\tilde{g}_{0})+\sum_{c\in\tilde{\mathcal{C}}}\frac{p(c)}{1-p^{\rm key}}\left(\tilde{g}_{0}(\delta_{c})-\max(\tilde{g}_{0})\right)\] (141) \[=\underbrace{p^{\rm key}\left(G_{0}+\max(\nu_{0})\right)}_{=:{\rm const}}+\sum_{x=0}^{3}\sum_{z=0}^{S-1}\underbrace{p^{\rm key}\frac{\nu_{0,xz}/\tilde{p}^{\rm PE}-\max(\nu_{0})}{1-p^{\rm key}}}_{=:h_{x,z,\bot}}p(x,z,\bot)+\sum_{x^{\prime}=0}^{15}\underbrace{p^{\rm key}\frac{\nu_{0,x^{\prime}}^{\prime}/\tilde{p}^{\rm tom}-\max(\nu_{0})}{1-p^{\rm key}}}_{=:h_{\bot,\bot,x^{\prime}}}p(\bot,\bot,x^{\prime})\] (142) \[={\rm const}+f^{\rm PE}(p)+f^{\rm tom}(p), \tag{143}\]
where
\[f^{\rm PE}(p)=\sum_{x=0}^{3}\sum_{z=0}^{S-1}h_{x,z,\bot}p(x,z, \bot), \tag{144}\] \[f^{\rm tom}(p)=\sum_{x^{\prime}=0}^{15}h_{\bot,\bot,x^{\prime}} p(\bot,\bot,x^{\prime}), \tag{145}\]
which are of the form of eqs. (33,34), allowing us to use \(f\) as a min-tradeoff function in Theorem 1. When applying Theorem 1, the term that goes into the bound on the key rate is \(f(p_{0})\), which can be reformulated in a more convenient form as follows:
\[f(p_{0}) =p^{\rm key}\left(G_{0}+\max(\nu_{0})+\sum_{x=0}^{3}\sum_{z=0}^{S -1}\left(\nu_{0,xz}-\tilde{p}^{\rm PE}\max(\nu_{0})\right)p_{0}^{\rm sim}(x,z )+\sum_{x^{\prime}=0}^{15}\left(\nu_{0,x^{\prime}}^{\prime}-\tilde{p}^{\rm tom }\max(\nu_{0})\right)p_{0}^{\rm tom}(x^{\prime})\right) \tag{146}\] \[=p^{\rm key}\left(G_{0}+\sum_{x=0}^{3}\sum_{z=0}^{S-1}\nu_{0,xz} p_{0}^{\rm sim}(x,z)+\sum_{x^{\prime}=0}^{15}\nu_{0,x^{\prime}}^{\prime}p_{0}^{ \rm tom}(x^{\prime})\right)\] (147) \[=\tilde{g}_{0}(\tilde{p}_{0}). \tag{148}\]
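Putting eqs. (137)-(147) together, the coefficients of the min-tradeoff function and the value \(f(p_{0})\) can be assembled from a dual point with a few lines of MATLAB. In the sketch below all inputs are placeholders of our own; in practice nu_PE, nu_tom and G0 come from the dual point of (132) and p_sim, p_tom from the honest-implementation simulation of Section V.

```matlab
% Assembling eqs. (137)-(143) and f(p0) of eq. (147) from a dual point.
S      = 20;  pkey = 1 - 1e-6;  ptPE = 0.5;  ptTOM = 1 - ptPE;   % placeholders
G0     = 1.0;
nu_PE  = randn(4*S, 1);   p_sim = ones(4*S, 1)/(4*S);            % placeholders
nu_tom = randn(16, 1);    p_tom = ones(16, 1)/16;                % placeholders
nu_all = [nu_PE/ptPE; nu_tom/ptTOM];
nu_max = max(nu_all);  nu_min = min(nu_all);                     % eqs. (137)-(138)
h_PE   = pkey*(nu_PE/ptPE   - nu_max)/(1 - pkey);                % h_{x,z,perp}, eq. (142)
h_tom  = pkey*(nu_tom/ptTOM - nu_max)/(1 - pkey);                % h_{perp,perp,x'}
f_p0   = pkey*(G0 + nu_PE.'*p_sim + nu_tom.'*p_tom);             % eq. (147)
```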
### Asymptotic Rates
With our choice of a min-tradeoff function \(f(p)=f^{\rm PE}(p)+f^{\rm tom}(p)+{\rm const}\) given by eqs. (143 - 145), we can now compute the asymptotic key rate in Theorem 1, and show that we can achieve soundness and completeness in the asymptotic limit.
Let \(n\in\mathbb{N}\). We begin by noting that, for fixed \(S\in\mathbb{N}\), our numerically obtained values \(\nu_{0,xz}\), for \(x=0,...,3,z=0,...,S-1\), and \(\nu_{0,x^{\prime}}^{\prime}\), for \(x^{\prime}=0,...,15\), are constant, i.e. independent of \(n\). Let us consider some fixed values for the parameters \(\epsilon_{\rm NonAbort}^{\rm phys},\epsilon^{\rm tom},\epsilon_{\rm EC}, \epsilon_{\rm EC}^{c},\epsilon_{\rm PE}^{c}\in(0,1)\), such that \(\epsilon^{\rm tom}<\frac{1}{2}\epsilon_{\rm NonAbort}^{\rm phys}\), as well as \(\epsilon\in\left(0,1-\sqrt{2\epsilon^{\rm tom}/\epsilon_{\rm NonAbort}^{\rm phys }}\right)\), \(\epsilon^{\rm phys}=\epsilon+\sqrt{2\epsilon^{\rm tom}/\epsilon_{\rm NonAbort}^{ \rm phys}}\). Let us also consider some constant \(0\leq\tilde{p}^{\rm PE}\leq 1\), and \(\tilde{p}^{\rm tom}=1-\tilde{p}^{\rm PE}\).
Let us now choose \(a=1+n^{-3/4}\), as well as \(p^{\rm key}=1-n^{-\frac{1}{2}}\) and \(p^{\rm PE}=\tilde{p}^{\rm PE}n^{-\frac{1}{2}}\), implying \(p^{\rm tom}=\tilde{p}^{\rm tom}n^{-\frac{1}{2}}\). It then holds that \(h_{x,z,\bot}=\mathcal{O}(n^{\frac{1}{2}})\) and \(h_{\bot,\bot,x^{\prime}}=\mathcal{O}(n^{\frac{1}{2}})\), as well as \(p_{0}(x,z,\bot)=\mathcal{O}(n^{-\frac{1}{2}})\) and \(p_{0}(\bot,\bot,x^{\prime})=\mathcal{O}(n^{-\frac{1}{2}})\), for all \(x\in\{0,...,3\}\), \(z\in\{0,...,S-1\}\) and \(x^{\prime}\in\{0,...,15\}\). Hence, for all \(i=1,...,4S\), the quantities defined in eqs. (89-90) scale as follows: \(\gamma_{i}=\mathcal{O}(n^{-\frac{1}{2}})\), \(c_{i}=\mathcal{O}(n^{\frac{1}{2}})\), and \(D=\mathcal{O}(n^{\frac{1}{2}})\). Consequently, in order to fulfill eq. (88), we have to choose \(\delta_{\rm PE}^{\rm tol}=\mathcal{O}((\log n)^{\frac{1}{2}}n^{-\frac{1}{4}})\). Similarly, it can be shown that we need \(\delta_{\rm tom}^{\rm tol}=\mathcal{O}((\log n)^{\frac{1}{2}}n^{-\frac{1}{4}})\) in order to satisfy eq. (80).
As for the remaining higher order terms in eq. (81), we note that by eqs. (106,107,139) it holds
\[V \leq\tilde{V}\leq\sqrt{\frac{1}{1-p^{\rm{key}}}\left(\max(\nu_{0})- \min(\nu_{0})\right)^{2}+2}+\log\left(2d_{O}^{2}+1\right)=\mathcal{O}\left(n^{ \frac{1}{4}}\right), \tag{149}\] \[K_{a} \leq\tilde{K}_{a}=\mathcal{O}(1). \tag{150}\]
Hence, all remaining higher order terms, except \(\frac{1}{n}{\rm leak}_{\rm EC}\), which we keep open, scale as \(\mathcal{O}(n^{-\frac{1}{4}})\) or less. Further, the term \(f(p_{0})\) in Theorem 1, given by eq. (147), depends on \(n\) only via the prefactor \(p^{\rm key}\). In summary, we can obtain the following bound on the asymptotic rate
**Theorem 3** (Asymptotic rate): _For the above mentioned values of the parameters, it holds_
\[r^{\rm{phys}}\big{|}_{\Omega_{\rm{Nonabort}}^{\rm{phys}}} \geq G_{0}+\sum_{x=0}^{3}\sum_{z=0}^{S-1}\nu_{0,xz}p_{0}^{\rm{sim} }(x,z)+\sum_{x^{\prime}=0}^{15}\nu^{\prime}_{0,x^{\prime}}p_{0}^{\rm{tom}}(x^ {\prime})-\frac{1}{n}{\rm leak}_{\rm EC}+\mathcal{O}((\log n)^{\frac{1}{2}}n ^{-\frac{1}{4}}) \tag{151}\] \[\lim_{n\to\infty}r^{\rm{phys}}\big{|}_{\Omega_{\rm{Nonabort}}^{ \rm{phys}}} \geq G_{0}+\sum_{x=0}^{3}\sum_{z=0}^{S-1}\nu_{0,xz}p_{0}^{\rm{sim} }(x,z)+\sum_{x^{\prime}=0}^{15}\nu^{\prime}_{0,x^{\prime}}p_{0}^{\rm{tom}}(x^ {\prime})-\lim_{n\to\infty}\frac{1}{n}{\rm leak}_{\rm EC}. \tag{152}\]
## V Numerical implementation and results
In order to show that our approach produces non-trivial key rates in a realistic implementation, we consider the same scenario that was used in [30]. Namely, we simulate an experiment in which Alice and Bob are linked by an optical fibre of length \(D\) with excess noise \(\xi\), transmittance \(\eta=10^{-\omega D/10}\) and an attenuation of \(\omega=0.2\) dB/km. This provides us with a simulated distribution that can be computed efficiently using MATLAB
\[p_{0}^{\rm{sim}}(x,z)=\int_{\tilde{\mathcal{R}}_{z}}\frac{\gamma\exp\left( \frac{-|\gamma e^{i\theta}-\sqrt{\eta}\varphi_{x}|^{2}}{1+\eta\xi/2}\right)}{ 4\pi(1+\eta\xi/2)}d\theta d\gamma, \tag{153}\]
where \(\tilde{\mathcal{R}}_{z}\) represents the fragment of the phase space corresponding to each module \(z\in\{0,...,S-1\}\), defined according to the intervals described in (27), and \(\varphi_{x}\in\{\alpha,i\alpha,-\alpha,-i\alpha\}\) are the coherent state amplitudes used by Alice with \(\alpha\in\mathbb{R}\). The region operators for the constraints \(\tilde{R}_{B}^{z}\) are given by the same intervals as in (153)
\[\tilde{R}_{B}^{z}=\frac{1}{\pi}\int_{\tilde{\mathcal{R}}_{z}}\gamma\left| \gamma e^{i\theta}\right>\left<\gamma e^{i\theta}\right|d\theta d\gamma, \tag{154}\]
while their numerical implementation requires switching to the Fock basis, using the inner product [58]
\[\left<\gamma e^{i\theta}\mid m\right>=\frac{\gamma^{m}e^{-\gamma^{2}/2}e^{-im \theta}}{\sqrt{m!}}. \tag{155}\]
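For illustration, the matrix elements of a truncated region operator of the form (154) follow from the overlap (155) by separating the radial and angular integrals. The MATLAB sketch below uses example module boundaries of our own and the cutoff \(N_{c}=15\); it is only meant to show the structure of the computation.

```matlab
% Fock-basis matrix elements of a region operator over [g1,g2] x [t1,t2].
Nc = 15;  g1 = 0;  g2 = 0.9;  t1 = -pi/4;  t2 = pi/4;   % example module
R = zeros(Nc+1);
for m = 0:Nc
  for mp = 0:Nc
    radial = integral(@(g) g.^(m+mp+1).*exp(-g.^2), g1, g2);   % radial part
    if m == mp
      ang = t2 - t1;                                           % angular part
    else
      ang = (exp(1i*(m-mp)*t2) - exp(1i*(m-mp)*t1))/(1i*(m-mp));
    end
    R(m+1, mp+1) = radial*ang/(pi*sqrt(factorial(m)*factorial(mp)));
  end
end
```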
For the hypothetical tomography, we choose an IC POVM \(\{\Gamma_{x^{\prime}}\}_{x^{\prime}=0}^{15}\), which completely describes Alice's marginal with a probability distribution
\[p_{0}^{\rm{tom}}(x^{\prime})=\frac{1}{4}\sum_{x,y=0}^{3}\left<\varphi_{y}| \varphi_{x}\right>{\rm Tr}\left[\Gamma_{x^{\prime}}\left|x\right>\left<y|_{A} \right]. \tag{156}\]
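A minimal MATLAB sketch of eq. (156) first builds the Gram matrix \(\langle\varphi_{y}|\varphi_{x}\rangle\) of Alice's four coherent states and then contracts it with the POVM elements. The POVM used below is a trivial placeholder of ours (it is not informationally complete) so that the snippet runs as written; the amplitude is an example value.

```matlab
% Sketch of eq. (156) with a placeholder POVM on Alice's 4-dimensional system.
alpha = 0.71;  phis = alpha*[1, 1i, -1, -1i];
Gram = zeros(4);
for x = 1:4
  for y = 1:4
    Gram(y,x) = exp(-abs(phis(x))^2/2 - abs(phis(y))^2/2 + conj(phis(y))*phis(x));
  end
end
Gamma = cell(16,1);
for k = 1:16, Gamma{k} = eye(4)/16; end        % placeholder POVM, sums to identity
p_tom = zeros(16,1);
for k = 1:16
  acc = 0;
  for x = 1:4
    for y = 1:4
      acc = acc + Gram(y,x)*Gamma{k}(y,x);     % <phi_y|phi_x> Tr[Gamma_k |x><y|]
    end
  end
  p_tom(k) = real(acc)/4;
end
```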
For the representation of operators and states, we take the photon number cutoff \(N_{c}=15\) since it provides a good balance between the execution time of the solver and the reliability of the numerics according to [28; 30; 55], where it is stated that \(N_{c}\geq 10\) is generally enough to reduce the error given by the truncated representation.
With all the elements of the optimisation defined, we minimize the SDP (130) according to the Frank-Wolfe algorithm. For this process we use the toolbox YALMIP [59] together with the interior point solver SDPT3 [60; 61]. Once the suboptimal bound is obtained, we compute the dual (132) using the optimization software CVX [62; 63], since it provides slightly better results than YALMIP and SDPT3. For the iterative process of the Frank-Wolfe algorithm, we set a stopping criterion based on calculating the lower bound (127) every 15 iterations of the minimization (130); if the relative difference between the upper bound given by minimizing (119) and the reliable lower bound is smaller than 2%, the algorithm stops the optimization. If this margin is not reached, the algorithm continues until a total
of 300 iterations are performed. Using this approach we obtain \(\tilde{\rho}_{0}\) and \(\vec{\nu}_{0}\), the feasibility of which can be checked analytically. This allows us to obtain a crossover min-tradeoff function \(\tilde{g}_{0}\) via eq. (134). By Theorem 3, we obtain the asymptotic rate
\[r_{\infty}\geq G_{0}+\sum_{x=0}^{3}\sum_{z=0}^{S-1}\nu_{0,xz}p_{0}^{\rm sim}(x, z)+\sum_{x^{\prime}=0}^{15}\nu_{0,x^{\prime}}^{\prime}p_{0}^{\rm tom}(x^{ \prime})-\lim_{n\to\infty}\frac{1}{n}\text{leak}_{\rm EC}. \tag{157}\]
For the classical information leaked during error correction, we can assume an honest, iid implementation of the protocol. Assuming a perfect implementation of an \(\epsilon_{\rm EC}\)-secure and robust error correction, it holds [11]
\[\frac{1}{n}\text{leak}_{\rm EC}\leq\chi_{\rm EC}H(\hat{Z}|\hat{X})+(1-\chi_{\rm EC })H(\hat{Z})+\sqrt{\frac{3\log(2/\epsilon_{\rm EC})}{n}}\log(|\hat{Z}|+3), \tag{158}\]
where \(\hat{X}\), \(\hat{Z}\) represent the key string bits (after removing the symbol \(\perp\)) of Alice and Bob respectively, and \(\chi_{\rm EC}\) is the error correction efficiency, where \(\chi_{\rm EC}=1\) corresponds to the Shannon limit. \(H(\hat{Z}|\hat{X})\) can be computed numerically according to the distribution (153) adapted for the modulation of the key rounds, namely
\[p_{0}^{\rm EC}(x,z)=\int_{0}^{\infty}\int_{\frac{\pi}{4}(2z-1)}^{\frac{\pi}{4}(2z+1)}\frac{\gamma\exp\left(\frac{-|\gamma e^{i\theta}-\sqrt{\eta}\varphi_{x}|^{2}}{1+\eta\xi/2}\right)}{4\pi(1+\eta\xi/2)}d\theta d\gamma. \tag{159}\]
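The error-correction cost can be estimated numerically from eqs. (158)-(159). The sketch below integrates the key-round distribution over the four quadrants (with a finite radial cutoff in place of the infinite upper limit) and evaluates the leakage bound; the channel parameters, block length and \(\epsilon_{\rm EC}\) are example values of our own, and logarithms are taken base 2.

```matlab
% Sketch of eqs. (158)-(159) with example parameters (50 km link).
eta = 10^(-0.2*50/10);  xi = 0.02;  alpha = 0.71;
chiEC = 0.95;  n = 1e12;  epsEC = 1e-8;  gmax = 12;     % radial cutoff instead of Inf
phis = alpha*[1, 1i, -1, -1i];
pEC  = zeros(4, 4);                                     % rows: x, columns: z
for x = 0:3
  for z = 0:3
    dens = @(g, th) g.*exp(-abs(g.*exp(1i*th) - sqrt(eta)*phis(x+1)).^2 ...
                           /(1 + eta*xi/2))/(4*pi*(1 + eta*xi/2));
    pEC(x+1, z+1) = integral2(dens, 0, gmax, pi/4*(2*z-1), pi/4*(2*z+1));
  end
end
pZ    = sum(pEC, 1);                                    % marginal of Bob's key symbol
HZ    = -sum(pZ.*log2(pZ));                             % H(Zhat)
condZ = pEC./sum(pEC, 2);                               % p(z|x), implicit expansion (R2016b+)
HZX   = -sum(sum(pEC.*log2(condZ)));                    % H(Zhat|Xhat)
leak  = chiEC*HZX + (1 - chiEC)*HZ + sqrt(3*log2(2/epsEC)/n)*log2(4 + 3);
```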
With the error correction cost, it is not only possible to calculate the asymptotic secret key rate, but also to optimise the amplitude \(\alpha\) that Alice chooses for her coherent states, which is not attainable with only the results from the SDP. The outcomes of such an optimisation of the amplitude are included for completeness in Figure 2, which indicates in particular how \(\alpha=0.71\) becomes the optimal amplitude for distances beyond \(D=65\) km in the case of \(\chi_{\rm EC}=1\).
Figure 2: Asymptotic key generation rate per pulse in terms of the amplitude of the coherent states \(\alpha\) for multiple distances. The plots were obtained for an excess noise \(\xi=1\%\) and a perfect error correction efficiency \(\chi_{\rm EC}=1\).

To test the accuracy of our approach, we compared our results for different values of the cutoff, as well as with the so far standard method of computing the asymptotic key rate based on performing parameter estimation with moments of the quadrature operators [30]. The comparison can be found in Figure 3, where one can see that, while using moments to constrain the state shared by Alice and Bob produces better rates for distances shorter than 15 km, both approaches provide comparable results for larger lengths. Note that computing moments and coarse-grained probabilities are different ways of discretising the information contained in a CV distribution - these results show that taking moments is better for short distances, albeit the two approaches lead to almost the same values when losses become large. Moreover, in both cases the asymptotic key rates seem to saturate when increasing the cutoff value. We verified this hypothesis in the inset of Figure 3, where it can be observed that the curves for our modulation converge to the same values. The results are intuitive, in the sense that the amplitude of the states detected by Bob decreases with the losses at the channel, which eventually means a decreasing average number of photons. Based on these numerical results, we conjecture that the asymptotic key rates will be the same if computed without the cutoff assumption.
Figure 4 shows the asymptotic key rates according to eq. (157), where a modulation \((\Delta,\delta)=(0.9,0.9)\) was employed together with a cutoff \(N_{c}=15\). In order to obtain a more numerically stable set of dual points for the finite analysis, the amplitude of the coherent states was optimised for the Shannon limit, and then a correction was applied to reach the scenario \(\chi_{\rm EC}=0.95\). For distances below \(150\) km, the algorithm typically needs \(120\) or less iterations to converge. For larger lengths, it was necessary to reach the limit of \(300\) iterations before using eq. (124) to obtain the reliable lower bound. On the other hand, we observed a numerical error at the constraints \(\varepsilon^{\prime}\) typically between \(10^{-10}\) for small lengths and \(10^{-15}\) for very long distances, which ensures both the reliability of the code and the tightness of the key rates.
Switching to the finite-size regime, we can use our crossover min-tradeoff function \(\tilde{g}_{0}\) in Theorem 1, together with the asymptotic rates shown in Figure 4, to obtain the finite key generation rates. Choosing the parameters \(\epsilon=10^{-5}\), \(\epsilon_{\rm NonAbort}^{\rm phys}=10^{-2}\), \(\epsilon^{\rm tom}=10^{-8}\), \(\epsilon^{\rm phys}=\epsilon+\sqrt{2\epsilon^{\rm tom}/\epsilon_{\rm NonAbort}^{\rm phys}}\), \(\epsilon_{\rm PE}^{c}=10^{-6}\) and \(\epsilon_{\rm EC}=10^{-8}\), together with a grid search optimisation over \(a\) and \(p^{\rm key}\), we were able to obtain non-zero key rates for \(n\geq 10^{12}\) rounds and distances \(D\geq 20\) km. The outcomes of this process are illustrated in Figure 5, where the curves represent different values for the finite key generation rates with respect to the number of rounds taken for the protocol.
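The grid search mentioned above can be organised as a simple double loop. In the sketch below, finite_rate is a placeholder name of ours for a routine that evaluates the bound of Theorem 1 at fixed values of the remaining parameters; the grid ranges are illustrative.

```matlab
% Grid search over a and p^key; finite_rate is an assumed user-supplied routine.
a_grid    = linspace(1 + 1e-4, 2 - 1e-4, 40);
pkey_grid = linspace(0.90, 1 - 1e-4, 40);
best = -Inf;
for a = a_grid
  for pkey = pkey_grid
    r = finite_rate(a, pkey);                 % evaluates eq. (81) for these choices
    if r > best, best = r; a_opt = a; pkey_opt = pkey; end
  end
end
```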
## VI Discussion
In this work we have provided a security proof for a discrete modulated CVQKD protocol in which Alice prepares four coherent states and Bob performs heterodyne measurements. The proof exploits the fact that the information used for parameter estimation consists of coarse grained probabilities of the generated continuous measurement outcomes instead of moments, as in previous approaches. As shown, this hardly affects the asymptotic key rates, but significantly simplifies the security analysis, as one can employ methods originally introduced in the context of DVQKD, such as the EAT.
Figure 3: Asymptotic secret key generation rate with \(\xi=2\%\) and \(\chi_{\rm EC}=0.95\) according to the parameter estimation described in Lin et al. [30] and our modulation \((\Delta,\delta)=(0.9,0.9)\), with diverse values for the cutoff \(N_{c}\). The amplitude, taken to be the same for all curves, was optimised with respect to the distance for the modulation employed in [30]. The inset shows the convergence of our modulation.
Despite the simplifications, the application of EAT, in its original form [36, 37], to the considered prepare-and-measure QKD protocol is not straightforward. The challenging aspect of this has been the fact that, in order to describe the QKD protocol as a sequence of EAT channels, where Eve's reference system cannot be updated by the EAT channels, we had to use an entanglement-based version of the protocol when applying the EAT. Whereas any prepare-and-measure QKD protocol can be easily transformed to an entanglement-based protocol using the source replacement scheme, we have been faced with the issue that the minimisation defining our min-tradeoff function in eq. (60) can be constrained only in terms of the observed statistics from Alice and Bob's measurements in parameter estimation rounds, i.e. by a distribution of the classical output \(C_{1}^{n}\). Such constraints are sufficient to obtain a nonzero key rate in DIQKD protocols, as has been considered in [38, 39, 40]. This is due to the fact that in device independent settings the observed statistics alone has to be sufficient to certify an entangled state between Alice and Bob, which is a prerequisite to obtain secure key. In device dependent settings, such as the one we have considered
here, however, the observed statistics from Alice and Bob's parameter estimation rounds does not necessarily suffice to certify entanglement. Consequently, if we only use statistics from parameter estimation rounds, the bound on the key rate becomes trivial. We have overcome this issue by considering a hypothetical protocol in which Alice uses some randomly chosen rounds to perform a state tomography on her marginal state, the outcome of which is included in \(C_{1}^{n}\). Thus, the observed statistics becomes sufficient to obtain nontrivial bounds on the key rate. The introduction of the hypothetical tomography poses some additional challenges in the finite size security proof. In particular there is a possibility that the tomography test does not pass, in which case the hypothetical protocol would abort. In order to ensure that this only happens with negligible probability, we have introduced a tolerance parameter \(\delta_{\rm{tom}}^{\rm{tol}}\) in Lemma 1, which has to be subtracted from our key rate. Also, in order to prove security of the physical protocol, it is necessary to show that the raw key states obtained in the hypothetical and physical protocol do not differ by too much and adapt the smoothing parameter accordingly. We have done this in Lemma 2, again at the cost of a reduction of the key rate.
As mentioned in the introduction, after much of the work going into this result was finished, a generalised version of the EAT was presented in [43], known as the Generalised EAT (GEAT). In contrast to the original EAT, the new version allows Eve's reference system to be updated, while also relaxing the Markov condition to a non-signalling condition. Using the GEAT, it is possible to express a prepare-and-measure QKD protocol directly as a sequence of EAT channels, without the need to use an entanglement-based version of the protocol, as was shown in [44]. When using this new method, there is no need to introduce a hypothetical tomography, hence our Lemmas 1 and 2 would not be needed and higher key rates may be expected. The only caveat when using the GEAT is that it makes an additional assumption on Eve's attack, namely that she only holds one quantum system at a time. This condition can be enforced by Alice waiting for Bob to confirm he has received a state before sending the next one [44], which might not always be practical. Our method of applying the original EAT does not need this assumption. We therefore believe that our current method of using a hypothetical protocol with tomography of Alice's marginal is of interest not only for discrete modulated CVQKD, but also for proving security of device-dependent QKD in settings where the condition that Eve only holds one system at a time is not practical.
Our security analysis can be improved in several directions. As mentioned, since ours is a prepare-and-measure protocol, it is natural to consider the application of the GEAT. This may not only provide larger finite-key generation rates, but also allow one to study variants of the protocol using homodyne measurements, which we were unable to accommodate within our security analysis. Another related question is to analyse, using the GEAT, how the obtained rates vary with the number of states prepared by Alice and, in particular, how they approximate the rates of Gaussian modulated protocols. A second relevant improvement is to study how to remove the cutoff assumption in the computation of the asymptotic key rates. This has been achieved in the standard case in which the information used in parameter estimation consists of moments of Bob's quadratures [31, 45], but seems trickier when dealing with coarse-grained probabilities. In our view, our numerical results provide strong evidence that the rates without the cutoff assumption will be practically the same as those reported in our work for a large enough value of the cutoff, but this deserves to be confirmed.
To conclude, we provide a security proof for CVQKD protocols in which the information used in parameter estimation consists of coarse-grained probabilities instead of moments, as done so far. The analysis consists of two main ingredients: (i) the computation of the asymptotic key rates using the formalism of [30] under a cutoff assumption and (ii) the application of the EAT, including a local tomography process, to derive the finite-key rates. Our work therefore shows that the use of coarse-grained probabilities in parameter estimation opens new avenues to prove the security of discrete modulated CVQKD protocols, as well-established methods developed for DVQKD can be applied in a rather straightforward way without any significant impact on the obtained key rates.
###### Acknowledgements.
We would like to thank Rotem Arnon-Friedman, Ian George, Min-Hsiu Hsieh, Anthony Leverrier, Rotem Liss, Gelo Noel Tabia, Enky Oudot, Stefano Pironio, Ernest Tan and Thomas Van Himbeeck for insightful discussions. We would also like to thank two anonymous referees, for the conferences Qcrypt and QIP, for their insightful comments. This work is supported by the ERC (AdG CERQUTE, grant agreement No. 834266, and StG AlgoQIP, grant agreement No. 851716), the Government of Spain (FIS2020-TRANQI, NextGen Funds and Severo Ochoa CEX2019-000910-S), Fundacio Cellex, Fundacio Mir-Puig, Generalitat de Catalunya (CERCA, QuantumCAT and the postdoctoral fellowships programme Beatriu de Pinos), the AXA Chair in Quantum Information Science, European Union's Horizon 2020 research and innovation programme under grant agreements No. 820466 (project CiviQ), No. 101114043 (project QSNP), No. 101017733 (project Veriqtas) within the QuantERA II Programme, and No. 801370 (2019 BP 00097) within the Marie Sklodowska-Curie Programme.
## Appendix A Proof of Lemma 3
Let \(\mathcal{H}_{\hat{E}}\) be a Hilbert space. We begin with a pure state \(\ket{\rho}_{AB\hat{E}}\in\mathcal{H}_{AB\hat{E}}\). Alice and Bob's measurements are then performed coherently by means of a series of isometries. Alice's measurement, as well as the random number generator determining what the round is used for, are described by
\[W_{ARX\gets A}^{\mathrm{A}}=\sum_{r=0}^{2}\sqrt{p_{r}}\left(\sum_{x_{r}} \sqrt{P_{A}^{x_{r}}}\otimes\ket{r}_{R}\otimes\ket{x_{r}}_{X}\right), \tag{104}\]
where \(p_{0}=p^{\mathrm{key}},p_{1}=p^{\mathrm{PE}},p_{2}=p^{\mathrm{tom}}\), and \(P_{A}^{x_{r}}\) denotes the \(x\)-th POVM element applied by Alice when her random bit provides an outcome \(r\)
\[\{P_{A}^{x_{0}}\}_{x_{0}} =\{P_{A}^{x_{1}}\}_{x_{1}}=\{\ket{x}\bra{x}_{A}\}_{x=0}^{3},\] \[\{P_{A}^{x_{2}}\}_{x_{2}} =\{\Gamma_{A}^{x_{1}}\}_{x=0}^{15}.\]
Note that the POVM elements are given by a square root to preserve the isometric characteristics of the measurement. Register \(R\) will announce whether the bit will be used for the generation of the key, parameter estimation or tomography, whereas \(X\) stores the result of Alice's measurement. Bob will perform a heterodyne measurement, which he will later discretise according to the goal of the round. Such measurement is given by the isometry
\[W_{BY\gets B}^{\mathrm{B}}=\int d^{2}y\sqrt{\frac{\ket{y}\bra{y}_{B}}{ \pi}}\otimes\ket{y}_{Y}. \tag{105}\]
where the integral appears because the coherent states form a continuous, overcomplete basis. The classical communication of \(R\) between Alice and Bob, which is wiretapped by Eve, can be expressed coherently by adding ancillary subsystems followed by CNOTs.
\[V_{[R]\gets R}^{\mathrm{c1}}=U_{R:R^{\prime}R^{\prime\prime}}^{\mathrm{ CNOT}}\ket{00}_{R^{\prime}R^{\prime\prime}}, \tag{106}\]
where \(R^{\prime}\) is distributed to Bob and \(R^{\prime\prime}\) to Eve, and we have introduced the simplifying notation \([R]:=RR^{\prime}R^{\prime\prime}\). Furthermore, \(U_{R:R^{\prime}R^{\prime\prime}}^{\mathrm{CNOT}}\) is the unitary describing a double CNOT taking \(R\) as control and \(R^{\prime}\) and \(R^{\prime\prime}\) as targets. Now that Bob and Alice have made their public announcements, we have to apply a new isometry where Bob discretises the key,
\[V_{RY\hat{Z}\gets RY}^{\mathrm{K}}=\ket{0}\bra{0}_{R}\otimes\sum_{z=0}^{ 3}\sqrt{R_{Y}^{z}}\otimes\ket{z}_{\hat{Z}}+\left(\ket{1}\bra{1}_{R}+\ket{2} \bra{2}_{R}\right)\otimes\mathbb{1}_{Y}\otimes\ket{\bot}_{\hat{Z}}. \tag{107}\]
Here, the set \(\{R_{Y}^{z}\}_{z=0}^{3}\) represents the region operators for the discretisation in key rounds, whose definitions are given in (112). The state that results after applying all the isometries is then given by (where we have omitted identities on systems not involved)
\[\ket{\omega}_{ABXY\hat{Z}[R]\hat{E}}=V^{K}V^{\mathrm{c1}}W^{\mathrm{B}}W^{ \mathrm{A}}\ket{\rho}_{AB\hat{E}}. \tag{108}\]
Finally, the key register is dephased by a pinching map \(\mathcal{Z}^{\prime}:\hat{Z}\rightarrow\hat{Z}\), defined with the Kraus operators
\[Z_{j}=\ket{j}\bra{j}_{\hat{Z}}\otimes\mathbb{1}, \tag{109}\]
for \(j\in\{0,1,2,3,\bot\}\). Note that this is the same definition as in (113), albeit here with the symbol \(\bot\) included. We can now apply Theorem 1 from [53] to show that
\[H(\hat{Z}|R^{\prime\prime}\hat{E})_{\mathcal{Z}(\omega)}=D(\omega_{ABXY\hat{Z} RR^{\prime}}||\mathcal{Z}^{\prime}(\omega_{ABXY\hat{Z}RR^{\prime}})) \tag{110}\]
Now, the r.h.s. no longer depends on \(\hat{E}\). Let us also observe that in the marginal \(\omega_{ABXY\hat{Z}RR^{\prime}}\) the registers \(RR^{\prime}\) have decohered due to the trace-out of \(R^{\prime\prime}\). We can then rewrite it as
\[\omega_{ABXY\hat{Z}RR^{\prime}}=p^{\mathrm{key}}\ket{00}\bra{00}_{RR^{\prime}}\otimes\omega_{ABXY\hat{Z}}^{\mathrm{key}}+(1-p^{\mathrm{key}})\left(\ket{11}\bra{11}_{RR^{\prime}}+\ket{22}\bra{22}_{RR^{\prime}}\right)\otimes\omega_{ABXY}^{\bot}\otimes\ket{\bot}\bra{\bot}_{\hat{Z}}, \tag{111}\]
where \(p^{\rm key}\) denotes the probability that Alice will use the round for the generation of the key. The state (100) has a classical-quantum (cq) structure, so that, by the properties of the relative entropy on cq-states [52], we can simplify (101) by splitting the state according to the classical registers \(RR^{\prime}\). Moreover, the state \(\omega^{\perp}_{ABXY}\otimes\left|\perp\right\rangle\left\langle\perp\right|_{\hat{Z}}\) is invariant under the pinching, so that the relative entropy for such a state is zero. Altogether, we obtain the equality
\[D(\omega_{ABXYRR^{\prime}\hat{Z}}||\mathcal{Z}^{\prime}(\omega_{ ABXYRR^{\prime}\hat{Z}}))=p^{\rm key}D(\omega^{\rm key}_{ABXY\hat{Z}}|| \mathcal{Z}(\omega^{\rm key}_{ABXY\hat{Z}})), \tag{102}\]
where we have also substituted \(\mathcal{Z}^{\prime}\) for \(\mathcal{Z}\) since we have removed \(\perp\) from the key register \(\hat{Z}\). The explicit form of the key state \(\omega^{\rm key}_{ABXY\hat{Z}}\) is then given by
\[\omega^{\rm key}_{ABXY\hat{Z}}=\frac{1}{p^{\rm key}}\operatorname{Tr}_{R^{ \prime\prime}\hat{E}}\left[\left\langle 00\right|_{RR^{\prime}}\omega_{ ABXY[R]\hat{Z}\hat{E}}\left|00\right\rangle_{RR^{\prime}}\right]. \tag{103}\]
Following the arguments provided in Appendix A of [30], we can further simplify (102). First of all, the reduction of the state (101) according to the properties of the relative entropy for \({\rm{cq}}\)-states has suppressed the sum over \(r\) at (100), leaving only the term related to the key generation, namely \(r=0\). Hence, Alice's operator for the key state is given by
\[W^{\prime\rm{A}}_{AX\gets A}=\sum_{x=0}^{3}\left|x\right\rangle\left\langle x \right|_{A}\otimes\left|x\right\rangle_{X}, \tag{104}\]
where we used the fact that Alice's POVM elements in \(A\) are projectors, so that the square root can be removed. This operator now merely copies and projects the information stored in \(A\) to the new register \(X\), which effectively represents an isometry that is invariant under the pinching (since both registers are not related to the key register \(\hat{Z}\)). Hence we can simplify this isometry by removing \(X\), and the final operator for Alice will be a mere identity in \(A\).
As for the key rounds, Bob only needs to obtain the discretised key variable \(\hat{Z}\). Thus, he can group the POVM elements corresponding to a particular value of \(\hat{Z}\), forming a _coarse-grained_ POVM \(\{R^{z}_{B}\}_{z=0}^{3}\) that acts directly on register \(B\) and is given by the region operators defined in (112). Hence, register \(Y\) is not necessary, and Bob's measurement and discretisation will thus be given by
\[W^{\prime\rm{B}}_{B\hat{Z}\gets B}=\sum_{z=0}^{3}\sqrt{R^{z}_{B}}\otimes \left|z\right\rangle_{\hat{Z}}. \tag{105}\]
Now, the simplified maps for Alice and Bob are combined to provide the CP map \(\mathcal{G}:AB\to AB\hat{Z}\) that represents the postprocessing, which as shown in (111) is given by the superoperator
\[G=W^{\prime\rm{A}}\otimes W^{\prime\rm{B}}=\mathbb{1}_{A}\otimes\sum_{z=0}^{3 }\sqrt{R^{z}_{B}}\otimes\left|z\right\rangle_{\hat{Z}}. \tag{106}\]
We can now conclude with the redefinition of the relative entropy at (102) in terms of the postprocessing map,
\[D(\omega^{\rm key}_{ABXY\hat{Z}}||\mathcal{Z}(\omega^{\rm key}_{ABXY\hat{Z}}) )=D(\mathcal{G}(\rho_{AB})||\mathcal{Z}(\mathcal{G}(\rho_{AB}))). \tag{107}\]
By definition, register \(R^{\prime\prime}\) is identical to register \(S\) in (110), and for any \(\mathcal{H}_{\hat{E}}\) and any \(\left|\rho\right\rangle_{AB\hat{E}}\in\mathcal{H}_{AB\hat{E}}\) it holds that \(\mathcal{Z}(\omega)_{\hat{Z}S\hat{E}}=\left(\mathrm{id}_{\hat{E}}\otimes\mathcal{M}^{\rm{EAT}}(\rho)\right)_{\hat{Z}S\hat{E}}\), with \(\omega\) defined as in (101). Combining equations (101), (102) and (107), we obtain that for all \(\tilde{p}\in\mathcal{P}_{\tilde{\mathcal{G}}}\),
\[g(\tilde{p})=\inf_{\rho\in\Sigma(\tilde{p})}H(\hat{Z}|R^{\prime\prime}\hat{E} )_{\mathcal{Z}(\omega)}=p^{\rm key}\inf_{\rho\in\Sigma(\tilde{p})}D(\mathcal{ G}(\rho_{AB})||\mathcal{Z}(\mathcal{G}(\rho_{AB}))), \tag{108}\]
which finishes the proof.
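As a small numerical aside (ours, not part of the proof), one elementary property of pinching maps that makes manipulations of this kind tractable is that the relative entropy of a state to its pinched version equals the entropy increase under pinching, \(D(\rho\|\mathcal{Z}(\rho))=H(\mathcal{Z}(\rho))-H(\rho)\). The sketch below checks this on a random full-rank toy state; the dimensions and helper names are our own illustrative choices and are unrelated to the actual CV system.

```python
import numpy as np

def random_density_matrix(dim, rng):
    # Wishart-type random full-rank state: G G^dagger, normalised to unit trace
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def entropy(rho):
    # von Neumann entropy in bits
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def pinch(rho, d_key, d_rest):
    # pinching on the first (key) tensor factor: sum_j (|j><j| (x) 1) rho (|j><j| (x) 1)
    out = np.zeros_like(rho)
    for j in range(d_key):
        proj = np.zeros((d_key, d_key)); proj[j, j] = 1.0
        k_op = np.kron(proj, np.eye(d_rest))
        out += k_op @ rho @ k_op
    return out

def relative_entropy(rho, sigma):
    # D(rho||sigma) in bits; both states are assumed full rank here
    er, vr = np.linalg.eigh(rho)
    es, vs = np.linalg.eigh(sigma)
    log_rho = vr @ np.diag(np.log2(er)) @ vr.conj().T
    log_sig = vs @ np.diag(np.log2(es)) @ vs.conj().T
    return float(np.real(np.trace(rho @ (log_rho - log_sig))))

rng = np.random.default_rng(0)
d_key, d_rest = 4, 3                      # toy dimensions only
rho = random_density_matrix(d_key * d_rest, rng)
pinched = pinch(rho, d_key, d_rest)
print(relative_entropy(rho, pinched))      # D(rho || Z(rho))
print(entropy(pinched) - entropy(rho))     # H(Z(rho)) - H(rho): same value up to numerics
```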
## Appendix B Proof of Lemma 4
Let \(p,q\in\mathcal{P}_{\mathcal{C}}\), \(0\leq\lambda\leq 1\). Without loss of generality we can assume that \(\Sigma(p)\) is not empty. Then there exist states \(\rho_{AB}\in\Sigma(p)\) and \(\tau_{AB}\in\Sigma(q)\) such that
\[p^{\text{key}}D(\mathcal{G}(\rho_{AB})||\mathcal{Z}(\mathcal{G}( \rho_{AB})))=g(p), \tag{101}\] \[p^{\text{key}}D(\mathcal{G}(\tau_{AB})||\mathcal{Z}(\mathcal{G}( \tau_{AB})))=g(q). \tag{102}\]
Let us now consider the flag state
\[\omega_{ABF}=\lambda\rho_{AB}\otimes\left|0\right\rangle\left\langle 0\right|_{F} +(1-\lambda)\tau_{AB}\otimes\left|1\right\rangle\left\langle 1\right|_{F}. \tag{103}\]
It then holds [52]
\[p^{\text{key}}D(\mathcal{G}\otimes\mathrm{id}_{F}(\omega_{ABF} )||\mathcal{Z}\circ\mathcal{G}\otimes\mathrm{id}_{F}(\omega_{ABF})) =\lambda p^{\text{key}}D(\mathcal{G}(\rho_{AB})||\mathcal{Z}( \mathcal{G}(\rho_{AB})))+(1-\lambda)p^{\text{key}}D(\mathcal{G}(\tau_{AB})|| \mathcal{Z}(\mathcal{G}(\tau_{AB})))\] \[=\lambda g(p)+(1-\lambda)g(q), \tag{104}\]
As tracing out the flag system \(F\) cannot increase the relative entropy, it holds
\[D(\mathcal{G}(\omega_{AB})||\mathcal{Z}(\mathcal{G}(\omega_{AB})))\leq D( \mathcal{G}\otimes\mathrm{id}_{F}(\omega_{ABF})||\mathcal{Z}\circ\mathcal{G} \otimes\mathrm{id}_{F}(\omega_{ABF})). \tag{105}\]
Let now \(c\in\tilde{\mathcal{C}}\). It then holds
\[\left\langle c\right|\mathrm{Tr}_{OS}\left[\mathcal{M}^{\text{ EAT,test}}(\omega_{AB})\right]\left|c\right\rangle =\lambda\left\langle c\right|\mathrm{Tr}_{OS}\left[\mathcal{M}^{ \text{EAT,test}}(\rho_{AB})\right]\left|c\right\rangle+(1-\lambda)\left\langle c \right|\mathrm{Tr}_{OS}\left[\mathcal{M}^{\text{EAT,test}}(\tau_{AB})\right] \left|c\right\rangle\] \[=\lambda p(c)+(1-\lambda)q(c). \tag{106}\]
This implies that \(\omega_{AB}\in\Sigma\left(\lambda p+(1-\lambda)q\right)\). By definition of \(g\), and eqs. (105) and (104), it then holds
\[g(\lambda p+(1-\lambda)q) \leq p^{\text{key}}D(\mathcal{G}(\omega_{AB})||\mathcal{Z}( \mathcal{G}(\omega_{AB})))\] \[\leq p^{\text{key}}D(\mathcal{G}\otimes\mathrm{id}_{F}(\omega_{ ABF})||\mathcal{Z}\circ\mathcal{G}\otimes\mathrm{id}_{F}(\omega_{ABF}))\] \[=\lambda g(p)+(1-\lambda)g(q), \tag{107}\]
finishing the proof.
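For completeness, the two facts used above — that the relative entropy of flag (classical-quantum) states decomposes as the corresponding convex combination, and that discarding the flag register cannot increase it — can also be checked numerically on random toy states. The sketch below is only an illustration with hypothetical dimensions and is not part of the argument.

```python
import numpy as np

def rand_state(dim, rng):
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def rel_ent(rho, sigma):
    # quantum relative entropy D(rho||sigma) in bits (full-rank inputs assumed)
    er, vr = np.linalg.eigh(rho)
    es, vs = np.linalg.eigh(sigma)
    log_r = vr @ np.diag(np.log2(er)) @ vr.conj().T
    log_s = vs @ np.diag(np.log2(es)) @ vs.conj().T
    return float(np.real(np.trace(rho @ (log_r - log_s))))

rng = np.random.default_rng(7)
dim, lam = 3, 0.3
rho, tau = rand_state(dim, rng), rand_state(dim, rng)    # the two branches of the flag state
sig, zet = rand_state(dim, rng), rand_state(dim, rng)    # their counterparts on the second argument

flag0, flag1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])  # |0><0|_F and |1><1|_F
omega  = lam * np.kron(rho, flag0) + (1 - lam) * np.kron(tau, flag1)
omega2 = lam * np.kron(sig, flag0) + (1 - lam) * np.kron(zet, flag1)

mix = lam * rel_ent(rho, sig) + (1 - lam) * rel_ent(tau, zet)
print(rel_ent(omega, omega2), mix)                        # equal: flag-state decomposition
print(rel_ent(lam * rho + (1 - lam) * tau,
              lam * sig + (1 - lam) * zet) <= mix + 1e-9) # True: tracing out F cannot increase D
```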
## Appendix C Upper bounding the classical smooth max entropy
Let \(n\in\mathbb{N}\), and for \(i=1,...,n\), let \(Y_{i}\) be a binary classical random variable such that \(P_{Y_{i}}(1)=p\) and \(P_{Y_{i}}(0)=1-p\). Further, define a classical random variable \(X_{i}\) such that \(X_{i}=\perp\) if \(Y_{i}=0\); otherwise its values are chosen from an alphabet \(\mathcal{X}\) such that \(\left|\mathcal{X}\cup\left\{\perp\right\}\right|=d\). We use the operator representation to describe the joint state as
\[\rho_{X_{1}^{n}Y_{1}^{n}}=\sum_{x_{1},...,x_{n}\in\mathcal{X}\cup\left\{\perp \right\}}\sum_{y_{1},...,y_{n}=0}^{1}P_{X_{1}^{n}Y_{1}^{n}}(x_{1},...,x_{n},y_{ 1},...,y_{n})\left|x_{1},...,x_{n}\right\rangle\left\langle x_{1},...,x_{n} \right|_{X_{1}^{n}}\otimes\left|y_{1},...,y_{n}\right\rangle\left\langle y_{1},...,y_{n}\right|_{Y_{1}^{n}}, \tag{108}\]
**Lemma 5**: _For any \(\epsilon>0\) it holds_
\[H_{\max}^{\epsilon}(X_{1}^{n}|Y_{1}^{n})_{\rho}\leq np\log d+\sqrt{\frac{n}{2 }\ln\frac{2}{\epsilon^{2}}}\log d. \tag{109}\]
**Proof.**
Let \(\epsilon>0\) and define \(\delta:=\left(\frac{\ln 2-2\ln\epsilon}{2n}\right)^{\frac{1}{2}}\).
We can divide the sum in eq. (108) into a part with up to \(\lfloor n(p+\delta)\rfloor\) terms with \(Y_{i}=1\), hence non-trivial \(X_{i}\), and a part with more than \(\lfloor n(p+\delta)\rfloor\) such terms, \(\rho_{X_{1}^{n}Y_{1}^{n}}=\rho_{X_{1}^{n}Y_{1}^{n}}^{\prime}+\rho_{X_{1}^{n}Y_{1}^{n}}^{\prime\prime}\), where
\[\rho_{X_{1}^{n}Y_{1}^{n}}^{\prime}=\sum_{x_{1},...,x_{n}\in\mathcal{X}\cup \left\{\perp\right\}}\sum_{\begin{subarray}{c}y_{1},...,y_{n}\in\left\{0,1 \right\}^{n}\\ \sum_{i}y_{i}\leq\left|n(p+\delta)\right|\end{subarray}}P_{X_{1}^{n}Y_{1}^{n}} (x_{1},...,x_{n},y_{1},...,y_{n})\left|x_{1},...,x_{n}\right\rangle\left\langle x _{1},...,x_{n}\right|_{X_{1}^{n}}\otimes\left|y_{1},...,y_{n}\right\rangle \left\langle y_{1},...,y_{n}\right|_{Y_{1}^{n}}, \tag{110}\]
\[\rho_{X_{1}^{n}Y_{1}^{n}}^{\prime\prime}=\sum_{x_{1},...,x_{n}\in\mathcal{X}\cup \left\{\perp\right\}}\sum_{\begin{subarray}{c}y_{1},...,y_{n}\in\left\{0,1 \right\}^{n}\\ \sum_{i}y_{i}>\left|n(p+\delta)\right|\end{subarray}}P_{X_{1}^{n}Y_{1}^{n}}(x_{1},...,x_{n},y_{1},...,y_{n})\left|x_{1},...,x_{n}\right\rangle\left\langle x _{1},...,x_{n}\right|_{X_{1}^{n}}\otimes\left|y_{1},...,y_{n}\right\rangle \left\langle y_{1},...,y_{n}\right|_{Y_{1}^{n}}, \tag{111}\]
Let us define
\[\kappa:=\mathrm{Tr}\left[\rho^{\prime\prime}_{X_{1}^{n}Y_{1}^{n}}\right]=\sum_{k= \lfloor n(p+\delta)\rfloor+1}^{n}p^{k}(1-p)^{n-k}\binom{n}{k}, \tag{100}\]
and note that by Hoeffding's inequality, it holds \(\kappa\leq e^{-2n\delta^{2}}\leq\frac{\epsilon^{2}}{2}\). By [46], Lemma 3.17, it then holds for the purified distance
\[P(\rho_{X_{1}^{n}Y_{1}^{n}},\rho^{\prime}_{X_{1}^{n}Y_{1}^{n}})\leq\sqrt{ \left\|\rho_{X_{1}^{n}Y_{1}^{n}}-\rho^{\prime}_{X_{1}^{n}Y_{1}^{n}}\right\|_{1 }+\mathrm{Tr}\left(\rho_{X_{1}^{n}Y_{1}^{n}}-\rho^{\prime}_{X_{1}^{n}Y_{1}^{n} }\right)}=\sqrt{2\kappa}\leq\epsilon \tag{101}\]
hence \(\rho^{\prime}\) is in the \(\epsilon\)-ball around \(\rho\). Consequently, as only the non-trivial \(X_{i}\) contribute to the max entropy, it holds
\[H_{\max}^{\epsilon}(X_{1}^{n}|Y_{1}^{n})_{\rho}\leq H_{\max}(X_{1}^{n}|Y_{1}^ {n})_{\rho^{\prime}}\leq\log d^{\lfloor n(p+\delta)\rfloor}. \tag{102}\]
Inserting our choice for \(\delta\) completes the proof.
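To get a feeling for the numbers, the bound of Lemma 5 can be evaluated directly. The sketch below (our illustration, with arbitrary example parameters rather than the protocol's actual values) compares it with the trivial bound \(n\log d\) and checks empirically that the Hoeffding event used in the proof is indeed rare.

```python
import numpy as np

# Illustrative parameters only (not the protocol's actual values)
n = 10**6        # number of rounds
p = 0.01         # probability that a round has Y_i = 1 (non-trivial X_i)
d = 5            # alphabet size |X u {perp}|
eps = 1e-10      # smoothing parameter

# Bound of Lemma 5 versus the trivial bound n*log2(d)
lemma5 = n * p * np.log2(d) + np.sqrt(n / 2 * np.log(2 / eps**2)) * np.log2(d)
trivial = n * np.log2(d)

# Hoeffding event controlled in the proof: the number k of rounds with Y_i = 1
# exceeding n*(p + delta) should occur with probability at most eps^2/2.
delta = np.sqrt((np.log(2) - 2 * np.log(eps)) / (2 * n))
rng = np.random.default_rng(1)
counts = rng.binomial(n, p, size=5000)
freq = np.mean(counts > np.floor(n * (p + delta)))

print(f"Lemma 5 bound : {lemma5:,.0f} bits")
print(f"trivial bound : {trivial:,.0f} bits")
print(f"empirical Pr[k > n(p+delta)] = {freq:.1e}  (guaranteed <= {eps**2/2:.1e})")
```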
Now, let's add conditioning on an event \(\Omega\) that occurs with probability \(p_{\Omega}>0\). We can express the state (100) as \(\rho_{X_{1}^{n}Y_{1}^{n}}=\mathrm{Pr}[\Omega]\,\rho_{X_{1}^{n}Y_{1}^{n}}|_{\Omega}+(1-\mathrm{Pr}[\Omega])\,\rho_{X_{1}^{n}Y_{1}^{n}}|_{\bar{\Omega}}\), where \(\bar{\Omega}\) denotes the complementary event.
**Lemma 6**: _For any \(\epsilon>0\) and \(0<p_{\Omega}\leq 1\) it holds_
\[H_{\max}^{\epsilon}(X_{1}^{n}|Y_{1}^{n})_{\rho|_{\Omega}}\leq np\log d+\sqrt{\frac{n}{2}\ln\frac{2}{\epsilon^{2}\Pr[\Omega]}}\log d. \tag{103}\]
**Proof.**
Let \(\epsilon>0\) and define \(\delta:=\left(\frac{\ln 2-\ln p_{\Omega}-2\ln\epsilon}{2n}\right)^{\frac{1}{2}}\). Again, we divide \(\rho_{X_{1}^{n}Y_{1}^{n}}|_{\Omega}=\rho^{\prime}_{X_{1}^{n}Y_{1}^{n}}|_{ \Omega}+\rho^{\prime\prime}_{X_{1}^{n}Y_{1}^{n}}|_{\Omega}\), where
\[\rho^{\prime}_{X_{1}^{n}Y_{1}^{n}}|_{\Omega} =\sum_{x_{1},...,x_{n}\in\mathcal{X}\cup\{\bot\}}\sum_{\begin{subarray} {c}y_{1},...,y_{n}\in\{0,1\}^{n}\\ \sum_{i}y_{i}\leq\lfloor n(p+\delta)\rfloor\end{subarray}}P_{X_{1}^{n}Y_{1}^{ n}}(x_{1},...,x_{n},y_{1},...,y_{n}|\Omega)\left|x_{1},...,x_{n}\right\rangle \left\langle x_{1},...,x_{n}\right|_{X_{1}^{n}}\otimes\left|y_{1},...,y_{n} \right\rangle\left\langle y_{1},...,y_{n}\right|_{Y_{1}^{n}},\] \[\rho^{\prime\prime}_{X_{1}^{n}Y_{1}^{n}}|_{\Omega} =\sum_{x_{1},...,x_{n}\in\mathcal{X}\cup\{\bot\}}\sum_{\begin{subarray} {c}y_{1},...,y_{n}\in\{0,1\}^{n}\\ \sum_{i}y_{i}>\lfloor n(p+\delta)\rfloor\end{subarray}}P_{X_{1}^{n}Y_{1}^{n}}(x _{1},...,x_{n},y_{1},...,y_{n}|\Omega)\left|x_{1},...,x_{n}\right\rangle \left\langle x_{1},...,x_{n}\right|_{X_{1}^{n}}\otimes\left|y_{1},...,y_{n} \right\rangle\left\langle y_{1},...,y_{n}\right|_{Y_{1}^{n}}. \tag{104}\]
Let us define
\[\kappa:= \mathrm{Tr}\left[\rho^{\prime\prime}_{X_{1}^{n}Y_{1}^{n}}|_{ \Omega}\right] \tag{105}\] \[=\sum_{k=\lfloor n(p+\delta)\rfloor+1}^{n}\mathrm{Pr}\left(|\{i: Y_{i}=1\}|=k|\Omega\right)\] (106) \[=\frac{1}{\mathrm{Pr}[\Omega]}\sum_{k=\lfloor n(p+\delta) \rfloor+1}^{n}\mathrm{Pr}\left(|\{i:Y_{i}=1\}|=k\cap\Omega\right)\] (107) \[\leq\frac{1}{\mathrm{Pr}[\Omega]}\sum_{k=\lfloor n(p+\delta) \rfloor+1}^{n}p^{k}(1-p)^{n-k}\binom{n}{k}\] (108) \[=\frac{1}{\mathrm{Pr}[\Omega]}\Pr[k>n(p+\delta)]. \tag{109}\]
By Hoeffding's inequality, it holds \(\kappa\leq\frac{e^{-2n\delta^{2}}}{p_{\Omega}}\leq\frac{\epsilon^{2}}{2}\). By [46], Lemma 3.17, it then holds for the purified distance
\[P(\rho_{X_{1}^{n}Y_{1}^{n}}|_{\Omega},\rho^{\prime}_{X_{1}^{n}Y_{1}^{n}}|_{\Omega})\leq\sqrt{\left\|\rho_{X_{1}^{n}Y_{1}^{n}}|_{\Omega}-\rho^{\prime}_{X_{1}^{n}Y_{1}^{n}}|_{\Omega}\right\|_{1}+\mathrm{Tr}\left(\rho_{X_{1}^{n}Y_{1}^{n}}|_{\Omega}-\rho^{\prime}_{X_{1}^{n}Y_{1}^{n}}|_{\Omega}\right)}=\sqrt{2\kappa}\leq\epsilon \tag{110}\]
hence \(\rho^{\prime}|_{\Omega}\) is in the \(\epsilon\)-ball around \(\rho|_{\Omega}\). Consequently, as only the non-trivial \(X_{i}\) contribute to the max entropy, it holds
\[H_{\text{max}}^{\epsilon}(X_{1}^{n}|Y_{1}^{n})_{\rho|_{\Omega}}\leq H_{\text{ max}}(X_{1}^{n}|Y_{1}^{n})_{\rho^{\prime}|_{\Omega}}\leq\log d^{\lfloor n(p+\delta)\rfloor}. \tag{102}\]
Inserting our choice for \(\delta\) completes the proof.
|
2308.04403
|
Properties of sequence of linear functionals on $BV$ with applications
|
This paper is devoted to investigating the sequence of some linear
functionals in the space $BV$ of finite variation functions. We prove that
under certain conditions this sequence is bounded. We also prove that this
result is sharp. In particular, the obtained results can be used to study
convergence of some general Fourier series. Moreover, the obtained conditions
seem to be new and useful also for classical orthonormal systems.
|
L-E. Persson, V. Tsagareishvili, G. Tutberidze
|
2023-08-03T06:24:23Z
|
http://arxiv.org/abs/2308.04403v1
|
# Properties of sequence of linear functionals on \(BV\) with applications
###### Abstract.
This paper is devoted to investigating the sequence of some linear functionals in the space \(BV\) of finite variation functions. We prove that under certain conditions this sequence is bounded. We also prove that this result is sharp. In particular, the obtained results can be used to study convergence of some general Fourier series. Moreover, the obtained conditions seem to be new and useful also for classical orthonormal systems.
**2020 Mathematics Subject Classification.** 42C10, 46B07
**Key words and phrases:** Sequence of linear functionals, Banach spaces, Fourier coefficients, Fourier series, Orthonormal series.
## 1. Introduction
In order not to disturb the discussion in this introduction and the proofs of our main result we have collected all notations, definitions and other preliminaries in Section 2.
In this paper we prove a new convergence result for a special sequence of linear functionals \(\{U_{n}\}=\{U_{n}(f)\},\) defined by (2)-(4) and where usually \(f\in BV\) on \((0,1).\) See Theorem 1. We also prove that this result is, in a sense, sharp. See Theorem 2.
The study of functionals has a rich history and many powerful and interesting results have been obtained, see e.g. the monographs [1, 2, 4, 5, 12, 13, 30] and the references therein. This interest only seems to increase, one reason being that such developments are useful in various applications.
For our investigation it is important to recall that, by Banach's Theorem (see e.g. [2]), if \(f\in L_{2}(0,1),\ (f\nsim 0)\,,\) then there exists an ONS such that the Fourier series of this function \(f\) is not convergent on \([0,1]\) with respect to this system. Thus, it is clear that the Fourier coefficients of functions of bounded variation do not in general satisfy the condition in Theorem B (the Menchov-Rademacher Theorem).
Another motivation for this paper is to use our main result to obtain some new results concerning convergence/divergence of general Fourier series. Some other results for this case can be found in [6, 7, 8, 9, 10, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29]. See also the monograph [11].
The main results, Theorems 1 and 2, are presented and proved in Section 3. The new applications concerning convergence of general Fourier series can be found in Section 4; see Theorem 3, Corollary 1, Theorem 4 and Theorem 5.
## 2. Preliminaries
Let \(\left\{\varphi_{n}\right\}\) be an orthonormal system (ONS) on \(\left[0,1\right].\)
We denote by \(BV\) the class of all functions of bounded variation on \(\left(0,1\right)\) and write \(V\left(f\right)\) for the total variation of a function \(f\) on \(\left[0,1\right]\).
By \(A\) we denote the Banach space of absolutely continuous functions with the norm \(\left\|f\right\|_{A}\) defined by
\[\left\|f\right\|_{A}\,:=\,\left\|f\right\|_{C}\,+\,\int\limits_{0}^{1}\left| \frac{df}{dx}\right|dx. \tag{1}\]
We will investigate the linear functionals \(\left\{U_{n}(f)\right\}\) defined by
\[U_{n}(f):=\int_{0}^{1}f(x)Q_{n}(d,a,x)dx, \tag{2}\]
where \(f\in L_{2},\)\(a=\left\{a_{n}\right\}\in l_{2}\) is an arbitrary sequence of numbers and
\[Q_{n}(d,a,x):=\sum_{k=1}^{n}d_{k}a_{k}{\log k\varphi_{k}(x)}. \tag{3}\]
Here \(\left\{d_{n}\right\}\) denotes a sequence of real numbers such that
\[d_{n}=O\left(\frac{\sqrt{n}}{{\log^{2}(n+1)}}\right). \tag{4}\]
For this investigation of the functionals \(\left\{U_{n}(f)\right\}\) we need the following important Lemma (see [9]).
**Lemma 1**.: _If \(f\in L_{2}\left(0,1\right)\) takes only finite values on \(\left[0,1\right]\) and \(h\in L_{2}\left(0,1\right)\) is an arbitrary function, then_
\[\int_{0}^{1}f\left(x\right)h\left(x\right)dx = \sum_{i=1}^{n-1}\left(f\left(\frac{i}{n}\right)-f\left(\frac{i+1 }{n}\right)\right)\int_{0}^{i/n}h\left(x\right)dx\] \[+ \sum_{i=1}^{n}\int_{(i-1)/n}^{i/n}\left(f\left(x\right)-f\left( \frac{i}{n}\right)\right)h\left(x\right)dx\] \[+ f\left(1\right)\int_{0}^{1}h\left(x\right)dx.\]
We denote
\[B_{n}\left(d,a\right)=\max_{1\leq i<n}\left|\int_{0}^{i/n}Q_{n}\left(d,a,x \right)dx\right|. \tag{6}\]
We say that the sequence of functionals \(\{U_{n}(f)\}\) is bounded on the space \(BV,\) if, for any \(\{a_{n}\}\in l_{2},\)
\[\limsup_{n\rightarrow\infty}|U_{n}(f)|<+\infty.\]
We also need the following result of S.Banach (see e.g. [2]):
**Theorem A**.: _Let \(f\in L_{2}\) be an arbitrary (non-zero) function. Then there exists an ONS \(\{\varphi_{n}\}\) such that_
\[\limsup_{n\rightarrow\infty}\left|\sum_{k=1}^{n}C_{k}(f)\varphi_{k}(x) \right|=+\infty\ \ \text{a.e. on}\ \ [0,1],\]
_where \(C_{k}(f)\) are the Fourier coefficients of the function \(f\in L_{2}\) with respect to the system \(\{\varphi_{k}\}\) and defined as follows_
\[C_{k}(f):=\int\limits_{0}^{1}f(x)\varphi_{k}(x)dx. \tag{7}\]
Moreover, we recall the following well-known result of Menshov and Rademacher (see e.g. [11] Ch.9, p.332).
**Theorem B**.: _If \(\{\varphi_{n}\}\) is an ONS on \([0,1]\) and a number sequence \(\{c_{n}\}\) satisfies the condition_
\[\sum_{n=1}^{\infty}c_{n}^{2}\log_{2}^{2}n<+\infty,\]
_then the series_
\[\sum_{n=1}^{\infty}c_{n}\varphi_{n}\left(x\right)\]
_converges a.e. on \([0,1]\)._
## 3. The Main Results
Our first main result reads:
**Theorem 1**.: _If, for any \(\{a_{n}\}\in l_{2},\)_
\[B_{n}\left(d,a\right)=O\left(1\right), \tag{8}\]
_then the sequence of functionals \(\{U_{n}(f)\}\) is bounded on the space \(BV\) for every \(f\in BV.\)_
Proof.: By using Lemma 1, when \(h\left(x\right)=Q_{n}\left(d,a,x\right)\) we have
\[\int_{0}^{1}f(x)Q_{n}(d,a,x)dx = \sum_{i=1}^{n-1}\left(f\left(\frac{i}{n}\right)-f\left(\frac{i+1}{n }\right)\right)\int_{0}^{i/n}Q_{n}(d,a,x)dx \tag{9}\] \[+\sum_{i=1}^{n}\int_{(i-1)/n}^{i/n}\left(f\left(x\right)-f\left( \frac{i}{n}\right)\right)Q_{n}(d,a,x)dx\] \[+f\left(1\right)\int_{0}^{1}Q_{n}(d,a,x)dx:=A_{1}+A_{2}+A_{3}.\]
Let \(f\in BV\), then we get (see (6))
\[|A_{1}| \leq\max_{1\leq i<n}\left|\int_{0}^{i/n}Q_{n}(d,a,x)dx\right|\ \cdot\ \sum_{i=1}^{n-1}\left|f\left(\frac{i}{n}\right)-f\left(\frac{i+1}{n}\right)\right|\] \[\leq V(f)B_{n}(d,a).\]
Hence, from condition (8) it follows that
\[|A_{1}|=O(1)V(f).\]
By applying Hölder's inequality and (4), we get (since \(\{a_{n}\}\in l_{2}\))
\[|A_{2}| \leq\sum_{i=1}^{n}\sup_{x\in\left[\frac{i-1}{n},\frac{i}{n}\right] }\left|f(x)-f\left(\frac{i}{n}\right)\right|\int_{(i-1)/n}^{i/n}\left|Q_{n}(d,a,x)\right|dx\] \[\leq V(f)\max_{1\leq i\leq n}\int_{(i-1)/n}^{i/n}\left|Q_{n}(d,a, x)\right|dx\] \[\leq V(f)\frac{1}{\sqrt{n}}\left(\int_{0}^{1}Q_{n}^{2}(d,a,x)dx \right)^{1/2}\] \[=\frac{V(f)}{\sqrt{n}}\left(\int_{0}^{1}\left(\sum_{k=1}^{n}d_{k }a_{k}\log k\varphi_{k}(x)\right)^{2}dx\right)^{1/2}\] \[=\frac{V(f)}{\sqrt{n}}\left(\sum_{k=1}^{n}d_{k}^{2}a_{k}^{2}\log ^{2}k\right)^{1/2}\] \[=V(f)\cdot\frac{\max_{1\leq k\leq n}|d_{k}|}{\sqrt{n}}\cdot\log n \left(\sum_{k=1}^{n}a_{k}^{2}\right)^{1/2}=O(1)V(f).\]
By using (6) and Cauchy's inequality, for any \(\{a_{n}\}\in l_{2}\) we find that
\[|A_{3}| =O(1)\left|\int_{0}^{1}Q_{n}(d,a,x)dx\right|\] \[\leq O(1)\left(\left|\int_{0}^{1-1/n}Q_{n}(d,a,x)dx\right|+\left| \int_{1-1/n}^{1}Q_{n}(d,a,x)dx\right|\right)\] \[\leq O(1)\left(B_{n}(d,a)+\frac{1}{\sqrt{n}}\left(\int_{0}^{1}Q_{ n}^{2}(d,a,x)dx\right)^{1/2}\right)=O(1).\]
Taking into consideration in (9) the above estimates of \(|A_{1}|\), \(|A_{2}|\) and \(|A_{3}|\) we have that
\[\left|\int_{0}^{1}f(x)\sum_{k=1}^{n}d_{k}a_{k}\log k\varphi_{k}(x)\right|=O(1).\]
It follows that
\[|U_{n}(f)|\leq M(f), \tag{10}\]
where \(M(f)\) is a constant which does not depend on \(n\) and the proof is complete.
Next we state a result which, in particular, shows that the statement in Theorem 1 is, in a sense, sharp.
**Theorem 2**.: _If for some \(\{b_{n}\}\in l_{2}\)_
\[\limsup_{n\to\infty}B_{n}(d,b)=+\infty,\]
_then there exists a function \(g\in A\), such that_
\[\limsup_{n\to\infty}|U_{n}(g)|=+\infty.\]
Proof.: First we suppose that
\[\limsup_{n\to\infty}\left|\int_{0}^{1}Q_{n}\left(d,b,x\right)dx\right|=+\infty.\]
Then, if \(g(x)=1,\ x\in[0,1]\), we have
\[\limsup_{n\to\infty}\left|\int_{0}^{1}g(x)Q_{n}\left(d,b,x\right)dx\right|=+\infty.\]
Obviously \(g\in A.\) Theorem 2 holds in this case.
Next we suppose that
\[\left|\int_{0}^{1}Q_{n}\left(d,b,x\right)dx\right|=O(1). \tag{11}\]
Let \(1\leq i_{n}<n\) be an integer, such that
\[B_{n}\left(d,b\right)=\max_{1\leq i<n}\left|\int_{0}^{i/n}Q_{n}\left(d,b,x\right)dx\right|=\left|\int_{0}^{i_{n}/n}Q_{n}\left(d,b,x\right)dx\right|.\]
Suppose that for some sequence \(b=\left\{b_{k}\right\}\in l_{2}\)
\[\limsup_{n\rightarrow\infty}B_{n}(d,b)=+\infty. \tag{12}\]
Consider the following sequence of test functions:
\[f_{n}\left(x\right)=\left\{\begin{array}{ccc}0,&\text{when}&x\in\left[0,\frac{ i_{n}}{n}\right]\\ 1,&\text{when}&x\in\left[\frac{i_{n}+1}{n},1\right]\\ \text{continuous and linear},&\text{when}&x\in\left[\frac{i_{n}}{n},\frac{i_{n}+1}{n }\right].\end{array}\right.\]
Then (see (1))
\[\left\|f_{n}\right\|_{A}=\int_{0}^{1}\left|f_{n}^{{}^{\prime}}\left(x\right) \right|dx+\left\|f_{n}\left(x\right)\right\|_{C}\leq 2.\]
Furthermore,
\[\left|\sum_{i=1}^{n-1}\left(f_{n}\left(\frac{i}{n}\right)-f_{n} \left(\frac{i+1}{n}\right)\right)\int_{0}^{i/n}Q_{n}\left(d,b,x\right)dx\right| \tag{13}\] \[=\left|\int_{0}^{i_{n}/n}Q_{n}\left(d,b,x\right)dx\right|=B_{n} \left(d,b\right).\]
Then, if \(x\in\left[\frac{i-1}{n},\frac{i}{n}\right]\) we find that
\[\left|f_{n}\left(x\right)-f_{n}\left(\frac{i}{n}\right)\right|\left\{ \begin{array}{ccc}\leq 1,&\text{if}&i=i_{n}+1,\\ 0,&\text{if}&i\neq i_{n}+1,\end{array}\right.\]
and it implies that (since \(\left\{b_{n}\right\}\in l_{2}\))
\[\left|\sum_{i=1}^{n}\int_{(i-1)/n}^{i/n}\left(f\left(x\right)-f \left(\frac{i}{n}\right)\right)Q_{n}\left(d,b,x\right)dx\right| \tag{14}\] \[\leq\sum_{i=1}^{n}\sup_{x\in\left[\frac{i-1}{n},\frac{i}{n}\right] }\left|f\left(x\right)-f\left(\frac{i}{n}\right)\right|\int_{(i-1)/n}^{i/n} \left|Q_{n}(d,b,x)\right|dx\] \[\leq V(f)\max_{1\leq i\leq n}\int\limits_{(i-1)/n}^{i/n}\left|Q_{ n}(d,b,x)\right|dx\] \[=O(1)\frac{1}{\sqrt{n}}\left(\int_{0}^{1}Q_{n}^{2}(d,b,x)dx \right)^{1/2}\] \[=O(1)\frac{1}{\sqrt{n}}\max_{1\leq k\leq n}d_{k}\log n\Bigg{(} \sum_{k=1}^{n}b_{k}^{2}\Bigg{)}^{\frac{1}{2}}=O(1).\]
Consequently, by using (5) when \(f\left(x\right)=f_{n}\left(x\right)\) and \(Q_{n}\left(d,a,x\right)=Q_{n}\left(d,b,x\right),\) and combining (11), (13), (14), we get that
\[\left|\int_{0}^{1}f_{n}\left(x\right)Q_{n}\left(d,b,x\right)dx\right|\geq B_{n}(d,b)-O\left(1\right)-O(1).\]
From here and from (12) we have, that
\[\limsup_{n\rightarrow\infty}\left|\int_{0}^{1}f_{n}\left(x\right)Q_{n}\left(d,b,x \right)dx\right|=+\infty.\]
Finally, we note that since
\[U_{n}\left(f\right)=\int_{0}^{1}f\left(x\right)Q_{n}\left(d,b,x\right)dx\]
is a sequence of linear bounded functionals on \(A\), then by the Banach-Steinhaus theorem, there exists a function \(g\in A\) such that
\[\limsup_{n\rightarrow\infty}\left|U_{n}(g)\right|=\limsup_{n\rightarrow \infty}\left|\int_{0}^{1}g\left(x\right)Q_{n}\left(d,b,x\right)dx\right|=+\infty. \tag{15}\]
The proof is complete.
## 4. Applications concerning convergence of general Fourier series
Our first application reads:
**Theorem 3**.: _If condition (8) of Theorem 1 holds then, for any function \(f\in BV,\)_
\[\sum_{k=1}^{\infty}d_{k}^{2}C_{k}^{2}(f)\log^{2}k<+\infty.\]
Proof.: By using condition (8) of Theorem 1, and using equation (10) and (7) we have that
\[\sum_{k=1}^{n}d_{k}a_{k}C_{k}(f)\log k =\int_{0}^{1}f(x)\sum_{k=1}^{n}d_{k}a_{k}\log k\,\varphi_{k}(x)dx\] \[=\int_{0}^{1}f(x)Q_{n}(d,a,x)dx.\]
Hence,
\[\sum_{k=1}^{n}d_{k}a_{k}C_{k}(f)\log k=U_{n}(f).\]
Since
\[\left|U_{n}(f)\right|=O(1)\]
it follows that
\[\sum_{k=1}^{n}d_{k}a_{k}\log kC_{k}(f)=O(1).\]
Now, if we suppose that for any \(\{a_{k}\}\in l_{2},\)
\[\limsup_{n\rightarrow\infty}\left|U_{n}(f)\right|<+\infty,\]
then the following series
\[\sum_{k=1}^{\infty}d_{k}a_{k}\log kC_{k}(f) \tag{16}\]
is convergent.
Since the series (16) converges for every \(\{a_{k}\}\in l_{2}\), a classical theorem of Landau (equivalently, the uniform boundedness principle) yields \(\{d_{k}C_{k}(f)\log k\}\in l_{2}\), or
\[\sum_{k=1}^{\infty}d_{k}^{2}C_{k}^{2}(f)\log^{2}k<+\infty.\]
The proof is complete.
In particular, Theorem A and Theorem 1 imply the following new result:
**Corollary 1**.: _If condition (8) of Theorem 1 holds for any function \(f\in BV,\) then the following series_
\[\sum_{k=1}^{\infty}d_{k}C_{k}(f)\varphi_{k}(x)\]
_is convergent a.e. on \([0,1]\)._
**Remark**.: _If condition (8) of Theorem 1 is fulfilled and \(d_{k}=1,\)\(k=1,2,\dots,\) then, for any \(f\in BV\) the series_
\[\sum_{k=1}^{\infty}C_{k}(f)\varphi_{k}(x)\]
_is convergent a.e. on \([0,1]\)._
Next, we state the following result showing that Theorem 3 is, in a sense, sharp.
**Theorem 4**.: _For any function \(g\in A\)\((g\neq 0)\) there exists an ONS \(\{\varphi_{n}\}\) such that for some \(\{a_{n}\}\in l_{2}\) and \(d_{k}=1,\)\(k=1,2,\dots\)_
\[\limsup_{n\to\infty}\sum_{k=1}^{n}C_{k}^{2}(g)\log^{2}k=\limsup_{n\to\infty}|U_{n}(g)|=+\infty. \tag{17}\]
Proof.: Let \(g\) be an arbitrary function. According to the Banach Theorem there exists an ONS \(\{\varphi_{n}\}\) such that
\[\limsup_{n\to\infty}\left|\sum_{k=1}^{n}C_{k}(g)\varphi_{k}(x)\right|=+\infty \ \ \text{a.e. on }[0,1]. \tag{18}\]
Consequently, by using (18) and Theorem B, we conclude that
\[\sum_{k=1}^{\infty}C_{k}^{2}(g)\log^{2}k=+\infty. \tag{19}\]
Indeed, suppose the contrary to (17) namely that for arbitrary \(\{a_{n}\}\in l_{2}\)
\[\limsup_{n\to\infty}|U_{n}(g)|<+\infty.\]
Then as it follows from (16) when \(d_{k}=1\), \(k=1,2,\dots\) for any \(\{a_{n}\}\in l_{2}\) that the series
\[\sum_{k=1}^{\infty}a_{k}\log kC_{k}(g)\]
is convergent. Thus, \(\{C_{k}(g)\log k\}\in l_{2}\) or
\[\sum_{k=1}^{\infty}C_{k}^{2}(g)\log^{2}k<+\infty,\]
which contradicts (19).
This contradiction shows that (17) holds so the proof is complete.
Finally, we state the following efficiency result:
**Theorem 5**.: _Let \(\{\varphi_{n}\}\) be an ONS such that uniformly with respect to \(x\in[0,1]\) it holds that_
\[\int_{0}^{x}\varphi_{n}(u)du=O\left(\frac{1}{n}\right). \tag{20}\]
_Then for any \(a=\{a_{n}\}\in l_{2}\),_
\[B_{n}(d,a)\leq\max_{x\in[0,1]}\left|\int_{0}^{x}\sum_{k=1}^{n}d_{k}a_{k}\log k \varphi_{k}(u)du\right|=O(1). \tag{21}\]
Proof.: According to (20) and by the Cauchy inequality we get (see (4))
\[B_{n}(d,a) \leq\max_{x\in[0,1]}\left|\int_{0}^{x}\sum_{k=1}^{n}d_{k}a_{k} \log k\varphi_{k}(u)du\right|\] \[=\max_{x\in[0,1]}\left|\sum_{k=1}^{n}d_{k}a_{k}\log k\int_{0}^{x} \varphi_{k}(u)du\right|=O(1)\left|\sum_{k=1}^{n}a_{k}d_{k}\log k\frac{1}{k}\right|\] \[=O(1)\left(\sum_{k=1}^{n}a_{k}^{2}\right)^{1/2}\left(\sum_{k=1}^ {n}d_{k}^{2}\log^{2}k\frac{1}{k^{2}}\right)^{1/2}\] \[=O(1)\left(\sum_{k=1}^{n}\frac{k}{\log^{4}(k+1)}\frac{\log^{2}k}{ k^{2}}\right)^{1/2}=O(1).\]
Hence, (21) is proved so the proof is complete.
**Remark:** Consequently, the functionals defined by (2) are bounded e.g. when \(\{\varphi_{n}\}\) is the trigonometric or Walsh system.
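As a quick numerical illustration of this remark (ours, not taken from the paper), consider the cosine system \(\varphi_{k}(x)=\sqrt{2}\cos(k\pi x)\), which is an ONS on \([0,1]\) and satisfies (20) since \(\int_{0}^{x}\varphi_{k}(u)du=\sqrt{2}\sin(k\pi x)/(k\pi)\). With \(d_{k}=\sqrt{k}/\log^{2}(k+1)\), which satisfies (4), and a randomly chosen \(\{a_{k}\}\in l_{2}\), the quantity \(B_{n}(d,a)\) indeed stays bounded as \(n\) grows:

```python
import numpy as np

def primitive(k, x):
    # integral_0^x sqrt(2) cos(k*pi*u) du for the cosine ONS on [0,1]; it is O(1/k) uniformly in x
    return np.sqrt(2.0) * np.sin(k * np.pi * x) / (k * np.pi)

rng = np.random.default_rng(0)
N = 5000
k = np.arange(1, N + 1, dtype=float)
d = np.sqrt(k) / np.log(k + 1) ** 2     # d_k satisfying condition (4)
a = rng.normal(size=N) / k               # a random square-summable sequence {a_k}
coeff = d * a * np.log(k)                # d_k a_k log k  (the k = 1 term vanishes)
x = np.linspace(0.0, 1.0, 2001)          # grid used to approximate the maximum over i/n

for n in (100, 500, 1000, 5000):
    # B_n(d, a) is approximated by the maximum over the grid of |integral_0^x Q_n(d, a, u) du|
    b_n = np.abs(coeff[:n] @ primitive(k[:n, None], x[None, :])).max()
    print(n, round(b_n, 4))
```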
**Final remark.** We point out that some other convergence/divergence results for one-dimensional Vilenkin-Fourier series can be found in the new book [18]. We hope that our (functional) approach can be used to contribute to solving some of the open questions raised in this book. In this connection we also mention the new paper [19], which is related to the famous Carleson paper [3].
|
2310.01731
|
The impact of social noise on the majority-rule model across various
network topologies
|
We explore the impact of social noise, characterized by nonconformist
behavior, on the phase transition within the framework of the majority-rule
model. The order-disorder transition can reflect the consensus-polarization
state in a social context. This study covers various network topologies,
including complete graphs, two-dimensional (2-D) square lattices,
three-dimensional (3-D) square lattices, and heterogeneous or complex networks
such as Watts-Strogatz (W-S), Barab\'asi-Albert (B-A), and Erdos-R\'enyi (E-R)
networks, as well as their combinations. Social behavior is represented by the
parameter $p$, which indicates the probability of agents exhibiting
nonconformist behavior. Our results show that the model exhibits a continuous
phase transition across all networks. Through finite-size scaling analysis and
evaluation of critical exponents, our results suggest that the model falls into
the same universality class as the Ising model.
|
Roni Muslim, Didi Ahmad Mulya, Zulkaida Akbar, Rinto Anugraha NQZ
|
2023-10-03T01:47:15Z
|
http://arxiv.org/abs/2310.01731v3
|
Destructive social noise effects on homogeneous and heterogeneous networks: Induced-phases in the majority-rule model
###### Abstract
This paper explores the effects of destructive social noises, characterized by independence and anticonformity, on the occurrence of order-disorder phase transitions within the framework of the majority-rule model. The study encompasses various network topologies, including the complete graph, two-dimensional (2-D) square lattice, three-dimensional (3-D) square lattice, as well as heterogeneous networks like Watts-Strogatz, Barabasi-Albert, and Erdos-Renyi networks. These social behaviors are quantified using the parameter \(p\), representing the probability of agents exhibiting independent or anticonformity tendencies. Our results reveal that the occurrence and nature of phase transitions depend on the fundamental characteristics of the model and the underlying network structure. Specifically, the model exhibits continuous phase transitions on the complete graph and the 2-D square lattice, with critical points that vary based on the model's attributes. However, on the 3-D lattice, the independence model notably lacks a phase transition, while the anticonformity model still exhibits a continuous phase transition. By employing finite-size scaling analysis and evaluating critical exponents, we confirm that the model falls within the Ising model universality class, except in the 3-D model. This study provides insights into the dynamic interplay between social dynamics and network topology, especially based on the majority-rule model.
keywords: Opinion dynamics model, majority-rule, networks, phase transition, universality
## 1 Introduction
Scientists employ the principles and concepts of physics within the field of social science to gain a deeper understanding of social phenomena occurring in society [1; 2; 3; 4]. One of the most effective approaches utilized by scientists involves opinion modeling, which is constructed based on various social characteristics. These characteristics include individuals' tendencies to align with majority opinions [5; 6; 7], the presence of social pressures that induce opinion change [8], and the influence of social validation, which compels individuals to follow prevailing trends [9; 10]. In addition to the discrete opinion models mentioned, there are also continuous opinion models that imply two individuals can influence each other if there is trust between them [11; 12]. In general, a common feature emerging from the aforementioned opinion dynamics models is that the system eventually reaches a state of homogeneity, with all agents holding the same opinion or state. However, in real social systems, such uniformity does not always occur. Instead, there are moments of coexistence between minority and majority opinions, even amidst opinion turbulence. There are even instances where minority-majority opinions vanish. Considering these factors, scientists have developed opinion dynamics models with new features, such as taking into account destructive behavior that contradicts societal norms. The existence of this destructive social behavior can give rise to intriguing new phases that are worth exploring and understanding from the perspective of physics.
Destructive social behavior refers to a state or condition in which an individual or group deviates from prevailing societal norms, expectations, or standards. It often involves departing from established conventions, traditions, or cultural practices. Such behavior is more commonly known as nonconformity [13]. Social scientists further classify nonconformity into two types: independence and anticonformity [14; 15; 16; 17]. Independence refers to the ability or disposition of an individual to think, act, or make decisions autonomously, free from undue influence, coercion, or external pressure. Conversely, anticonformity is a deliberate rejection or opposition to conforming to established norms, rules, conventions, or societal expectations. It involves consciously going against the grain, challenging the status quo, or resisting pressures to conform.
The existence of nonconformity behavior in opinion dynamics models that influences the emergence of new phases has been extensively studied with various scenarios [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30]. The emergence of these new phases is an intriguing subject from the perspective of other sciences, such as Physics, because it shares similarities with the phenomenon of ferromagnetic-antiferromagnetic phase transitions in spin systems, which can be understood quite well. With a slight change in the noise parameter, the complexity at the microscopic level manifests itself at the macroscopic level, which can be well understood. Similarly, this occurs within social
systems. Phase transitions can be understood as a shift from consensus to discord in social-political discourse. As the Ising model explains in physics, this phase transition arises from a disruptive parameter known as temperature, which opposes the alignment of interacting spins, leading to anti-parallel orientations. At a specific critical temperature, the spin states become completely random. Similarly, in opinion models, interactions between individuals, such as discussions, lead to convergence and consensus formation. Conversely, oppositional behavior results in a stalemate condition, where individuals defend their opinions.
In the context of this destructive social behavior, apart from scenarios involving interacting agents, network topology also plays a crucial role in the occurrence and type of phase transitions. For example, the Ising model on a 1D lattice does not exhibit phase transitions, while higher-dimensional networks demonstrate this phenomenon, even though they have the same microscopic interaction scenarios. Even in the majority rule dynamics of opinion models on a 2-D lattice with an independence parameter, phase transitions are absent [31]. In Ref. [31], we observe a continuous phase transition only when four agents in a two-dimensional lattice adopt anticonformity behavior, whereas it does not occur for independent agents. Recent work has also shown continuous phase transitions in the majority-rule model with independent behavior. Furthermore, this model falls within the same universality class as the mean-field Ising model [32].
This paper is focused on investigating the impact of disruptive social factors, specifically independence and anticonformity, on the occurrence of the order-disorder phase transition in the majority-rule model. The model is defined on various types of networks to provide a more comprehensive understanding of the order-disorder phase transition phenomenon, as well as of the scaling behavior of the model. The examined networks include homogeneous networks such as the complete graph, a two-dimensional lattice, and a three-dimensional lattice, as well as heterogeneous networks such as the Watts-Strogatz (W-S) network [33], the Barabasi-Albert (B-A) network [34], and the Erdos-Renyi (E-R) network [35]. Our analytical and numerical findings indicate that the complete graph model undergoes a continuous phase transition with critical points of \(p_{c}=1/3\) for the independence model and \(p_{c}=1/4\) for the anticonformity model. Both models exhibit identical critical exponents, specifically \(\beta\approx 0.5\), \(\nu\approx 2.0\), and \(\gamma\approx 1.0\), placing them within the same universality class as the mean-field Ising model. Moreover, numerical simulations reveal that the two-dimensional lattice model experiences a continuous phase transition with critical points at approximately \(p_{c}\approx 0.106\) for the independence model and \(p_{c}\approx 0.062\) for the anticonformity model, sharing critical exponents of \(\beta\approx 0.125\), \(\nu\approx 2.0\), and \(\gamma\approx 1.75\). In other words, both models belong to the same universality class as the two-dimensional Ising model. The critical exponents satisfy the identity relation \(\nu d=2\beta+\gamma\) [36]. Furthermore, a phase transition is not observed for the independence model on a three-dimensional lattice, but the anticonformity model undergoes a continuous phase transition with a critical point at approximately \(p_{c}\approx 0.268\). The best-fitting critical exponents that collapse all data for \(N\) are approximately \(\beta\approx 0.25\), \(\nu\approx 1.40\), and \(\gamma\approx 1.40\).
In our analysis of heterogeneous networks, we only ascertain whether the model undergoes a continuous phase transition. Based on our numerical results, the independence and anticonformity models exhibit continuous phase transitions for all the networks under consideration. Finally, in contrast to the homogeneous networks, we refrain from estimating critical points and critical exponents on the heterogeneous networks. We hope that our study will serve as a foundation for further investigations in this field.
## 2 Model and methods
In the majority-rule model, a randomly selected small group comprises several agents. Within this group, all members interact to align their opinions or states with the majority opinion or state of the group. In the realm of social psychology, when agents conform to the majority opinion, they exhibit conformity behavior, often referred to as "conformist agents". Conformity, in essence, involves aligning one's attitudes, beliefs, and behavior with the established norms of a group [37]. In contrast to conformity, another noteworthy social behavior is nonconformity, which can be further subdivided into two distinct categories: anticonformity and independence.
In this paper, we delve into these three types of social behaviors and introduce a probability parameter denoted as \(p\), which represents the likelihood of agents adopting either independence or anticonformity (nonconformist agents). In other words, with a probability of \(p\), agents choose to act independently or exhibit anticonformity. Conversely, with a probability of \(1-p\), agents conform by following the majority opinion. To elucidate further, we outline the algorithm of the model as follows:
1. To compute the macroscopic parameters of the system, the initial state of the system is prepared in a disordered state, wherein the number of agents with positive and negative opinions is equal.
2. Microscopic Interaction within the model: 1. Independence Model: Within the population, a group of agents is randomly selected and, with a probability of \(p\), all agents act independently; with a probability of \(1/2\), the agents then change their opinions to the opposite ones, i.e., \(S_{i}(t+1)=-S_{i}(t)\). 2. Anticonformity Model: In the population, a group of agents is randomly chosen and, with a probability of \(p\), all agents act in an anticonformist manner; if all agents within the group share the same opinion, they reverse their opinions, i.e., \(S_{i}(t+1)=-S_{i}(t)\).
3. Alternatively, with a probability of \(1-p\), all agents opt to conform by following the majority opinion (a minimal simulation sketch of these update rules is given after this list).
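The following minimal Monte Carlo sketch is our own illustration (function and parameter names are not from the paper) of one possible reading of these update rules on a complete graph, in which each agent of the selected group flips independently with probability \(1/2\) during an independence round; at the mean-field level this reading reproduces the drift of Eqs. (15)-(16) below.

```python
import numpy as np

def mc_sweep(spins, p, model, rng):
    """One Monte Carlo sweep (N group updates) of the noisy majority rule on a
    complete graph. model is 'independence' or 'anticonformity'."""
    n_agents = spins.size
    for _ in range(n_agents):
        group = rng.choice(n_agents, size=3, replace=False)   # random group of three agents
        if rng.random() < p:                                   # nonconformist round
            if model == 'independence':
                # each member acts independently: flip with probability 1/2
                flips = group[rng.random(3) < 0.5]
                spins[flips] *= -1
            elif abs(spins[group].sum()) == 3:
                # anticonformity: a unanimous group adopts the opposite opinion
                spins[group] *= -1
        else:                                                  # conformist round: majority rule
            spins[group] = np.sign(spins[group].sum())
    return spins

rng = np.random.default_rng(42)
n_agents = 1000
spins = rng.choice(np.array([-1, 1]), size=n_agents)           # disordered initial state, m ~ 0
for _ in range(200):
    mc_sweep(spins, p=0.1, model='independence', rng=rng)
print("magnetization m =", spins.mean())                        # |m| > 0 expected for p < p_c = 1/3
```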
The model is defined within three networks: a complete graph, a two-dimensional square lattice, and a three-dimensional square lattice. Each agent is associated with two
potential opinions or states, represented by Ising numbers \(\pm 1\). All agent opinions are embedded within the network nodes, and the links or edges between nodes symbolize social connections.
The complete graph represents a network structure in which every node is connected to every other node. In other words, on the complete graph, all agents are neighbors and can interact with each other with the same probability. Furthermore, we have also defined this model on two and three-dimensional square lattices and three other heterogeneous networks, namely, the B-A, W-S, and E-R networks. Each agent has four nearest neighbors in the two-dimensional square lattice, while in the three-dimensional square lattice, each agent has six nearest neighbors. In the heterogeneous networks, we examined networks in which the minimum degree of connectivity for each node is 2, and we selected three agents to follow the majority-rule model algorithm, as mentioned above.
For the model on the complete graph, we can conveniently perform an analytical treatment to compute the order (magnetization) of the model using the following formula:
\[m=\frac{N_{\uparrow}-N_{\downarrow}}{N_{\uparrow}+N_{\downarrow}}=2\,r-1, \tag{1}\]
where \(N_{\uparrow}\) represents the total number of agents with the "up" opinion, \(N_{\downarrow}\) represents the total number of agents with the "down" opinion, and \(r=N_{\uparrow}/(N_{\uparrow}+N_{\downarrow})\) denotes the fraction of agents with the "up" opinion. In the numerical simulation treatment, we utilize the expressions \(\langle m\rangle=\sum_{j}m_{j}/R,\langle\chi\rangle=\sum_{j}\chi_{j}/R\), and \(\langle U\rangle=\sum_{j}U_{j}/R\), where \(R\) represents the total number of independent realizations for each data point.
We employ finite-size scaling analysis to compute the critical exponents corresponding to the order parameter \(m\), susceptibility \(\chi\), and Binder cumulant \(U\). The finite-size scaling relations are given by:
\[m =\phi_{m}(x)N^{-\beta/\nu}, \tag{2}\] \[\chi =\phi_{\chi}(x)N^{\gamma/\nu}, \tag{3}\] \[p-p_{c} =c\,N^{-1/\nu}, \tag{4}\] \[U =\phi_{U}(x), \tag{5}\]
where \(\phi\) represents the dimensionless scaling function that fits the data near the critical point \(p_{c}\). The critical exponents \(\beta\), \(\gamma\), and \(\nu\) come into play near the critical point.
The fluctuation or susceptibility \(\chi\) and Binder cumulant \(U\) are defined as follows [38]:
\[\chi =N\left(\langle m^{2}\rangle-\langle m\rangle^{2}\right), \tag{6}\] \[U =1-\frac{1}{3}\frac{\langle m^{4}\rangle}{\langle m^{2}\rangle^{2 }}. \tag{7}\]
We can determine the critical point at which the system undergoes an order-disorder phase transition by identifying the intersection point of the curves of the Binder cumulant \(U\) and the probability \(p\).
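As a small illustration (ours), the estimators in Eqs. (6)-(7) can be computed directly from \(R\) independent realizations of \(m\). For Gaussian, disordered-phase magnetization samples the Binder cumulant is close to \(0\), while deep in the ordered phase it approaches \(2/3\); this is what makes the crossing of the \(U\)-versus-\(p\) curves for different \(N\) a convenient locator of \(p_{c}\).

```python
import numpy as np

def observables(m_samples, n_agents):
    """<m>, susceptibility chi (Eq. 6) and Binder cumulant U (Eq. 7) estimated
    from R independent realizations of the magnetization m."""
    m = np.asarray(m_samples, dtype=float)
    m_mean = m.mean()
    chi = n_agents * (np.mean(m**2) - m_mean**2)
    binder = 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)
    return m_mean, chi, binder

rng = np.random.default_rng(3)
for n_agents in (500, 1000, 2000):
    # toy disordered-phase data: m fluctuates around 0 with variance ~ 1/N,
    # so chi ~ 1 and U ~ 0 independently of N (the U curves collapse above p_c)
    m_samples = rng.normal(loc=0.0, scale=1.0 / np.sqrt(n_agents), size=10**4)
    print(n_agents, observables(m_samples, n_agents))
```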
## 3 Result and Discussion
### Time evolution and stationary state
This section discusses the dynamic evolution of the fraction opinion denoted as \(r\) across various population sizes \(N\). The density of spin-up, \(r\) at a given time \(t\), may exhibit fluctuations across different realizations, contingent upon the specific characteristics of the model under examination. We will provide a concise overview of the formulation governing the time evolution of the spin-up density \(r\) within the probability distribution framework.
The probability of finding an agent with an opinion "up" at time \(t\), denoted as \(r(t)\), is given by:
\[r(t)=\sum_{x}x\,\mathrm{P}(x,t), \tag{8}\]
where \(P(x,t)\) represents the probability distribution of the system in state \(x\) (associated with the "up" state) at time \(t\). The probability distribution \(P(x,t)\) can be determined when the initial conditions are known, employing the following recursive formula:
\[\mathrm{P}(x,t+1)=\sum_{x^{\prime}}\rho(x^{\prime}\to x)\mathrm{P}(x^{ \prime},t), \tag{9}\]
where \(\rho(x^{\prime}\to x)\) denotes the transition probability from state \(x^{\prime}\) to state \(x\) at time \(t\), which depends on the specific model under consideration. Eq. (9) is commonly known as the discrete-time Master equation and describes the temporal evolution of the probability distribution.
To derive the recursive formula for the density opinion \(r\), we can start by combining Eq. (8) and Eq. (9). We express the fraction opinion \(r\) at the subsequent time step, denoted as \(r(t^{\prime})\), as follows:
\[r(t^{\prime})=\sum_{x^{\prime}}\mathrm{P}(x^{\prime},t)\sum_{x}\rho(x^{\prime }\to x)x, \tag{10}\]
which can be further simplified as:
\[r(t^{\prime}) =\sum_{x^{\prime}}x^{\prime}\,\mathrm{P}(x^{\prime},t)+\frac{1}{N }\sum_{x^{\prime}}\mathrm{P}(x^{\prime},t)\left[\rho^{+}(x^{\prime})-\rho^{-} (x^{\prime})\right]\] \[=r(t)+\frac{1}{N}\left[\rho^{+}(r)-\rho^{-}(r)\right]. \tag{11}\]
In this context, \(r(t)\) represents the initial fraction opinion, and the total probability satisfies \(\sum_{x^{\prime}}\mathrm{P}(x^{\prime},t)=1\). It is important to emphasize that, since the model is defined on a complete graph and therefore has a mean-field character, the state \(x\) can be treated as non-fluctuating, so we can identify \(x^{\prime}\) with the fraction opinion \(r\). Based on this equation, we track the temporal evolution of the fraction opinion \(r\) during a single update step.
To express Eq. 11 in a differential form, we take into account that the temporal progression of the fraction opinion \(r\) is measured in one step (Monte Carlo step) by adjusting \(t\) using a factor of \(1/N\). Consequently, the time increment equals \(\Delta t=1/N\), as a single Monte Carlo sweep corresponds to \(N\Delta t\). As a result, in the scenario where \(N\rightarrow\infty\) or \(\Delta t\to 0\), Eq. (11) can be
rephrased in differential form as follows:
\[\frac{\mathrm{d}r}{\mathrm{d}t}=\rho^{+}(r)-\rho^{-}(r), \tag{12}\]
which is recognized as the rate equation governing the fraction opinion \(r\).
As mentioned previously, we can treat the complete graph, where all agents are neighbors, within a mean-field approximation. Considering that there are \(N\) opinions or individuals in the system in total, the fraction of "up" opinions increases or decreases by \(1/N\) during each time step of the dynamic process. We can mathematically express the probabilities of the spin-up fraction increasing and decreasing as follows:
\[\rho^{+}(r) =\mathrm{prob.}\left(r\to r+1/N\right) \tag{13}\] \[\rho^{-}(r) =\mathrm{prob.}\left(r\to r-1/N\right) \tag{14}\]
The explicit form of Eqs. (13) - (14) depends on the model. This paper considers a simple case in which three agents are chosen randomly from the population and interact based on the above-mentioned algorithm. Since on the complete graph the analytical treatment agrees with the numerical simulations for large populations \(N\gg 1\), we focus on the large-population regime. Thus, the explicit form of Eqs. (13) - (14) for the model with independence can be written as:
\[\rho^{+}(r) =3\left(1-r\right)\left[\frac{p}{2}+r^{2}\left(1-p\right)\right], \tag{15}\] \[\rho^{-}(r) =3r\left[\frac{p}{2}+\left(1-r\right)^{2}\left(1-p\right)\right], \tag{16}\]
For the anticonformity model, Eqs. (13) - (14) can be written as:
\[\rho^{+}(r) =3\left(1-r\right)\left[p\left(1-r\right)^{2}+r^{2}\left(1-p \right)\right], \tag{17}\] \[\rho^{-}(r) =3r\left[pr^{2}+\left(1-r\right)^{2}\left(1-p\right)\right]. \tag{18}\]
Eqs. (15) - (18) are the central formulas for analyzing the system's state on the complete graph, in particular for determining whether an order-disorder phase transition occurs.
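To make the connection between these rates and the dynamics of the opinion fraction concrete, the following minimal Python sketch integrates the rate equation (12) numerically using the rates of Eqs. (15)-(18). The function names and the chosen values of \(p\) and \(r_{0}\) are illustrative and are not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Transition rates on the complete graph: Eqs. (15)-(16) for the model with
# independence and Eqs. (17)-(18) for the model with anticonformity.
def rates_independence(r, p):
    rho_plus = 3.0 * (1.0 - r) * (p / 2.0 + r**2 * (1.0 - p))
    rho_minus = 3.0 * r * (p / 2.0 + (1.0 - r)**2 * (1.0 - p))
    return rho_plus, rho_minus

def rates_anticonformity(r, p):
    rho_plus = 3.0 * (1.0 - r) * (p * (1.0 - r)**2 + r**2 * (1.0 - p))
    rho_minus = 3.0 * r * (p * r**2 + (1.0 - r)**2 * (1.0 - p))
    return rho_plus, rho_minus

def evolve(rates, p, r0, t_max=50.0):
    """Integrate the rate equation dr/dt = rho^+(r) - rho^-(r), Eq. (12)."""
    def rhs(t, y):
        rho_plus, rho_minus = rates(y[0], p)
        return [rho_plus - rho_minus]
    return solve_ivp(rhs, (0.0, t_max), [r0]).y[0, -1]

# Below p_c = 1/3 the independence model orders; above p_c, r flows to 1/2.
for p in (0.2, 0.45):
    r_inf = evolve(rates_independence, p, r0=0.7)
    print(f"independence, p = {p}: r(t_max) = {r_inf:.3f}")
```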
We can solve Eq. (12) to find the explicit expression of the fraction opinion \(r\) at time \(t\) for both the model with independence and the model with anticonformity. By inserting Eqs. (15) - (18) into Eq. (12) and integrating, we obtain the following expression for \(r\) at time \(t\) for the model with independence:
\[r(t,p,r_{0})=\frac{1}{2}\left[1\pm\left(\frac{1-3p}{1-p+2e^{-3(1-3p)(t+A)}} \right)^{1/2}\right], \tag{19}\]
where \(A\) is a parameter fixed by the initial condition \(r(0)=r_{0}\). Specifically, \(A\) is given by \(A=\ln\!\left[(1-2r_{0})^{2}/\bigl(2(1-p)(r_{0}^{2}+r_{0})+p\bigr)\right]/[3(1-3p)]\). In the same way, for the model with anticonformity we obtain
\[r(t,p,r_{0})=\frac{1}{2}\left[1\pm\left(\frac{1-4p}{1-4\,e^{-3(1-4p)(t+A)}} \right)^{1/2}\right], \tag{20}\]
where \(A=\ln\left[(1-2r_{0})^{2}/(r_{0}^{2}-r_{0}+p)\right]/(1-4p)\). Eqs. (19) and (20) provide an exact expression for the fraction opinion \(r\) at time \(t\) for both models, with \(p\) representing the probability of agents adopting either independence or anticonformity, and \(r_{0}\) denoting the initial fraction opinion. For instance, when \(p=0\) (indicating no independent or anticonformist agents), both Eq. (19) and Eq. (20) reduce to the same form, namely, \(r(t)=1/2[1\pm(1-2r_{0})\exp(3t/2)/((r_{0}-1/2)^{2}\exp(3t)+r_{0}(1-r_{0}))^{1/2}]\). This result suggests that the fraction opinion \(r\) evolves towards the complete consensus state \(r=1\) (all agents have an 'up' opinion) for \(r_{0}>1/2\) and \(r=0\) (all agents have a 'down' opinion) for \(r_{0}<1/2\). At \(p=1/3\) for the model with independence and \(p=1/4\) for the model with anticonformity, \(r=1/2\), indicating a completely disordered state with an equal number of 'up' and 'down' opinions. Therefore, \(p=1/3\) and \(p=1/4\) are considered the critical points for the model with independence and anticonformity, respectively.
Fig. 1 compares Eq. (19) (red) and Eq. (20) (blue) against numerical simulations for a large population \(N=10^{4}\) and various values of \(p\). The analytical treatment aligns closely with the numerical results. Notably, at \(p=0\), all initial fractions \(r_{0}\) evolve towards complete consensus, with \(r=1\) for \(r_{0}>1/2\) and \(r=0\) for \(r_{0}<1/2\). This behavior is expected: when there are no nonconformist agents, only the size of the initial opinion influences the system's final state, with the larger initial opinion prevailing. In general, fluctuations due to the finite system size also impact the system's final state for all networks or graphs. However, on a complete graph with sufficiently large populations, the system's evolution toward the final state is more stable [39]. Additionally, at \(0<p<p_{c}\), the fraction opinion \(r\) evolves toward two stable values \(r_{st}\), while at \(p=p_{c}\), all initial fractions \(r_{0}\) evolve to \(r=1/2\), representing a completely disordered state.
Figure 1: The comparison between analytical treatment and numerical simulation for the model with independence (red) and anticonformity (blue) for various values of probability \(p\). As seen, at \(p=0\), all data \(r_{0}\) evolve to a complete consensus with \(r=1\) and \(r=0\). At \(0<p<p_{c}\), all data evolve to two stable \(r_{st}\), and at \(p=p_{c}\) all data evolve to \(r=1/2\) (disordered state). The population size is \(N=10^{4}\), and each data point averages over 300 independent realizations.
### Phase diagram and critical exponents
The order-disorder phase transition of the model can be analyzed by determining the stationary condition of Eq. (12), where \(dr/dt=0\). For the model with independence on the complete graph, the stationary results are as follows:
\[r_{1}=1/2,\;r_{2,3}=\frac{1}{2}\left[1\pm\left(\frac{1-3p}{1-p} \right)^{1/2}\right]\Rightarrow m_{2,3}=\pm\left(\frac{1-3p}{1-p}\right)^{1/2}, \tag{21}\]
where \(r_{1}\) corresponds to \(r=1/2\), while \(r_{2,3}\) represents two additional stationary values. The critical point occurs at \(p_{c}=1/3\), where \(m_{2,3}=0\). Additionally, for the model with anticonformity, the stationary condition for the fraction opinion \(r\) yields:
\[r_{1}=1/2,\;r_{2,3}=\frac{1}{2}\left[1\pm(1-4p)^{1/2}\right] \Rightarrow m_{2,3}=\pm(1-4p)^{1/2}\,. \tag{22}\]
In this case, \(r_{1}\) corresponds to \(r=1/2\), and \(r_{2,3}\) represents two additional stationary values. The critical point is \(p_{c}=1/4\). Both sets of equations, Eqs. (21) and (22), can be expressed as power laws near the critical point, \(m\sim(p_{c}-p)^{\beta}\), where \(\beta\) takes the value \(1/2\), the critical exponent corresponding to the magnetization of the mean-field Ising model [40].
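As a quick numerical check of these stationary solutions, the short sketch below evaluates the nontrivial branches of Eqs. (21) and (22) and extracts the effective exponent \(\beta\) from their behavior near \(p_{c}\); the function names are illustrative and not part of the paper.

```python
import numpy as np

def m_independence(p):
    """Nontrivial stationary order parameter of Eq. (21), valid for p < 1/3."""
    return np.sqrt((1.0 - 3.0 * p) / (1.0 - p))

def m_anticonformity(p):
    """Nontrivial stationary order parameter of Eq. (22), valid for p < 1/4."""
    return np.sqrt(1.0 - 4.0 * p)

# Close to the critical point both branches vanish as m ~ (p_c - p)^(1/2).
for label, m_func, p_c in [("independence", m_independence, 1.0 / 3.0),
                           ("anticonformity", m_anticonformity, 1.0 / 4.0)]:
    eps = np.array([1e-2, 1e-3, 1e-4])                # distances from p_c
    slope = np.polyfit(np.log(eps), np.log(m_func(p_c - eps)), 1)[0]
    print(f"{label}: effective beta ~ {slope:.3f}")   # ~0.5 in both cases
```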
As mentioned earlier, the topology of the complete graph can be approximated using a mean-field formulation. Monte Carlo simulations were conducted with a large population size \(N=10^{6}\) to confirm the analytical results, as shown in Fig. 2 (a). The analytical results closely match the Monte Carlo simulation results, demonstrating that both the model with independence and the model with anticonformity undergo a continuous phase transition. Furthermore, Fig. 2 (b) and (c) illustrate the effective potential in Eqs. (23) and (24), respectively. These plots reveal the following behavior: for \(p<p_{c}\) the potential is bistable, for \(p>p_{c}\) it is monostable, and at \(p=p_{c}\) the transition between bistable and monostable occurs, marking the critical point of the model. These results provide a comprehensive understanding of the phase diagram and critical behavior of the model on the complete graph.
Another approach to analyzing the order-disorder phase transition of the model is to consider the effective potential, which can be obtained through integration. Traditionally, the effective potential is derived from the effective force \(V(r)_{\rm eff}=-\int f(r)_{\rm eff}\,{\rm d}r\), where \(f(r)_{\rm eff}=\rho^{+}(r)-\rho^{-}(r)\) represents the force driving the opinion change during the dynamics process. For the model with independence on the complete graph, the effective potential is given by:
\[V(m,p)_{\rm indep.}=3\,(1-p)^{-1}\left(1-3p-(1-p)\,m^{2}\right)^{2}/32. \tag{23}\]
Moreover, for the model with anticonformity, the effective potential is expressed as:
\[V(m,p)_{\rm antic.}=-3\left(1-4p-m^{2}\right)^{2}/32. \tag{24}\]
Plots of Eqs. (23) and (24) are shown in panels (b) and (c) of Fig. 2. For both potentials, there are bistable states for \(p<p_{c}\), a bistable-monostable transition at \(p=p_{c}\), and a monostable state for \(p>p_{c}\), indicating that the model undergoes a continuous phase transition at \(p_{c}\).
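The bistable-to-monostable change can also be checked directly by evaluating Eqs. (23) and (24) on a grid of \(m\). The following sketch, which is illustrative only, locates the local minima of the independence potential for \(p\) below, at, and above \(p_{c}=1/3\); the anticonformity potential can be treated in the same way.

```python
import numpy as np

def V_independence(m, p):
    """Effective potential of Eq. (23)."""
    return 3.0 * (1.0 - 3.0 * p - (1.0 - p) * m**2) ** 2 / (32.0 * (1.0 - p))

def V_anticonformity(m, p):
    """Effective potential of Eq. (24)."""
    return -3.0 * (1.0 - 4.0 * p - m**2) ** 2 / 32.0

m = np.linspace(-1.0, 1.0, 2001)
for p in (0.2, 1.0 / 3.0, 0.45):              # below, at, and above p_c = 1/3
    V = V_independence(m, p)
    is_min = (V[1:-1] < V[:-2]) & (V[1:-1] < V[2:])
    print(f"p = {p:.3f}: local minima near m = {np.round(m[1:-1][is_min], 3)}")
```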
The critical point of the model can also be analyzed using Landau's theory. According to Landau's theory, the potential can be expanded in powers of the magnetization \(m\) as \(V=\sum_{i}V_{i}m^{i}\), where the coefficients \(V_{i}\) generally depend on thermodynamic parameters [41]. In this model, \(V_{i}\) can depend on the noise parameter of the model, that is, the probability of independence or anticonformity \(p\). The Landau potential \(V\) is symmetric under inversion of the order parameter \(m\rightarrow-m\); therefore, only even terms of the potential are considered. Thus, the simplified Landau potential takes the form:
\[V=V_{0}+V_{2}m^{2}+V_{4}m^{4}+V_{6}m^{6}\cdots \tag{25}\]
It is sufficient to know the terms \(V_{2}\) and \(V_{4}\) to analyze the model's phase transition using the potential \(V\). The critical point can be determined by setting \(V_{2}=0\), while the nature of the phase transition is determined by \(V_{4}(p_{c})\), where \(V_{4}(p_{c})\geq 0\) indicates a continuous phase transition, and \(V_{4}(p_{c})<0\) indicates a discontinuous phase transition. Comparing Eq. (25) with Eqs. (23) and (24), we can determine \(V_{2}\) and \(V_{4}\) for both the model with independence and the model with anticonformity. For the model with independence, we obtain \(V_{2}(p)=3\,(1-3p)\,/8\) and \(V_{4}(p)=9\,(1-p)\,/4\). For the model with anticonformity, we obtain \(V_{2}(p)=-3\,(1-4p)\,/8\) and \(V_{4}(p)=9/4\). As a result, the critical points \(p_{c}\) are consistent with the values obtained from the equilibrium analysis: \(p_{c}=1/3\) for the model with independence and \(p_{c}=1/4\) for the model with anticonformity. Furthermore, \(V_{4}(p_{c})\geq 0\) for both models confirms that they undergo a continuous phase transition. This analysis provides an additional perspective on the
Figure 2: Phase diagram of the model on the complete graph with both independence and anticonformity. Analytical results in Eqs. (21) and (22) are compared with Monte Carlo simulation data. The models exhibit continuous phase transitions with critical points at \(p_{c}=1/3\) and \(p_{c}=1/4\), respectively. Panels (b) and (c) display the effective potential in Eqs. (23) and (24), respectively, illustrating bistable and monostable behaviors for different \(p\) values.
phase transition behavior of the model, showing agreement with the previously obtained critical points and the nature of the transitions.
To numerically estimate the model's critical point and critical exponents, the finite-size scaling relations in Eqs. (2)-(7) are employed. The population size \(N\) is varied in the range of \(2000\) to \(10000\) to compute the magnetization \(m\), susceptibility \(\chi\), and Binder cumulant \(U\), as shown in Fig. 3. Each data point is averaged over \(10^{5}\) independent realizations to obtain reliable statistics. In the inset graphs of Fig. 3, normal plots are presented, while the main graphs show the scaling plots of the model. The critical point of the model is determined using the Binder method, which involves locating the crossing point of the Binder cumulant \(U\) curves plotted against the anticonformity probability \(p\). In this case, the critical point is estimated to be \(p_{c}\approx 0.25\) (inset panel a), which agrees with the analytical result in Eq. (22).
The main graphs of Fig. 3 display the scaling plots of the model. The critical exponents that produce the best collapse of all data points are \(\beta\approx 0.5\), \(\nu\approx 2.0\), and \(\gamma\approx 1.0\). Here, \(\nu\) is the exponent associated with scaling in the population size \(N\); with the upper critical dimension \(d_{c}=4\) and the effective exponent \(\nu^{\prime}=1/2\), we obtain \(\nu=d_{c}\nu^{\prime}=2\) numerically. These critical exponents satisfy the identity relation \(\nu=2\beta+\gamma\), and they indicate that the model belongs to the mean-field Ising universality class [40]. It is important to note that the same critical exponents are obtained for the model with independence, suggesting that the models with independence and anticonformity behave identically. These models are also equivalent to well-known models in the field, such as the Sznajd and kinetic exchange models. This finite-size scaling analysis provides robust numerical evidence for the model's critical point and exponents, corroborating the analytical findings and classifying the model within the mean-field Ising universality class.
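The finite-size-scaling analysis can be organized with a few helper functions. The sketch below assumes the standard definitions \(U=1-\langle m^{4}\rangle/(3\langle m^{2}\rangle^{2})\) and \(\chi=N(\langle m^{2}\rangle-\langle|m|\rangle^{2})\) together with the usual scaling forms with argument \((p-p_{c})N^{1/\nu}\); the precise conventions of Eqs. (2)-(7) may differ, and all names and parameter values are illustrative.

```python
import numpy as np

def binder_cumulant(m_samples):
    """U = 1 - <m^4> / (3 <m^2>^2) computed from magnetization samples."""
    m = np.asarray(m_samples)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

def susceptibility(m_samples, N):
    """chi = N (<m^2> - <|m|>^2) computed from magnetization samples."""
    m = np.abs(np.asarray(m_samples))
    return N * (np.mean(m**2) - np.mean(m) ** 2)

def collapse(p_values, observable, N, p_c, exponent_ratio, nu):
    """Rescale raw data (p, O(p; N)) onto a finite-size-scaling form.

    Returns x = (p - p_c) N^(1/nu) and y = O * N^(exponent_ratio);
    use exponent_ratio = +beta/nu for m and -gamma/nu for chi.
    """
    x = (np.asarray(p_values) - p_c) * N ** (1.0 / nu)
    y = np.asarray(observable) * N ** exponent_ratio
    return x, y

# Example call with the exponents quoted above for the anticonformity model:
# x, y = collapse(p_values, m_values, N=5000, p_c=0.25,
#                 exponent_ratio=0.5 / 2.0, nu=2.0)
```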
### The model on the 2-D lattice
We examined several population sizes, denoted as \(N=L^{2}\), where \(L\) assumes values of \(32,45,64,100,150\), and \(200\), to investigate the model's critical point and critical exponents thoroughly. The numerical results, specifically pertaining to the order parameter \(m\), susceptibility \(\chi\), and Binder cumulant \(U\), are presented in Fig. 4. The critical point, signifying the point at which the model undergoes a continuous phase transition, is identified as \(p_{c}\approx 0.106\) (as seen in the inset panel of Fig. 4 (a)). We determined the optimal critical exponents that result in the best data collapse by utilizing the finite-size scaling relations detailed in Eqs. (2)-(5). These critical exponents are approximately \(\beta\approx 0.125\), \(\gamma\approx 1.75\), and \(\nu\approx 2.0\). Notably, these values indicate that the model behaves like the Sznajd model [30; 42] and falls within the universality class of the two-dimensional Ising model [40].
Similarly, we extended our analysis to the model with anticonformity, and our findings reveal that this model also experiences a continuous phase transition. The critical point for the model with anticonformity is approximately \(p_{c}\approx 0.062\), as shown in Fig. 5. Remarkably, our investigations yielded identical critical exponents for this model, specifically \(\beta\approx 0.125,\nu\approx 2.0\) and \(\gamma\approx 1.75\). These shared critical exponents indicate that both the model with independence and the model with anticonformity exhibit analogous behavior, suggesting that they belong to the same universality class. Our results align these models with the two-dimensional Ising universality class. Furthermore, the critical exponents for both models adhere to the identity relation \(\nu=2\beta+\gamma\).
### The model on the 3-D lattice
We conducted investigations using various population sizes \(N=L^{3}\), with linear dimensions \(L=15,20,25,30,35\). Each data point represents an average over \(10^{6}\) independent realizations. The numerical results for the model featuring independence are presented in Fig. 6. In panel (a), we observe that at \(p=0.0\), the system exists in an ordered state characterized by \(m=1.0\). In contrast, the magnetization decreases to zero already at a small \(p=0.01\), signifying a disordered state. The inset graph visually illustrates the magnetization \(m\) at equilibrium for \(p=0.0\) and \(p=0.01\). However, relying solely on the magnetization data makes it challenging to determine whether the model undergoes an order-disorder phase transition. Nonetheless, we gain more insight into this by examining the Binder
Figure 3: Monte Carlo simulation result of the model for the order parameter \(m\), susceptibility \(\chi\), and Binder cumulant \(U\) (inset graphs) for several population sizes \(N\). The critical point is obtained from the crossing lines Binder cumulant \(U\) versus probability \(p\) that occurs at \(p=p_{c}\approx 0.25\) (inset graph of the panel (a)). This critical point confirms the analytical result in Eq. (22). The best critical exponents that make all data collapse near the critical point \(p_{c}\) are \(\beta\approx 0.5,\nu\approx 2.0\), and \(\gamma\approx 1.0\) (see the main graph) that indicate the model belongs to the mean-field Ising model [40].
cumulant \(U\), as depicted in panel (b). Notably, there are no intersections between the curves of Binder cumulant \(U\) and the probability of independence \(p\), which suggests the absence of an order-disorder phase transition in this model. See the inset graph for further clarity.
Our analysis of the model with anticonformity defined on the 3-D square lattice reveals that it undergoes a second-order phase transition with a critical point at \(p_{c}\approx 0.268\), as illustrated in Fig. 7. By employing the finite-size scaling relations detailed in Eqs. (2)-(5), we have determined the best-fitting critical exponents, yielding values of \(\beta\approx 0.25\), \(\nu\approx 2.0\), and \(\gamma\approx 1.4\). These results suggest that this model is not in the same universality class as the three-dimensional Ising model [40]. The critical exponents are universal in the sense that they remain consistent across data sets for different system sizes \(N\).
### The model on the heterogeneous networks
Compared to the aforementioned homogeneous networks, the W-S, A-B, and E-R networks are more representative of real social networks [34; 43]. These three networks have been extensively studied in various network research and applied to a wide range of social phenomena, including applications in the field of medicine [44]. In this section, we considered these three heterogeneous networks, visually represented in Fig. 8. Within these networks, we selected an agent possessing at least two randomly chosen nearest neighbors, as illustrated in Fig. 9. The three agents interacted with each other according to the model's algorithm. We assigned varying node degrees in all networks, with the minimum node degree being 2. In other words, each agent has at least two nearest neighbors. The population size was \(N=3000\), and each data point was an average of \(10^{5}\) independent realizations. In this part, we exclusively focus on analyzing whether or not the model undergoes a continuous phase transition. Therefore, we do not seek critical points and critical exponents as in the previously discussed homogeneous networks. Our numerical results for the order parameter \(m\) are presented in Fig. 10. It can be observed that when \(p=0\), the system is in a state of complete order. This situation arises because all nodes have at least two nearest neighbors, allowing them to interact with each other following the majority rule. It is also shown that the model undergoes a continuous phase transition in all three networks, each with different critical points (not
Figure 4: (Main panel) Scaling plot of the Binder cumulant \(U\) (panel (a)), the order parameter \(m\) (panel (b)), and the susceptibility (panel (c)). These results are derived from extensive numerical simulations. Notably, for a critical point at approximately \(p_{c}\approx 0.106\), we observe that all data, with population sizes \(N=L^{2}\), exhibit a remarkable collapse. The associated critical exponents are estimated to be \(\nu\approx 2.0\), \(\gamma\approx 1.75\), and \(\beta\approx 0.125\). The critical point \(p_{c}\) is determined by identifying the intersection of the curves for the Binder cumulant \(U\) as a function of the independence probability \(p\) (inset panel (a)). These results align with the critical exponents of the two-dimensional Ising model universality class. It is important to note that all inset panels present data in a normal plot format, and each data point results from averaging over \(3\times 10^{6}\) independent realizations.
Figure 5: (Main panel) Scaling plots for the model with anticonformity: the Binder cumulant \(U\) [panel (a)], the order parameter \(m\) [panel (b)], and the susceptibility \(\chi\) [panel (c)]. These results have been obtained through extensive numerical simulations. Notably, for a critical point located at approximately \(p_{c}\approx 0.062\), we observe a remarkable collapse of all data sets with population sizes \(N=L^{2}\). The associated critical exponents are estimated to be \(\nu\approx 2.0\), \(\gamma\approx 1.75\), and \(\beta\approx 0.125\). The critical point \(p_{c}\) is determined through the intersection of curves representing the Binder cumulant \(U\) as a function of the anticonformity probability \(p\) (inset panel (a)). These results are consistent with the critical exponents characterizing the universality class of the two-dimensional Ising model. It is important to note that all inset panels present data in a normal plot format, and each data point results from averaging over \(3\times 10^{6}\) independent realizations.
specified in this paper). Other types of phase transitions, such as discontinuous phase transitions, can also occur for larger interaction groups, for example groups of seven agents, or under different scenarios.
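The heterogeneous-network setup described above can be reproduced with standard graph generators. The sketch below uses networkx to build the three topologies and to select an agent together with two of its randomly chosen neighbors; the generator parameters shown are illustrative and are not the ones used for Figs. 10 and 11.

```python
import random
import networkx as nx

N = 3000
graphs = {
    "Watts-Strogatz": nx.watts_strogatz_graph(N, k=4, p=0.1),
    "Barabasi-Albert": nx.barabasi_albert_graph(N, m=2),
    "Erdos-Renyi": nx.gnp_random_graph(N, p=4.0 / N),
}

def pick_interaction_group(G):
    """Pick an agent with at least two neighbors plus two random neighbors."""
    while True:
        i = random.randrange(G.number_of_nodes())
        neighbors = list(G.neighbors(i))
        if len(neighbors) >= 2:
            return [i] + random.sample(neighbors, 2)

for name, G in graphs.items():
    print(name, "example interaction group:", pick_interaction_group(G))
```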
To further confirm the continuous phase transition of the model on these heterogeneous networks, we analyzed the fluctuations of the magnetization \(m\) as a function of Monte Carlo steps. The results are shown in Fig. 11. A bistable state of the magnetization \(m\) emerges for both models on all networks, indicating that the model undergoes a continuous phase transition.
## 4 Summary and outlook
This paper examines the impact of disruptive social behaviors, namely independence and anticonformity, on the occurrence of order-disorder phase transitions in the majority-rule model. The model's scope encompasses homogeneous networks such as complete graphs, two-dimensional square lattices, and three-dimensional square lattices, as well as heterogeneous networks like Barabasi-Albert networks, Watts-Strogatz networks, and Erdos-Renyi networks. A probability parameter \(p\) measures the likelihood of agents adopting independent or anticonformist behavior, while agents conform to the majority opinion with probability \((1-p)\).
Our results, obtained analytically (for the complete graph) and through numerical simulations, indicate that the model on homogeneous networks undergoes a continuous phase transition with different critical points, as summarized in Table 1. However, we found no phase transition for the 3-D lattice model with independence. The critical exponents for the complete graph and the two-dimensional lattice, obtained through finite-size scaling analysis, suggest that the model belongs to the same classes as the mean-field and 2-D Ising models, respectively. These critical exponents satisfy the scaling identity relation \(\nu=2\beta+\gamma\). Furthermore, for the model with anticonformity on the 3-D lattice, the obtained critical exponents are \(\beta\approx 0.25\), \(\gamma\approx 1.40\), and \(\nu\approx 2.0\), indicating that the model does not belong to the same class as the 3-D Ising model.
The model defined on heterogeneous networks such as B-A, W-S, and E-R networks undergoes a continuous phase transition with distinct critical points, as demonstrated by the magnetization data versus probability \(p\). This conclusion is further supported by the bistable fluctuations of the magnetization \(m\), indicating that the model undergoes a continuous phase transition. We did not estimate the critical points further; however, based on the numerical data for the magnetization \(m\) versus probability \(p\), it is evident that the critical point of the model with anticonformity is smaller than that of the model with independence. This difference also holds consistently for the models defined on complete graphs and 2-D lattices. From these data, we can assert that models with anticonformity have a greater tendency to undergo an order-disorder
\begin{table}
\begin{tabular}{l l c c c c} \hline \hline Networks & Social noise & Critical point & \(\beta\) & \(\gamma\) & \(\nu\) \\ \hline \multirow{2}{*}{Complete graph} & Independence & \(1/3\) & 0.5 & 1.0 & 2.0 \\ & Anticonformity & \(1/4\) & 0.5 & 1.0 & 2.0 \\ \hline \multirow{2}{*}{2-D lattice} & Independence & 0.106 & 0.125 & 1.75 & 2.0 \\ & Anticonformity & 0.062 & 0.125 & 1.75 & 2.0 \\ \hline \multirow{2}{*}{3-D lattice} & Independence & - & - & - & - \\ & Anticonformity & 0.268 & 0.25 & 1.40 & 2.0 \\ \hline \end{tabular}
\end{table}
Table 1: Critical exponents of the majority-rule model on regular networks.
Figure 6: The numerical results for the order parameter \(m\) [panel (a)] and Binder cumulant \(U\) [panel (b)] are presented for the model with independence on the three-dimensional square lattice. Notably, the order parameter \(m\) diminishes to zero at a low independence probability of \(p=0.01\), indicating the absence of an order-disorder phase transition in this model. Furthermore, the data depicting Binder cumulant \(U\) as a function of probability \(p\) reveals a lack of intersections between the curves. For a clearer illustration, please refer to the inset graph.
Figure 7: (Main graph) Scaling plots for the Binder cumulant \(U\), order parameter \(m\), and susceptibility \(\chi\) for the model with anticonformity on the 3-D lattice. Based on the results, the model undergoes a continuous phase transition with a critical point at \(p_{c}\approx 0.268\) (see inset figure (a)). Additionally, we determine the best critical exponents as \(\beta\approx 0.25,\nu\approx 2.0\), and \(\gamma\approx 1.40\), leading to the collapse of all data points around the critical point, as indicated. Each data point averages \(10^{6}\) independent realizations.
phase transition.
From a social systems perspective, we can assert that complete consensus is achieved when no independent or anticonformist agents are present. Below the critical point, majority and minority opinions coexist. However, at \(p\geq p_{c}\), the system reaches an equilibrium state akin to a deadlock. Comparing the critical points of the independence and anticonformity models, it becomes evident that the model with anticonformity exhibits a lower critical point than its independence counterpart. These results imply that systems containing anticonformist agents tend to reach a stalemate state more readily than systems containing independent agents. These findings are consistent with what we reported in our prior study [31]. Of course, differences in such phenomena are highly dependent on the applied model or scenario.
## CRediT authorship contribution statement
**D. A. Mulya** and **R. Muslim:** Conceptualization, Methodology, Software, Formal analysis, Validation, Writing, Visualization, Review & editing. **R. Muslim:** Main contributor & Supervision. All authors read and reviewed the paper.
## Declaration of Interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgments
The authors thank the BRIN Research Center for Quantum Physics for providing the mini HPC (Quantum Simulation Computer) for conducting numerical simulations. **D. A. Mulya** expresses gratitude for the support received from the Research Assistant program of BRIN talent management, as evidenced by Decree Number 60/II/HK/2023.
|
2303.01609
|
Theory of Topological Nernst and Thermoelectric Transport in Chiral
Magnets
|
We calculate the thermoelectric transport of spin-orbit coupled conduction
electrons in the presence of topological spin textures. We show, within a
controlled, semiclassical approach that includes all phase space Berry
curvatures, that the Nernst effect has two contributions in addition to the
usual effect proportional to a magnetic field. These are an anomalous
contribution governed by the momentum-space Berry curvature and proportional to
net magnetization, and a topological contribution determined by the real-space
Berry curvature and proportional to the topological charge density, which is
non-zero in skyrmion phases. We derive a generalized Mott relation expressing
the thermoelectric tensor as the chemical potential derivative of the
conductivity tensor and show how the Sondheimer cancellation in the Nernst
effect is evaded in chiral magnets.
|
Zachariah Addison, Lauren Keyes, Mohit Randeria
|
2023-03-02T22:10:01Z
|
http://arxiv.org/abs/2303.01609v1
|
# Theory of Topological Nernst and Thermoelectric Transport in Chiral Magnets
###### Abstract
We calculate the thermoelectric transport of spin-orbit coupled conduction electrons in the presence of topological spin textures. We show, within a controlled, semiclassical approach that includes all phase space Berry curvatures, that the Nernst effect has two contributions in addition to the usual effect proportional to a magnetic field. These are an anomalous contribution governed by the momentum-space Berry curvature and proportional to net magnetization, and a topological contribution determined by the real-space Berry curvature and proportional to the topological charge density, which is non-zero in skyrmion phases. We derive a generalized Mott relation expressing the thermoelectric tensor as the chemical potential derivative of the conductivity tensor and show how the Sondheimer cancellation in the Nernst effect is evaded in chiral magnets.
+
Footnote †: preprint: APS/123-QED
There has been enormous effort in the investigation of chiral magnetic materials in recent years [1; 2; 3; 4]. This has been in part due to the fundamental interest in topological spin textures and their impact on the properties of materials, and in part motivated by the possibility of using skyrmions (topological textures of unit charge) for potential device applications.
One of the most widely studied effects of topological charge density in chiral magnets is their unusual signature in transport: the topological Hall effect (THE) [3]. This effect arises when the conduction electrons - in metallic magnets [2; 5; 6; 7; 8; 9] or in heavy metals proximate to a magnet [10; 11] - are impacted by an "emergent magnetic field", which is the flux quantum (\(h/e\)) times the topological charge density [12; 13; 14; 15; 16]. Our focus here is the analogous topological effect in the transverse _thermoelectric_ response.
The Nernst signal \(N\), the transverse voltage response to an applied thermal gradient in the absence of time-reversal symmetry, is a quantity of fundamental importance. It is well known that \(N=-E_{y}/|\nabla_{x}T|\) in an external magnetic field \(B_{z}\) is vanishingly small in simple metals due to the Sondheimer cancellation [17; 18]. A large Nernst effect is usually observed in either semimetals or strongly correlated systems [19].
Naively, if the "emergent magnetic field" due to a nontrivial topological charge density was simply analogous to an external magnetic field (as in the theory of the THE [13]) one might expect a Sondheimer cancellation and a very small topological Nernst effect. It is thus interesting that a robust topological Nernst effect has been seen in the skyrmion phase of chiral magnets [20; 21; 22; 23; 24; 25].
In this paper we develop a theory of the topological Nernst effect in chiral magnetic materials that addresses this puzzle. In addition, we also need to address the issue that the topological contribution is only one part of the observed signal. Experiments [20; 21; 22; 23; 24; 25] on the transverse thermoelectric response in chiral magnets are analyzed as the sum of three pieces: an "ordinary" response proportional to the external magnetic field, an "anomalous" contribution proportional to the magnetization, and a "topological" contribution proportional to the topological charge density \(n_{\rm top}=\int d^{3}r\,\hat{\mathbf{m}}\cdot(\partial_{r_{x}}\hat{\mathbf{m}}\times \partial_{r_{y}}\hat{\mathbf{m}})/4\pi V\). This decomposition is motivated by the empirical success of a similar expression for the Hall resistivity [2; 5; 6; 7; 8; 9; 10; 11] as the sum of three contributions.
The results presented here build on the recent demonstration [26] that, within a controlled semiclassical calculation, the Hall response arising from chiral magnetism can be shown rigorously to be the sum of an (intrinsic) anomalous contribution, proportional to the \(\mathbf{k}\)-space Berry curvature [27; 28], and a topological contribution, proportional to the real-space Berry curvature. We analyze the dynamics of wave-packets in phase space, taking into account _all_ Berry curvatures (including the mixed \((\mathbf{r},\mathbf{k})\) curvatures) on an equal footing together with \(\mathbf{r}\) and \(\mathbf{k}\) derivatives of the semiclassical energy eigenvalues. In the semiclassical regime, where the lattice spacing \(a\) is much smaller than the mean free path \(\ell\), which in turn is much smaller than the spin-texture length scale \(L_{s}\), and for SOC \(\lambda\) weak compared to electronic energy scales, we solve the Boltzmann equation to determine the thermoelectric tensor \(\overleftarrow{\alpha}\), which relates the electrical transport current \(\mathbf{j}_{\rm tr}\) to the temperature gradient via \(\mathbf{j}_{\rm tr}=-\overleftarrow{\alpha}\mathbf{\nabla}_{r}T\).
We summarize our main results.
(1) We show that, to leading order in the small parameters indicated above, the transverse (off-diagonal) thermoelectric response in a system with spin textures is just the sum of an anomalous piece and a topological piece. As summarized in the Table in Fig. 1, the former arises from \(\mathbf{k}\)-space Berry curvature [29] and is proportional to the net magnetization, while the latter arises from \(\mathbf{r}\)-space Berry curvature and is proportional to the topological charge density. All other contributions, arising, e.g., from mixed curvatures, are small corrections in the semiclassical regime with weak SOC.
(2) We derive a Mott relation relating the thermoelectric tensor \(\overleftarrow{\alpha}\) to the chemical potential derivative \(\partial\overrightarrow{\sigma}(\mu)/\partial\mu\) of the electric conductivity tensor \(\overleftarrow{\sigma}\). A Mott relation for just the anomalous response in a ferromagnet was derived in the pioneering work of Ref. [29]; here we show
that it is valid in the presence of arbitrary spin textures including both the anomalous and topological terms.
(3) We show how the topological Nernst contribution evades the Sondheimer cancellation. The \(\mathbf{r}\)-space Berry curvature couples with opposite signs to the spin-split conduction bands, unlike an external magnetic field, and this leads to a non-zero contribution even for a simple parabolic dispersion.
(4) Although the anomalous and topological contributions originate from vastly different physical mechanisms, we find, somewhat surprisingly, that they have the same functional dependence on the chemical potential \(\mu\) or density of conduction electrons, provided the SOC is proportional to the conduction electron group velocity.
Our conclusions are derived for conduction electrons with arbitrary dispersion and a general form for the SOC, including Rashba SOC arising at interfaces, interacting with any spin texture in 2D. More generally, we also analyze the 3D problem with a spin texture that does not vary in the \(z\)-direction, as would be the case for a random array or a crystal of skyrmion tubes.
Previous theoretical analyses of thermoelectric transport in chiral magnets have been restricted to either numerical calculations [30], where a decomposition into the anomalous and topological contributions is ill-defined, or to an analytic approach [31] that ignores SOC so that spin remains a good quantum number and, in addition, the Mott relation is assumed rather than derived.
**Model:** We consider the Hamiltonian
\[\widehat{H} =-\sum_{ij,\sigma}t_{ij}\hat{c}^{\dagger}_{i\sigma}\hat{c}_{j \sigma}-J\sum_{i,\sigma\sigma^{\prime}}\hat{c}^{\dagger}_{i\sigma}(\hat{ \mathbf{m}}(\mathbf{r}_{i})\cdot\mathbf{\sigma}^{\sigma\sigma^{\prime}})\hat{c}_{i\sigma^ {\prime}}\] \[+\frac{\lambda\hbar}{at}\sum_{ij,\gamma,\delta,\sigma\sigma^{ \prime}}\hat{c}^{\dagger}_{i\sigma}(v^{ij}_{\gamma}\,\chi_{\gamma\delta}\sigma ^{\sigma\sigma^{\prime}}_{\delta})\hat{c}_{j\sigma^{\prime}} \tag{1}\]
where \(i,j\) label lattice sites, \(\sigma,\sigma^{\prime}\in\{\uparrow,\downarrow\}\) and \(\gamma,\delta\in\{x,y\}\). The first term describes an arbitrary band structure using tight-binding amplitudes \(t_{ij}\) whose scale is \(t\). The second term couples the conduction electron spin to a given magnetic texture \(\hat{\mathbf{m}}(\mathbf{r})\) with an exchange coupling \(J\). For simplicity we choose \(\hat{\mathbf{m}}(\mathbf{r})\) to be independent of \(z\), which is adequate to model crystals or disordered arrays of skyrmion tubes.
The SOC with strength \(\lambda\) is proportional to the electron velocity \(\mathbf{v}^{ij}=it_{ij}(\mathbf{r}_{i}-\mathbf{r}_{j})/\hbar\) on a bond with lattice constant \(a\). For simplicity, we restrict ourselves to SOC that involves only \(\sigma_{x}\) and \(\sigma_{y}\) as appropriate for systems with broken interfacial inversion. The precise form of the SOC depends on the \(\overrightarrow{\chi}\) tensor. \(\overrightarrow{\chi}=i\tau_{y}\) leads to Rashba SOC (\(v^{ij}_{x}\sigma_{y}-v^{ij}_{y}\sigma_{x}\)) which preserves vertical mirror planes (\(\mathcal{M}_{x}\), \(\mathcal{M}_{y}\)), but breaks \(\mathcal{M}_{z}\). Choosing \(\overrightarrow{\chi}=\tau_{z}\) leads to \((v^{ij}_{x}\sigma_{x}-v^{ij}_{y}\sigma_{y})\) which breaks all mirror planes [32, 33]. (The effects of Ising SOC \(\propto\sigma_{z}\) are suppressed by \(\lambda/J\) and ignored; see Appendix A).
Finally, we include effects due to impurity scattering processes in \(\widehat{H}_{\text{imp}}\) which will enter our Boltzmann equation analysis below through the relaxation time \(\tau\). The energy scales in our model can be organized as \(\lambda\ll J<t\sim E_{F}\), where \(E_{F}\) is the Fermi energy measured from the band edge and where \(E_{F}\gg k_{B}T\).
To take into account both \(\mathbf{r}\) and \(\mathbf{k}\)-space Berry curvatures at the same time, we need to use a semiclassical approach. This demands that the microscopic length scales \(a\sim k_{F}^{-1}\) are much smaller than the mean free path \(\ell=v_{F}\tau\) and the length scale \(L_{s}\) on which the spin texture varies. To control our calculations we will work in the regime \(a\ll\ell\ll L_{s}\). These are realistic assumptions for many chiral magnetic materials, where \(10\lesssim L_{s}\lesssim 500\) nm [34], while \(1\lesssim\ell\lesssim 100\) nm (given that \(10\lesssim k_{F}\ell\lesssim 100\)).
**Semiclassical Theory of Thermoelectric Transport:** To analyze the dynamics of wave packets in phase space \(\mathbf{\xi}=(r_{x},r_{y},r_{z},k_{x},k_{y},k_{z})\), we construct the semiclassical Bloch Hamiltonian \(\mathcal{H}_{sc}(\mathbf{\xi})=\varepsilon(\mathbf{k})\mathbb{1}+\mathbf{d}(\mathbf{\xi}) \cdot\mathbf{\sigma}\)
Figure 1: **Summary of Results.** Dominant scaling relations, Berry curvature and magnetization dependencies for leading order contributions to the thermoelectric conductivity, Seebeck, and Nernst effects in the regime \(k_{F}^{-1}\sim a\ll\ell=v_{F}\tau\ll L_{s}\) and \(\lambda\ll J<t\sim E_{F}\). The Seebeck and Nernst effects are related to the thermopower tensor \(\overrightarrow{S}=\overrightarrow{\sigma}^{-1}\overline{\alpha}\), where Seebeck \(S_{L}=1/3(S_{xx}+S_{yy}+S_{zz})\) and Nernst \(N=(S_{xy}-S_{yx})/2\). The chemical potential or density dependence of quantities are determined by the dimensionless functions \(\mathcal{E}(\mu)\), \(\mathcal{S}(\mu)\), \(\mathfrak{A}(\mu)\), and \(\mathcal{N}(\mu)\) that depend on the ratios \(J/\mu\) and \(J/t\).
where
\[d_{\gamma}(\mathbf{\xi}) =\frac{\lambda}{a\,t}\,\sum_{\delta}\chi_{\gamma\delta}\partial_{k_{\delta}}\varepsilon(\mathbf{k})-J\hat{m}_{\gamma}(\mathbf{r});\ \ \gamma,\delta\in\{x,y\}\] \[d_{z}(\mathbf{\xi}) =-J\hat{m}_{z}(\mathbf{r}) \tag{2}\]
where \(\varepsilon(\mathbf{k})\) is the band dispersion in the absence of \(\lambda\) and \(\mathbf{d}(\mathbf{\xi})\) captures the quantum mechanical nature of the spin (see Appendix A for details). The semiclassical eigenenergies are \(\mathcal{E}_{\pm}(\mathbf{\xi})=\varepsilon(\mathbf{k})\pm|\mathbf{d}(\mathbf{\xi})|\) and the derivatives of the eigenfunctions, \(|u_{l}(\mathbf{\xi})\rangle\), encode the quantum geometry of the semiclassical bands through the generalized Berry curvatures
\[\Omega^{\pm}_{\alpha\beta}(\mathbf{\xi})=\pm\frac{1}{2}\hat{\mathbf{d}}(\mathbf{\xi}) \cdot\Big{(}\partial_{\alpha}\hat{\mathbf{d}}(\mathbf{\xi})\times\partial_{\beta }\hat{\mathbf{d}}(\mathbf{\xi})\Big{)} \tag{3}\]
with \(\alpha,\beta\in\{r_{x},r_{y},r_{z},k_{x},k_{y},k_{z}\}\). The semiclassical equations of motion (with band index \(l=\pm\)) are
\[\dot{\xi}^{l}_{\alpha}(\mathbf{\xi})=[\Gamma^{-1}_{l}(\mathbf{\xi})]_{\alpha\beta}\ \partial_{\beta}\widetilde{\mathcal{E}}_{l}(\mathbf{\xi})/\hbar \tag{4}\]
where \([\Gamma_{l}(\mathbf{\xi})]_{\alpha\beta}=\Omega^{l}_{\alpha\beta}(\mathbf{\xi})-[i \sigma_{y}\otimes\mathds{1}]_{\alpha\beta}\). \(\widetilde{\mathcal{E}}_{l}(\mathbf{\xi})\simeq\mathcal{E}_{l}(\mathbf{\xi})\) up to corrections of order \((\lambda/E_{F})(a/L_{s})\) that can be ignored in the regime of interest [35; 26]. We suppress the \(\mathbf{\xi}\)-dependence of quantities in what follows.
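To illustrate how the generalized curvatures of Eq. (3) can be evaluated in practice, the sketch below builds the \(\mathbf{d}\)-vector of Eq. (2) for the nearest-neighbor square-lattice dispersion used later in the model calculations, with an SOC tensor chosen so that the coupling takes the Rashba form, and computes \(\Omega_{k_{x}k_{y}}\) by finite differences at a fixed real-space point. All parameter values are illustrative, and the code is a numerical sketch rather than part of the paper's derivation.

```python
import numpy as np

t, a, lam, J = 1.0, 1.0, 0.1, 0.5             # illustrative parameters
m_hat = np.array([0.3, 0.1, np.sqrt(0.90)])   # texture direction at a fixed r
# chi chosen so that the SOC term takes the Rashba form v_x sigma_y - v_y sigma_x
chi = np.array([[0.0, -1.0], [1.0, 0.0]])

def d_vec(kx, ky):
    """d(xi) of Eq. (2) for eps(k) = -2t(cos k_x a + cos k_y a)."""
    grad_eps = np.array([2.0 * t * a * np.sin(kx * a),
                         2.0 * t * a * np.sin(ky * a)])
    d_xy = (lam / (a * t)) * chi @ grad_eps - J * m_hat[:2]
    return np.array([d_xy[0], d_xy[1], -J * m_hat[2]])

def d_hat(kx, ky):
    d = d_vec(kx, ky)
    return d / np.linalg.norm(d)

def omega_kxky(kx, ky, band=+1, h=1e-5):
    """Omega_{k_x k_y} = (band/2) d_hat . (d_kx d_hat x d_ky d_hat), Eq. (3)."""
    ddx = (d_hat(kx + h, ky) - d_hat(kx - h, ky)) / (2.0 * h)
    ddy = (d_hat(kx, ky + h) - d_hat(kx, ky - h)) / (2.0 * h)
    return 0.5 * band * np.dot(d_hat(kx, ky), np.cross(ddx, ddy))

print(omega_kxky(0.3, -0.2, band=+1), omega_kxky(0.3, -0.2, band=-1))
```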
Building on the analysis of Ref. [28] we obtain the local charge current
\[\mathbf{j}_{\rm loc}=\sum_{l=\pm}\int\frac{d^{3}k}{(2\pi)^{3}}\bigg{(}-e\mathcal{D }_{l}f_{l}\dot{\mathbf{r}}_{l}+\mathbf{\nabla}_{r}\times(\mathcal{D}_{l}f_{l}\mathbf{\mathfrak {m}}_{l})\bigg{)} \tag{5}\]
The first term describes the center of mass motion of wave packets, while the second describes their orbital rotation. Here \(\dot{\mathbf{r}}_{l}\) is determined by Eq. (4), \(f_{l}\) is the electronic distribution function, and \(\mathcal{D}_{l}=\sqrt{\det[\Gamma_{l}(\mathbf{\xi})]}\) describes the modification of the phase space volume element in the presence of Berry curvatures so that Liouville's theorem is satisfied. The orbital magnetic moment \(\mathbf{\mathfrak{m}}_{l}\) of the semiclassical wave packet (with \(a,b,c\!\in\!\{x,y,z\}\)) is given by
\[\left(\mathbf{\mathfrak{m}}_{l}\right)_{a}=-i\frac{e}{2\hbar}\sum_{bc}\varepsilon_{abc}(\partial_{k_{b}}\left<u_{l}\right|)(\mathcal{H}_{sc}-\mathcal{E}_{l})(\partial_{k_{c}}\left|u_{l}\right>) \tag{6}\]
In experiments, the Nernst effect is determined by the \(\mathbf{q}=0\)_transport_ current [36]
\[\mathbf{j}_{\rm tr}=\int\frac{d^{3}r}{V}\bigg{(}\mathbf{j}_{\rm loc}-\mathbf{\nabla}_{r} \times\mathbf{M}\bigg{)} \tag{7}\]
where \(V\) is the volume of the system and \(\mathbf{M}\) the thermodynamic magnetization. To calculate anomalous and topological contributions to the thermoelectric conductivity, we expand Eq. (7) to first order in temperature gradients (linear response) and to second order in the small parameters of our theory.
**Thermoelectric Conductivity.** We write the thermoelectric tensor as \(\overleftarrow{\alpha}=\overleftarrow{\alpha}^{(1)}+\overleftarrow{\alpha}^{(2)}\), where \(\overleftarrow{\alpha}^{(1)}\) is independent of \(\tau\), and \(\overleftarrow{\alpha}^{(2)}\) depends on \(\tau\). \(\overleftarrow{\alpha}^{(1)}\) arises from the orbital magnetic moment in Eq. (5) and the magnetization in Eq. (7). The antisymmetric Hall component \(\alpha_{H}=(\alpha_{xy}-\alpha_{yx})/2\) has two leading order contributions, \(\alpha^{A}_{H}\) and \(\alpha^{T}_{H}\). Here, \(\alpha^{A}_{H}=(\alpha^{(1)}_{xy}-\alpha^{(1)}_{yx})/2\) is the anomalous contribution to the thermoelectric Hall conductivity [29], given by
\[\alpha^{A}_{H}=\frac{\lambda^{2}}{T}\int_{\xi}((\varepsilon_{l}-\mu)f^{0}[ \varepsilon_{l}]-G[\varepsilon_{l}])\bigg{(}\frac{\partial^{2}\Omega^{l}_{k_{ x}k_{y}}}{\partial\lambda^{2}}\bigg{)}\bigg{|}_{\lambda=0} \tag{8}\]
Here \(\int_{\xi}\equiv\sum_{l=\pm}\int d^{6}\xi/(8\pi^{3}V)\), the local grand potential density \(G_{l}[\varepsilon_{l}]=-k_{B}T\ln(1+e^{-\beta(\varepsilon_{l}-\mu)})\), the equilibrium distribution function \(f^{0}[\varepsilon_{l}]=(e^{(\varepsilon_{l}-\mu)/k_{B}T}+1)^{-1}\), and \(\varepsilon_{l}\) are the semiclassical eigenenergies in the absence of \(\lambda\): \(\varepsilon_{\pm}(\mathbf{k})=\varepsilon(\mathbf{k})\pm J\). We note that real space gradient corrections to Eq. (8) are down by \(a/L_{s}\).
The \(\tau\)-dependent contribution \(\overleftarrow{\alpha}^{(2)}\) is obtained from the Boltzmann equation
\[-\frac{f_{l}-f^{0}[\mathcal{E}_{l}]}{\tau}=\dot{\mathbf{r}}\cdot\mathbf{\nabla}_{r}f_{ l}+\dot{\mathbf{k}}\cdot\mathbf{\nabla}_{k}f_{l} \tag{9}\]
which we solve for \(f_{l}=f^{0}[\mathcal{E}_{l}]+g_{l}\) to linear order in the temperature gradient within the relaxation time approximation (see Appendix B). We find that the leading order longitudinal contribution \(\alpha_{L}=(\alpha^{(2)}_{xx}+\alpha^{(2)}_{yy}+\alpha^{(2)}_{zz})/3\) can be written as
\[\alpha_{L}=-\frac{e\tau}{3\hbar^{2}}\int_{\xi}|\mathbf{\nabla}_{k}\varepsilon_{l}|^ {2}\,\partial_{T}f^{0}[\varepsilon_{l}] \tag{10}\]
The topological Hall contribution to the thermoelectric conductivity derives from the antisymmetric component \(\alpha^{T}_{H}=(\alpha^{(2)}_{xy}-\alpha^{(2)}_{yx})/2\), which can be written as
\[\alpha^{T}_{H}=\frac{e\tau^{2}t^{3}}{\hbar^{3}}\,n_{\rm top}a^{2}\sum_{l=\pm} \,l\,\int\frac{d^{3}k}{8\pi^{2}}\,\partial_{T}f^{0}[\varepsilon_{l}]\, \mathbbm{G}(\mathbf{k}). \tag{11}\]
Here \(n_{\rm top}\) is the topological charge density and
\[\mathbbm{G}(\mathbf{k})=\frac{1}{(a^{2}\,t^{3})}\bigg{(}\mathbf{v}^{T}\cdot( \overleftarrow{M}^{-1}-\mathrm{Tr}(\overleftarrow{M}^{-1})\mathds{1})\cdot \mathbf{v}\bigg{)} \tag{12}\]
with \(\mathbf{v}=(\partial_{k_{x}}\varepsilon,\partial_{k_{y}}\varepsilon)\) and \(\overleftarrow{M}^{-1}_{k_{i}k_{j}}=\partial_{k_{i}}\partial_{k_{j}} \varepsilon_{l}\) (see Appendix C).
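As a concrete example of Eq. (12), the following sketch evaluates \(\mathbb{G}(\mathbf{k})\) for the nearest-neighbor square-lattice dispersion \(\varepsilon(\mathbf{k})=-2t(\cos k_{x}a+\cos k_{y}a)\) used later in the model calculations. The parameter values and function names are illustrative.

```python
import numpy as np

t, a = 1.0, 1.0                      # illustrative parameter values

def velocity(kx, ky):
    """v = (d eps/dk_x, d eps/dk_y) for eps(k) = -2t(cos k_x a + cos k_y a)."""
    return np.array([2.0 * t * a * np.sin(kx * a),
                     2.0 * t * a * np.sin(ky * a)])

def inverse_mass(kx, ky):
    """M^{-1}_{k_i k_j} = d^2 eps / dk_i dk_j (diagonal for this dispersion)."""
    return np.diag([2.0 * t * a**2 * np.cos(kx * a),
                    2.0 * t * a**2 * np.cos(ky * a)])

def G(kx, ky):
    """Dimensionless G(k) of Eq. (12)."""
    v = velocity(kx, ky)
    Minv = inverse_mass(kx, ky)
    return v @ (Minv - np.trace(Minv) * np.eye(2)) @ v / (a**2 * t**3)

print("G at (pi/2, pi/2):", G(np.pi / 2, np.pi / 2))
print("G at (pi/4, 0):   ", G(np.pi / 4, 0.0))
```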
Thus, the leading order contributions to the thermoelectric conductivity are just the sum of an anomalous contribution proportional to the momentum space Berry curvature and a topological contribution proportional to the topological charge density: \(\alpha_{H}=\alpha^{A}_{H}+\alpha^{T}_{H}\) with corrections suppressed in powers of the small parameters of our theory (see Fig. 1).
**Mott Relation.** Temperature gradients couple to the distribution function via the real-space gradient operator \(\mathbf{r}\cdot\mathbf{\nabla}_{r}=\mathbf{r}\cdot\mathbf{\nabla}_{r}[T(\mathbf{r})\partial_{T}+ \hat{\mathbf{m}}(\mathbf{r})\cdot\mathbf{\nabla}_{\hat{m}}]\) in the Boltzmann equation. In contrast, electric field perturbations only enter the Boltzmann equation through the semiclassical
equations of motion: \(\dot{\mathbf{r}}\to\dot{\mathbf{r}}_{E}\) and \(\dot{\mathbf{k}}\to\dot{\mathbf{k}}_{E}\) (see Appendix D). However, even in the presence of all phase space Berry curvatures, the electric field dependent perturbations to the equations of motion can be rewritten such that the electric field dependent part of \(\dot{\mathbf{r}}_{E}\cdot\mathbf{\nabla}_{r}+\dot{\mathbf{k}}_{E}\cdot\mathbf{\nabla}_{k}\) takes the form \(\dot{\mathbf{r}}\cdot\mathbf{E}\partial_{\vec{E}}\). This allows a simple relationship between the different field dependent perturbations to the distribution function to be established.
The solution to the Boltzmann equation requires inverting the operator \(1+\mathds{P}\) with \(\mathds{P}_{l}=\tau(\dot{\mathbf{r}}_{l}\cdot\mathbf{\nabla}_{r}+\dot{\mathbf{k}}_{l}\cdot \mathbf{\nabla}_{k})\). The formal solution can then be written as
\[g_{l}=\tau\partial_{T}f^{0}[\tilde{\mathbf{\mathcal{E}}}_{l}]\sum_{n=0}^{\infty}(- \mathds{P}_{l})^{n}\dot{\mathbf{r}}_{l}\cdot(-\mathbf{\nabla}_{r}T(\mathbf{r})) \tag{13}\]
(see Appendix B). Similarly in the presence of a constant electric field \(\mathbf{E}=-\mathbf{\nabla}_{r}\phi(\mathbf{r})\) the linear response solution can be written as [26]
\[g_{l}^{\phi}=\tau e\partial_{\varepsilon}f^{0}[\tilde{\mathbf{\mathcal{E}}}_{l}] \sum_{n=0}^{\infty}(-\mathds{P}_{l})^{n}\dot{\mathbf{r}}_{l}\cdot(-\mathbf{\nabla}_{r} \phi(\mathbf{r})) \tag{14}\]
The charge current \(\mathbf{j}\) deriving from these contributions takes the form \(\dot{\mathbf{r}}_{l}g_{l}\), which allows one to determine a Mott relation between \(\overleftarrow{\alpha}^{(2)}\) and \(\partial\overleftarrow{\sigma}^{(2)}/\partial\mu\), where \(\mu\) is the chemical potential. Similarly, by generalizing the work of Ref. [29], we find that a Mott relation also holds between \(\overleftarrow{\alpha}^{(1)}\) and \(\partial\overleftarrow{\sigma}^{(1)}/\partial\mu\) in the regime \(a\ll L_{s}\) (see Appendix D). By adding these two contributions we arrive at the Mott relation for the full response tensors
\[\alpha_{ij}=-\frac{\pi^{2}}{3}\frac{k_{B}^{2}T}{e}\frac{\partial\sigma_{ij}}{ \partial\mu} \tag{15}\]
valid for \(k_{B}T\ll E_{F}\).
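In practice, Eq. (15) turns a computed conductivity component \(\sigma_{ij}(\mu)\) into the corresponding thermoelectric component via a numerical \(\mu\)-derivative. The following minimal sketch assumes the conductivity is supplied by the user as a callable and that units are kept consistent (e.g., \(\mu\) in joules when SI constants are used); the toy \(\sigma(\mu)\) at the end only illustrates the call pattern.

```python
import numpy as np

k_B = 1.380649e-23     # J / K
e = 1.602176634e-19    # C

def alpha_from_mott(sigma_ij, mu, T, dmu):
    """alpha_ij = -(pi^2/3) (k_B^2 T / e) * d sigma_ij / d mu, Eq. (15).

    sigma_ij : callable returning the conductivity component at chemical
               potential mu; dmu : step for the centered finite difference.
    """
    dsigma = (sigma_ij(mu + dmu) - sigma_ij(mu - dmu)) / (2.0 * dmu)
    return -(np.pi**2 / 3.0) * (k_B**2 * T / e) * dsigma

# Example with a made-up, smooth sigma(mu), just to show the call pattern.
sigma_toy = lambda mu: 1.0e5 * np.tanh(mu / 1.0e-20)
print(alpha_from_mott(sigma_toy, mu=0.5e-20, T=10.0, dmu=1.0e-23))
```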
The leading order contribution to the anomalous Hall conductivity \(\sigma_{H}^{A}=(\sigma_{xy}^{(1)}-\sigma_{yx}^{(1)})/2\) derives from the anomalous velocity that is proportional to \(\Omega_{k_{x}k_{y}}\) and can be written as
\[\sigma_{H}^{A} =-\frac{e^{2}}{2\hbar}\sum_{l=\pm}\frac{l}{J^{2}}\int\frac{d^{3} k}{(2\pi)^{3}}f^{0}[\varepsilon_{l}]\,\hat{\mathbf{z}}\cdot\bigg{(}\partial_{k_{x}} \mathbf{d}_{l}\times\partial_{k_{y}}\mathbf{d}_{l}\bigg{)}\] \[=\frac{e^{2}}{2\hbar}\text{Det}(\overline{\chi})\sum_{l=\pm}l \bar{m}_{z}\frac{t\lambda^{2}}{J^{2}}\int\frac{d^{3}k}{(2\pi)^{3}}\mathbb{G}( \mathbf{k})\partial_{\varepsilon_{l}}f^{0}[\varepsilon_{l}] \tag{16}\]
where we have used an integration by parts to restrict the integration to momenta near the Fermi surface [37] (see Appendix E). Using the Mott relation, we find that the leading order contributions to \(\alpha_{H}=\alpha_{H}^{A}+\alpha_{H}^{T}\) are
\[\alpha_{H}^{A} =\bigg{(}\frac{k_{B}e}{\hbar}\bigg{)}\bigg{(}\frac{\lambda^{2}k_{B}T}{J^{2}t}\bigg{)}\bigg{(}\bar{m}_{z}\frac{\text{Det}(\overline{\chi})}{2\pi}\bigg{)}\mathfrak{A}(\mu)\] \[\alpha_{H}^{T} =-\bigg{(}\frac{k_{B}e}{\hbar}\bigg{)}\bigg{(}\frac{tk_{B}T}{(\hbar/\tau)^{2}}\bigg{)}\bigg{(}n_{\text{top}}a^{2}\bigg{)}\mathfrak{A}(\mu)\] \[\mathfrak{A}(\mu) =\frac{t^{2}}{24}\frac{\partial}{\partial\mu}\bigg{(}\sum_{l=\pm 1}l\int d^{3}k\,\partial_{\varepsilon_{l}}f^{0}[\varepsilon_{l}]\,\mathbb{G}(\mathbf{k})\bigg{)} \tag{17}\]
\(\mathbb{G}(\mathbf{k})\) is defined in (12), and \(\mathfrak{A}(\mu)\) is a dimensionless function of \(\mu/t\) and \(J/t\) and describes the chemical potential or density dependence of \(\alpha_{H}\) (see Appendix E).
Note that \(\alpha_{H}^{T}\) and \(\alpha_{H}^{A}\) derive from very different mechanisms, the former from the real space Berry curvature and the latter from the momentum space Berry curvature. Nevertheless, both contributions to the thermoelectric conductivity can be shown to be proportional to \(\mathbb{G}(\mathbf{k})\) and thus have the same functional dependence with the chemical potential or density.
To understand why \(\mathbb{G}(\mathbf{k})\) appears in both contributions, we note that the totally anti-symmetric part of any rank two tensor must be invariant under rotations about the \(\hat{\mathbf{z}}\)-axis and must change sign under vertical mirror planes. \(\mathbb{G}(\mathbf{k})\) transforms trivially under rotations about the \(\hat{\mathbf{z}}\)-axis, and at this level of our perturbation expansion, it is the natural object to construct from one and two momentum space derivatives of \(\varepsilon(\mathbf{k})\). In \(\alpha_{H}^{T}\), vertical mirror operations flip the sign of \(n_{\text{top}}\). In \(\alpha_{H}^{A}\), it is \(\bar{m}_{z}\) which changes sign under such a transformation, while \(\text{Det}(\overline{\chi})\) is left invariant. See Appendix E.
In the regime \(\sigma_{L}\gg\sigma_{ij}\), \(i\neq j\), with \(\sigma_{L}=(\sigma_{xx}+\sigma_{yy}+\sigma_{zz})/3\), and using Eq. (15) the Nernst signal can be written as [38]
\[N=-\frac{\pi^{2}}{3}\frac{k_{B}^{2}T}{e}\frac{\partial\tan(\Theta_{H})}{ \partial\mu} \tag{18}\]
where \(\tan(\Theta_{H})=\sigma_{H}/\sigma_{L}\) is the Hall angle. For simple metals in the presence of an external magnetic field, a Sondheimer cancellation [17] can occur whereby the dominant contributions to the Hall and longitudinal conductivities have similar \(\mu\)-dependences, so that \(\partial\Theta_{H}/\partial\mu\) is small and \(N\) can be highly suppressed. This cancellation can be avoided by an energy-dependent scattering mechanism [39]. However, even with a constant relaxation time, the anomalous and topological contributions to \(N\) avoid Sondheimer cancellation because the Berry curvatures have opposite signs in spin-split bands (Appendix F).
In parallel to Eq. (17), we write the contributions to the Nernst effect \(N=N^{A}+N^{T}\) as
\[N^{A} =\bigg{(}\frac{k_{B}}{e}\bigg{)}\bigg{(}\frac{\lambda^{2}(\hbar/ \tau)k_{B}T}{J^{2}t^{2}}\bigg{)}\bigg{(}\bar{m}_{z}\frac{\text{Det}(\overline{ \chi})}{2\pi}\bigg{)}\mathcal{N}(\mu)\] \[N^{T} =-\bigg{(}\frac{k_{B}}{e}\bigg{)}\bigg{(}\frac{k_{B}T}{(\hbar/\tau) }\bigg{)}\bigg{(}n_{\text{top}}a^{2}\bigg{)}\mathcal{N}(\mu)\] \[\mathcal{N}(\mu) =\pi^{3}t^{3}\frac{\partial}{\partial\mu}\frac{\sum_{l}l\int d^{3}k \,\mathbb{G}(k)\,\partial_{\varepsilon_{l}}f^{0}[\varepsilon_{l}]}{\sum_{l} \int d^{3}k\,|\mathbf{\nabla}_{k}\varepsilon_{l}|^{2}\,\partial_{\varepsilon_{l}}f^{0}[ \varepsilon_{l}]} \tag{19}\]
\(\mathbb{G}(\mathbf{k})\) is defined in (12), and \(\mathcal{N}(\mu)\) is a dimensionless function of \(\mu/t\) and \(J/t\) and describes the chemical potential or density dependence of \(N\).
**Model Calculations.** Given an arbitrary band structure, one can use equations (17) and (19) to compute the
thermoelectric conductivity and Nernst signals. As illustrative examples, we calculate these transport signals in 2D for a system with a parabolic dispersion and for a tight binding model.
Consider parabolic bands with arbitrary SOC in 2D. To calculate \(\mathfrak{A}(\mu)\), we use \(\varepsilon_{\pm}(\mathbf{k})=\varepsilon_{0}+a^{2}k^{2}\pm J\) and Eq. (17) to find that \(\mathfrak{A}(\mu)=2\pi^{2}/3\left(\Theta[\mu-(\varepsilon_{0}-J)]-\Theta[\mu-(\varepsilon_{0}+J)]\right)\) (see dotted lines in Fig. 2(a)). The non-analytic structure in \(\mathfrak{A}(\mu)\) occurs at the band edges. For the Nernst signal, \(\mathcal{N}(\mu)=-4\pi^{3}tJ/(3\mu^{2})\,\Theta(\mu-(\varepsilon_{0}+J))\), which is nonzero only when electron states in both bands are occupied (see Fig. 2(b)).
Fig. 2a shows \(\mathfrak{A}(\mu)\) and Fig. 2b shows \(\mathcal{N}(\mu)\), calculated for nearest-neighbor hopping on the two-dimensional square lattice: \(\varepsilon(\mathbf{k})=-2t(\cos(k_{x}a)+\cos(k_{y}a))\). The quadratic band approximation with \(\varepsilon_{0}=-4t\) is marked by the dashed red lines. The non-analytic jumps in \(\mathfrak{A}(\mu)\) and \(\mathcal{N}(\mu)\) are due to the non-analyticity of the density of states as the chemical potential crosses the band edge.
Near the Van Hove singularities, \(\mathfrak{A}(\mu)\) and \(\mathcal{N}(\mu)\) are sharply enhanced, as is commonly recognized in thermoelectric transport [40; 41]. In fact, an enhancement appears wherever the density of states changes rapidly with the chemical potential.
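The \(\mu\)-dependence just described can be reproduced, up to the overall normalization and dimensional factors of Eq. (17), with a brute-force Brillouin-zone sum at small temperature. The sketch below does this for the square-lattice dispersion; all parameter values are illustrative.

```python
import numpy as np

t, a, J, kT = 1.0, 1.0, 0.5, 0.02            # illustrative parameters

k = np.linspace(-np.pi, np.pi, 400, endpoint=False)
KX, KY = np.meshgrid(k, k)
eps = -2.0 * t * (np.cos(KX * a) + np.cos(KY * a))

# G(k) of Eq. (12) for this dispersion (the inverse-mass tensor is diagonal)
vx = 2.0 * t * a * np.sin(KX * a)
vy = 2.0 * t * a * np.sin(KY * a)
mxx = 2.0 * t * a**2 * np.cos(KX * a)
myy = 2.0 * t * a**2 * np.cos(KY * a)
Gk = (-vx**2 * myy - vy**2 * mxx) / (a**2 * t**3)

def df_deps(E, mu):
    """Energy derivative of the Fermi function, -1/(4 kT cosh^2((E-mu)/2kT))."""
    x = np.clip((E - mu) / (2.0 * kT), -60.0, 60.0)
    return -1.0 / (4.0 * kT * np.cosh(x) ** 2)

def band_sum(mu):
    """Sum over the spin-split bands eps_l = eps +/- J of l * <df/deps * G(k)>."""
    return sum(l * np.mean(df_deps(eps + l * J, mu) * Gk) for l in (+1, -1))

mus = np.linspace(-5.0, 5.0, 201)
vals = np.array([band_sum(m) for m in mus])
A_mu = np.gradient(vals, mus)    # proportional to the dimensionless A(mu)
print("largest |A(mu)| found near mu =", mus[np.argmax(np.abs(A_mu))])
```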
**Discussion.** Through a controlled semiclassical analysis in the regime where \(\lambda<J\ll E_{F}\), \(a\ll l\ll L_{s}\), we have shown that the thermoelectric conductivity is composed of the sum of an anomalous contribution, proportional to the average magnetization, and a topological contribution, proportional to the topological charge density. In addition, we have shown that a Mott relation holds even in the presence of a nonzero topological charge density, thus justifying a commonly held assumption [20; 21; 22; 30; 31; 41]. As a consequence of the Mott relation, the thermoelectric conductivity and Nernst signal are enhanced at points in the band structure in which there is a rapidly changing density of states, such as near van Hove singularities.
We estimate the order of magnitude of the Nernst signal by approximating Eq. (19) with 3D parabolic bands and find \(N^{T}\sim(k_{B}/e)(k_{F}\ell)(k_{B}T/E_{F})(n_{\text{top}}a^{2})\mathcal{N}(\mu)\). In a skyrmion material with \(a/L_{s}=1/100\), \(n_{\text{top}}a^{2}\approx 10^{-3}\). We use \(k_{B}T/E_{F}\approx 10^{-4}\), and \(10<k_{F}\ell<100\). For \((tJ/\mu^{2})\approx 1/3\), we find \(\mathcal{N}(\mu)\approx 100\). We thus estimate \(N^{T}\) in the range \((10^{-3}\) - \(10^{-2})\left(k_{B}/e\right)\) which is equal to 86 - 860 nV/K, similar to what has been measured in experiments; see e.g., [21].
We have considered the regime \(a\ll\ell\ll L_{s}\) in the analysis above. The question of how to solve the Boltzmann equation when \(a\ll L_{s}\ll\ell\), the analog of the "strong field limit", is an open question. We have focused here on the intrinsic part of the anomalous thermoelectric response arising from k-space Berry curvature, known to be the dominant contribution to the anomalous Hall response in many materials. The question of how extrinsic effects like skew and side-jump scattering impact the thermoelectric response has not been explored. These are all important questions for future research.
**Acknowledgements.** We thank Nishchhal Verma for insightful discussion. This work was supported by the NSF Materials Research Science and Engineering Center Grant DMR-2011876. Z.A. was also supported by the Ohio State University President's Postdoctoral Scholars Program.
|
2308.14610
|
PolarRec: Radio Interferometric Data Reconstruction with Polar
Coordinate Representation
|
In radio astronomy, visibility data, which are measurements of wave signals
from radio telescopes, are transformed into images for observation of distant
celestial objects. However, these resultant images usually contain both real
sources and artifacts, due to signal sparsity and other factors. One way to
obtain cleaner images is to reconstruct samples into dense forms before
imaging. Unfortunately, existing reconstruction methods often miss some
components of visibility in frequency domain, so blurred object edges and
persistent artifacts remain in the images. Furthermore, the computation
overhead is high on irregular visibility samples due to the data skew. To
address these problems, we propose PolarRec, a transformer-encoder-conditioned
reconstruction pipeline with visibility samples converted into the polar
coordinate representation. This representation matches the way in which radio
telescopes observe a celestial area as the Earth rotates. As a result,
visibility samples distribute in the polar system more uniformly than in the
Cartesian space. Therefore, we propose to use radial distance in the loss
function, to help reconstruct complete visibility effectively. Also, we group
visibility samples by their polar angles and propose a group-based encoding
scheme to improve the efficiency. Our experiments demonstrate that PolarRec
markedly improves imaging results by faithfully reconstructing all frequency
components in the visibility domain while significantly reducing the
computation cost in visibility data encoding. We believe this high-quality and
high-efficiency imaging of PolarRec will better facilitate astronomers to
conduct their research.
|
Ruoqi Wang, Zhuoyang Chen, Jiayi Zhu, Qiong Luo, Feng Wang
|
2023-08-28T14:26:15Z
|
http://arxiv.org/abs/2308.14610v2
|
A Transformer-Conditioned Neural Fields Pipeline with Polar Coordinate Representation for Astronomical Radio Interferometric Data Reconstruction
###### Abstract
In radio astronomy, visibility data, which are measurements of wave signals from radio telescopes, are transformed into images for observation of distant celestial objects. However, these resultant images usually contain both real sources and artifacts, due to signal sparsity and other factors. One way to obtain cleaner images is to reconstruct samples into dense forms before imaging. Unfortunately, existing visibility reconstruction methods may miss some components of the frequency data, so blurred object edges and persistent artifacts remain in the images. Furthermore, the computation overhead is high on irregular visibility samples due to the data skew. To address these problems, we propose Polar-Rec, a reconstruction method for interferometric visibility data, which consists of a transformer-conditioned neural fields pipeline with a polar coordinate representation. This representation matches the way in which telescopes observe a celestial area as the Earth rotates. We further propose Radial Frequency Loss function, using radial coordinates in the polar coordinate system to correlate with the frequency information, to help reconstruct complete visibility. We also group visibility sample points by angular coordinates in the polar coordinate system, and use groups as the granularity for subsequent encoding with a Transformer encoder. Consequently, our method can capture the inherent characteristics of visibility data effectively and efficiently. Our experiments demonstrate that PolarRec markedly improves imaging results by faithfully reconstructing all frequency components in the visibility domain while significantly reducing the computation cost.
## Introduction
In radio astronomy, visibility refers to radio signal data from celestial objects, obtained by radio telescopes. These data are represented as complex values in the _uv-plane_, a geometric plane defined for interferometric observations. Visibility data are subsequently converted into images through _imaging_ for further analysis. However, these images, known as _dirty images_, are often dominated by artifacts [12]. This phenomenon is due to limitations in telescope configurations, under which only part of the uv-plane is sampled. Therefore, visibility data normally have to be reconstructed before being utilized in scientific analysis. In this paper, we propose a visibility reconstruction method, aiming to reconstruct the real sky by recovering all visibility components in the uv-plane effectively and efficiently.
Traditional methods first transfer the sparse visibility data into dirty images through the imaging process and then reconstruct the dirty images to _clean images_. The process is shown in Figure 1 (a). In contrast, some recent deep-learning-based studies [13, 14] have proposed to first do inpainting on the visibility data to reconstruct the sparse samples to dense coverage, and then perform imaging to obtain the clean image, as shown in Figure 1 (b). In order to produce a high-fidelity sharp image, it is crucial to densely sample the full visibility domain [14]. In this paper, we adopt the reconstruction-and-imaging processing flow.
Existing methods for reconstructing visibility data face challenges in both effectiveness and efficiency. Specifically, the first challenge is to effectively capture all visibility components within the uv-plane. For example, [12] use a convolutional neural network that is based on pixels and grids, resulting in discontinuous information in the reconstructed visibility. In comparison, Wu et al. [14] use neural fields to address the continuity problem, but their method can accurately restore only the low-frequency part of visibility (located near the center of the uv-plane), largely missing the high-frequency portion (found far from the uv-plane center). Such discontinuity and incompleteness in the visibility domain result in blurred edges of observed objects, disappearance of faint astronomical sources, and persistence of artifacts in the resultant images. The second major issue is the inefficiency of current strategies. For instance, Radionets (Schmidt et al., 2022) encode visibility data together with an excessive amount of zero-inpainting, causing a large amount of unnecessary computation. Wu et al. (Wu et al., 2022) embed each visibility sample point as a token in their Transformer encoder (Vaswani et al., 2017) so that all points attend to each other, which incurs quadratic computation cost in the number of sample points, too high for real applications (Dosovitskiy et al., 2021).
Figure 1: Two visibility data processing flows.
To address these challenges, we explore the impact of different frequency components of visibility on imaging results and then propose PolarRec, which leverages the polar representation of sample points to enhance the reconstruction performance. Our key observation is that high- and low-frequency signals are distributed on the uv-plane in accordance with their distances from the center and visibility samples are obtained by telescopes as the Earth rotates. Therefore, we propose to use the radial coordinate to associate with the frequency information and the angular coordinate to group sample points, both in the polar coordinate system.
Specifically, we design a weighting scheme based on the radial coordinates of visibility points and integrate our weighting scheme with the Focal Frequency Loss function (Jiang et al., 2021). Our weighted loss function, Radial Frequency Loss, associates frequency with radial coordinates, enabling effective reconstruction of visibility data, on both high- and low-frequency components, whereas existing work recovers mainly low-frequency components. Consequently, our method produces sharper, more detailed imaging results. Furthermore, by grouping visibility points according to their angular coordinates and performing encoding at the group level, our Transformer encoder is more efficient than single-visibility-point encoding since each group of visibility points is processed to be one token for the Transformer encoder. This group encoding improves computation efficiency, making the use of Transformer encoders for visibility encoding more practical and scalable.
In summary, our main contributions are as follows:
* We propose Radial Frequency Loss (RFL) function, which incorporates the radial coordinate in the polar representation of visibility data. This approach enables the model to effectively capture the complete visibility data, especially the high-frequency components.
* We design an intuitive and effective grouping method based on the angular coordinate in the polar representation of visibility data. Utilizing these angular groups as the granularity for subsequent encoding by the Transformer improves computation efficiency significantly.
We have experimentally evaluated our proposed PolarRec, including an overall comparison with other state-of-the-art methods, ablation studies on individual weighting techniques, and tests on group sizes and grouping techniques. Our experimental results confirm that our method can faithfully reconstruct all frequency components of visibility, significantly enhancing the quality of the resultant images. Also, our method effectively reduces the computation cost of visibility encoding while preserving the high quality of resultant images. This efficiency improvement makes it more practical to use Transformer encoders for visibility data encoding in real-world applications.
## Background and Related Work
### Very Long Baseline Interferometry (VLBI)
In radio astronomy, using radio interference signals to image distant astronomical sources requires telescopes of very large aperture (Bouman et al., 2018), because the angular resolution of a telescope is inversely proportional to the diameter. A major observation technique is the Very Long Baseline Interferometry (VLBI), which uses multiple radio telescopes spreading over the globe to form a virtual Earth-sized telescope. The radio waves from astronomical sources are recorded separately at individual telescopes. Then, these signals are cross-correlated for all pairs of antennas at a central location, generating _visibility_ data. A VLBI observation is typically performed for hours to measure as many points in the uv-plane as possible. However, the measurement results remain sparse due to the limited number of antennas (Thompson et al., 2017; Bouman et al., 2018). Consequently, sparse-to-dense reconstruction on visibility data is necessary to improve the imaging quality.
### Interferometric Imaging
Visibility data, represented as complex values, is the result of a Fourier transform of the sky's brightness distribution (Liu et al., 2022). _Imaging_ converts visibility data into images, which can be analyzed to provide insights about the observed celestial bodies (Thompson et al., 2017). In the imaging process, an inverse Fourier transform maps the \((u,v)\) coordinates from the Fourier domain to \((l,m)\) coordinates in the image domain (Wu et al., 2022). The transformation can be described as:
\[I(l,m) = \int_{u}\int_{v}e^{2\pi i(ul+vm)}V(u,v)dudv. \tag{1}\]
In this equation, \(V(u,v)\) is the visibility data in Fourier space, and \(I(l,m)\) represents the intensity distribution in the image domain.
### Radio Interferometric Data Reconstruction
The number of radio antennas in VLBI is limited and the antennas are non-uniformly distributed on the ground. As a result, visibility data are sparsely sampled and irregularly scattered (Thompson et al., 2017; Bouman et al., 2018). When the inverse Fourier transform is applied to this sparse data, the resultant dirty image is dominated by artifacts (Schmidt et al., 2022). Therefore, these data must be further reconstructed to recover the real sky for subsequent scientific analysis (Wu et al., 2022).
There are two ways to reconstruct radio interferometric data to obtain clean images, reconstruction in the visibility domain and reconstruction in the image domain. A number of existing methods [1, 1, 1, 2] first do imaging to transfer the sparse visibility into dirty images and then reconstruct the dirty images to clean images. Most recently two papers have proposed to perform sparse-to-dense inpainting in the visibility domain first and then do imaging [2, 2]. Specifically, Schmidt et al. [2] used Radionets with a convolutional neural network structure to generate reproducible clean images on simulated data, whereas Wu et al. [22] performed sparse-to-dense inpainting in the spectral domain with neural fields, outperforming other established reconstruction techniques.
In our experiments, we employed various state-of-the-art deep learning methods for visibility reconstruction [23, 24, 25]. We found that these approaches could successfully recover the general structure of the most prominent object. However, certain details and faint surrounding sources were missing, and some artifacts persisted in the reconstructed images. These limitations are due to the incomplete reconstruction of visibility components. For example, both Radionets [24] and U-Nets [12] were based on pixels and grids, so the reconstruction of visibility data was not continuous. In comparison, Wu et al. used Neural Interferometry [23] to reconstruct continuous visibility, but it could recover the low-frequency components only, missing the high-frequency ones. Furthermore, embedding each visibility point as a token results in a large number of tokens to be processed by the Transformer encoder, leading to a quadratic increase of computation cost [23]. To the best of our knowledge, no previous studies have managed to reconstruct all components in visibility data or increase the granularity of visibility data by Transformer encoders for efficiency.
## Our Method
In this section, we first investigate the relation between visibility components and corresponding imaging results. After that, we present PolarRec, which adopts polar coordinate representation in visibility reconstruction. In PolarRec, we design Radial Frequency Loss to incorporate a weighting scheme based on the radial coordinate within the uv-plane. Moreover, we group the visibility points according to their angular coordinates and then extract grouping tokens for the subsequent Transformer encoding. An overview of our method is presented in Figure 3.
### Imaging Results of Visibility
In radio interferometry, visibility data plays a vital role in generating high-fidelity images of celestial objects. The imaging process is the inverse Fourier transformation of visibility data to construct the brightness distribution of the observed sky. Consequently, missing or incomplete visibility components can significantly degrade the imaging result, leading to artifacts and loss of crucial information about the object's structure. Therefore, we first investigate the impact of missing visibility components on the final imaging output.
We explore the impact of visibility components in various frequency regions by applying standard band-limiting operations [10] and analyzing the effects of individual components on the imaging results. As shown in Figure 2, loss of high-frequency data due to a low-pass filter (Columns low-pass-1 and low-pass-2) results in blur and artifacts and causes the vanishing of weak sources in the imaging results. Comparing low-pass-1 and low-pass-2, we can see that the larger the radius of the retained portion in the frequency domain, the clearer the resulting image is, and the fewer artifacts there are. In comparison, when low-frequency data are absent due to a high-pass filter (Columns high-pass-1 and high-pass-2), the overall quality of the image declines, but the object edges are clear, and dim or small sources around the main observed object are retained. Last, the band-stop filter (Column Band-stop) also causes artifacts and blur in imaging results.
In summary, different frequency regions cause distinct imaging effects. This observation suggests that recovering all missing visibility components could enhance the quality of imaging result. More specifically, if a resultant image has blurred edges or misses dim light sources, it is probably due to poor reconstruction of the high-frequency components of the visibility data.
### Polar Coordinate Representation
Our utilization of the polar coordinate representation is a natural and intuitive approach, because visibility sampling is based on Earth's rotation and high- and low-frequency visibility components are distributed on the uv-plane according to their distances from the origin of the plane.
In the uv-plane, we convert \((u,v)\) coordinates to polar coordinates, denoted as \((r(u,v),\theta(u,v))\):
\[r(u,v)=\sqrt{u^{2}+v^{2}} \tag{2}\]
\[\theta(u,v)=\text{arctan2}(v,u) \tag{3}\]
where \(r(u,v)\) represents the radial distance from the origin, and \(\theta(u,v)\) represents the angle of the vector from the positive \(u\)-axis.
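The conversion in Eqs. (2)-(3) can be transcribed directly; the short NumPy sketch below is purely illustrative (the example coordinate values are hypothetical).

```python
import numpy as np

def to_polar(u, v):
    """Convert uv-plane coordinates to polar coordinates, following Eqs. (2)-(3)."""
    r = np.sqrt(u ** 2 + v ** 2)      # radial distance from the uv-plane origin
    theta = np.arctan2(v, u)          # angle measured from the positive u-axis
    return r, theta

# Example: polar coordinates of a few sampled visibility points.
u = np.array([0.0, 1.0, -1.0])
v = np.array([1.0, 1.0, 0.0])
r, theta = to_polar(u, v)             # r = [1, sqrt(2), 1], theta = [pi/2, pi/4, pi]
```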
Figure 2: Effects of band limiting.
### Radial Frequency Loss
Our Radial Frequency Loss (RFL) is built upon the Focal Frequency Loss (FFL) [10]. Following FFL, we compute the weight matrix \(w_{1}(u,v)\) to down-weight easy visibility components (components whose predicted values are close to the ground truth):
\[w_{1}(u,v)=|V_{r}(u,v)-V_{p}(u,v)|^{\alpha} \tag{4}\]
where \(V_{r}(u,v)\) and \(V_{p}(u,v)\) represent the real and predicted visibility respectively, and \(\alpha\) is a scaling factor.
We introduce an additional weight \(w_{2}(u,v)\), computed from \(r(u,v)\), to make our model pay more attention to high-frequency components of the visibility during reconstruction. The weight term \(w_{2}(u,v)\) is calculated as follows:
\[w_{2}(u,v)=\left(\frac{r(u,v)}{\max(r(u,v))}+1\right)^{\beta} \tag{5}\]
where \(\frac{r(u,v)}{\max(r(u,v))}\) normalizes the radial coordinate, ensuring the weight is more significant for points farther from the center, corresponding to higher frequencies in the visibility data. Adding 1 prevents any weights from becoming zero, maintaining the influence of all visibility components during the learning process. \(\beta\) is a scaling factor.
The final weight \(w(u,v)\) is computed as:
\[w(u,v)=\left(\frac{r(u,v)}{\max(r(u,v))}+1\right)^{\beta}|V_{r}(u,v)-V_{p}(u,v )|^{\alpha} \tag{6}\]
The final form of the Radial Frequency Loss is then given by:
\[\text{RFL}=\frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}w(u,v)|V_{r}(u,v)-V_{p} (u,v)|^{2} \tag{7}\]
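A compact NumPy transcription of Eqs. (4)-(7) over a dense visibility grid is sketched below. The defaults \(\alpha=\beta=1\) follow the implementation details reported later; placing the uv origin at the grid center is an assumption made for illustration, and the paper's training code operates on PyTorch tensors rather than NumPy arrays.

```python
import numpy as np

def radial_frequency_loss(V_pred, V_true, alpha=1.0, beta=1.0):
    """Radial Frequency Loss of Eqs. (4)-(7) over a dense M x N visibility grid.

    V_pred, V_true: complex arrays of predicted and ground-truth visibility.
    w1 down-weights components that are already well predicted (focal term, Eq. 4);
    w2 up-weights high-frequency components far from the uv-plane center (Eq. 5).
    """
    M, N = V_true.shape
    u = np.arange(M) - M // 2                        # assume the uv origin at the grid center
    v = np.arange(N) - N // 2
    r = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)   # radial coordinate r(u, v), Eq. (2)
    err = np.abs(V_true - V_pred)
    w1 = err ** alpha                                # Eq. (4)
    w2 = (r / r.max() + 1.0) ** beta                 # Eq. (5)
    return np.mean(w1 * w2 * err ** 2)               # Eqs. (6)-(7)
```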
### Encoding by Angular-Coordinate-Based Groups
As shown on the left in Figure 3, the sparsely sampled visibility data is in the form of \(\{u_{s},v_{s},V\left(u_{s},v_{s}\right)\}\), where \((u_{s},v_{s})\) are the coordinates at which a measurement is sampled and \(V\left(u_{s},v_{s}\right)\) is the complex value of the sample points.
First, we divide the sample points into \(m\) groups according to their angular coordinate \(\theta(u_{s},v_{s})\), as shown on the left of Figure 3. To integrate each visibility value and its corresponding position, we encode each sample individually using positional embedding (PE\((u_{s},v_{s})\) in Figure 3). Specifically, we encode the positional information of a sample point using Random Fourier Embedding [14]. After that, the positional embedding \(\text{PE}(u_{s},v_{s})\) and the embedding of complex value of visibility \(V\left(u_{s},v_{s}\right)\) are concatenated to form visibility tokens \(V^{\prime}\). Denote the sparse visibility tokens as \(V^{\prime}=[v_{1}^{\prime};v_{2}^{\prime};v_{3}^{\prime};\dots v_{n}^{\prime}]\), where \(V^{\prime}\in\mathbb{R}^{n\times d}\), \(n\) is the number of sample measurement points and \(d\) is the number of dimensions of visibility tokens.
Then, we apply a Multi-Layer Perceptron (MLP) mapping layer and averaging aggregating for intra-group encoding, as shown in the middle of Figure 3. The encoding result is \(\hat{V}=[\hat{v}_{1};\hat{v}_{2};\hat{v}_{3};\dots\hat{v}_{m}]\), where \(\hat{V}\in\mathbb{R}^{m\times d}\).
For each group \(i\), we compute the group token \(\hat{v}_{i}\) as follows:
\[\hat{v}_{i}=\text{Avg}\left(\text{MLP}(v_{j})\right),\;i=1\;\text{to}\;m,\; v_{j}\in\;\text{group}\;i. \tag{8}\]
In intra-group encoding (Algorithm 1), we first sort the tokens \(V^{\prime}\) of sample points according to their angular coordinates. After mapping these tokens through MLP, we use adaptive average pooling to compute the group tokens \(\hat{V}\). For data collected with the same telescope configuration, the process of sorting based on angular coordinates needs to be performed only once, as the locations of sample points remain unchanged.
```
1: Given input tokens \(V^{\prime}\)
2:\(V^{\prime\prime}\leftarrow\text{SortByAngle}(V^{\prime},\theta(u,v))\)
3:\(V^{\text{MLP}}\leftarrow\text{MLP}(V^{\prime\prime})\)
4:\(\hat{V}\leftarrow\text{AdaptiveAvgPooling}(V^{\text{MLP}},m)\)
5:return\(\hat{V}\)
```
**Algorithm 1** Intra-Group Encoding
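A PyTorch sketch of the intra-group encoder follows. The sort-map-pool structure mirrors Algorithm 1, and the 2-layer MLP with Leaky ReLU matches the implementation details reported later, while the token dimension and the number of groups used here are assumptions for illustration.

```python
import math
import torch
import torch.nn as nn

class IntraGroupEncoder(nn.Module):
    """Sketch of Algorithm 1: sort tokens by polar angle, map with an MLP, pool into groups."""

    def __init__(self, token_dim=256, num_groups=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(token_dim, token_dim), nn.LeakyReLU(),
            nn.Linear(token_dim, token_dim),
        )
        self.pool = nn.AdaptiveAvgPool1d(num_groups)   # AdaptiveAvgPooling(., m)

    def forward(self, tokens, theta):
        # tokens: (n, d) visibility tokens V'; theta: (n,) angular coordinates of the samples
        order = torch.argsort(theta)                    # SortByAngle
        x = self.mlp(tokens[order])                     # per-token MLP mapping, (n, d)
        # pool over the token axis to obtain the m group tokens, (m, d)
        return self.pool(x.t().unsqueeze(0)).squeeze(0).t()

# Example with assumed sizes: 1660 sampled points pooled into 128 group tokens.
enc = IntraGroupEncoder(token_dim=256, num_groups=128)
groups = enc(torch.randn(1660, 256), torch.rand(1660) * 2 * math.pi)  # -> (128, 256)
```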
Finally, the group tokens \(\hat{V}\) go through inter-group encoding by a Transformer encoder. We base our encoder design on Transformer structures similar to prior work (Vaswani et al. 2017; Dosovitskiy et al. 2021; Wu et al. 2022). The input group tokens are then transformed into latent tokens by multi-headed self-attention layers.
Figure 3: An overview of our method. Sparse visibility data \(V\left(u_{s},v_{s}\right)\) are grouped by the angular coordinate and concatenated with positional embedding \(\text{PE}\left(u_{s},v_{s}\right)\) to obtain \(V^{\prime}\). Then \(V^{\prime}\) passes through two encoding layers: intra-group encoding to generate group tokens and inter-group encoding by a Transformer encoder. The encoded output then conditions the predicted visibility generation in the neural field. The final output is compared with the ground truth to compute the Radial Frequency Loss.
### Neural Field Conditioning
Our method follows the conditional neural field pipeline proposed by Wu et al. (Wu et al. 2022). Given the sparsely sampled visibility \(V(u_{s},v_{s})\), our objective is to determine a neural field \(\Phi\), fulfilling a constraint set by the function \(F\):
\[F(\Phi(u_{s},v_{s}),V(u_{s},v_{s}))=0. \tag{9}\]
We approximate this implicit function \(\Phi(u,v)\) with an MLP of \(l\) layers parameterized by weights \(\Theta_{m}\).
We use the output tokens of inter-group encoding \(T=[t_{1};t_{2};t_{3};\ldots t_{l}]\) to extend the neural field with a learning-based prior, with each token corresponding to an MLP layer. Using the FiLM conditioning (Perez et al. 2018), the output tokens modulate the \(i\)th layer's activation \(\mathbf{x_{i}}\) by:
\[\mathrm{FiLM}(\mathbf{x_{i}})=\gamma(t_{i})\odot\mathbf{x_{i}}+\beta(t_{i}), i\in 1\text{ to }l, \tag{10}\]
where \(\gamma\) and \(\beta\) are simple affine layers with non-linearities and \(\odot\) signifies a Hadamard product (Horn 1990).
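A minimal PyTorch sketch of one FiLM-modulated layer of the neural field MLP, following Eq. (10). The token dimension of 1024 matches the implementation details reported later, whereas the hidden width and the inner ReLU are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class FiLMLayer(nn.Module):
    """One neural-field MLP layer whose activations are modulated by a group token, Eq. (10)."""

    def __init__(self, hidden_dim=256, token_dim=1024):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, hidden_dim)
        self.gamma = nn.Sequential(nn.Linear(token_dim, hidden_dim), nn.LeakyReLU())
        self.beta = nn.Sequential(nn.Linear(token_dim, hidden_dim), nn.LeakyReLU())

    def forward(self, x, t):
        # x: (batch, hidden_dim) layer activations x_i; t: (token_dim,) conditioning token t_i
        x = torch.relu(self.linear(x))
        return self.gamma(t) * x + self.beta(t)        # elementwise (Hadamard) modulation

# Example: modulate activations for a batch of dense (u_d, v_d) query points.
layer = FiLMLayer()
out = layer(torch.randn(4096, 256), torch.randn(1024))  # -> (4096, 256)
```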
The MLP parameters \(\Theta_{m}\) and the encoder parameters \(\Theta_{e}\) are jointly optimized during training:
\[\begin{split}\min_{\Theta_{m},\Theta_{e}}\text{{RFL}}\left(\Phi \left(u_{d},v_{d};\{T\}\,;\Theta_{m}\right),V_{\text{gt}}(u_{d},v_{d})\right),\\ \text{with }\left\{T\right\}=\Psi\left(\left\{u_{s},v_{s},V(u_{s},v _{s})\right\};\Theta_{e}\right),\end{split} \tag{11}\]
where \((u_{d},v_{d})\) are the dense coordinates in visibility plane and \(V_{gt}\left(u_{d},v_{d}\right)\) is the ground truth of visibility inpainting.
## Experiments
In this section, we conduct a comprehensive evaluation of our method in comparison with several classic and recent state-of-the-art methods to demonstrate the overall improvement achieved by our approach. We also designed experiments to explore the effects of different grouping methods and group sizes on the reconstruction results. In addition, we conduct ablation experiments to study the effects of individual weighting techniques in RFL.
### Experimental Setup
**Platform**. We conduct all experiments on a server with two AMD EPYC 7302 CPUs, 128GB main memory, and eight Nvidia RTX 3090 GPUs each with 24GB device memory. The server is equipped with an NVME 2TB SSD and four 1TB SATA hard disks. The operating system is Ubuntu 20.04. Our model is implemented in PyTorch 1.8.1 (Paszke et al. 2019).
**Datasets.** In our studies, we use the Galaxy10 DECals dataset (Henry 2021), consistent with the latest research in astronomical interferometric visibility reconstruction (Wu et al. 2022). This dataset comprises 17,736 galaxy images, sourced from the DESI Legacy Imaging Surveys (Dey et al. 2019). This in turn, merges data from the Beijing-Arizona Sky Survey (BASS) (Zou et al. 2017), the DECam Legacy Survey (DECaLS) (Blum et al. 2016), and the Mayall z-band Legacy Survey (Silva et al. 2016). Using these images as a reference, we employ the eht-imaging toolkit (Chael et al. 2019; Chael et al. 2018) to produce visibility data represented by \(\{u_{s},v_{s},V\left(u_{s},v_{s}\right)\}\). The parameters for observation were adjusted to mirror an 8-telescope Event Horizon Telescope (EHT) setup (Wu et al. 2022), with the EHT being one of the most prominent arrays leveraging VLBI techniques. Each image has 1660 visibility points sampled and the image dimensions are set at 256 \(\times\) 256 pixels. Following the methods of Wu et al. (Wu et al. 2022), we apply the discrete Fourier transform (DFT) technique to create dirty images out of the visibility data. We then randomly split 5000 images for testing, with the remainder being used for training.
**Implementation Details**. We use a 2-layer MLP with a Leaky ReLU activation in the intra-group encoder. We then use an 8-layer MLP in the neural field, and only the first 8 output tokens with the dimension of 1024 from the Transformer encoder are used to condition this 8-layer MLP. The two scaling factors \(\alpha\) and \(\beta\) in the Radial Frequency Loss are both set to 1.
**Methods under Comparison**. We compare our method with three other methods for radio interferometry reconstruction, including the classic method CLEAN (Hogbom 1974), which is for dirty image reconstruction, and two recent deep learning-based approaches for visibility data reconstruction - Radionets (Schmidt et al. 2022) and Neural Interferometry (Wu et al. 2022). We use the original code of these methods and follow the parameter setting in the original code for the best performance. All these methods are implemented on PyTorch. In addition, we test the U-Net (Ronneberger, Fischer, and Brox 2015) to reconstruct visibility data as supplementary baselines.
**Evaluation Metrics**. To measure differences in frequency data, we use the Log Frequency Distance (LFD) (Jiang et al. 2021), which is defined as follows:
\[\text{LFD}=\log\left[\frac{1}{MN}\left(\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}|V_{r }(u,v)-V_{p}(u,v)|^{2}\right)+1\right] \tag{12}\]
where \(V_{r}(u,v)\) and \(V_{p}(u,v)\) represent the real and predicted visibility respectively. A lower LFD is better.
To evaluate the quality of images after imaging the reconstructed visibility, we employ two common metrics: Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). PSNR quantifies the overall image quality of the resultant images, and SSIM quantifies the perceptual similarity to the ground truth images. We compute these two metrics using the scikit-image package (Singh and Singh 2019), which follows the formulas presented by Hore et al. (Hore and Ziou 2010). A higher PSNR and SSIM is better.
To evaluate the efficiency, we use inference time and Floating Point Operations (FLOPs). The inference time in the experiments is for a batch size of 32 on a single Nvidia RTX 3090. The FLOPs number is also computed in the same setting. We compare time performance between our method and Neural Interferometry only, because only these two are transformer-encoder based.
### Overall Comparison
We calculate the LFD, PSNR, and SSIM values for all test images reconstructed using our method and other methods, presenting both mean values and standard deviations. These results are tabulated in Table 1. The results show that PolarRec consistently outperforms the other methods in all three measures, LFD, PSNR and SSIM, underscoring the effectiveness of our reconstruction method.
We also present some representative reconstructed visibility and the corresponding images on the Galaxy10(DECals) dataset in Figure 4, including all deep-learning-based methods under comparison as well as the dirty images and the ground truth images of the real sky. Comparing the dirty images in Figure 4 (a) and the ground truth in Figure 4 (f), we find there are many artifacts and distortion of object structure in dirty images because of the sparsity of the visibility.
As shown in Figure 4 (b), reconstructions of the visibility by U-Net [12] are the worst, with artifacts dominating the resultant images. In Figure 4 (c), Radionets [15] are able to reconstruct more visibility content than U-Net. However, the reconstruction is discontinuous. Although Radionets reduce artifacts in the imaging results, it cannot distinguish between separate sources that are in close proximity. Furthermore, many faint astronomical sources are missing in the reconstructed images. In contrast, as shown in Figure 4 (d), Neural Interferometry [23] (denoted Neu-Int) can continuously and realistically reconstruct the low-frequency components of the visibility, but misses much information from the high-frequency components, leading to a loss of details in the reconstruction.
The results of PolarRec (Figure 4 (e)) show that our method can effectively reconstruct more complete and continuous visibility data than others. The resultant imaging results not only eliminate artifacts but also restore the true structure of astronomical sources while preserving details and small faint sources.
### Effect of Group Size
We vary the group size used in the encoding process, and use Floating Point Operations (FLOPs) to measure the computation cost. LFD, PSNR and SSIM are used to assess the reconstruction quality.
As illustrated in Figure 5, there is a sharp drop in both FLOPs and inference latency as the group size increases from 1 to 16. In contrast, the image quality in LFD, SSIM, and PSNR is almost constant. Between a group size of 64 to 128, there is a slight decrease in both PSNR and SSIM, while LFD shows a slight increase, implying that increasing group size beyond 64 might compromise the output quality. Moreover, when the group size is set to 1, it is the same as encoding at the granularity of individual points and the computation cost is the same as Neural Interferometry (Wu et al., 2022). The results indicate that encoding with group granularity as input to the transformer encoder is significantly more efficient than encoding at the point granularity.
### Ablation Experiment of Weights in RFL
This ablation experiment aims to examine the significance of two weight matrices \(w_{1},w_{2}\) in the Radial Frequency Loss by omitting them one at a time. The results in Table 2 confirm that the full Radial Frequency Loss is the best. Removing either component \(w_{1}\), or \(w_{2}\) results in reduced performance across all metrics.
### Effect of grouping method
We also vary the grouping method and measure its performance impact on PolarRec. We implemented three grouping strategies: (1) grouping by clustering visibility points by their positions, (2) grouping by the radial coordinate, and (3) grouping by the angular coordinate. Each of these strategies was tested under various group sizes. The LFD and PSNR results are presented in Figure 6. Regardless of the group size, grouping the visibility points by angular coordinates always has the best performance. Especially when the group size increases, other methods tend to show a noticeable decline in PSNR and an increase in LFD, whereas grouping by the angular coordinate can effectively maintain faithful reconstruction results.
## Conclusion and Future Work
We have presented PolarRec, introducing a polar coordinate representation to the reconstruction of interferometric visibility. By adopting angular and radial coordinates of visibility points, our method can reconstruct the visibility data with both effectiveness and efficiency. Our results show that PolarRec markedly improves imaging outcomes by faithfully reconstructing all frequency components of visibility while significantly reducing the computation cost, making it a practical solution for radio interferometric reconstruction applications.
\begin{table}
\begin{tabular}{c|c|c|c} \hline \multirow{2}{*}{Setting} & \multicolumn{4}{c}{Metrics} \\ \cline{2-4} & LFD\(\downarrow\) & PSNR\(\uparrow\) & SSIM\(\uparrow\) \\ \hline \multirow{2}{*}{w/o \(w_{1}\)} & 0.727 & 25.459 & 0.8974 \\ & (\(\pm\) 0.228) & (\(\pm\) 2.613) & (\(\pm\) 0.0267) \\ \hline \multirow{2}{*}{w/o \(w_{2}\)} & 0.722 & 25.816 & 0.8963 \\ & (\(\pm\)0.272) & (\(\pm\)2.822) & (\(\pm\)0.0289) \\ \hline \multirow{2}{*}{Full RFL} & 0.658 & 26.225 & 0.9002 \\ & (\(\pm\)0.243) & (\(\pm\)2.751) & (\(\pm\)0.0268) \\ \hline \end{tabular}
\end{table}
Table 2: Results of ablation experiments on RFL
Figure 5: Effect of group size.
Figure 6: Effect of grouping method
In the future, we will extend our method to observations from a broader range of radio telescopes and extend this efficient visibility encoding process to tasks beyond reconstruction.
|
2309.03045
|
An Evaluation of Software Sketches
|
This work presents a detailed evaluation of Rust (software) implementations
of several popular sketching solutions, as well as recently proposed
optimizations. We compare these solutions in terms of computational speed,
memory consumption, and several approximation error metrics. Overall, we find a
simple hashing based solution employed with the Nitro sampling technique [22]
gives the best trade-off between memory, error and speed. Our findings also
include some novel insights about how to best combine sampling with Counting
Cuckoo filters depending on the application.
|
Roy Friedman
|
2023-09-06T14:39:18Z
|
http://arxiv.org/abs/2309.03045v1
|
# An Evaluation of Software Sketches
###### Abstract
This work presents a detailed evaluation of Rust (software) implementations of several popular sketching solutions, as well as recently proposed optimizations. We compare these solutions in terms of computational speed, memory consumption, and several approximation error metrics. Overall, we find a simple hashing based solution employed with the Nitro sampling technique [22] gives the best trade-off between memory, error and speed. Our findings also include some novel insights about how to best combine sampling with Counting Cuckoo filters depending on the application.
## 1 Introduction
Sketches are approximate, compact synopses that represent statistical features of large data sets and streams of events. Particularly, in this work we focus on software implementations of frequency estimation sketches. That is, sketches that answer queries about a given item's frequency in the dataset, or stream, and whose reply may include an estimation error bounded by some accuracy parameter \(\epsilon\). In some cases, the error bound only holds with a given probability \(1-\delta\).
Historically, sketches were designed to give the best possible memory-to-error trade-offs. This makes sense for hardware implementations of sketches, where computation speed is often a secondary issue due to hardware's inherent parallelism. In resource-constrained environments, being as memory-frugal as possible is also very important. Finally, the claim is that since SRAM memory is much smaller than DRAM, memory frugality is also an important characteristic of software implementations for obtaining fast execution in practice. That is, given the orders-of-magnitude difference in access times between SRAM and DRAM, it is important for a data structure to fit into the hardware cache and present a hardware-cache-friendly access pattern.
Yet, several works have shown that on modern processors, simple implementations often outperform complex data structures, and the cost of calculating multiple hash functions can be detrimental to performance [2, 3, 19, 22]. In particular, NitroSketch [22] was offered as a generic approach to expediting sketches based on multiple counter arrays updated with independent hash functions, which include, e.g., Count-Min Sketch [10], Count Sketch [7], UnivMon [23], and K-ary Sketch [21]. It does so by sampling, from a geometric distribution, how many sketch counters (and items) should be skipped before the next counter is updated.
Another interesting optimization that targets the Space Saving algorithm [27] is RAP [5]. Space Saving is an algorithm for heavy-hitter detection as well as frequency estimation. Space Saving keeps track of a fixed number of items, and whenever an un-tracked item arrives, it gets the entry of an item whose frequency count is minimal at that point. RAP improves on Space Saving's accuracy and performance by admitting an un-tracked item into the entry of the minimally frequent item \(x\) only with a probability that is inversely proportional to \(x\)'s estimated frequency.
In this work, we measure the performance of several known algorithms, with and without the RAP and Nitro optimizations described above (depending on their applicability). Unlike previous studies we are aware of, we have implemented all algorithms in Rust, which is a modern memory safe, strongly typed, platform neutral and highly efficient programming language (claimed to be as fast as C/C++). Specifically, we have implemented and measured the performance of a simple Hash based solution, a hashing plus Nitro solution nicknamed NitroHash, Count-Min Sketch (CMS) [10] and NitroCMS, counting Cuckoo filter [15], NitroCuckoo, Space Saving [27] and Space Saving with the RAP optimization [5]. Notice that NitroHash and NitroCuckoo were never explored before, especially in terms of the throughput impact of the geometric probabilistic process used for sampling in Nitro.
Many of our findings echo previous studies, e.g., [2, 3, 19, 22]. For example, speed wise, a simple hash based solution gives the fastest result when Nitro is not applied (and of course the most accurate), with Nitro giving it another significant performance boost with a very small accuracy degradation. Our new insights are related to the questionable benefit of the minimum increment (also known as conservative update) optimization of CMS, and guidelines
for the use of sampling, e.g., Nitro, with fingerprint based hash tables like Cuckoo depending on whether the application is update intensive or query intensive. Finally, we make all our code available in open source [17].
## 2 Background
### Hash Tables
The simplest approach to track the frequency of items in a data stream is to maintain a counter for each encountered item in a hash table, as shown in Figure 1. The first time the item appears, it is inserted into the hash table with a count of \(1\). Each subsequent occurrence simply increments the counter. A frequency count query for an item \(x\) simply returns \(x\)'s counter from the hash table if it exists, or \(0\) otherwise.
The main benefit of this approach is its simplicity. The drawbacks include linear memory consumption. This is because we need to allocate a counter for each unique item in the stream, and in the worst case this is on the same order as the stream length. Moreover, the hash table needs to store the exact identifier, which can be quite large. For example, a typical \(4\)-tuple consisting of source IP, source port, destination IP, and destination port consumes \(128\) bits on IPv4 and \(196\) bits on IPv6. In the case of URIs, the size is much larger. Further, there is the need to calculate a hash function on each element in the stream, which also involves non-negligible computational overhead.
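A minimal Python sketch of this baseline (the implementations evaluated in this work are in Rust; this snippet only makes the update and query rules concrete, and the example stream is hypothetical):

```python
class HashCounter:
    """Exact frequency counting with a plain hash table."""

    def __init__(self):
        self.counts = {}

    def add(self, item):
        # the first occurrence inserts the item with count 1, later ones increment it
        self.counts[item] = self.counts.get(item, 0) + 1

    def estimate(self, item):
        return self.counts.get(item, 0)    # 0 for items never seen

stream = ["a", "b", "a", "c", "a"]
hc = HashCounter()
for x in stream:
    hc.add(x)
assert hc.estimate("a") == 3 and hc.estimate("z") == 0
```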
### Count-Min sketch
A Count-Min sketch (CMS) [10] data structure consists of \(d\) arrays of \(w\) counters each, denoted \(\text{CMS}[d,w]\), and \(d\) pairwise independent hash functions \(\{h_{1},\ldots,h_{d}\}\), as illustrated in Figure 2. In order to increment the frequency of an item \(x\), all counters \(\text{CMS}[i,h_{i}(x)]\) are incremented, for all \(i\in[1,\ldots,d]\). In order to estimate the frequency of \(x\), CMS returns \(\min_{i\in[1,\ldots,d]}\text{CMS}[i,h_{i}(x)]\).
Denote the true frequency of \(x\) after processing \(N\) items by \(f(x)\). Assigning \(w=e\epsilon^{-1}\) and \(d=\log(\delta^{-1})\) ensures that the frequency estimate satisfies \(0\leq\hat{f}(x)-f(x)\leq\epsilon N\) with probability \(1-\delta\). Notice that the memory requirement of CMS is independent of the number of inserted items \(N\) (discounting the \(O(\log N)\) bits per counter). However, the absolute error value does grow linearly with \(N\). Also, CMS does not save any identifiers, which reduces its memory overhead, especially when identifiers are large.
The _conservative update_[14] or _minimum increment_[8] optimization mandates that only the minimal counter(s) be incremented on each update. The motivation for this is that non-minimal counters have reached their current value due to hash collisions, so incrementing them further increases the error for the colliding flows without impacting the estimation of the current flow. Thus, this optimization helps reduce the expected error [12]. The downside of this optimization is that it prevents handling negative values (or decrements) and is not always applicable. It also requires additional computational steps and memory accesses.
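The update and query rules, including the conservative-update variant, fit in a short Python sketch (the width, depth, and use of Python's built-in hash are illustrative choices; the evaluated implementation is in Rust):

```python
import random

class CountMinSketch:
    """Count-Min sketch with an optional conservative (minimum-increment) update."""

    def __init__(self, width, depth, conservative=False, seed=0):
        self.w, self.d = width, depth
        self.conservative = conservative
        self.table = [[0] * width for _ in range(depth)]
        rng = random.Random(seed)
        self.seeds = [rng.getrandbits(64) for _ in range(depth)]   # one hash per row

    def _cols(self, item):
        return [hash((s, item)) % self.w for s in self.seeds]

    def add(self, item, amount=1):
        cols = self._cols(item)
        if self.conservative:
            # raise only the counters that fall below the new minimum-based estimate
            target = min(self.table[i][c] for i, c in enumerate(cols)) + amount
            for i, c in enumerate(cols):
                self.table[i][c] = max(self.table[i][c], target)
        else:
            for i, c in enumerate(cols):
                self.table[i][c] += amount

    def estimate(self, item):
        return min(self.table[i][c] for i, c in enumerate(self._cols(item)))
```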
The main drawbacks of CMS include the fact that it requires computing \(d\) hash functions and also accessing \(d\) random memory locations on each query and update. This implies both a heavy computational load and a hardware-cache-unfriendly access pattern.
CMS is used here as a representative of several additional counter matrices sketches, such as Count Sketch [7], UnivMon [23], K-ary [21], and Spectral Bloom Filters [8]. Each has a slightly different update rule and guarantees, and all suffer from the same shortcomings.
### NitroSketch
NitroSketch [22] addresses these inherent shortcomings of the Count-Min sketch and the similar counter-array-based sketches mentioned above for software implementations. It does so by updating the respective counter of each array with probability \(p\) (a parameter), where \(p\) can be even smaller than \(1/d\) (with \(d\) the number of arrays) to enable skipping entire events. Further, since invoking the PRNG has non-negligible cost, the actual process used is that at the beginning and after every counter update, the number of counter updates to skip is drawn from a geometric distribution with an average of \(1/p\). This gives equivalent behavior at a much reduced computational cost, as the PRNG is now only invoked on average once every \(1/p\) potential counter updates (roughly once every \(1/(pd)\) events). As shown in [22], the error rapidly converges to the same one as provided by the original sketch. A downside of sampling in general, and Nitro in particular, is that it does not support decrementing.
Figure 1: Hash Table Based Frequency Counting
Figure 2: Count-Min Sketch
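The sampling procedure can be sketched as follows (Python, for illustration only; the probability \(p\), the \(1/p\) scaling of applied increments, and the geometric skip draw follow the description above, while the concrete class layout is an assumption):

```python
import math
import random

class NitroCountMin:
    """Count-Min sketch whose counter updates are sampled NitroSketch-style."""

    def __init__(self, width, depth, p=0.05, seed=0):
        self.w, self.d, self.p = width, depth, p
        rng = random.Random(seed)
        self.seeds = [rng.getrandbits(64) for _ in range(depth)]
        self.table = [[0.0] * width for _ in range(depth)]
        self.skip = self._draw_skip()

    def _draw_skip(self):
        # geometric number of potential counter updates to skip before the next applied one,
        # so the PRNG runs only about once per 1/p potential counter updates
        return int(math.log(1.0 - random.random()) / math.log(1.0 - self.p))

    def add(self, item):
        for i, s in enumerate(self.seeds):            # d potential counter updates per item
            if self.skip > 0:
                self.skip -= 1                        # skipped: no hash, no table access
                continue
            self.table[i][hash((s, item)) % self.w] += 1.0 / self.p   # scale to stay unbiased
            self.skip = self._draw_skip()

    def estimate(self, item):
        return min(self.table[i][hash((s, item)) % self.w] for i, s in enumerate(self.seeds))
```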
### Space Saving
Space Saving (SS) is an efficient mechanism for detecting heavy-hitters, but it also provides an effective method for estimating specific items' frequency [27]. Specifically, Space Saving includes an array of \(M\) ID-counter pairs, as illustrated in Figure 3. When an item with id \(x\) arrives, if \(x\) has an allocated entry, its associated counter is incremented. Otherwise, \(x\) replaces the item whose counter \(C_{m}\) is minimal, and \(C_{m}\) is incremented (without being reset first). For frequency estimation, if \(x\) has an entry, then the value of the associated counter is returned. Otherwise, the value of the minimal counter is returned.
Space Saving guarantees that \(0\leq\hat{f}(x)-f(x)\leq N/M\). Hence, when \(M=\epsilon^{-1}\), the error is (one side) bounded by \(\epsilon N\).
The memory access pattern of Space Saving is more hardware-cache friendly than CMS. Space Saving can be implemented so that its computational complexity is \(O(1)\) with a single hash function computation, but the details are not trivial [6]. Unlike CMS, the error guarantee is deterministic and it holds only \(1/\epsilon\) counters. Yet, Space Saving needs to save \(1/\epsilon\) identifiers, so its memory efficiency when compared to CMS depends on the identifiers' size1. Also, Space Saving as is can detect heavy-hitters, while CMS needs to be complemented with a heap structure in order to do the same2. Finally, the number of non-zero counters in CMS can be used to estimate the number of unique flows (also known as cardinality estimation and count distinct), which is not the case with Space Saving. Yet, given the significant computation cost and poor accuracy of unique flow estimation from CMS vs. the negligible cost of HLL [16] and its high accuracy, in practice it is extremely rare to employ CMS for this.
Footnote 1: Space Saving can be augmented to store \(O(\log\delta^{-1})\)-bit fingerprints instead of identifiers, bringing its size down to below CMS, in which case the error guarantee of Space Saving becomes accordingly probabilistic.
Footnote 2: It is also possible to employ reversible CMS for heavy-hitter detection [33]. While its space and computational overheads are even worse, it does support deletions.
Space Saving is a representative of several similar algorithms, including Misra-Gries [28], Frequent [11, 20], and Lossy Counting [26]. We have chosen it here since Space Saving has been claimed to be the fastest and most accurate among them [9].
### Rap
RAP improves on Space Saving's ability to detect heavy hitters and its frequency estimation accuracy, especially with heavy-tailed workloads [5]. With RAP, when an item from a flow \(x\) that does not have an allocated counter arrives, \(x\) is only allocated the entry of the minimal counter \(C_{m}\) with probability \(1/(C_{m}+1)\), rather than unconditionally as done in Space Saving. The idea here is that since most untracked items belong to the tail, always giving them a counter hurts the ability of Space Saving to track heavy hitters and reduces its estimation accuracy. By using the above-mentioned probabilistic insertion filter, RAP avoids giving an entry to most tail items, while still admitting heavy-hitters, as the latter will most probably arrive often enough to obtain a counter.
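Both the basic replacement rule and the RAP admission filter fit in a few lines (illustrative Python; the evaluated implementation is in Rust and keeps the counters in a priority queue instead of scanning for the minimum):

```python
import random

class SpaceSaving:
    """Space Saving with an optional RAP admission filter."""

    def __init__(self, capacity, use_rap=False):
        self.capacity = capacity           # M = 1/epsilon tracked entries
        self.use_rap = use_rap
        self.counters = {}                 # item id -> counter

    def add(self, item):
        if item in self.counters:
            self.counters[item] += 1
            return
        if len(self.counters) < self.capacity:
            self.counters[item] = 1
            return
        min_item = min(self.counters, key=self.counters.get)   # O(M) scan, for simplicity
        c_min = self.counters[min_item]
        if self.use_rap and random.random() >= 1.0 / (c_min + 1):
            return                         # RAP: most tail items are not admitted
        del self.counters[min_item]
        self.counters[item] = c_min + 1    # take over the minimal entry and increment it

    def estimate(self, item):
        if item in self.counters:
            return self.counters[item]
        return min(self.counters.values()) if self.counters else 0
```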
### Counting Cuckoo Filter
A cuckoo hash table [31] is composed of multiple buckets, each consisting of a (small) fixed number of slots that can hold hashed items, as well as two independent hash functions \(h_{1}\) and \(h_{2}\). An item with id \(x\) may be placed in one of the buckets \(h_{1}(x)\) and \(h_{2}(x)\). Hence, looking up an item requires checking at most these two buckets. As for insertion, if the bucket \(h_{1}(x)\) has empty slots, then \(x\) is inserted there. Otherwise, \(x\) is inserted into bucket \(h_{2}(x)\). In case this bucket was already full, then a random item \(y\) is removed from bucket \(h_{2}(x)\), to make room for \(x\), and \(y\) is moved to its alternate bucket.
Figure 3: Space Saving
This may start a chain reaction, in case the alternate bucket of \(y\) was also full, and the process could continue until a maximal number of attempts, after which failure is declared. It has been shown that when the bucket size is 3, the insertion process will succeed with high probability as long as the number of items is less than 80% of the total hash table capacity [29].
Cuckoo filters extend the above idea to implement a Bloom filter (approximate set-membership) like functionality [15]. Rather than storing an exact item, a Cuckoo filter only stores a fingerprint of size \(L\) of the item. Also, the hash functions are defined such that \(h_{2}(x)=h_{1}(x)\oplus h_{1}(\text{fingerprint}(x))\). Notice that due to the symmetry of \(\oplus\), \(h_{2}(x)\) can be computed from \(h_{1}(x)\) and vice versa. The latter is needed since Cuckoo filters do not store identifiers. This way, a Cuckoo filter can answer approximate set membership queries without false negatives and with a false positive ratio bounded by \(2^{-L}\). Counting Cuckoo filters add to each slot a counter, which is logically initialized to zero and is incremented for each add/insert operation.
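A compact Python sketch of a counting Cuckoo filter with partial-key cuckoo hashing is given below; the bucket count, fingerprint width, kick limit, and use of Python's built-in hash are illustrative assumptions, and the filter evaluated in this work is the Rust implementation described later.

```python
import random

class CountingCuckooFilter:
    """Counting Cuckoo filter: each bucket stores (fingerprint -> counter) entries."""

    def __init__(self, n_buckets=1024, slots_per_bucket=4, fp_bits=12, max_kicks=500):
        self.n = n_buckets                               # power of two, for the XOR trick
        self.slots = slots_per_bucket
        self.fp_mask = (1 << fp_bits) - 1
        self.max_kicks = max_kicks
        self.buckets = [dict() for _ in range(n_buckets)]

    def _fingerprint(self, item):
        return (hash(("fp", item)) & self.fp_mask) or 1  # reserve 0 as "empty"

    def _indices(self, item):
        fp = self._fingerprint(item)
        i1 = hash(("idx", item)) % self.n
        i2 = i1 ^ (hash(("idx", fp)) % self.n)           # h2 = h1 xor h(fingerprint)
        return fp, i1, i2

    def add(self, item, amount=1):
        fp, i1, i2 = self._indices(item)
        for i in (i1, i2):                               # already stored: just increment
            if fp in self.buckets[i]:
                self.buckets[i][fp] += amount
                return True
        for i in (i1, i2):                               # a free slot is available: insert
            if len(self.buckets[i]) < self.slots:
                self.buckets[i][fp] = amount
                return True
        i, entry = random.choice((i1, i2)), (fp, amount) # both full: relocate entries
        for _ in range(self.max_kicks):
            victim_fp = random.choice(list(self.buckets[i]))
            victim = (victim_fp, self.buckets[i].pop(victim_fp))
            self.buckets[i][entry[0]] = entry[1]
            i = i ^ (hash(("idx", victim[0])) % self.n)  # victim's alternate bucket
            if victim[0] in self.buckets[i]:
                self.buckets[i][victim[0]] += victim[1]
                return True
            if len(self.buckets[i]) < self.slots:
                self.buckets[i][victim[0]] = victim[1]
                return True
            entry = victim
        return False                                     # the table is too full

    def estimate(self, item):
        fp, i1, i2 = self._indices(item)
        return self.buckets[i1].get(fp) or self.buckets[i2].get(fp, 0)
```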
## 3 Our Novel Combinations
### NitroHash
In NitroHash, we add a NitroSketch-like optimization to the hash based solution described in Section 2.1. Specifically, rather than adding every item to the hash table, and incrementing its value if already there, we do so only with probability \(p\). Further, to reduce the number of PRNG invocations, at the beginning and after each update, we draw the number of items to skip before the next actual update from a geometric distribution with an average of \(1/p\). When required to estimate the frequency of an item, we multiply its counter in the table, if it exists, by \(1/p\). In case there is no counter, we return 0.
This optimization has three complementary runtime benefits: (i) we invoke the hash function and perform the associated random memory access only once every \(1/p\) items in expectation; (ii) we invoke the PRNG only once every \(1/p\) items in expectation; and (iii) we expect to insert far fewer items into the hash table, so its size can be much smaller, as demonstrated in the evaluation section.
Let us note that this is not the same as Sticky Sampling [26]. In the latter, on each item arrival, a non-monitored item is allocated a counter with probability \(p\), whereas an item that already has a counter is always incremented (deterministically). This means performing a hash table lookup for each item arrival plus a PRNG invocation for each occurrence of a non-monitored item. With NitroHash, each item is always sampled with probability \(p\), where we employ the geometric based sampling to reduce the number of PRNG invocations.
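A Python sketch of NitroHash follows (again, the measured code is in Rust; the sampling probability and the geometric skip mechanism follow the description above, while the default value of \(p\) is an assumption):

```python
import math
import random

class NitroHash:
    """Hash-table counting where only a p-fraction of arrivals is counted."""

    def __init__(self, p=0.05):
        self.p = p
        self.counts = {}
        self.skip = self._draw_skip()

    def _draw_skip(self):
        # geometric number of arrivals to skip before the next counted item
        return int(math.log(1.0 - random.random()) / math.log(1.0 - self.p))

    def add(self, item):
        if self.skip > 0:
            self.skip -= 1               # no hash-table access, no PRNG call for this item
            return
        self.counts[item] = self.counts.get(item, 0) + 1
        self.skip = self._draw_skip()

    def estimate(self, item):
        return self.counts.get(item, 0) / self.p   # rescale the sampled count
```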
### NitroCuckoo
Similarly to the above, in NitroCuckoo we augment a counting Cuckoo filter with a NitroSketch-like optimization. That is, rather than performing an increment/add operation on the filter for every item, we do so only with probability \(p\). Also, we choose the number of items to skip from a geometric distribution with an average of \(1/p\). A frequency estimation query multiplies the associated counter by \(1/p\) if it exists, and returns 0 otherwise. The benefits here are similar to those mentioned for NitroHash.
## 4 Implementation
We have implemented all data structures and associated algorithms in Rust, and the code is available in open source at [17]. Our code is single threaded. For the hash table implementations, we have used Rust's default hash table. For CMS, we have taken the open source implementation of [30]3, and have also augmented it as needed to obtain our NitroSketch implementation. We have taken the Cuckoo filter implementation from [24] and augmented it to serve as a counting Cuckoo filter, and then further modified it to also serve as a NitroCuckoo filter. We have implemented Space Saving ourselves, and maintain the counters sorted using Rust's default priority queue implementation [18]. We have also realized the RAP optimization for it, and the exact behavior, with or without RAP, is controlled by a runtime parameter. It is possible that a more sophisticated implementation along the lines of [6] would have yielded faster operation execution; this is left for future work.
Footnote 3: This implements the conservative update optimization.
## 5 Evaluation
### Method
#### 5.1.1 Metrics
**Throughput.** We measure the throughput in terms of the number of operations per second. This is obtained by measuring the net elapsed time for invoking the relevant operations for each item in a given trace, in the order in which the items appear in
the trace, and then dividing the trace size by the measured execution time.
Specifically, the application first loaded and parsed an entire trace file, storing its parsed lines as entries in an array. Then, we iterated over the lines of the array, invoking the respective data structure method for each entry. The memory, number of items, and various error metrics were measured in a separate run from the throughput measurement. For throughput measurements, we measured the elapsed time during the iteration over the array. In the case of write-only measurements, each iteration consisted of invoking only an add/increment operation for each encountered item. In the case of write-read measurements, in each iteration we perform an add/increment operation followed immediately by a query for the same item. For read-only, we perform two iterations: in the first, we add/increment all items, whereas in the second we issue queries for all items; here, we only measure the net execution time of the second iteration.
**Memory and Number of Entries.** We measure the memory consumed by the associated data structures of each algorithm. For data structures that store explicit item identifiers or fingerprints, we also measure the number of items in the data structure. This includes the hash table as well as the counting Cuckoo filter, in both cases with and without the Nitro optimization.
**Approximation Error.** Denote by \(N\) the number of items in the trace, by \(M\) the number of unique flows in the trace, by \(\widehat{f(x)}\) the estimated frequency of \(x\) at a given time, and by \(f(x)\) the true frequency of flow \(x\) at the same time. We measure the following variants of approximation error4:
Footnote 4: According to the findings of [4], the AVGERR measure becomes minimal when estimating all frequencies as 0.
**On Arrival:**: For each encountered item \(x_{i}\), we immediately query its frequency estimation and compute
\[\text{OA MSRE}=\frac{\Sigma_{i}^{N}\sqrt{\left(f(x_{i})-\widehat{f(x_{i})} \right)^{2}}}{N}.\]
\[\text{OA AVGERR}=\frac{\Sigma_{i}^{N}|f(x_{i})-\widehat{f(x_{i})}|}{N}.\]
\[\text{OA AVGRELR}=\frac{\Sigma_{i}^{N}\left(|f(x_{i})-\widehat{f(x_{i})}|/f(x_{ i})\right)}{N}.\]
**Per Flow:**: After inserting all items in the trace, we scan all flow identifiers that appeared in the trace, query their frequency estimation and compute
\[\text{Flow MSRE}=\frac{\Sigma_{i}^{M}\sqrt{\left(f(x_{i})-\widehat{f(x_{i})} \right)^{2}}}{M}.\]
\[\text{Flow AVGRELR}=\frac{\Sigma_{i}^{M}|f(x_{i})-\widehat{f(x_{i})}|}{M}.\]
**Postmortem:**: After inserting all items in the trace, we scan again all items in the trace in their appearance order and compute
\[\text{PM MSRE}=\frac{\Sigma_{i}^{N}\sqrt{\left(f(x_{i})-\widehat{f(x_{i})} \right)^{2}}}{N}.\]
\[\text{PM AVGERR}=\frac{\Sigma_{i}^{N}|f(x_{i})-\widehat{f(x_{i})}|}{N}.\]
\[\text{PM AVGRELR}=\frac{\Sigma_{i}^{N}\left(|f(x_{i})-\widehat{f(x_{i})}|/f(x_{ i})\right)}{N}.\]
Notice that the formulas for the On Arrival errors and the Postmortem cases look the same, but they are computed differently, as explained in the textual description above.
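The on-arrival and postmortem procedures can be expressed as a small evaluation harness; the Python sketch below is illustrative, shows only the average absolute and average relative errors, and accepts any of the frequency estimators above as `estimator`.

```python
def on_arrival_errors(stream, estimator):
    """Insert each item, query it immediately, and average the errors over all arrivals."""
    true, abs_err, rel_err = {}, 0.0, 0.0
    for x in stream:
        estimator.add(x)
        true[x] = true.get(x, 0) + 1
        e = abs(true[x] - estimator.estimate(x))
        abs_err += e
        rel_err += e / true[x]
    n = len(stream)
    return {"OA AVGERR": abs_err / n, "OA AVGRELR": rel_err / n}

def postmortem_errors(stream, estimator):
    """Insert the whole trace first, then replay it as queries against the final counts."""
    true = {}
    for x in stream:
        estimator.add(x)
        true[x] = true.get(x, 0) + 1
    n = len(stream)
    abs_err = sum(abs(true[x] - estimator.estimate(x)) for x in stream)
    rel_err = sum(abs(true[x] - estimator.estimate(x)) / true[x] for x in stream)
    return {"PM AVGERR": abs_err / n, "PM AVGRELR": rel_err / n}
```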
#### 5.1.2 Settings
All measurements were performed on an Intel Core i9-13900 (8+16 Cores/36MB/32T/2.0GHz to 5.2GHz/65W) machine with 128GB (2X64GB) DDR5 DRAM memory, and an M.2 2280 2TB PCIe NVMe Class 40 SSD, running Microsoft Windows 11 Pro. The L1 cache size is 2.1MB, L2 is 32MB, and L3 is 36MB. The code was compiled for release mode using Rust's cargo application.
As mentioned before, the entire trace is first loaded into memory and parsed, so we measure the net time for processing the items in memory by the various algorithms and data structures. The amount of DRAM memory in the machine used for testing is high enough so that there were no page swapping during the execution.
#### 5.1.3 Traces
We have used several real-world traces taken from the CAIDA repository of backbone routers [1]. These traces are summarized in Table 1.
The traces Chicago16Small and Chicago1610Mil are prefixes of Chicago16. They are used to exemplify the impact of trace length on the results when considering the same trace. All displayed data points are the average of 13 runs, and we also plot the respective confidence intervals.
### Throughput Results
#### 5.2.1 Update Only
The throughput results for the write-only measurements appear in Figure 4. As can be seen, the Nitro optimization brings an order-of-magnitude improvement in throughput for all three algorithms it was applied to: Hashing, Cuckoo, and CMS. This is as expected, following the discussion above about this optimization.
Interestingly, among the three, Hash (NitroHash, respectively) came out the fastest policy, followed by Cuckoo (NitroCuckoo, respectively), with CMS (NitroCMS, respectively) being the slowest, due to its multiple hash computations and random memory accesses per update. SpaceSaving is the slowest policy, with SpaceSaving-RAP being slightly better than CMS. As expected, the RAP policy improves performance, since there are fewer inserts into the data structure. Also, the implementation of SpaceSaving that we used is a bit naive, as it uses Rust's priority queue. It is possible that a more sophisticated implementation along the lines of [6] would have resulted in better performance. This is left for future work.
Another factor that slows down CMS is the conservative update optimization that was applied, by which only the minimal counter is incremented. Our conjecture is that in software based implementation, it might be better to configure the structure with a lower theoretical error guarantee (i.e., more space) than applying this optimization. This is explored below in Section 5.5.
When comparing the Chicago16Small trace (Figure 4d) to the other traces in Figure 4, an interesting phenomenon is revealed: we notice that the throughput of all algorithms is roughly an order of magnitude better with the Chicago16Small trace than with the other traces, which are much longer. For Hash and NitroHash, we can relate this to the fact that they require more memory as the trace grows. One could argue that for Cuckoo and NitroCuckoo, the fuller the Cuckoo table becomes, the more relocations are needed, and the longer the relocation chains become. However, in all traces the load factor on the table is very low since the number of unique items is below \(10\%\) of the trace length, and we allocate enough items for the entire trace. Further, for CMS and NitroCMS, it is surprising, as both the size and amount of work performed by these algorithms are independent of the number of items. Recall that we pre-load the entire trace into an in-memory array before the timing measurement begins. Also, since the machine has plenty of memory, the reason is not related to swapping. Rather, we speculate that when the trace is small, the array fits better into the various levels of the hardware cache. Yet, when the trace size grows, we get more hardware cache misses when trying to fetch the next items from the in-memory array, which dominates the time measurement. In that sense, the shorter trace (Chicago16Small) gives a better indication of the maximal direct throughput bound of each algorithm.
#### 5.2.2 Write Read
The throughput results for the write-read measurements appear in Figure 5. As can be seen, the Nitro optimization is less effective here, since Nitro only expedites the update process but has no impact on query execution. Among the three Nitro-enabled variants, NitroHash is the fastest, due to the fact that when using a plain hash table, each access only requires a single hash function calculation (in expectation). NitroCMS is the slowest of the three since both its update and query processes involve multiple hash computations and random memory accesses.
Finally, Space Saving performed the worst. As mentioned above, this is due to the fact that our implementation only relies on Rust's priority queue, whose associative access time is slow.
#### 5.2.3 Read Only
The throughput results for the read only measurements appear in Figure 6. The most surprising result here is related to the impact of the Nitro optimization on the Hash algorithm. Nitro improved performance here, despite not being part of the query process at all. The reason is that fewer items have been inserted into the hash table. This translates into a smaller final table with fewer items, and thereby faster completion times.
In contrast, Cuckoo is faster than NitroCuckoo. The reason is that as fewer items are inserted in the Cuckoo table with the Nitro optimization, ad
\begin{table}
\begin{tabular}{|l|c|c|} \hline Trace & \# items & uniques \\ \hline \hline Chicago15 & \(133,988,641\) & \(3,495,149\) \\ \hline Chicago16 & \(88,529,637\) & \(1,650,097\) \\ \hline Chicago16Small & \(1,000,000\) & \(69,046\) \\ \hline Chicago1610Mil & \(10,000,000\) & \(332,183\) \\ \hline NY19A & \(30,098,745\) & \(1,522,382\) \\ \hline NY19B & \(63,284,829\) & \(2,968,038\) \\ \hline SJ14 & \(188,511,031\) & \(2,922,904\) \\ \hline \end{tabular}
\end{table}
Table 1: Traces from [1] used in this work.
ditional lookup invocations require computing both hash functions and accessing two random memory locations, compared to when the table is much fuller, which happens when Nitro is not used. This indicates that when read performance is more important, the use of sampling when creating the table is actually counter-productive, unless the table is pre-configured to accept fewer items.
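A stripped-down lookup loop makes the asymmetry explicit: a hit can return after the first bucket probe, while a miss always pays for both hash computations and both random memory accesses. This is a simplified two-hash illustration of ours; a real Cuckoo filter derives the alternate bucket from the fingerprint, but the probe-count argument is the same.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

const BUCKETS: usize = 1 << 10;

fn hash_with_salt(item: u64, salt: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (salt, item).hash(&mut h);
    h.finish()
}

/// Returns the count stored for `item`, probing at most two buckets.
/// A hit in the first bucket returns early; a miss always probes both.
fn lookup(table: &[Vec<(u16, u32)>], item: u64) -> Option<u32> {
    let fp = (hash_with_salt(item, 0xF1) & 0xFFFF) as u16;
    for salt in [1u64, 2] {
        let bucket = (hash_with_salt(item, salt) as usize) % BUCKETS;
        if let Some(&(_, count)) = table[bucket].iter().find(|&&(f, _)| f == fp) {
            return Some(count);
        }
    }
    None
}

fn main() {
    // An under-filled table (as produced by sampling) makes misses common,
    // and every miss walks through both buckets above.
    let table: Vec<Vec<(u16, u32)>> = vec![Vec::new(); BUCKETS];
    assert_eq!(lookup(&table, 42), None);
}
```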
In fact, for fingerprint-based hash table solutions, be it counting Cuckoo filters, TinyTable [13], the Counting Quotient Filter [32], DASH [25], etc., sampling reduces the load factor for a given table size and stream length. Hence, the decision of whether to use sampling and how to configure the table in return depends on the type of application. For network monitoring applications, which are update-dominant and where it is paramount to keep up with the line rate, the use of sampling while keeping a relatively large hash table, e.g., one that can accommodate all possible items even without sampling, is the best way to go. On the other hand, in database applications, when the sketch (or filter) is used to represent a large but mostly static relation, the use of sampling can be beneficial when the size of the hash table is reduced roughly proportionately to the sampling ratio. This way, the space consumption is significantly lowered and the query time improves. We explore this below for NitroCuckoo in Section 5.6.
As for Space Saving, we see here too that RAP slightly hurts the performance. Once again, this is due to the fact that we used a priority queue, so searching for a non-existing item takes longer. In general, the relative performance of Space Saving, with and without RAP, compared to the other algorithms is much better. This is because the update
Figure 4: Throughput results for the write only test
Figure 5: Throughput results for the write-read test
process of Space Saving is much more expensive than query, at least with the priority queue based realization we used.
### Approximation Errors Results
The results for the OA MSRE, OA AVGERR, and OA AVGRELERR metrics appear in Figures 7, 8, and 9, respectively. As can be seen, Cuckoo has a negligible error, regardless of the exact metric, compared to the other methods. Also, echoing the results of [5], the error of SpaceSaving-RAP is much lower than that of the basic SpaceSaving, regardless of the metric, due to its more effective usage of the counters as explained in Section 2. Interestingly, it also presents a significantly lower error than CMS and NitroCMS, whereas NitroCMS almost doubles the error compared to CMS. We explore the impact of the minimal increment optimization in Section 5.5 below.
We notice small but non-negligible errors for the Chicago16, Chicago1610Mil, and Chicago16Small traces with the NitroHash and NitroCuckoo algorithms and the MSRE and AVGERR metrics. This indicates that a few heavy-hitters have non-negligible errors, as they were not properly represented by the sampling method. Still, the errors become very small as we move from Chicago16Small to Chicago1610Mil and then to the full Chicago16 trace. This can be expected since as the trace becomes longer, the probabilistic process becomes more representative.
The Per Flow MSRE, Per Flow AVGERR, and Per Flow AVGRELERR results appear in Figures 10, 11, and 12, respectively. The results are qualitatively similar to the on-arrival error metrics.
The Postmortem MSRE, Postmortem AVGERR, and Postmortem AVGRELERR results appear in Figures 13, 14, and 15, respectively. Here, too, we notice small but non-negligible errors for the Chicago16, Chicago1610Mil, and Chicago16Small traces with the NitroHash and NitroCuckoo algorithms for the MSRE metric, which diminish for the longer trace (the full Chicago16), similar to the on-arrival error metrics discussed above.
### Memory Consumption Results
The memory consumed by each of the algorithms appears in Figure 16. For all algorithms, we used 32-bit counters. The complete identifier size (IP 4-tuple) is 32 bits. In the case of HASH and NitroHash, the size is the final hash table capacity multiplied by the size of each item (identifier + counter). In these implementations, the table is initialized with a very small capacity, and is enlarged (doubled) whenever Rust's default implementation decides it is not large enough. Cuckoo and NitroCuckoo are configured to hold all items in the trace, since the Cuckoo filter implementation is not elastic. In this case, we multiply the capacity of the table by the size of each item, this time made of an 8-byte fingerprint + counter. CMS and NitroCMS are configured according to their specification for \(\epsilon=0.01\) and \(\delta=0.01\); the space is computed as the number of resulting counters (1024) times the counter size. Finally, Space Saving (with and without RAP) is also configured according to its specification for \(\epsilon=0.01\); the space is computed as the number of resulting entries (100) times the counter size + identifier size.
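The accounting above boils down to a few multiplications. The short sketch below reproduces it for HASH and Cuckoo using the entry sizes just stated; the power-of-two rounding of the growable table's final capacity is our assumption, not a measured value.

```rust
/// Back-of-the-envelope space accounting for HASH and Cuckoo, following the
/// description above (entry sizes in bytes; the capacity rounding is assumed).
fn main() {
    let trace_items: u64 = 88_529_637;  // e.g. the Chicago16 trace length
    let unique_items: u64 = 1_650_097;  // unique flows in that trace
    let counter = 4u64;                 // 32-bit counters
    let identifier = 4u64;              // identifier size as stated above
    let fingerprint = 8u64;             // 8-byte fingerprints as stated above

    // HASH / NitroHash: final capacity * (identifier + counter); a growable
    // table typically ends near the next power of two above the unique count.
    let hash_bytes = unique_items.next_power_of_two() * (identifier + counter);

    // Cuckoo / NitroCuckoo: pre-sized for the whole trace, fingerprint + counter.
    let cuckoo_bytes = trace_items * (fingerprint + counter);

    println!("HASH ~{} MiB, Cuckoo ~{} MiB", hash_bytes >> 20, cuckoo_bytes >> 20);
}
```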
For HASH, NitroHash, Cuckoo, and NitroCuckoo, we also report the number of (unique) items stored in the respective data structure in Figure 17 as well as the amount of space these items consume in Figure 18. The rationale here is that if we have a good estimate for how many unique items would be stored, we could configure the data structure accordingly and save space. In particular, when applying the Nitro optimization, the use of sampling translates into inserting far fewer items into the data structure, and so in addition to the improved throughput, it can be used to reduce the memory consumption, as explored in Section 5.6 below. The reason HASH requires more space than Cuckoo when they hold the same number of items is related to the size of full identifiers vs. fingerprints. Notice that with CMS, NitroCMS, and Space Saving (with and without RAP), the number of counters or entries is fixed by the error guarantees and does not depend on the workload, and therefore these measures are meaningless for them.
### CMS W/O Minimal Increment
We now discuss the impact of the minimal increment (conservative update) optimization of CMS. We compare an implementation without minimal increment, nicknamed CMS-NOMI, to the default implementation. As can be seen in Figure 19, the minimal increment optimization costs roughly 10% of the throughput in the write only test. This gap naturally shrinks in the read-write test, shown in Figure 20, and disappears completely in the read only test, reported in Figure 21.
On the other hand, in terms of errors, dropping the minimal increment increases the error by more than a factor of 2 for the various metrics, as shown in Figures 22-30. Given the very low memory requirement of CMS, it may make sense to avoid the minimal increment optimization for write only cases, and compensate for this by allocating more counters in each array. However, when memory is very tight, or when the workload includes mostly reads, then the minimal increment is very effective.
### Space Tradeoffs for NitroCuckoo
In this section, we explore what happens when NitroCuckoo is allocated with a \(p\) fraction of the memory, where \(p\) is the sampling probability used by the Nitro optimization. Specifically, we compare Cuckoo and NitroCuckoo when configured as before, with the NitroCuckoo algorithm whose Cuckoo filter table is allocated with \(p\) times the number of entries, nicknamed NC-SMALL. In our case, \(p=0.01\), which means NC-SMALL's table is 100 times smaller.
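In code, the NC-SMALL configuration is just a different capacity computation, for example as in the sketch below (ours; the power-of-two rounding is an assumption about how the underlying Cuckoo table sizes itself).

```rust
/// NC-SMALL sizing sketch: with sampling probability p, only about a
/// p-fraction of items ever reach the table, so allocate p times the entries.
fn nc_small_capacity(full_capacity: usize, p: f64) -> usize {
    (((full_capacity as f64) * p).ceil() as usize).next_power_of_two()
}

fn main() {
    let full = 88_529_637; // entries the full-size table would be given
    // 885,297 entries, rounded up to 1,048,576 slots by the power-of-two rule.
    println!("NC-SMALL capacity: {}", nc_small_capacity(full, 0.01));
}
```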
As can be seen in Figure 31, this gave a further 10% to 20% throughput improvement when compared to NitroCuckoo in the write only test, due to the smaller size of the table, which fits better in the hardware cache. The improvement becomes even more significant in the case of the write-read test, as the impact of having a smaller size more than offsets the increased lookup time for non-existing items. However, the difference between NC-SMALL and Cuckoo shrinks.
Finally, in the read only test, reported in Figure 33, Cuckoo is significantly faster than NitroCuckoo as discussed above, but NC-SMALL is still the clear winner, meaning that the impact of fitting better in the hardware cache is stronger than the impact of the average longer query process when fewer items are found.
The impact of the smaller table allocation with NC-SMALL on the error metrics is shown in Figures 34-42. In the case of the on arrival and postmortem metrics, the results are inconclusive, with a slight tendency toward worsening the error. NC-SMALL fares much worse in the per flow metrics, since here small flows, whose relative error tends to be higher, have a much larger impact on the error calculation than in the on arrival and postmortem cases. Yet, the errors with any of these algorithms, Cuckoo, NitroCuckoo, and NC-SMALL, are much lower than with many of the other alternatives we explored in this paper. Hence, overall, we conclude that NC-SMALL is one of the best approaches.
## 6 Conclusion
In this work, we have compared several Rust implementations of frequency estimation algorithms in terms of speed (throughput), estimation error under several metrics, and space requirements. Specifically, we have realized and compared the following algorithms: HASH (serving as a baseline), NitroHash, Cuckoo (representing fingerprint-based hash tables), NitroCuckoo, NC-SMALL, CMS (representing shared counter array sketches), NitroCMS, CMS-NOMI, Space Saving, and Space Saving with the RAP optimization. The measurements were carried out over several real-world network traffic monitoring traces from major backbone routers.
Figure 6: Throughput results for the read only test
In terms of throughput, HASH was the fastest without Nitro, and NitroHash the fastest overall, in the write only, write-read, and read only benchmarks. In general, the Nitro optimization helps a lot for the write only benchmark, but its impact in other cases is inconsistent. While it always improves performance for HASH and CMS, for read-only, NitroCuckoo is slower than Cuckoo. The reason is that querying Cuckoo takes on average more time for non-existing items, and with Nitro the table is under-loaded. We speculate that this problem persists in many other fingerprint based hash tables, including TinyTable [13], Counting Quotient Filter [32], and DASH [25]. The results of the NC-SMALL method showed that it is best to configure NitroCuckoo with a much smaller table, which fits better in the hardware cache. This more than offsets the impact of the longer lookup process in the read only and write-read tests, and improved throughput even in the write only case.
Our SpaceSaving implementations (with and without RAP) were the slowest, which motivates trying a more optimized implementation along the lines of [6]. Also, we speculate that software implementations of CMS would be better off without the minimum increment (or conservative update) optimization, due to its computational cost.
Interestingly, all error metrics yielded qualitatively similar behavior, just at different scales. The only minor exceptions are NitroCuckoo and NitroHash over short traces. This is because Hash is accurate and Cuckoo has a marginal error, while the sampling approach of Nitro takes some time to converge [22]. Still, even on the relatively short traces, NitroHash and NitroCuckoo were more accurate than SpaceSaving, CMS, and NitroCMS. Similarly to the findings of [5], RAP improved the estimation error of SpaceSaving by orders of magnitude.
Space-wise, CMS and SpaceSaving (plus SpaceSaving-RAP) are significantly more efficient than HASH and Cuckoo. HASH exhibits the benefit of elasticity, as it can adjust its size to the number of unique items, which is orders of magnitude lower than the total trace size. Here, NitroHash seems to be a very compelling approach, since its memory requirements are very reasonable for any type of modern hardware, it is the fastest, and its estimation error is very low compared to the alternatives we tested. NC-SMALL is also a very promising approach: it is almost as fast as NitroHash in the write only case, the second fastest in the write-read and read only cases, its error is still relatively small, and its memory consumption is manageable.
We note that CMS and SpaceSaving are useful for situations where the space requirement of the filter must be known and fixed. Also, their significantly lower space requirements can be beneficial, e.g., if the filters need to be sent over a limited bandwidth network, in large multi-tenancy situations, or when holding a single filter per relation in a database that stores a very large number of relations. SpaceSaving is more compact when the identifiers are relatively small (or when fingerprints can be used instead of full identifiers), and otherwise CMS should be the preferred choice.
Acknowledgments: I would like to thank Eytan Singer for helping me get up to speed with Rust programming, and Alec Mocatta for helping me understand how to use the Amadeus CMS implementation. Thanks also to Ran Ben Basat, Gil Einziger, and Rana Shahout for insightful comments that greatly improved the presentation in this paper. This work was partially funded by the Israeli Science Foundation grant #319/21.
|
2310.04891
|
The OIGroebnerBases Package for Macaulay2
|
We introduce the $\textit{Macaulay2}$ package $\texttt{OIGroebnerBases}$ for
working with OI-modules over Noetherian polynomial OI-algebras. The main
methods implement OI-analogues of Buchberger's algorithm and Schreyer's theorem
to compute Gr\"obner bases, syzygies and free resolutions of submodules of free
OI-modules.
|
Michael Morrow
|
2023-10-07T18:28:04Z
|
http://arxiv.org/abs/2310.04891v1
|
# The OIGroebnerBases package for Macaulay2
###### Abstract.
We introduce the _Macaulay2_ package OIGroebnerBases for working with OI-modules over Noetherian polynomial OI-algebras. The main methods implement OI-analogues of Buchberger's algorithm and Schreyer's theorem to compute Grobner bases, syzygies and free resolutions of submodules of free OI-modules.
## 1. Introduction
Suppose we are given a sequence \((M_{n})_{n\in\mathbb{Z}_{\geq 0}}\) of related modules \(M_{n}\) over related polynomial rings whose number of variables increases with \(n\). One may ask how to simultaneously compute a finite Grobner basis for each \(M_{n}\). Furthermore, one may ask how to simultaneously compute the module of syzygies of each \(M_{n}\). Using the framework of OI-modules over OI-algebras introduced in [5], these questions were addressed in [4], where OI-analogues of Buchberger's algorithm for computing Grobner bases and Schreyer's theorem for computing syzygies were given. Here, OI denotes the category of totally ordered finite sets and order-preserving increasing maps.
It was further shown in [4] that the OI-analogue of Schreyer's theorem can be iterated to compute free resolutions of OI-modules out to desired homological degree. Only a few explicit constructions for such resolutions are known; see [1, 2] for examples.
This note introduces the package OIGroebnerBases1 for _Macaulay2_[3] to facilitate the computations described above. We review the necessary mathematical background material in Section 2 and summarize the main features of our package in Section 3.
Footnote 1: Available at [https://github.com/morrowhh/OIGroebnerBases](https://github.com/morrowhh/OIGroebnerBases).
## 2. Preliminaries
We fix notation and recall the needed background on OI-modules. For the rest of this paper, \(K\) denotes an arbitrary field.
**Definition 2.1**.: Let OI be the category whose objects are intervals \([n]:=\{1,\ldots,n\}\) for \(n\in\mathbb{Z}_{\geq 0}\) (we put \([0]=\emptyset\)) and whose morphisms are strictly increasing maps \([m]\to[n]\).
If \(\mathbf{A}\) is a functor out of OI, we write \(\mathbf{A}_{n}\) instead of \(\mathbf{A}([n])\). We call \(\mathbf{A}_{n}\) the _width \(n\) component of \(\mathbf{A}\)_. We abuse notation and write \(\operatorname{Hom}(m,n)\) for the set of all OI-maps \(\operatorname{Hom}_{\operatorname{OI}}([m],[n])\) from \([m]\) to \([n]\). If \(\varepsilon\in\operatorname{Hom}(m,n)\), we sometimes write \(\varepsilon_{*}\) in place of \(\mathbf{A}(\varepsilon)\).
**Definition 2.2**.: Let \(c>0\) be an integer and define a functor \(\mathbf{P}=\mathbf{P}^{c}\) from OI to the category of associative, commutative, unital \(K\)-algebras as follows. For \(n\geq 0\), define
\[\mathbf{P}_{n}=K\begin{bmatrix}x_{1,1}&\cdots&x_{1,n}\\ \vdots&\ddots&\vdots\\ x_{c,1}&\cdots&x_{c,n}\end{bmatrix}\]
and for \(\varepsilon\in\operatorname{Hom}(m,n)\) define \(\varepsilon_{*}:\mathbf{P}_{m}\to\mathbf{P}_{n}\) via \(x_{i,j}\mapsto x_{i,\varepsilon(j)}\).
Assigning each variable degree \(1\), the functor \(\mathbf{P}\) is a _graded Noetherian polynomial_ OI-_algebra_; see [5, 4].
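For instance, with \(c=1\) and \(\varepsilon\in\operatorname{Hom}(2,3)\) given by \(1\mapsto 1\) and \(2\mapsto 3\), the induced map is
\[\varepsilon_{*}\colon K[x_{1,1},x_{1,2}]\to K[x_{1,1},x_{1,2},x_{1,3}],\qquad x_{1,1}\mapsto x_{1,1},\quad x_{1,2}\mapsto x_{1,3},\]
so, for example, \(\varepsilon_{*}(x_{1,1}x_{1,2}^{2})=x_{1,1}x_{1,3}^{2}\).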
**Definition 2.3** ([5]).: An OI-_module_\(\mathbf{M}\) over \(\mathbf{P}\) is a (covariant) functor from OI to the category of \(K\)-vector spaces such that
1. each \(\mathbf{M}_{n}\) is an \(\mathbf{P}_{n}\)-module, and
2. for each \(a\in\mathbf{P}_{m}\) and \(\varepsilon\in\operatorname{Hom}(m,n)\) we have a commuting diagram
where the vertical maps are multiplication by the indicated elements.
We sometimes refer to \(\mathbf{M}\) as a \(\mathbf{P}\)-_module_.
A _homomorphism_ of \(\mathbf{P}\)-modules is a natural transformation \(\varphi:\mathbf{M}\to\mathbf{N}\) such that each \(\varphi_{n}:\mathbf{M}_{n}\to\mathbf{N}_{n}\) is a \(\mathbf{P}_{n}\)-module homomorphism. We sometimes call \(\varphi\) a _\(\mathbf{P}\)-linear map_. OI-modules over \(\mathbf{P}\) and \(\mathbf{P}\)-linear maps form an abelian category with all concepts such as subobject, quotient object, kernel, cokernel, injection, and surjection being defined "width-wise" from the corresponding concepts in \(K\)-vector spaces (see [7, A.3.3]). Thus, for example, if \(\varphi:\mathbf{M}\to\mathbf{N}\) is a \(\mathbf{P}\)-linear map, then the kernel of \(\varphi\) is a submodule of \(\mathbf{M}\) defined by \((\ker(\varphi))_{n}=\ker(\varphi_{n})\). The image of \(\varphi\) is a submodule of \(\mathbf{N}\) defined in an analogous fashion.
If \(f\in\mathbf{M}_{n}\) for some \(n\geq 0\) then we call \(f\) an _element_ of \(\mathbf{M}\) and write \(f\in\mathbf{M}\). In this case we say \(f\)_has (or is in) width \(n\)_. A _subset_ of \(\mathbf{M}\), denoted \(S\subseteq\mathbf{M}\), is a subset of the disjoint union \(\coprod_{n\geq 0}\mathbf{M}_{n}\). The submodule of \(\mathbf{M}\)_generated_ by a subset \(S\subseteq\mathbf{M}\) is the smallest submodule of \(\mathbf{M}\) containing \(S\). This submodule is denoted \(\langle S\rangle_{\mathbf{M}}\).
We now discuss freeness.
**Definition 2.4** ([5]).: For any integer \(d\geq 0\), define an OI-module \(\mathbf{F}^{\mathrm{OI},d}\) over \(\mathbf{P}\) as follows. For \(n\in\mathbb{Z}_{\geq 0}\) let
\[\mathbf{F}^{\mathrm{OI},d}_{n}=\bigoplus_{\pi\in\operatorname{Hom}(d,n)} \mathbf{P}_{n}e_{\pi}\cong(\mathbf{P}_{n})^{\binom{n}{d}}.\]
For \(\varepsilon\in\operatorname{Hom}(m,n)\), define \(\mathbf{F}^{\mathrm{OI},d}(\varepsilon)\colon\mathbf{F}^{\mathrm{OI},d}_{m} \to\mathbf{F}^{\mathrm{OI},d}_{n}\) via \(e_{\pi}\mapsto e_{\varepsilon\circ\pi}\). An OI-module \(\mathbf{F}\) that is isomorphic to a direct sum \(\bigoplus_{i=1}^{s}\mathbf{F}^{\mathrm{OI},d_{i}}\) for integers \(d_{1},\dots,d_{s}\geq 0\) is called a _free_ OI-module over \(\mathbf{P}\)_of rank \(s\) generated in widths \(d_{1},\dots,d_{s}\)_.
It is straightforward to see that, given a free OI-module \(\mathbf{F}=\bigoplus_{i=1}^{s}\mathbf{F}^{\mathrm{OI},d_{i}}\), we have
\[\mathbf{F}_{n}=\bigoplus_{\begin{subarray}{c}\pi\in\operatorname{Hom}(d_{i},n )\\ 1\leq i\leq s\end{subarray}}\mathbf{P}_{n}e_{\pi,i}\]
for all \(n\geq 0\), where the second index on \(e_{\pi,i}\) is used to keep track of which direct summand it lives in. We call the \(e_{\mathrm{id}_{[d_{i}]},i}\) the _basis elements_ of \(\mathbf{F}\). The \(e_{\mathrm{id}_{[d_{i}]},i}\) generate \(\mathbf{F}\) as an OI-module, and to define a \(\mathbf{P}\)-linear map out of \(\mathbf{F}\) it is enough to specify where the basis elements are mapped. The functor \(\mathbf{F}\) is an example of a _graded_ OI-module [5] over \(\mathbf{P}\) by assigning each basis element degree \(0\).
It is convenient to adjust the grading of an OI-module as follows. Given a graded OI-module \(\mathbf{M}\), define the \(d^{th}\)_twist_ of \(\mathbf{M}\) to be the OI-module \(\mathbf{M}(d)\) that is isomorphic to \(\mathbf{M}\) as an OI-module, and whose grading is determined by
\[[\mathbf{M}(d)_{n}]_{j}=[\mathbf{M}_{n}]_{d+j}.\]
**Example 2.5**.: Let \(\mathbf{P}=\mathbf{P}^{1}\) so that \(\mathbf{P}_{n}=K[x_{1},\ldots,x_{n}]\) for \(n\geq 0\). Then \(\mathbf{F}^{\mathrm{OI},1}\oplus\mathbf{F}^{\mathrm{OI},2}\) has its basis elements in degree \(0\), while \(\mathbf{F}^{\mathrm{OI},1}(-3)\oplus\mathbf{F}^{\mathrm{OI},2}(-4)\) has its basis elements in degrees \(3\) and \(4\). In width \(n\), the rank of both modules as a free \(\mathbf{P}_{n}\)-module is \({n\choose 1}+{n\choose 2}={n+1\choose 2}\).
Let \(\mathbf{F}=\bigoplus_{i=1}^{s}\mathbf{F}^{\mathrm{OI},d_{i}}\) be a free OI-module over \(\mathbf{P}\) with basis \(\{e_{\mathrm{id}_{[d_{i}]},i}\ :\ i\in[s]\}\). A _monomial_ in \(\mathbf{F}\) is an element of the form \(ae_{\pi,i}\) where \(a\) is a monomial in \(\mathbf{P}\). There is a suitable notion of a _monomial order_ on the monomials of \(\mathbf{F}\) (see [4, Definition 3.1 and Example 3.2]) with which we can define the _lead monomial_\(\mathrm{lm}(f)\) of any element \(f\in\mathbf{F}\). Moreover, we define \(\mathrm{lm}(E)=\{\mathrm{lm}(f)\ :\ f\in E\}\) for any subset \(E\subseteq\mathbf{F}\).
Our primary object of study is defined as follows.
**Definition 2.6** ([5, 4]).: Fix a monomial order \(<\) on \(\mathbf{F}\) and let \(\mathbf{M}\) be a submodule of \(\mathbf{F}\). A subset \(G\subseteq\mathbf{M}\) is called a _Grobner basis_ of \(\mathbf{M}\) (with respect to \(<\)) if
\[\langle\mathrm{lm}(\mathbf{M})\rangle_{\mathbf{F}}=\langle\mathrm{lm}(G) \rangle_{\mathbf{F}}.\]
In [5], it was established that any submodule of a finitely generated free OI-module over a Noetherian polynomial OI-algebra has a finite Grobner basis. It was shown in [4] how to compute such bases in finite time. Our package implements this construction with the oiGB method; see Section 3.1.
We also consider syzygies. Given a finitely generated submodule \(\mathbf{M}\) of a free OI-module \(\mathbf{F}\), there is a canonical surjective \(\mathbf{P}\)-linear map \(\varphi:\mathbf{G}\to\mathbf{M}\) sending the basis elements of a free OI-module \(\mathbf{G}\) to the generators of \(\mathbf{M}\) (see [5, Proposition 3.19]). The oiSyz method in our package implements a construction given in [4] for computing the kernel of \(\varphi\); see Section 3.2.
Finally, it was shown in [4] how to iterate the syzygy construction to compute free resolutions \(\mathbf{F}^{\bullet}\to\mathbf{M}\to 0\) out to desired homological degree. If \(\mathbf{M}\) is graded, then \(\mathbf{F}^{\bullet}\) can be pruned in order to form a _graded minimal free resolution_ of \(\mathbf{M}\) (see [1] and [4, Theorem 5.4]). This is implemented in our package with the oiRes method; see Section 3.3.
## 3. The Package
The main methods of our package are oiGB for computing Grobner bases, oiSyz for computing syzygies and oiRes for computing resolutions. This section illustrates how to use these methods. For more information about other methods, as well as optional arguments such as grading shifts, we refer the reader to the package documentation.
### Grobner bases
Let \(\mathbf{F}\) be a finitely generated free \(\mathbf{P}\)-module and let \(\mathbf{M}\) be a submodule generated by \(B=\{b_{1},\ldots,b_{s}\}\). Fix a monomial order \(<\) on \(\mathbf{F}\). Then by [4, Algorithm 3.17], a finite Grobner basis \(G\) (with respect to \(<\)) for \(\mathbf{M}\) containing \(B\) can be computed in finite time. Using our package, one computes such Grobner bases with the oiGB method.
**Example 3.1**.: Let \(\mathbf{F}=\mathbf{F}^{\mathrm{OI},1}\oplus\mathbf{F}^{\mathrm{OI},1}\oplus \mathbf{F}^{\mathrm{OI},2}\) have basis \(\{e_{\mathrm{id}_{[1]},1},e_{\mathrm{id}_{[1]},2},e_{\mathrm{id}_{[2]},3}\}\). Let \(\mathbf{P}=\mathbf{P}^{2}\) so that \(\mathbf{P}\) has two rows of variables, and let
\[B=\{x_{1,1}e_{\mathrm{id}_{[1]},1}+x_{2,1}e_{\mathrm{id}_{[1]},2},\ x_{1,2}x_{1,1}e_{ \pi,2}+x_{2,2}x_{2,1}e_{\mathrm{id}_{[2]},3}\}\]
where \(\pi:[1]\to[2]\) is given by \(1\mapsto 2\). Thus, the first element of \(B\) has width \(1\) and the second element has width \(2\). Fix the _lex order_ on \(\mathbf{F}\) as described in [4, Example 3.2]. We compute a finite Grobner basis for \(\langle B\rangle_{\mathbf{F}}\) in _Macaulay2_ as follows. First, we define our polynomial OI-algebra \(\mathbf{P}\) with makePolynomialOIAlgebra. The user must specify the number of variable rows, the variable symbol, and the ground field \(K\):
i1 : needsPackage "OIGroebner Bases"; i2 : P = makePolynomialOIAlgebra(2, x, QQ);
Now we define our free OI-module \(\mathbf{F}\) with makeFreeOIModule. The user specifies the basis symbol, a list of basis element widths, and the underlying polynomial OI-algebra:
i3 : F = makeFreeOIModule(e, {1,1,2}, P); Since we want to define our elements of \(B\), we need to call the installGeneratorsInWidth method in order to work with our basis symbol e. This method takes a free OI-module and a width as input:
i4 : installGeneratorsInWidth(F, 1); i5 : installGeneratorsInWidth(F, 2); We're ready to define the elements of \(B\):
i6 : use F_1; b1 = x_(1,1)*e_(1,{1},1)+x_(2,1)*e_(1,{1},2); i8 : use F_2; b2 = x_(1,2)*x_(1,1)*e_(2,{2},2)+x_(2,2)*x_(2,1)*e_(2,{1,2},3); Here, for example, e_(2,{2},2) is the element \(e_{\pi,2}\) as defined above. In general, an element \(e_{\sigma,i}\in\mathbf{F}\) translates to an object in our package as follows. Suppose \(\sigma\in\mathrm{Hom}(m,n)\), and write \(\mathrm{im}(\sigma)=\{a_{1},\ldots,a_{m}\}\) where each \(a_{j}\in[n]\). Then \(e_{\sigma,i}\) becomes e_(n,{a_1,\(\ldots\),a_m},i) in our package. Now let's compute a Grobner basis with the method oiGB, which takes a list of elements as input:
i10 : oiGB {b1, b2}
o10 = {x_(1,1)*e_(1,{1},1)+x_(2,1)*e_(1,{1},2),
       x_(1,2)*x_(1,1)*e_(2,{2},2)+x_(2,2)*x_(2,1)*e_(2,{1,2},3),
       x_(2,3)*x_(2,2)*x_(1,1)*e_(3,{2,3},3)-x_(2,3)*x_(2,1)*x_(1,2)*e_(3,{1,3},3)}
This tells us that a Grobner basis for \(\mathbf{M}=\langle B\rangle_{\mathbf{F}}\) with respect to the lex order is given by the following elements:
\[b_{1}=x_{1,1}e_{\mathrm{id}_{[1]},1}+x_{2,1}e_{\mathrm{id}_{[1]},2}\in \mathbf{F}_{1}\] \[b_{2}=x_{1,2}x_{1,1}e_{\pi,2}+x_{2,2}x_{2,1}e_{\mathrm{id}_{[2]},3 }\in\mathbf{F}_{2}\] \[b_{3}=x_{2,3}x_{2,2}x_{1,1}e_{\sigma_{1},3}-x_{2,3}x_{2,1}x_{1,2 }e_{\sigma_{2},3}\in\mathbf{F}_{3}\]
where \(\sigma_{1}:[2]\rightarrow[3]\) is given by \(1\mapsto 2\) and \(2\mapsto 3\) and \(\sigma_{2}:[2]\rightarrow[3]\) is given by \(1\mapsto 1\) and \(2\mapsto 3\). This agrees with [4, Example 3.20]. It follows that, given any \(n\geq 3\), a finite Grobner basis for \(\mathbf{M}_{n}\) is given by the images of \(b_{1}\), \(b_{2}\) and \(b_{3}\) under any morphism \([1]\rightarrow[n]\), \([2]\rightarrow[n]\) and \([3]\rightarrow[n]\) respectively.
**Remark 3.2**.: Passing the optional argument Verbose => true into the methods oiGB, oiSyz and oiRes will print useful debug information, and also provides a way to track the progress of the computation.
### Syzygies
As in the previous section, let \(\mathbf{F}\) be a finitely generated free \(\mathbf{P}\)-module. Let \(B=\{b_{1},\ldots,b_{s}\}\subset\mathbf{F}\) and let \(w_{i}\) denote the width of \(b_{i}\). Define the free OI-module \(\mathbf{G}=\bigoplus_{i=1}^{s}\mathbf{F}^{\mathrm{OI},w_{i}}\) with basis \(\{d_{\mathrm{id}_{[w_{i}]},i}\}\). Let \(\varphi:\mathbf{G}\rightarrow\langle B\rangle_{\mathbf{F}}\) be the canonical surjective map defined by \(d_{\mathrm{id}_{[w_{i}]},i}\mapsto b_{i}\). The _syzygy module_ of \(\langle B\rangle_{\mathbf{F}}\) is defined to be the kernel of \(\varphi\). Suppose \(B\) is a Grobner basis for \(\langle B\rangle_{\mathbf{F}}\) with respect to some monomial order \(<\) on \(\mathbf{F}\). Using the construction described in [4, Theorem 4.6], the method oiSyz computes a finite Grobner basis for \(\ker(\varphi)\) with respect to a suitable monomial order on \(\mathbf{G}\) induced by \(<\).
**Example 3.3**.: Let \(\mathbf{P}=\mathbf{P}^{2}\) and let \(\mathbf{F}=\mathbf{F}^{\mathrm{OI},1}\oplus\mathbf{F}^{\mathrm{OI},1}\) have basis \(\{e_{\mathrm{id}_{[1]},1},e_{\mathrm{id}_{[1]},2}\}.\) Define
\[f=x_{1,2}x_{1,1}e_{\pi,1}+x_{2,2}x_{2,1}e_{\rho,2}\in\mathbf{F}_{2}\]
where \(\pi:[1]\rightarrow[2]\) is given by \(1\mapsto 2\) and \(\rho:[1]\rightarrow[2]\) is given by \(1\mapsto 1.\) We will compute a Grobner basis \(G\) for \(\langle f\rangle_{\mathbf{F}},\) and then compute the syzygy module of \(G.\) Starting a new _Macaulay2_ session, we run the following:
i1 : needsPackage "OIGroebnerBases"; i2 : P = makePolynomialOIAlebra(2, x, QQ); i3 : F = makeFreeOIModule(e, {1,1}, P); i4 : installGeneratorsInWidth(F, 2); i5 : use F_2; f = x_(1,2)*x_(1,1)*e_(2,{2},1)+x_(2,2)*x_(2,1)*e_(2,{1},2); i7 : G = oiGB {f} o7 = {x x e + x x e, 1,2 1,1 2,{2},1 2,2 2,1 2,{1},2 ---------------------------------------------------------------- x x e - x x x e } 2,3 2,2 1,1 3,{2},2 2,3 2,1 1,2 3,{1},2 Hence, \(\langle f\rangle_{\mathbf{F}}\) has a Grobner basis (with respect to the lex order)
\[G=\{x_{1,2}x_{1,1}e_{\pi,1}+x_{2,2}x_{2,1}e_{\rho,2},\;x_{2,3}x_{2,2}x_{1,1}e_ {\sigma_{1},2}-x_{2,3}x_{2,1}x_{1,2}e_{\sigma_{2},2}\}\]
where \(\sigma_{1}:[1]\rightarrow[3]\) is given by \(1\mapsto 2\) and \(\sigma_{2}:[1]\rightarrow[3]\) is given by \(1\mapsto 1.\) Define the free OI-module \(\mathbf{G}=\mathbf{F}^{\mathrm{OI},2}(-2)\oplus\mathbf{F}^{\mathrm{OI},3}(-3)\) with basis \(\{d_{\mathrm{id}_{[2]},1},d_{\mathrm{id}_{[3]},2}\}.\) The package assigns these degree shifts automatically. Putting \(g=x_{2,3}x_{2,2}x_{1,1}e_{\sigma_{1},2}-x_{2,3}x_{2,1}x_{1,2}e_{\sigma_{2},2} \in\mathbf{G}_{3}\) so that \(G=\{f,g\},\) we define the map \(\varphi:\mathbf{G}\rightarrow\langle G\rangle_{\mathbf{F}}\) via \(d_{\mathrm{id}_{[2]},1}\mapsto f\) and \(d_{\mathrm{id}_{[3]},2}\mapsto g.\) We can now compute a Grobner basis \(D\) for \(\ker(\varphi)\) (with respect to the Schreyer order on \(\mathbf{G}\) induced by the monomial order on \(\mathbf{F};\) see [4, Definition 4.2]) using the method oiSyz. The user inputs the Grobner basis \(G\) and the basis symbol \(d\):
i8 : D = oiSyz(G, d)
o8 = {x_(1,2)*d_(3,{1,3},1)-x_(1,1)*d_(3,{2,3},1)+d_(3,{1,2,3},2),
      x_(2,4)*d_(4,{1,2,3},2)-x_(2,3)*d_(4,{1,2,4},2),
      x_(1,2)*d_(4,{1,3,4},2)-x_(1,1)*d_(4,{2,3,4},2)-x_(1,3)*d_(4,{1,2,4},2)}
This says that \(\ker(\varphi)\) has a Grobner basis \(D\) given by the following elements of \(\mathbf{G}\):
\[x_{1,2}d_{\pi_{1},1}-x_{1,1}d_{\pi_{2},1}+d_{\mathrm{id}_{[3]},2} \in\mathbf{G}_{3}\] \[x_{2,4}d_{\pi_{3},2}-x_{2,3}d_{\pi_{4},2} \in\mathbf{G}_{4}\] \[x_{1,2}d_{\pi_{5},2}-x_{1,1}d_{\pi_{6},2}-x_{1,3}d_{\pi_{4},2} \in\mathbf{G}_{4}\]
where the \(\pi_{i}\) for \(1\leq i\leq 6\) are given as follows:
\[\pi_{1}:[2]\to[3]\quad\text{via}\quad 1\mapsto 1,2\mapsto 3\] \[\pi_{2}:[2]\to[3]\quad\text{via}\quad 1\mapsto 2,2\mapsto 3\] \[\pi_{3}:[3]\to[4]\quad\text{via}\quad 1\mapsto 1,2\mapsto 2,3 \mapsto 3\] \[\pi_{4}:[3]\to[4]\quad\text{via}\quad 1\mapsto 1,2\mapsto 2,3 \mapsto 4\] \[\pi_{5}:[3]\to[4]\quad\text{via}\quad 1\mapsto 1,2\mapsto 3,3 \mapsto 4\] \[\pi_{6}:[3]\to[4]\quad\text{via}\quad 1\mapsto 2,2\mapsto 3,3 \mapsto 4.\]
### Resolutions
The syzygy construction described in [4, Theorem 4.6] can be iterated to build resolutions of submodules of free \(\operatorname{OI}\)-modules. Let \(\mathbf{F}\) be a free \(\mathbf{P}\)-module of finite rank, and let \(\mathbf{M}\subseteq\mathbf{F}\) be a submodule generated by a finite set \(B\). Then a free resolution of \(\mathbf{M}\) can be computed out to desired homological degree using [4, Procedure 5.1]. Moreover, if \(\mathbf{M}\) is homogeneous, then a graded minimal free resolution of \(\mathbf{M}\) can be computed out to arbitrary homological degree.
**Example 3.4**.: Let \(\mathbf{P}=\mathbf{P}^{2}\) and let \(\mathbf{F}=\mathbf{F}^{\operatorname{OI},1}\oplus\mathbf{F}^{\operatorname{OI},1}\) have basis \(\{e_{\operatorname{id}_{[1]},1},e_{\operatorname{id}_{[1]},2}\}\), so \(\mathbf{F}\) has rank \(2\). Define
\[f=x_{1,2}x_{1,1}e_{\pi,1}+x_{2,2}x_{2,1}e_{\rho,2}\in\mathbf{F}_{3}\]
where \(\pi:[1]\to[3]\) is given by \(1\mapsto 2\) and \(\rho:[1]\to[3]\) is given by \(1\mapsto 1\). Note the similarity to Example 3.3. Since \(f\) is homogeneous, \(\langle f\rangle_{\mathbf{F}}\) is a graded submodule, and we will compute the beginning of a graded minimal free resolution using oiRes. The user specifies a list of elements (which generate the module to be resolved) and a homological degree. In a new _Macaulay2_ session, we run the following:
i1 : needsPackage "\(\operatorname{OI}\)Groebner Bases"; i2 : P = makePolynomial\(\operatorname{OI}\)Algebra(2, x, QQ); i3 : F = makeFree\(\operatorname{OI}\)Module(e, {1, 1}, P); i4 : installBasisElements(F, 3); i5 : use F_3; f = x_(1,2)*x_(1,1)*e_(3,{2},1)+x_(2,2)*x_(2,1)*e_(3,{1},2); i7 : ranks oiRes({f}, 5) o7 = 0: rank 1 1: rank 2 2: rank 4 3: rank 7 4: rank 11 5: rank 22
Note: if one computes out to homological degree \(n\), then only the first \(n-1\) ranks are guaranteed to be minimal. Thus, we have the beginning of a minimal free resolution for \(\mathbf{M}=\langle f\rangle_{\mathbf{F}}\):
\[\cdots\to\mathbf{F}^{4}\to\mathbf{F}^{3}\to\mathbf{F}^{2}\to\mathbf{F}^{1}\to \mathbf{F}^{0}\to\mathbf{M}\to 0\]
where
\[\operatorname{rank}(\mathbf{F}^{0}) =1\] \[\operatorname{rank}(\mathbf{F}^{1}) =2\] \[\operatorname{rank}(\mathbf{F}^{2}) =4\] \[\operatorname{rank}(\mathbf{F}^{3}) =7\] \[\operatorname{rank}(\mathbf{F}^{4}) =11.\]
One can obtain more information about resolutions such as grading shifts, generators of the free modules, and differentials by using the describe method. We refer the reader to the package documentation. Such information can be used to restrict a resolution of \(\mathbf{M}\) to any width \(w\) to obtain a graded (but not necessarily minimal) free resolution of the \(\mathbf{P}_{w}\)-module \(\mathbf{M}_{w}\), as in [1, Section 3].
|
2305.00841
|
Complete reducibility for Lie subalgebras and semisimplification
|
Let $G$ be a connected reductive linear algebraic group over a field $k$.
Using ideas from geometric invariant theory, we study the notion of
$G$-complete reducibility over $k$ for a Lie subalgebra $\mathfrak h$ of the
Lie algebra $\mathfrak g = Lie(G)$ of $G$ and prove some results when
$\mathfrak h$ is solvable or $char(k)= 0$. We introduce the concept of a
$k$-semisimplification $\mathfrak h'$ of $\mathfrak h$; $\mathfrak h'$ is a Lie
subalgebra of $\mathfrak g$ associated to $\mathfrak h$ which is $G$-completely
reducible over $k$. This is the Lie algebra counterpart of the analogous notion
for subgroups studied earlier by the first, third and fourth authors. As in the
subgroup case, we show that $\mathfrak h'$ is unique up to $Ad(G(k))$-conjugacy
in $\mathfrak g$. Moreover, we prove that the two concepts are compatible: for
$H$ a closed subgroup of $G$ and $H'$ a $k$-semisimplification of $H$, the Lie
algebra $Lie(H')$ is a $k$-semisimplification of $Lie(H)$.
|
Michael Bate, Sören Böhm, Benjamin Martin, Gerhard Roehrle, Laura Voggesberger
|
2023-05-01T14:11:30Z
|
http://arxiv.org/abs/2305.00841v2
|
# Complete reducibility for Lie subalgebras and semisimplification
###### Abstract.
Let \(G\) be a connected reductive linear algebraic group over a field \(k\). Using ideas from geometric invariant theory, we study the notion of \(G\)-complete reducibility over \(k\) for a Lie subalgebra \(\mathfrak{h}\) of the Lie algebra \(\mathfrak{g}=\operatorname{Lie}(G)\) of \(G\) and prove some results when \(\mathfrak{h}\) is solvable or \(\operatorname{char}(k)=0\). We introduce the concept of a _\(k\)-semisimplification_\(\mathfrak{h}^{\prime}\) of \(\mathfrak{h}\); \(\mathfrak{h}^{\prime}\) is a Lie subalgebra of \(\mathfrak{g}\) associated to \(\mathfrak{h}\) which is \(G\)-completely reducible over \(k\). This is the Lie algebra counterpart of the analogous notion for subgroups studied earlier by the first, third and fourth authors. As in the subgroup case, we show that \(\mathfrak{h}^{\prime}\) is unique up to \(\operatorname{Ad}(G(k))\)-conjugacy in \(\mathfrak{g}\). Moreover, we prove that the two concepts are compatible: for \(H\) a closed subgroup of \(G\) and \(H^{\prime}\) a \(k\)-semisimplification of \(H\), the Lie algebra \(\operatorname{Lie}(H^{\prime})\) is a \(k\)-semisimplification of \(\operatorname{Lie}(H)\).
Key words and phrases: Semisimplification, \(G\)-complete reducibility, geometric invariant theory, rationality, cocharacter-closed orbits, degeneration of \(G\)-orbits
We also prove some results of independent interest about \(G\)-complete reducibility for subalgebras, including Propositions 4.3 and 7.10. We consider solvable subalgebras in Section 6. Theorem 7.3 gives a necessary condition for a subalgebra to be \(G\)-completely reducible when \(\operatorname{char}(k)=p>0\) is sufficiently large, and yields a characterisation of \(G\)-completely reducible subalgebras when \(\operatorname{char}(k)=0\). We discuss some related results of Richardson, who pioneered the application of geometric invariant theory to the study of \(G\)-complete reducibility.
The notion of \(G\)-complete reducibility for subalgebras of \(\mathfrak{g}\) was first defined by McNinch [29] for algebraically closed \(k\) and was developed further in [12, SS3] by the first, third and fourth authors; the non-algebraically closed case was first studied in [5, SS5]. The approach via geometric invariant theory stems from work of Richardson [34], who studied subgroups of \(G\) and subalgebras of \(\operatorname{Lie}(G)\) -- mainly for algebraically closed \(k\) in characteristic \(0\). Some of our arguments involve extending constructions from _op. cit._ to positive characteristic. We also use more recent techniques from [7] and [10] including some deep results from the theory of spherical buildings.
The theory of \(G\)-complete reducibility for Lie subalgebras closely parallels the theory of \(G\)-complete reducibility for subgroups [7], [10]. Some arguments are easier in the subalgebra case, as we need not worry about non-connected groups. There are, however, some extra problems for subalgebras, which can be traced back to the failure of certain normalisers to be smooth. It is to avoid these difficulties that we need assumptions on the characteristic of \(k\): see Section 2.3.
## 2. Preliminaries
### Basic notation
Let \(k\) be a field with \(\operatorname{char}(k)=p\geq 0\). Following [15], [13], and [6], we regard an affine variety over a field \(k\) as a variety \(X\) over the algebraic closure \(\overline{k}\) together with a choice of \(k\)-structure. We write \(X(k)\) for the set of \(k\)-points of \(X\) and \(X(\overline{k})\) (or just \(X\)) for the set of \(\overline{k}\)-points of \(X\). By a subvariety of \(X\) we mean a closed \(\overline{k}\)-subvariety of \(X\); a \(k\)-subvariety is a subvariety that is defined over \(k\).
Throughout, \(G\) denotes a connected reductive linear algebraic \(k\)-group. Algebraic groups and their subgroups are assumed to be smooth (although we consider certain non-smooth subgroup schemes in Section 2.3). A \(k\)-defined affine \(G\)-variety \(X\) is an affine variety over \(k\) equipped with a \(k\)-defined morphic action of \(G\). Two important examples of affine \(G\)-varieties in this paper are the group \(G\) itself, with \(G\) acting by inner automorphisms, and the Lie algebra \(\mathfrak{g}=\operatorname{Lie}(G)\) of \(G\), with \(G\) acting by the adjoint action (that \(\mathfrak{g}\) admits a \(k\)-structure is [37, (12.2.3), (4.4.8)]). Recall that these two actions are closely related: if \(g\in G\), then we have \(\operatorname{Inn}(g):G\to G,x\mapsto gxg^{-1}\), and then \(\operatorname{Ad}(g):=d(\operatorname{Inn}(g)):\mathfrak{g}\to\mathfrak{g}\) is the differential (see below). To simplify notation we denote both of these actions by a dot: that is, if \(g\in G\), then we let \(g\cdot x=gxg^{-1}\) for each \(x\in G\), and \(g\cdot x=\operatorname{Ad}(g)(x)\) for each \(x\in\mathfrak{g}\). We define \(\operatorname{ad}\) to be the usual adjoint action of \(\mathfrak{g}\) on \(\mathfrak{g}\): so \(\operatorname{ad}(x)(y)=[x,y]\) for \(x,y\in\mathfrak{g}\). If necessary we write \(\operatorname{Ad}_{G}\) instead of \(\operatorname{Ad}\) and \(\operatorname{ad}_{G}\) instead of \(\operatorname{ad}\). Given any \(m\in\mathbb{N}\), we may extend these actions to actions of \(G\) on \(X=G^{m}\) or \(\mathfrak{g}^{m}\), the \(m\)-fold Cartesian product of \(G\) or \(\mathfrak{g}\), by setting
\[g\cdot(x_{1},\ldots,x_{m}):=(g\cdot x_{1},\ldots,g\cdot x_{m})\]
for each \(g\in G\) and \((x_{1},\ldots,x_{m})\in X\). We refer to this action informally in both cases as the action _by simultaneous conjugation_.
When \(\operatorname{char}(k)=p>0\), the Lie algebra \(\mathfrak{g}\) is _restricted_ with a \(p\)_-operation_\(\mathfrak{g}\to\mathfrak{g},x\mapsto x^{[p]}\), [37, (4.4.3)]. By a subgroup of \(G\) we mean a closed \(\overline{k}\)-subgroup and by a \(k\)-subgroup we mean a subgroup that is defined over \(k\). For a subgroup \(H\) of \(G\) we write \(H^{0}\) for the identity component of \(H\) and \(\operatorname{Lie}(H)\) for its Lie algebra (which is a subalgebra of \(\mathfrak{g}\)). If \(H\) is \(k\)-defined, then \(\operatorname{Lie}(H)\) also admits a \(k\)-structure, [37, (12.2.3), (4.4.8)]. Subalgebras of \(\mathfrak{g}\) of the form \(\operatorname{Lie}(H)\) as above are called _algebraic_.
If \(f\colon G_{1}\to G_{2}\) is a homomorphism of algebraic groups then we denote the induced map from \(\operatorname{Lie}(G_{1})\) to \(\operatorname{Lie}(G_{2})\) by \(df\).
### Cocharacters and parabolic subgroups
We recall some basic notation and facts concerning cocharacters and parabolic subgroups in connected reductive groups \(G\) from [13], [34] and [37]. Recall that a cocharacter of \(G\) is a homomorphism of algebraic groups \(\lambda:\mathbb{G}_{m}\to G\), where \(\mathbb{G}_{m}\) is the multiplicative group. We define \(Y_{k}(G)\) to be the set of \(k\)-defined cocharacters of \(G\) and \(Y(G):=Y_{\overline{k}}(G)\) to be the set of all cocharacters of \(G\). Given \(\lambda\in Y(G)\), we define
\[P_{\lambda}=\{g\in G\,|\,\lim_{a\to 0}\lambda(a)g\lambda(a)^{-1}\text{ exists}\}\]
and \(L_{\lambda}=C_{G}(\operatorname{Im}(\lambda))\subseteq P_{\lambda}\) (for the definition of a limit, see [37, SS3.2.13]). We have \(P_{\lambda}=L_{\lambda}=G\) if and only if \(\operatorname{Im}(\lambda)\) is contained in the centre of \(G\). Any parabolic subgroup (resp., any Levi subgroup of a parabolic subgroup of \(G\)) is of the form \(P_{\lambda}\) (resp., \(L_{\lambda}\)) [37, Prop. 8.4.5]. In fact, this also holds for \(k\)-defined parabolic and Levi subgroups of \(G\), by the following, which is [37, Lem. 15.1.2(ii)]:
**Lemma 2.1**.: _Every pair \((P,L)\) consisting of a parabolic \(k\)-subgroup \(P\) of \(G\) and a Levi \(k\)-subgroup \(L\) of \(P\) is of the form \((P,L)=(P_{\lambda},L_{\lambda})\) for some \(\lambda\in Y_{k}(G)\), and vice versa._
For ease of reference, we record without proof some basic facts about these subgroups. The unipotent radical of a parabolic subgroup \(P\) of \(G\) is denoted by \(R_{u}(P)\). The following is [34, 2.3].
**Lemma 2.2**.: _If \(P\) is a \(k\)-defined parabolic subgroup then \(R_{u}(P)\) is \(k\)-defined._
The following is [13, Lem. 2.5 (i)+(iii)].
**Lemma 2.3**.: _Let \(P\) be a parabolic subgroup of \(G\) and \(L\) a Levi subgroup of \(P\)._
* _We have_ \(P\cong L\ltimes R_{u}(P)\)_, and this is a_ \(k\)_-isomorphism if_ \(P\) _and_ \(L\) _are_ \(k\)_-defined._
* _Let_ \(T\) _be a maximal torus of_ \(P\)_. Then there is a unique Levi subgroup_ \(L\) _of_ \(P\) _such that_ \(T\subseteq L\)_. If_ \(P\) _and_ \(T\) _are_ \(k\)_-defined then_ \(L\) _is_ \(k\)_-defined._
* _Any two Levi_ \(k\)_-subgroups of a parabolic_ \(k\)_-subgroup_ \(P\) _are_ \(R_{u}(P)(k)\)_-conjugate._
We denote the canonical projection from \(P\) to \(L\) by \(c_{L}\); this is \(k\)-defined if \(P\) and \(L\) are. Given any cocharacter \(\lambda\) of \(G\) such that \(P=P_{\lambda}\) and \(L=L_{\lambda}\) then we often write \(c_{\lambda}\) instead of \(c_{L}\). We have \(c_{\lambda}(g)=\lim_{a\to 0}\lambda(a)g\lambda(a)^{-1}\) for \(g\in P_{\lambda}\); the kernel of \(c_{\lambda}\) is the unipotent radical \(R_{u}(P_{\lambda})\) and the set of fixed points of \(c_{\lambda}\) is \(L_{\lambda}\).
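For instance, let \(G=\operatorname{GL}_{2}\) and \(\lambda(a)=\left(\begin{array}{cc}a&0\\ 0&1\end{array}\right)\). Then
\[\lambda(a)\left(\begin{array}{cc}x&y\\ z&w\end{array}\right)\lambda(a)^{-1}=\left(\begin{array}{cc}x&ay\\ a^{-1}z&w\end{array}\right),\]
so the limit as \(a\to 0\) exists precisely when \(z=0\). Hence \(P_{\lambda}\) is the Borel subgroup of upper triangular matrices, \(L_{\lambda}\) is the diagonal maximal torus, \(R_{u}(P_{\lambda})\) consists of the upper unitriangular matrices, and \(c_{\lambda}\) sends an upper triangular matrix to its diagonal part. The Lie algebra picture described next works out analogously, with \(\mathfrak{p}_{\lambda}\) the upper triangular and \(\mathfrak{l}_{\lambda}\) the diagonal matrices in \(\mathfrak{gl}_{2}\).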
All of this transfers over to the Lie algebra \(\mathfrak{g}\) through the adjoint action: since \(G\) acts on \(\mathfrak{g}\) by the adjoint action, so does any subgroup of \(G\) and, in particular, the image of any cocharacter of \(G\). For \(\lambda\in Y(G)\), we define and use throughout \(\mathfrak{p}_{\lambda}:=\operatorname{Lie}(P_{\lambda})\) and \(\mathfrak{l}_{\lambda}:=\operatorname{Lie}(L_{\lambda})\). We require some standard facts concerning Lie algebras of parabolic and Levi subgroups of \(G\) (cf. [34, SS2.1]).
**Lemma 2.4**.: _Let \(x\in\mathfrak{g}\). Then with the notation from above we have_
1. \(x\in\mathfrak{p}_{\lambda}\) _if and only if_ \(\lim\limits_{a\to 0}\lambda(a)\cdot x\) _exists;_
2. \(x\in\mathfrak{l}_{\lambda}\) _if and only if_ \(\lim\limits_{a\to 0}\lambda(a)\cdot x=x\) _if and only if the image of_ \(\lambda\) _centralizes_ \(x\)_;_
3. \(x\in\operatorname{Lie}(R_{u}(P_{\lambda}))\) _if and only if_ \(\lim\limits_{a\to 0}\lambda(a)\cdot x\) _exists and equals_ \(0\)_._
The map \(c_{\lambda}:=c_{\mathfrak{l}_{\lambda}}:\mathfrak{p}_{\lambda}\to\mathfrak{l}_ {\lambda}\) given by \(x\mapsto\lim\limits_{a\to 0}\lambda(a)\cdot x\), where the action is given by adjoint action, coincides with the usual projection of \(\mathfrak{p}_{\lambda}\) onto \(\mathfrak{l}_{\lambda}\). The kernel of \(c_{\lambda}\) is \(\operatorname{Lie}(R_{u}(P_{\lambda}))\), and the set of fixed points of \(c_{\lambda}\) is \(\mathfrak{l}_{\lambda}\).
Let \(m\in\mathbb{N}\) and recall that \(G\) acts on \(\mathfrak{g}^{m}\) by simultaneous conjugation. Given \(\lambda\in Y(G)\), we have a map \(\mathfrak{p}_{\lambda}^{m}\to\mathfrak{l}_{\lambda}^{m}\) given by \(\mathbf{x}\mapsto\lim_{a\to 0}\lambda(a)\cdot\mathbf{x}\); we abuse notation slightly and also call this map \(c_{\lambda}\). For any \(\mathbf{x}\in\mathfrak{p}_{\lambda}^{m}\), there exists a Levi \(k\)-subgroup \(L\) of \(P_{\lambda}\) with \(\mathbf{x}\in\mathfrak{l}^{m}\) if and only if \(c_{\lambda}(\mathbf{x})=u\cdot\mathbf{x}\) for some \(u\in R_{u}(P_{\lambda})(k)\), by [6, Prop. 2.11].
### Smoothness of centralisers and normalisers
We are concerned with linear algebraic groups \(G\) over \(k\): these are affine group schemes of finite type over \(k\) (assumed to be smooth). Note that smoothness is automatic if \(\operatorname{char}(k)=0\), by a result of Cartier [19, II, SS6, 1.1]. We do need to consider certain non-smooth subgroup schemes of \(G\), as the lack of smoothness has implications for \(G\)-complete reducibility. If \(\mathfrak{h}\) is a subalgebra of \(\mathfrak{g}\) then we define \(N_{G}(\mathfrak{h})\) to be the _scheme-theoretic_ normaliser of \(\mathfrak{h}\) in \(G\), and \(C_{G}(\mathfrak{h})\) to be the _scheme-theoretic_ centraliser of \(\mathfrak{h}\) in \(G\). Define
\[\mathfrak{n}_{\mathfrak{g}}(\mathfrak{h}):=\{x\in\mathfrak{g}\mid[x, \mathfrak{h}]\subseteq\mathfrak{h}\}\]
and
\[\mathfrak{c}_{\mathfrak{g}}(\mathfrak{h}):=\{x\in\mathfrak{g}\mid[x,y]=0 \text{ for all }y\in\mathfrak{h}\}.\]
It can be shown that \(N_{G}(\mathfrak{h})\) is smooth if and only if \(\mathfrak{n}_{\mathfrak{g}}(\mathfrak{h})=\operatorname{Lie}(N_{G}(\mathfrak{ h}))\) and \(C_{G}(\mathfrak{h})\) is smooth if and only if \(\mathfrak{c}_{\mathfrak{g}}(\mathfrak{h})=\operatorname{Lie}(C_{G}( \mathfrak{h}))\). For further discussion, see [20] and [21]. For exact conditions for \(C_{G}(\mathfrak{h})\) to be smooth, see [11, Thm. 1.2] and [20]; for explicit lower bounds on \(\operatorname{char}(k)\) for \(N_{G}(\mathfrak{h})\) to be smooth, see [21, Thm. A].
**Definition 2.5**.: Define \(\operatorname{char}(k)=p>0\) to be _fabulous_ for \(G\) if the following hold:
(a) Centralisers and normalisers of subgroups of \(G\) and subalgebras of \(\mathfrak{g}\) are smooth.
(b) If \(\mathfrak{h}\) is a subalgebra of \(\mathfrak{g}\) consisting of nilpotent elements then \(\mathfrak{h}\subseteq\operatorname{Lie}(B)\) for some Borel subgroup \(B\) of \(G\).
_Remarks 2.6_.: (i). If \(x\in\mathfrak{g}\) is nilpotent and \(B\) is a Borel subgroup of \(G\) then \(x\in\operatorname{Lie}(B)\) if and only if \(x\in\operatorname{Lie}(R_{u}(B))\). So we can reformulate condition (b) as follows: if \(\mathfrak{h}\) is a subalgebra of \(\mathfrak{g}\) consisting of nilpotent elements then \(\mathfrak{h}\subseteq\operatorname{Lie}(U)\) for some maximal unipotent subgroup \(U\) of \(G\).
(ii). If \(\operatorname{char}(k)=0\), then both conditions above are satisfied, see [19, II, SS6, 1.1], [17, Ch. VIII, SS10, Cor. 2].
(iii). Suppose \(k=\overline{k}\). Owing to [20, Thm. 1.1], centralisers of subgroups of \(G\) and centralisers of subalgebras of \(\mathfrak{g}\) are smooth if and only if \(\operatorname{char}(k)\) is \(0\) or a "pretty good prime" \(p\) for \(G\), ([20, Def. 2.11]). The latter implies that \(p\) is not a torsion prime for the root system of \(G\). It then follows from [27, Thm. 2.2 and Rems.(a)] that condition (b) holds. Hence condition (a) implies condition (b) in this instance. See also [21, Thm. 10.1].
(iv). Suppose \(k=\overline{k}\). If \(\mathfrak{h}\) consists of nilpotent elements and \(\dim(\mathfrak{h})=1\) then \(\mathfrak{h}\subseteq\operatorname{Lie}(B)\) for some Borel subgroup \(B\) of \(G\): this follows from [15, IV.14.25 Prop.]. This can fail if \(k\neq\overline{k}\). For example, let \(G=\operatorname{PGL}_{2}(k)\). For \(A=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in\operatorname{GL}_{2}(k)\), we denote the image of \(A\) in \(G\) by \(\overline{\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)}\), and likewise for elements of \(\mathfrak{gl}_{2}(k)\). If \(\operatorname{char}(k)=2\) and we choose \(a\in k^{\frac{1}{2}}\) such that \(a\not\in k\) then the subspace of \(\operatorname{Lie}(\operatorname{PGL}_{2})\) spanned by \(\overline{\left(\begin{array}{cc}0&1\\ a^{2}&0\end{array}\right)}\) consists of nilpotent elements but is not contained in \(\operatorname{Lie}(B)\) for any Borel subgroup \(B\) of \(\operatorname{PGL}_{2}\) (compare [13, Rem. 5.10]). This is the Lie algebra counterpart of the phenomenon of "non-plongeabilite" for unipotent elements of the group, cf. [39].
(v). If \(p\) is fabulous for \(G\), then \(Z(G)=C_{G}(G)\) is smooth, so \(\mathfrak{z}(\mathfrak{g})=\operatorname{Lie}(Z(G))\); in particular, \(\mathfrak{z}(\mathfrak{g})=0\) if \(G\) is semisimple.
**Example 2.7**.: Suppose \(k\) is algebraically closed of characteristic \(p=2\) and let \(G=\operatorname{PGL}_{2}(k)\). Let \(\mathfrak{h}\) be the abelian subalgebra of \(\mathfrak{g}\) spanned by \(\overline{\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right)}\) and \(\overline{\left(\begin{array}{cc}0&0\\ 1&0\end{array}\right)}\). It is easy to check that \(\mathfrak{h}\) is not contained in \(\operatorname{Lie}(B)\) for any Borel subgroup \(B\) of \(G\). Similar examples are produced in [27] for any \(G\) such that \(p\) is a torsion prime for \(G\).
Now let \(\mathfrak{m}\) be the subalgebra of \(\mathfrak{g}\) spanned by \(\overline{\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right)}\). Then \(C_{G}(\mathfrak{m})(k)\) is (the image of) the group of upper unitriangular matrices and \(N_{G}(\mathfrak{m})(k)\) is (the image of) the group of upper triangular matrices, but \(\mathfrak{c}_{\mathfrak{g}}(\mathfrak{m})=\mathfrak{h}\) and \(\mathfrak{n}_{\mathfrak{g}}(\mathfrak{m})=\mathfrak{g}\). Hence neither \(C_{G}(\mathfrak{m})\) nor \(N_{G}(\mathfrak{m})\) is smooth. A similar calculation shows that neither \(C_{G}(\mathfrak{h})\) nor \(N_{G}(\mathfrak{h})\) is smooth.
## 3. Cocharacter-closed orbits and \(G\)-complete reducibility
We introduce the notion of complete reducibility for subalgebras of \(\mathfrak{g}\) and explain the link with geometric invariant theory (GIT). At the end of the section, we extend to arbitrary \(k\) the main results from [29]. As in [10], our main tool from GIT is the notion of cocharacter-closure, introduced in [13] and [6].
**Definition 3.1**.: Let \(X\) be a \(k\)-defined affine \(G\)-variety and let \(x\in X\) (we do not require \(x\) to be a \(k\)-point). We say that the orbit \(G(k)\cdot x\) is _cocharacter-closed over \(k\)_ if for all \(\lambda\in Y_{k}(G)\) such that \(x^{\prime}:=\lim_{a\to 0}\lambda(a)\cdot x\) exists, \(x^{\prime}\) belongs to \(G(k)\cdot x\).
If \(k=\overline{k}\) then it follows from the Hilbert-Mumford Theorem [25, Thm. 1.4] that \(G(k)\cdot x\) is cocharacter-closed over \(k\) if and only if \(G(k)\cdot x\) is closed.
The following is [6, Thm. 1.3].
**Theorem 3.2** (Rational Hilbert-Mumford Theorem).: _Let \(G\), \(X\), \(x\) be as above. Then there is a unique \(G(k)\)-orbit \(\mathcal{O}\) such that (a) \(\mathcal{O}\) is cocharacter-closed over \(k\), and (b) there exists \(\lambda\in Y_{k}(G)\) such that \(\lim_{a\to 0}\lambda(a)\cdot x\) belongs to \(\mathcal{O}\)._
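To illustrate these notions in the simplest case, suppose for the moment that \(G=\operatorname{SL}_{2}\) acts on \(X=\mathfrak{g}=\mathfrak{sl}_{2}\) by the adjoint action, let \(x=\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right)\) and let \(\lambda\in Y_{k}(G)\) be given by \(\lambda(a)=\left(\begin{array}{cc}a&0\\ 0&a^{-1}\end{array}\right)\). Then
\[\lambda(a)\cdot x=\operatorname{Ad}(\lambda(a))(x)=a^{2}x\longrightarrow 0\quad\text{as}\ a\to 0,\]
and \(0\notin G(k)\cdot x\), so the orbit of a nonzero nilpotent element is not cocharacter-closed over \(k\); the unique orbit \(\mathcal{O}\) of Theorem 3.2 is \(\{0\}\) in this case.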
Next recall the notion of \(G\)-complete reducibility for subgroups of \(G\).
**Definition 3.3**.: Let \(H\) be a subgroup of \(G\). We say that \(H\) is _\(G\)-completely reducible over \(k\)_ (\(G\)-cr over \(k\)) if for any parabolic \(k\)-subgroup \(P\) of \(G\) such that \(P\) contains \(H\), there is a
Levi \(k\)-subgroup \(L\) of \(P\) such that \(L\) contains \(H\). We say that \(H\) is \(G\)_-irreducible over \(k\)_ (\(G\)-ir over \(k\)) if \(H\) is not contained in any proper R-parabolic \(k\)-subgroup of \(G\) at all.
We say that \(H\) is \(G\)_-completely reducible_ (\(G\)-cr) if \(H\) is \(G\)-completely reducible over \(\overline{k}\).
For more on \(G\)-complete reducibility for subgroups of \(G\), see [35], [36], [7]; our main focus in this paper is the analogue for Lie algebras. We first recall the definition of \(G\)-complete reducibility for Lie algebras and then also the link between this concept and GIT using generating tuples, due to Richardson. Most of what follows was originally written down in [5, §5].
**Definition 3.4** ([5, Def. 5.3]).: A Lie subalgebra \(\mathfrak{h}\) of \(\mathfrak{g}\) is _\(G\)-completely reducible over \(k\)_ (\(G\)-cr over \(k\)) if for any parabolic \(k\)-subgroup \(P\) of \(G\) such that \(\mathfrak{h}\subseteq\operatorname{Lie}(P)\), there is a Levi \(k\)-subgroup \(L\) of \(P\) such that \(\mathfrak{h}\subseteq\operatorname{Lie}(L)\). We say that \(\mathfrak{h}\) is _\(G\)-irreducible over \(k\)_ (\(G\)-ir over \(k\)) if \(\mathfrak{h}\) is not contained in \(\operatorname{Lie}(P)\) for any proper parabolic \(k\)-subgroup \(P\) of \(G\). We say that \(\mathfrak{h}\) is _\(G\)-indecomposable over \(k\)_ (\(G\)-ind over \(k\)) if \(\mathfrak{h}\) is not contained in \(\operatorname{Lie}(L)\) for any proper Levi \(k\)-subgroup \(L\) of \(G\).
As in the subgroup case, we say that \(\mathfrak{h}\) is _\(G\)-completely reducible_ (resp., _\(G\)-irreducible_, _\(G\)-indecomposable_) if it is \(G\)-completely reducible over \(\overline{k}\) (resp., \(G\)-irreducible over \(\overline{k}\), \(G\)-indecomposable over \(\overline{k}\)).
For \(k=\overline{k}\), this notion is due to McNinch, see [29] and also [13, §5.3].
**Example 3.5**.: Let \(k\), \(G\), \(\mathfrak{h}\) and \(\mathfrak{m}\) be as in Example 2.7. Then \(\mathfrak{h}\) is \(G\)-ir but the ideal \(\mathfrak{m}\) of \(\mathfrak{h}\) is not \(G\)-cr.
**Example 3.6**.: If \(p\) is fabulous for \(G\) and \(0\neq\mathfrak{h}\) consists of nilpotent elements then \(\mathfrak{h}\) is not \(G\)-cr: for \(\mathfrak{h}\subseteq\operatorname{Lie}(B)\) for some Borel subgroup \(B\) of \(G\), but clearly \(\mathfrak{h}\not\subseteq\operatorname{Lie}(T)\) for any maximal torus \(T\) of \(B\).
_Remark 3.7_.: The notion of \(G\)-complete reducibility for subgroups and subalgebras generalizes the concept of semisimplicity from representation theory in the following sense. If \(H\) is a subgroup of \(\operatorname{GL}(V)\) or \(\operatorname{SL}(V)\), or \(\mathfrak{h}\) is a subalgebra of \(\mathfrak{gl}(V)\) or \(\mathfrak{sl}(V)\), then \(H\) (resp. \(\mathfrak{h}\)) is \(\operatorname{GL}(V)\)-cr or \(\operatorname{SL}(V)\)-cr if and only if \(V\) is a semisimple \(H\)-module (resp. \(\mathfrak{h}\)-module). See [35], [36, (3.2.2)] for the case of subgroups; the case of subalgebras is almost identical.
_Remark 3.8_.: Let \(H\) be a subgroup of \(G\) and \(\mathfrak{h}\) a subalgebra of \(\mathfrak{g}\). If \(k^{\prime}/k\) is an algebraic field extension then we may also regard \(G\) as a \(k^{\prime}\)-group, and it therefore makes sense to ask whether \(H\) and \(\mathfrak{h}\) are \(G\)-cr over \(k^{\prime}\) as well as whether they are \(G\)-cr over \(k\).
_Remark 3.9_.: Note that Definition 3.4 makes sense even if \(\mathfrak{h}\) is not \(k\)-defined. We also note that since \(\mathfrak{p}_{g\cdot\lambda}=g\cdot\mathfrak{p}_{\lambda}\) and \(\mathfrak{l}_{g\cdot\lambda}=g\cdot\mathfrak{l}_{\lambda}\) for any \(\lambda\in Y(G)\) and any \(g\in G\) (see, e.g., [7, §6]), it follows that \(\mathfrak{h}\) is \(G\)-cr over \(k\) if and only if every \(\operatorname{Ad}(G(k))\)-conjugate of \(\mathfrak{h}\) is. More generally, one can show that if \(\mathfrak{h}\) is \(G\)-cr over \(k\) (resp., \(G\)-ir over \(k\), resp., \(G\)-ind over \(k\)) then so is \(d\phi(\mathfrak{h})\), for any \(k\)-defined automorphism \(\phi\) of \(G\). Similar observations hold for subgroups.
We now recall the link to GIT. For this, we need the following definition.
**Definition 3.10**.: Let \(\mathfrak{h}\) be a Lie algebra. We call \(\mathbf{x}=(x_{1},\ldots,x_{m})\in\mathfrak{h}^{m}\), for some \(m\in\mathbb{N}\), a _generating tuple for \(\mathfrak{h}\)_ if \(x_{1},\ldots,x_{m}\) is a generating set for \(\mathfrak{h}\) as a Lie algebra.
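For instance, in \(\mathfrak{sl}_{2}\) the pair \(\mathbf{x}=(x_{1},x_{2})\) with \(x_{1}=\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right)\) and \(x_{2}=\left(\begin{array}{cc}0&0\\ 1&0\end{array}\right)\) is a generating tuple for \(\mathfrak{sl}_{2}\): the bracket
\[[x_{1},x_{2}]=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right)\]
supplies the remaining basis vector, even though \(x_{1},x_{2}\) do not span \(\mathfrak{sl}_{2}\) as a vector space.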
The next theorem shows the relevance of the previous definition to the study of complete reducibility. It is [5, Thm. 5.4]; see also [29, Thm. 1(i)] for the case \(k=\overline{k}\).
**Theorem 3.11**.: _Let \(\mathfrak{h}\) be a subalgebra of \(\mathfrak{g}=\operatorname{Lie}(G)\). Let \(\mathbf{x}\in\mathfrak{g}^{m}\) be a generating tuple for \(\mathfrak{h}\), and let \(G\) act on \(\mathfrak{g}^{m}\) by simultaneous conjugation. Then \(\mathfrak{h}\) is \(G\)-completely reducible over \(k\) if and only if the \(G(k)\)-orbit of \(\mathbf{x}\) is cocharacter-closed in \(\mathfrak{g}^{m}\) over \(k\)._
The next result is [5, Thm. 5.5]. Note that if \(k\) is perfect, then part (i) implies the equivalence of \(G\)-complete reducibility over \(k\) and \(G\)-complete reducibility (over \(\overline{k}\)), because the extension \(\overline{k}/k\) is separable for perfect \(k\).
**Proposition 3.12**.: _Let \(\mathfrak{h}\) be a subalgebra of \(\mathfrak{g}\)._
(i) _Suppose_ \(\mathfrak{h}\) _is_ \(k\)_-defined, and let_ \(k^{\prime}/k\) _be a separable algebraic field extension. Then_ \(\mathfrak{h}\) _is_ \(G\)_-completely reducible over_ \(k^{\prime}\) _if and only if_ \(\mathfrak{h}\) _is_ \(G\)_-completely reducible over_ \(k\)_._

(ii) _Let_ \(S\) _be a_ \(k\)_-defined torus of_ \(C_{G}(\mathfrak{h})\) _and set_ \(L=C_{G}(S)\)_. Then_ \(\mathfrak{h}\) _is_ \(G\)_-completely reducible over_ \(k\) _if and only if_ \(\mathfrak{h}\) _is_ \(L\)_-completely reducible over_ \(k\)_._
The final ingredient we need for our first main result is the connection between complete reducibility of subgroups of \(G\) and the notion of complete reducibility in the spherical building of \(G\). Recall that the spherical building \(\Delta_{k}=\Delta_{k}(G)\) of \(G\) over \(k\) can be identified as a simplicial complex with the poset of \(k\)-parabolic subgroups of \(G\), with inclusion reversed [38]; the parabolic subgroup \(G\) corresponds to the empty simplex. For each \(k\)-parabolic subgroup \(P\) of \(G\), we let \(\sigma_{P}\) denote the corresponding simplex in \(\Delta_{k}\). We note that \(k\)-points of \(G\) induce simplicial automorphisms of \(\Delta_{k}\): for \(g\in G(k)\) and a \(k\)-parabolic subgroup \(P\) of \(G\), we can define \(g\cdot\sigma_{P}=\sigma_{gPg^{-1}}\). Recall that the simplicial complex \(\Delta_{k}\) has a _geometric realisation_, which we denote by \(\overline{\Delta_{k}}\)[1, A.1.1]. We have the following definitions.
**Definition 3.13**.: ([35, 2.1.4, 2.1.5, 2.2.1])
1. Two simplices \(\sigma,\tau\) in \(\Delta_{k}\) are called _opposite_ if, writing \(\sigma=\sigma_{P}\) and \(\tau=\sigma_{Q}\) for \(k\)-parabolic subgroups \(P\) and \(Q\) of \(G\), the subgroups \(P\) and \(Q\) are opposite in \(G\); that is, \(P\cap Q\) is a common Levi subgroup of \(P\) and \(Q\).
2. A subcomplex \(\Sigma\) of \(\Delta_{k}\) is called _convex_ if the corresponding subset \(\overline{\Sigma}\subseteq\overline{\Delta_{k}}\) of the geometric realisation is convex.
3. A convex subcomplex \(\Sigma\) of \(\Delta_{k}\) is said to be _\(\Delta_{k}\)-completely reducible_ (\(\Delta_{k}\)-cr) if for every \(\sigma\in\Sigma\), there exists \(\tau\in\Sigma\) with \(\tau\) opposite \(\sigma\).
The following argument forms part of the proof of [29, Lem. 4]; it provides the key link between \(G\)-complete reducibility and the building \(\Delta_{k}\). We give the idea of the proof for completeness.
**Proposition 3.14**.: _Suppose \(\mathfrak{h}\) is a subalgebra of \(\mathfrak{g}\). Then \(\Delta_{k}^{\mathfrak{h}}:=\{\sigma_{P}\mid\mathfrak{h}\subseteq\operatorname {Lie}(P)\}\) is a convex subcomplex of \(\Delta_{k}\), and it is \(\Delta_{k}\)-completely reducible if and only if \(\mathfrak{h}\) is \(G\)-completely reducible over \(k\)._
Sketch Proof.: For the first assertion, the key observation is that the intersection of two \(k\)-parabolic subgroups \(P\) and \(Q\) of \(G\) is smooth, and hence \(\operatorname{Lie}(P\cap Q)=\operatorname{Lie}(P)\cap\operatorname{Lie}(Q)\), e.g., [24, §10.3]. Now the result follows from Serre's criterion [36, Prop. 3.1] for recognising a convex subcomplex of \(\Delta_{k}\): a collection \(\Sigma\) of simplices is a convex subcomplex if whenever \(P,Q,R\) are \(k\)-parabolic subgroups of \(G\) with \(\sigma_{P},\sigma_{Q}\in\Sigma\) and \(P\cap Q\subseteq R\), then \(\sigma_{R}\in\Sigma\).
To see that \(\mathfrak{h}\) is \(G\)-cr over \(k\) if and only if \(\Delta_{k}^{\mathfrak{h}}\) is \(\Delta_{k}\)-cr, the key again is the smoothness of the intersection of two \(k\)-parabolic subgroups. This implies that for two \(k\)-parabolic subgroups
\(P\) and \(Q\) of \(G\), we can detect whether or not they are opposite (and hence correspond to opposite simplices of \(\Delta_{k}\)) on the level of their Lie algebras.
_Remark 3.15_.: An analogous result holds for subgroups, with a very similar proof [36].
Since each subalgebra \(\mathfrak{h}\) of \(\mathfrak{g}\) gives rise to a subcomplex \(\Delta_{k}^{\mathfrak{h}}\) of \(\Delta_{k}\), we can use the so-called Centre Conjecture of Tits, which in fact is a theorem for subcomplexes, see [31] (\(G\) of classical type or type \(G_{2}\)), [26] (\(G\) of type \(F_{4}\) or \(E_{6}\)) and [33] (\(G\) of type \(E_{7}\) or \(E_{8}\)). We give a version which is suitable for our purposes.
**Theorem 3.16** (Tits' Centre Theorem).: _Let \(\Delta_{k}\) be a thick spherical building, and \(\Sigma\) a convex subcomplex of \(\Delta_{k}\). Then (at least) one of the following holds:_
(i) \(\Sigma\) _is_ \(\Delta_{k}\)_-completely reducible;_

(ii) _there exists a nonempty simplex_ \(\sigma\in\Sigma\) _which is fixed by all simplicial automorphisms of_ \(\Delta_{k}\) _which stabilize_ \(\Sigma\)_._
The typical use of this theorem (as we see in the next proof below) is in guaranteeing the existence of a simplex as in part (ii) when \(\Sigma\) is _not_\(\Delta_{k}\)-cr; such a simplex is often referred to as a _centre_ of \(\Sigma\). We can now prove the main result of this section. Over \(\overline{k}\), this is [29, Thm. 1(ii)]; see also [13, Thm. 5.27, Ex. 5.29]. However, the proofs given there, which use the technology of optimal destabilising subgroups, do not go through over arbitrary \(k\). Instead, we use the Centre Theorem.
**Theorem 3.17**.: _Suppose \(H\) is a \(k\)-defined subgroup of \(G\). If \(H^{0}\) is \(G\)-completely reducible over \(k\), then so is \(\operatorname{Lie}(H)\)._
Proof.: Since \(\operatorname{Lie}(H)=\operatorname{Lie}(H^{0})\), and the hypothesis is that \(H^{0}\) is \(G\)-cr, it is no loss to assume that \(H\) is connected. Let \(\mathfrak{h}=\operatorname{Lie}(H)\). Using Proposition 3.12(i) (and its analogue for subgroups [8, Thm. 1.1]), it is enough to prove the result over the separable closure \(k_{s}\). These reductions imply that we may assume that \(H(k)\) is dense in \(H\), because this is true for a smooth connected \(k\)-group over a separably closed field \(k\)[30, Prop. 1.26(b), A.48]. Now suppose \(L\) is a minimal Levi \(k\)-subgroup of \(G\) containing \(H\). Since \(L=L_{\lambda}=C_{G}(\operatorname{Im}(\lambda))\) for some \(\lambda\in Y_{k}(C_{G}(H))\), Proposition 3.12(ii) (and its analogue for subgroups [5, Thm. 1.4]) implies that \(H\) and \(\mathfrak{h}\) are \(G\)-cr if and only if they are \(L\)-cr, so we may replace \(G\) with \(L\). Then \(H\) is \(G\)-cr over \(k\) but is not contained in any proper Levi \(k\)-subgroup of \(G\), so \(H\) is \(G\)-ir over \(k\).
We now proceed with the proof of the theorem by contradiction. Suppose that \(\mathfrak{h}\) is not \(G\)-cr over \(k\). The subgroup \(H(k)\) acts on the building \(\Delta_{k}\) by simplicial automorphisms, and the subcomplex \(\Delta_{k}^{\mathfrak{h}}\) is stabilized by the action of \(H(k)\) since \(H\) stabilizes its own Lie algebra. We are assuming that \(\mathfrak{h}\) is not \(G\)-cr over \(k\), so this subcomplex is not \(\Delta_{k}\)-cr, by Proposition 3.14, and hence there is a nonempty \(H(k)\)-fixed simplex \(\sigma\) in \(\Delta_{k}^{\mathfrak{h}}\) by Theorem 3.16. This simplex has the form \(\sigma_{P}\) for some proper \(k\)-parabolic subgroup \(P\) of \(G\), and the fact that \(\sigma_{P}\) is \(H(k)\)-fixed translates into the fact that \(P\) is normalized by \(H(k)\). Since parabolic subgroups are self-normalizing, this shows that \(H(k)\subseteq P\), which in turn gives \(H\subseteq P\) because \(H(k)\) is dense in \(H\). This contradicts the conclusion of the first paragraph, that \(H\) is \(G\)-ir over \(k\), so we are done.
_Remarks 3.18_.: (i). We observe that the converse of Theorem 3.17 is false, already in the algebraically closed case. See the counterexample due to the third author in [29, §1]; see also Example 5.13 below.
(ii). If \(k\) is perfect, then Theorem 3.17 also holds with the hypothesis that \(H\) (instead of \(H^{0}\)) is \(G\)-cr. For, as has already been observed, if \(k\) is perfect then \(H\) is \(G\)-cr over \(k\) if and only if \(H\) is \(G\)-cr, and the same for \(\operatorname{Lie}(H)\). Now over \(\overline{k}\), if \(H\) is \(G\)-cr then \(H^{0}\) is \(G\)-cr too [7, Thm. 3.10], and the hypotheses of the theorem hold.
## 4. Maps induced by inclusion
In this section we assume \(k\) is algebraically closed. We need some material on quotients in GIT. Recall that if \(G\) acts on an affine variety \(X\) then we can form the quotient variety \(X/\!\!/G\). The coordinate algebra \(k[X/\!\!/G]\) is by definition the subring of invariants \(k[X]^{G}\) of the coordinate ring \(k[X]\), and the inclusion of \(k[X]^{G}\) in \(k[X]\) induces a morphism \(\pi_{X}\) (or \(\pi_{X,G}\)) from \(X\) to \(X/\!\!/G\). If \(x\in X\), there is a unique closed \(G\)-orbit \(C(x)\) contained in the closure of the orbit \(G\cdot x\). By the Hilbert-Mumford Theorem, there exists \(\lambda\in Y(G)\) such that \(\lim_{a\to 0}\lambda(a)\cdot x\) belongs to \(C(x)\). Given \(x_{1},x_{2}\in X\), we have \(\pi_{X}(x_{1})=\pi_{X}(x_{2})\) if and only if \(C(x_{1})=C(x_{2})\) if and only if \(f(x_{1})=f(x_{2})\) for all \(f\in k[X]^{G}\). In particular, if \(G\cdot x_{1}\) and \(G\cdot x_{2}\) are closed then \(C(x_{1})=G\cdot x_{1}\) and \(C(x_{2})=G\cdot x_{2}\), so \(\pi_{X}(x_{1})=\pi_{X}(x_{2})\) if and only if \(x_{1}\) and \(x_{2}\) lie in the same \(G\)-orbit. Hence points of \(X/\!\!/G\) correspond to closed \(G\)-orbits in \(X\). For background on quotient varieties and GIT, see [32, Ch. 3] or [3].

Here is a particular instance of this set-up. The group \(G\) acts on \(G^{m}\) by simultaneous conjugation, so we can form the quotient \(G^{m}/\!\!/G\). Likewise we can form the quotient \(\mathfrak{g}^{m}/\!\!/G\). There is a direct connection with \(G\)-complete reducibility arising from Theorem 3.11 (note that, since \(k=\overline{k}\), cocharacter-closedness over \(k\) is the same as closedness) as follows: points in \(\mathfrak{g}^{m}/\!\!/G\) correspond to closed \(G\)-orbits \(G\cdot(x_{1},\ldots,x_{m})\), and \((x_{1},\ldots,x_{m})\in\mathfrak{g}^{m}\) yields a closed orbit if and only if the subalgebra \(\mathfrak{h}\) generated by the \(x_{i}\) is \(G\)-cr. Analogous statements hold for \(G^{m}/\!\!/G\).
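For instance, if \(G=\operatorname{GL}_{n}\) acts on \(X=\mathfrak{gl}_{n}\) by conjugation (the case \(m=1\)) and \(\operatorname{char}(k)=0\), then the invariant ring is generated by the coefficients of the characteristic polynomial:
\[k[\mathfrak{gl}_{n}]^{G}=k[c_{1},\ldots,c_{n}],\qquad\det(tI-x)=t^{n}-c_{1}(x)t^{n-1}+\cdots+(-1)^{n}c_{n}(x),\]
so \(\pi_{X}\) records precisely the characteristic polynomial of \(x\). In this situation the closed orbits are the orbits of semisimple (diagonalisable) matrices, the unique closed orbit \(C(x)\) in \(\overline{G\cdot x}\) is the orbit of the semisimple part \(x_{s}\), and \(\pi_{X}(x_{1})=\pi_{X}(x_{2})\) if and only if \(x_{1}\) and \(x_{2}\) have the same characteristic polynomial.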
Now let \(H\) be a reductive subgroup of \(G\). The inclusion \(\iota\) of \(H^{m}\) in \(G^{m}\) induces a map \(\Psi:H^{m}/\!\!/H\to G^{m}/\!\!/G\). The third author proved that \(\Psi\) is a finite morphism of varieties [28, Thm. 1.1]; this has various consequences for the theory of \(G\)-completely reducible subgroups (see, e.g., [7, Cor. 3.8]). It follows that the image of \(\Psi\) is closed. In particular, \(\pi_{G^{m}}(\iota(H^{m}))\) is a closed subset of \(G^{m}/\!\!/G\). Now consider the analogous situation for Lie algebras. The derivative \(d\iota\) of \(\iota\) maps \(\mathfrak{h}^{m}\) to \(\mathfrak{g}^{m}\). Now \(d\iota\) does give rise to a map \(\psi\) from \(\mathfrak{h}^{m}/\!\!/H\) to \(\mathfrak{g}^{m}/\!\!/G\) mapping \(\pi_{\mathfrak{h}^{m},H}(x_{1},\ldots,x_{m})\) to \(\pi_{\mathfrak{g}^{m},G}(d\iota(x_{1},\ldots,x_{m}))\), but we see in Example 4.2 below that this map need not be finite.
First we need a preliminary result.
**Proposition 4.1**.: _Let \((x_{1},\ldots,x_{m})\in\mathfrak{g}^{m}\) and let \(\mathfrak{m}\) be the subalgebra spanned by the \(x_{i}\). Then the following are equivalent:_
(a) \(\pi_{\mathfrak{g}^{m},G}(x_{1},\ldots,x_{m})=\pi_{\mathfrak{g}^{m},G}(0,\ldots,0)\)_._

(b) _For every nonconstant homogeneous_ \(f\in k[\mathfrak{g}^{m}]^{G}\)_, we have_ \(f(x_{1},\ldots,x_{m})=0\)_._

(c) _There exists_ \(\lambda\in Y(G)\) _such that_ \(\lim_{a\to 0}\lambda(a)\cdot(x_{1},\ldots,x_{m})=(0,\ldots,0)\)_._

(d) _There exists a maximal unipotent subgroup_ \(U\) _of_ \(G\) _such that_ \(\mathfrak{m}\subseteq\operatorname{Lie}(U)\)_._
Proof.: The equivalence of (a), (b) and (c) follows from the results given at the start of the section. The equivalence of (c) with (d) follows from Lemma 2.4(iii), since a maximal unipotent subgroup of \(G\) is the unipotent radical of a Borel subgroup.
**Example 4.2**.: Let \(p=2\) and let \(G=\operatorname{SL}_{3}(k)\). Let \(H=\operatorname{PGL}_{2}(k)\). The adjoint representation of \(\operatorname{SL}_{2}(k)\) on its Lie algebra gives rise to an embedding \(i\) of \(H\) in \(G\); with a suitable choice of basis, this takes the form \(i\left(\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\right)=\left(\begin{array}{ccc}a^{2}&c^{2}&0\\ b^{2}&d^{2}&0\\ ac&bd&1\end{array}\right)\).
For \(a\neq 0\), define \(x_{1}=\overline{\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right)}\) and \(x_{2}(a)=\overline{\left(\begin{array}{cc}0&0\\ a&0\end{array}\right)}\). Then the pairs \((x_{1},x_{2}(a))\) span the subalgebra \(\mathfrak{h}\) from Example 2.7, which is \(H\)-ir by Example 3.5. It follows from Theorem 3.11 that \(H\cdot(x_{1},x_{2}(a))\) is closed for each \(a\). It is easily seen that the \((x_{1},x_{2}(a))\) are pairwise non-conjugate under the \(H\)-action. Hence the points \(\pi_{\mathfrak{h}^{2},H}(x_{1},x_{2}(a))\) are pairwise distinct in \(\mathfrak{h}^{2}/\!\!/H\). On the other hand, \(di(\mathfrak{h})\subseteq\operatorname{Lie}(B)\), where \(B\) is the Borel subgroup of upper triangular matrices in \(G\), so \(\pi_{\mathfrak{g}^{2},G}(x_{1},x_{2}(a))=\pi_{\mathfrak{g}^{2},G}(0,0)\) for all \(a\), by Proposition 4.1. This shows that the fibre \(\psi^{-1}(\pi_{\mathfrak{g}^{2},G}(0,0))\) is infinite, so \(\psi\) cannot be finite.
If we put a restriction on \(p\) then the situation improves. Recall that if \(V\) is a rational \(G\)-module then \(v\in V\) is said to be _unstable_ if \(0\) belongs to the closure of \(G\cdot v\), and _semistable_ otherwise.
**Proposition 4.3**.: _Suppose \(p\) is fabulous for \(G\). Then the image of \(\psi\) is closed._
Proof.: It is enough by [4, Prop. 5.2] to show that if \((x_{1},\ldots,x_{m})\in\mathfrak{h}^{m}\) is semistable for the \(H\)-action then it is semistable for the \(G\)-action. We prove the contrapositive. Suppose \((x_{1},\ldots,x_{m})\) is unstable for the \(G\)-action. Then the subalgebra \(\mathfrak{m}\) generated by the \(x_{i}\) consists of nilpotent elements, by Proposition 4.1. The hypothesis on \(p\) implies that \(\mathfrak{m}\subseteq\operatorname{Lie}(U)\) for some maximal unipotent subgroup \(U\) of \(H\). It follows from Proposition 4.1 that \((x_{1},\ldots,x_{m})\) is unstable for the \(H\)-action, so we are done.
_Remark 4.4_.: Suppose \(p\) is fabulous for \(G\). Choose any \(f_{1},\ldots,f_{t}\in k[\mathfrak{g}^{m}/\!\!/G]\) such that \(\psi(\mathfrak{h}^{m}/\!\!/H)=\{z\in\mathfrak{g}^{m}/\!\!/G\mid f_{1}(z)=\cdots=f_{t}(z)=0\}\); such \(f_{i}\) exist by Proposition 4.3. Note that \(\pi_{\mathfrak{g}^{m},G}(\mathfrak{h}^{m})=\psi(\mathfrak{h}^{m}/\!\!/H)\) by construction. Regarding the \(f_{i}\) as elements of \(k[\mathfrak{g}^{m}]^{G}\), we see that for any \((y_{1},\ldots,y_{m})\in\mathfrak{g}^{m}\), \(\pi_{\mathfrak{g}^{m},G}(y_{1},\ldots,y_{m})\) belongs to \(\pi_{\mathfrak{g}^{m},G}(\mathfrak{h}^{m})\) if and only if \(\pi_{\mathfrak{g}^{m},G}(y_{1},\ldots,y_{m})\) belongs to \(\psi(\mathfrak{h}^{m}/\!\!/H)\) if and only if \(f_{i}(y_{1},\ldots,y_{m})=0\) for \(1\leq i\leq t\).
## 5. \(k\)-semisimplification
First, we recall the notion of \(k\)-semisimplification for subgroups of \(G\) and the main theorem from [10].
**Definition 5.1**.: Let \(H\) be a subgroup of \(G\). We say that a subgroup \(H^{\prime}\) of \(G\) is a \(k\)_-semisimplification of \(H\) (for \(G\))_ if there exists a parabolic \(k\)-subgroup \(P\) of \(G\) and a Levi \(k\)-subgroup \(L\) of \(P\) such that \(H\subseteq P\), \(H^{\prime}=c_{L}(H)\) and \(H^{\prime}\) is \(G\)-completely reducible (or equivalently \(L\)-completely reducible, by [10, Prop. 3.6]) over \(k\). We say _the pair \((P,L)\) yields \(H^{\prime}\)_.
_Remarks 5.2_.: (i). Let \(H\) be a subgroup of \(G\). If \(H\) is \(G\)-cr over \(k\) then clearly \(H\) is a \(k\)-semisimplification of itself, yielded by the pair \((G,G)\).
(ii). Given any subgroup \(H\) of \(G\), [10, Rem. 4.3] guarantees the existence of a \(k\)-semisimplification of \(H\).
Here is the main result from [10], namely [10, Thm. 4.5], which was proved in the special case \(k=\overline{k}\) in [13, Prop. 5.14(i)], cf. [36, Prop. 3.3(b)]. The uniqueness asserted in Theorem 5.3 is akin to the Jordan-Hölder theorem.
**Theorem 5.3**.: _Let \(H\) be a subgroup of \(G\). Then any two \(k\)-semisimplifications of \(H\) are \(G(k)\)-conjugate._
We now come to the analogue of Definition 5.1 for subalgebras of \(\mathfrak{g}\).
**Definition 5.4**.: Let \(\mathfrak{h}\) be a Lie subalgebra of \(\mathfrak{g}\). We say that a Lie subalgebra \(\mathfrak{h}^{\prime}\) of \(\mathfrak{g}\) is a \(k\)_-semisimplification of \(\mathfrak{h}\) (for \(G\))_ if there exists a parabolic \(k\)-subgroup \(P\) of \(G\) and a Levi \(k\)-subgroup \(L\) of \(P\) such that \(\mathfrak{h}\subseteq\operatorname{Lie}(P)\), \(\mathfrak{h}^{\prime}=c_{\operatorname{Lie}(L)}(\mathfrak{h})\) and \(\mathfrak{h}^{\prime}\) is \(G\)-completely reducible (or equivalently, by Proposition 3.12(ii), \(L\)-completely reducible) over \(k\). We say _the pair \((P,L)\) yields \(\mathfrak{h}^{\prime}\)_.
_Remarks 5.5_.: (i). Let \(\mathfrak{h}\) be a subalgebra of \(\mathfrak{g}\). If \(\mathfrak{h}\) is already \(G\)-cr over \(k\) then clearly \(\mathfrak{h}\) is a \(k\)-semisimplification of itself, yielded by the pair \((G,G)\).
(ii). Suppose \((P,L)\) yields a \(k\)-semisimplification \(\mathfrak{h}^{\prime}\) of \(\mathfrak{h}\). Let \(L_{1}\) be another Levi \(k\)-subgroup of \(P\). Then \(L_{1}=uLu^{-1}\) for some \(u\in R_{u}(P)(k)\) by Lemma 2.3(iii), so consequently \(c_{\operatorname{Lie}(L_{1})}(\mathfrak{h})=u\cdot c_{\operatorname{Lie}(L)}( \mathfrak{h})\). Hence \((P,L_{1})\) also yields a \(k\)-semisimplification of \(\mathfrak{h}\). Because of this, when the choice of \(L\) doesn't matter we simply say that \(P\)_yields a \(k\)-semisimplification of \(\mathfrak{h}\)_.
(iii). It is straightforward to check that if \(\phi\) is an automorphism of \(G\) (as a \(k\)-group), \(\mathfrak{h}\) is a subalgebra of \(\mathfrak{g}\) and \((P,L)\) yields a \(k\)-semisimplification \(\mathfrak{h}^{\prime}\) of \(\mathfrak{h}\) then \(d\phi(\mathfrak{h}^{\prime})\) is a \(k\)-semisimplification of \(d\phi(\mathfrak{h})\), yielded by \((\phi(P),\phi(L))\).
The following is immediate from Lemma 2.1.
**Lemma 5.6**.: _Suppose that \(\mathfrak{h}^{\prime}\) is a \(k\)-semisimplification of \(\mathfrak{h}\). Then there is \(\lambda\in Y_{k}(G)\) such that \(\mathfrak{h}^{\prime}\) is yielded by the pair \((P_{\lambda},L_{\lambda})\)._
As in the group case (Remark 5.2(ii)) we always have the existence of a \(k\)-semisimplification of an arbitrary subalgebra of \(\mathfrak{g}\), due to the rational Hilbert-Mumford Theorem 3.2, as the following remark shows.
_Remark 5.7_.: Suppose \(\mathfrak{h}\) is a subalgebra of \(\mathfrak{g}\). Let \(\mathbf{h}=(h_{1},\dots,h_{m})\in\mathfrak{h}^{m}\) be a generating tuple for \(\mathfrak{h}\). Then \(c_{\lambda}(\mathbf{h})=(c_{\lambda}(h_{1}),\dots,c_{\lambda}(h_{m}))\) is a generating tuple for \(c_{\lambda}(\mathfrak{h})\), for any \(\lambda\in Y_{k}(G)\), and hence \(c_{\lambda}(\mathfrak{h})\) is a \(k\)-semisimplification of \(\mathfrak{h}\) if and only if \(G(k)\cdot c_{\lambda}(\mathbf{h})\) is cocharacter-closed over \(k\), by Theorem 3.11. It follows from Theorem 3.2 that \(\mathfrak{h}\) admits at least one \(k\)-semisimplification: for we can choose \(\lambda\in Y_{k}(G)\) such that \(G(k)\cdot c_{\lambda}(\mathbf{h})\) is cocharacter-closed over \(k\), so \(c_{\lambda}(\mathfrak{h})\) is a \(k\)-semisimplification of \(\mathfrak{h}\), yielded by \((P_{\lambda},L_{\lambda})\).
Here is the analogue in the Lie algebra setting of the main result [10, Thm. 4.5]; it can be viewed as a kind of Jordan-Hölder theorem. Since the adjoint action is \(k\)-linear, the proof is easier than the one in [10], where a descending chain argument is needed.
**Theorem 5.8**.: _Let \(\mathfrak{h}\) be a subalgebra of \(\mathfrak{g}\). Then any two \(k\)-semisimplifications of \(\mathfrak{h}\) are \(\operatorname{Ad}(G(k))\)-conjugate._
Proof.: Let \(\mathfrak{h}_{1},\mathfrak{h}_{2}\) be \(k\)-semisimplifications of \(\mathfrak{h}\). By Lemma 5.6, there exist \(\lambda_{1},\lambda_{2}\in Y_{k}(G)\) such that \((P_{\lambda_{1}},L_{\lambda_{1}})\) yields \(\mathfrak{h}_{1}\) and \((P_{\lambda_{2}},L_{\lambda_{2}})\) yields \(\mathfrak{h}_{2}\). Let \(\mathbf{x}\in\mathfrak{h}^{m}\) be a generating tuple
for \(\mathfrak{h}\). Then \(c_{\lambda_{i}}(\mathbf{x})\) is a generating tuple for \(\mathfrak{h}_{i}\) for \(i=1,2\), and each orbit \(G(k)\cdot c_{\lambda_{i}}(\mathbf{x})\) is cocharacter-closed over \(k\). It follows from the uniqueness result in the rational Hilbert-Mumford Theorem 3.2 that the two orbits \(G(k)\cdot c_{\lambda_{1}}(\mathbf{x})\) and \(G(k)\cdot c_{\lambda_{2}}(\mathbf{x})\) have to be equal. Thus there exists \(g\in G(k)\) such that \(g\cdot c_{\lambda_{1}}(\mathbf{x})=c_{\lambda_{2}}(\mathbf{x})\). This means that the spanning set \(c_{\lambda_{1}}(\mathbf{x})\) of \(\mathfrak{h}_{1}\) is \(\operatorname{Ad}(G(k))\)-conjugate to that of \(\mathfrak{h}_{2}\). Since the adjoint action is \(k\)-linear, \(\mathfrak{h}_{1}\) and \(\mathfrak{h}_{2}\) are also \(\operatorname{Ad}(G(k))\)-conjugate.
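For instance, if \(G=\operatorname{SL}_{2}\), \(\mathfrak{h}=k\cdot x\) with \(x=\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right)\), and \(\lambda(a)=\left(\begin{array}{cc}a&0\\ 0&a^{-1}\end{array}\right)\), as in the example following Theorem 3.2, then \(P_{\lambda}\) is the Borel subgroup of upper triangular matrices, \(L_{\lambda}\) is the diagonal maximal torus, \(\mathfrak{h}\subseteq\operatorname{Lie}(P_{\lambda})\), and
\[c_{\lambda}(x)=\lim_{a\to 0}\operatorname{Ad}(\lambda(a))(x)=\lim_{a\to 0}a^{2}x=0,\]
so \(c_{\lambda}(\mathfrak{h})=0\). Hence the zero subalgebra is a \(k\)-semisimplification of \(\mathfrak{h}\), yielded by \((P_{\lambda},L_{\lambda})\); by Theorem 5.8 it is the only one up to \(\operatorname{Ad}(G(k))\)-conjugacy, consistent with the fact that \(\mathfrak{h}\) itself is not \(G\)-cr over \(k\) (cf. Example 3.6).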
Next we study the connection between the notions of \(k\)-semisimplifications for subgroups and subalgebras. It turns out that they are compatible in a natural fashion.
**Theorem 5.9**.: _Let \(H\) be a subgroup of \(G\) and let \(H^{\prime}\) be a \(k\)-semisimplification of \(H\). Then \(\operatorname{Lie}(H^{\prime})\) is a \(k\)-semisimplification of \(\operatorname{Lie}(H)\)._
Proof.: By Lemma 2.1, there is a \(\lambda\in Y_{k}(G)\) such that \((P_{\lambda},L_{\lambda})\) yields \(H^{\prime}=c_{\lambda}(H)\). It follows from Theorem 3.17 that \(\operatorname{Lie}(H^{\prime})\) is \(G\)-cr over \(k\). Since the differential of conjugation is the adjoint action, we have \(dc_{L_{\lambda}}=c_{\mathfrak{l}_{\lambda}}\). Using Lemma 2.4, it follows that \(\operatorname{Lie}(H^{\prime})=c_{\mathfrak{l}_{\lambda}}(\operatorname{Lie} (H))\), and so the pair \((P_{\lambda},L_{\lambda})\) also yields \(\operatorname{Lie}(H^{\prime})\).
Next we revisit the example of Remark 3.18(i) in the context of Theorems 5.8 and 5.9, cf. [29].
**Example 5.10**.: Suppose \(\operatorname{char}(k)=p>0\). Let \(H\) be a non-trivial connected semisimple group and let \(\varrho_{i}:H\to\operatorname{SL}(V_{i})\) be representations of \(H\) for \(i=1,2\) with \(\varrho_{1}\) semisimple and \(\varrho_{2}\) not semisimple. Let \(\varrho:H\to G:=\operatorname{SL}(V_{1}\oplus V_{2})\) be the representation given by \(h\mapsto\varrho_{1}(h)\oplus\varrho_{2}(F(h))\), where \(F:H\to H\) is the Frobenius endomorphism of \(H\). Let \(J\) be the image of \(H\) under \(\varrho\) in \(G\). Since \(V_{1}\oplus V_{2}\) is not semisimple as a \(J\)-module, \(J\) is not \(G\)-cr (cf. Remark 3.7). However, \(\operatorname{Lie}(J)=\operatorname{Im}d\varrho_{1}\oplus 0\subseteq\mathfrak{g}\)_is_\(G\)-cr, showing that the converse of Theorem 3.17 fails.
Now let \(J^{\prime}\) be a \(k\)-semisimplification of \(J\). It follows from Theorem 5.9 that \(\operatorname{Lie}(J^{\prime})\) is a \(k\)-semisimplification of \(\operatorname{Lie}(J)\) and thus, by Remark 5.5(i) and Theorem 5.8, that \(\operatorname{Lie}(J^{\prime})\) and \(\operatorname{Lie}(J)\) are \(\operatorname{Ad}(G(k))\)-conjugate.
The following is the analogue of [10, Def. 4.6].
**Definition 5.11**.: Let \(\mathfrak{h}\) be a subalgebra of \(\mathfrak{g}\). We define \(\mathcal{D}_{k}(\mathfrak{h})\) to be the set of \(\operatorname{Ad}(G(k))\)-conjugates of any \(k\)-semisimplification of \(\mathfrak{h}\) in \(\mathfrak{g}\). This is well-defined by Theorem 5.8.
In the following two examples we show that not every element of \(\mathcal{D}_{k}(\mathfrak{h})\) need be a \(k\)-semisimplification of \(\mathfrak{h}\) and that there is no direct relation between the notions of \(k\)-semisimplification and \(\overline{k}\)-semisimplification of a subalgebra of \(\mathfrak{g}\).
**Example 5.12**.: Let \(\mathfrak{h}\) be a subalgebra of \(\mathfrak{g}\). As noted in Remark 5.5(i), if \(\mathfrak{h}\) is \(G\)-cr over \(k\) then \(\mathfrak{h}\) is a \(k\)-semisimplification of itself, yielded by the pair \((G,G)\). If \(\mathfrak{h}\) is \(G\)-ir, then \(\mathfrak{h}\) is not contained in \(\operatorname{Lie}(P)\) for any proper parabolic subgroup \(P\) of \(G\), so \(\mathfrak{h}\) is the **only** \(k\)-semisimplification of itself. On the other hand, \(\mathcal{D}_{k}(\mathfrak{h})\) consists of all \(\operatorname{Ad}(G(k))\)-conjugates of \(\mathfrak{h}\), and in general these need not all coincide with \(\mathfrak{h}\); so not every element of \(\mathcal{D}_{k}(\mathfrak{h})\) need be a \(k\)-semisimplification of \(\mathfrak{h}\).
**Example 5.13**.: There are many examples in the literature of the following: a reductive group \(G\) over an imperfect field \(k\), and a subgroup \(H\) of \(G\) such that \(H\) is \(G\)-cr over \(k\) but not \(G\)-cr, or \(H\) is \(G\)-cr but not \(G\)-cr over \(k\); see [2, Thm. 1.3], for example. Not all of these instances give rise to similar ones on the level of Lie algebras, even when the subgroup \(H\) is
connected, because of problems like that in Example 5.10. However, one of the most basic families of examples does work, as we now describe.
Let \(k\) be an imperfect field of characteristic \(p\), and let \(a\in k\setminus k^{p}\). Let \(t\) be a \(p^{\text{th}}\) root of \(a\) in \(\overline{k}\); then the extension \(k^{\prime}=k(t)\) is purely inseparable over \(k\) of degree \(p\). Given a \(k^{\prime}\)-group \(H^{\prime}\), we may form the _Weil restriction_ \(\operatorname{R}_{k^{\prime}/k}(H^{\prime})\), which is a \(k\)-group, see [18, A.5] for example. (It is easiest to describe Weil restriction functorially: if we view \(H^{\prime}\) as a functor from \(k^{\prime}\)-algebras to groups, then \(\operatorname{R}_{k^{\prime}/k}(H^{\prime})\) is the corresponding functor from \(k\)-algebras to groups with \(\operatorname{R}_{k^{\prime}/k}(H^{\prime})(A):=H^{\prime}(A\otimes_{k}k^{\prime})\) for each \(k\)-algebra \(A\).) Now let \(H^{\prime}=\mathbb{G}_{m}\) be the multiplicative group over \(k^{\prime}\). Then \(H:=\operatorname{R}_{k^{\prime}/k}(\mathbb{G}_{m})\) is a \(p\)-dimensional abelian \(k\)-group with a \((p-1)\)-dimensional unipotent radical, but no connected normal unipotent \(k\)-subgroup (it is a basic example of a so-called _pseudo-reductive group_, see [18, Ex. 1.1.3]). The natural action of \(H^{\prime}\) on \(k^{\prime}\) by multiplication Weil restricts to give a \(p\)-dimensional representation of \(H\) over \(k\) which is irreducible: in terms of coordinates, this arises by writing down a \(k\)-basis for \(k^{\prime}\) and interpreting the action through that basis, see [14, §5.2]. However, after base changing to \(\overline{k}\), this module becomes indecomposable and not irreducible [14, Rem. 4.7(i)]. This means that \(H\) is \(\operatorname{GL}_{p}\)-cr over \(k\) but not \(\operatorname{GL}_{p}\)-cr. This example is due to McNinch (see [7, Ex. 5.11]).
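To make this representation explicit in the smallest case, suppose \(p=2\) and use the \(k\)-basis \(\{1,t\}\) of \(k^{\prime}\). Multiplication by \(u+vt\in k^{\prime}\) (with \(u,v\in k\), not both zero) is then given by the matrix
\[\left(\begin{array}{cc}u&av\\ v&u\end{array}\right)\in\operatorname{GL}_{2}(k),\]
whose characteristic polynomial is \(X^{2}-u^{2}-av^{2}=(X-(u+vt))^{2}\) over \(\overline{k}\). For \(v\neq 0\) this polynomial is irreducible over \(k\) because \(a\notin k^{2}\), so no line in \(k^{2}\) is stable under \(H(k)\) and the module is irreducible over \(k\); over \(\overline{k}\), every such matrix has the single eigenvalue \(u+vt\) and the common eigenline spanned by \((t,1)\), which is the unique eigenline of any element with \(v\neq 0\), so there is no stable complement and the module becomes indecomposable but not irreducible.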
If we turn our attention to the Lie algebras, we may identify the Lie algebra \(\operatorname{Lie}(H^{\prime})\) with the additive group over \(k^{\prime}\), and we may view the multiplicative group \(H^{\prime}\) as an open subset. Doing this is compatible with the action of \(H^{\prime}\) and \(\operatorname{Lie}(H^{\prime})\) on \(k^{\prime}\) by multiplication. Therefore, after Weil restricting, we may identify the Lie algebra of \(H\) with a \(p\)-dimensional abelian Lie algebra over \(k\) containing \(H\) as an open subset, with the same compatibility between the corresponding representations (see [18, A.7.6] for more details on the Lie algebra of a Weil restriction). Therefore, in this case, we also have that \(\operatorname{Lie}(H)\) is \(\operatorname{GL}_{p}\)-cr over \(k\) but not \(\operatorname{GL}_{p}\)-cr. Note that \(C_{\operatorname{GL}_{p}}(\operatorname{Lie}(H))\) is smooth.
The following theorem shows that under a mild restriction on \(k\), the process of \(k\)-semisimplification behaves well under passing to ideals.
**Theorem 5.14**.: _Suppose \(p\) is fabulous for \(G\). Let \(\mathfrak{h}\) be a subalgebra of \(\mathfrak{g}\) and let \(\mathfrak{m}\) be an ideal of \(\mathfrak{h}\). Then:_
(a) _If_ \(\mathfrak{h}\) _is_ \(G\)_-completely reducible over_ \(k\) _then so is_ \(\mathfrak{m}\)_._

(b) _Every parabolic subgroup_ \(P\) _of_ \(G\) _which yields a_ \(k\)_-semisimplification of_ \(\mathfrak{h}\) _also yields one for_ \(\mathfrak{m}\)_. In particular, there exist_ \(k\)_-semisimplifications_ \(\mathfrak{h}^{\prime}\) _of_ \(\mathfrak{h}\) _and_ \(\mathfrak{m}^{\prime}\) _of_ \(\mathfrak{m}\) _such that_ \(\mathfrak{m}^{\prime}\) _is an ideal in_ \(\mathfrak{h}^{\prime}\)_._
Proof.: For (a), by the same reductions as at the start of the proof of Theorem 3.17, we may reduce to the case that \(k=k_{s}\) and \(\mathfrak{h}\) is \(G\)-ir over \(k\). We again proceed from this point using a contradiction argument invoking Tits' Centre Theorem 3.16. So suppose \(\mathfrak{m}\) is not \(G\)-cr over \(k\). Then the subcomplex \(\Delta_{k}^{\mathfrak{m}}\) of the building \(\Delta_{k}\) is not \(\Delta_{k}\)-cr, by Proposition 3.14, and hence there is a proper \(k\)-parabolic subgroup \(P\) of \(G\) such that \(\sigma_{P}\in\Delta_{k}^{\mathfrak{m}}\) is fixed by all simplicial automorphisms of \(\Delta_{k}\) stabilizing \(\Delta_{k}^{\mathfrak{m}}\), by Theorem 3.16. Since \(N_{G}(\mathfrak{m})(k)\) clearly stabilizes \(\Delta_{k}^{\mathfrak{m}}\), we can conclude that \(N_{G}(\mathfrak{m})(k)\subseteq P\). Now, because \(p\) is fabulous, \(N_{G}(\mathfrak{m})\) is smooth, and because \(k=k_{s}\) this means that the \(k\)-points of \(N_{G}(\mathfrak{m})\) are dense by [15, AG.13.3 Cor.]. Therefore, we may conclude that \(N_{G}(\mathfrak{m})\subseteq P\). Finally, this gives
\[\mathfrak{m}\subseteq\mathfrak{h}\subseteq\mathfrak{n}_{\mathfrak{g}}( \mathfrak{m})=\operatorname{Lie}(N_{G}(\mathfrak{m}))\subseteq\operatorname{Lie }(P),\]
where the equality in the middle follows from smoothness of \(N_{G}(\mathfrak{m})\) again. This gives the required contradiction, as we had reduced to the case that \(\mathfrak{h}\) is \(G\)-ir over \(k\).
For (b), pick any \(\lambda\in Y_{k}(G)\) such that \((P_{\lambda},L_{\lambda})\) yields a \(k\)-semisimplification \(\mathfrak{h}^{\prime}:=c_{\lambda}(\mathfrak{h})\) of \(\mathfrak{h}\). Then \(c_{\lambda}(\mathfrak{h})\) is \(G\)-cr over \(k\) and, as \(c_{\lambda}\) is a Lie algebra homomorphism, \(c_{\lambda}(\mathfrak{m})\) is an ideal in \(c_{\lambda}(\mathfrak{h})\). Now \(c_{\lambda}(\mathfrak{h})\) and \(c_{\lambda}(\mathfrak{m})\) satisfy the hypotheses of the theorem, so \(c_{\lambda}(\mathfrak{m})\) is \(G\)-cr over \(k\) by (a). Hence \((P_{\lambda},L_{\lambda})\) yields a semisimplification \(\mathfrak{m}^{\prime}:=c_{\lambda}(\mathfrak{m})\) of \(\mathfrak{m}\) as well, and \(\mathfrak{m}^{\prime}\) is an ideal in \(\mathfrak{h}^{\prime}\).
_Remark 5.15_.: Both parts of Theorem 5.14 are false without the assumption on \(p\): see Example 3.5.
## 6. \(G\)-toral and solvable subalgebras
We assume in this section that \(k\) is algebraically closed. We study \(G\)-complete reducibility properties of solvable and \(G\)-toral subalgebras \(\mathfrak{h}\) of \(\mathfrak{g}\) (see Definition 6.4 for the latter).
**Definition 6.1**.: We call a Lie subalgebra \(\mathfrak{h}\) of \(\mathfrak{g}\)_Jordan-closed_ if for every \(x\in\mathfrak{h}\), the semisimple part \(x_{s}\) and nilpotent part \(x_{n}\) of \(x\) both belong to \(\mathfrak{h}\). We define the _Jordan closure_\(\mathfrak{h}^{J}\) of \(\mathfrak{h}\) to be the smallest Jordan-closed Lie subalgebra of \(\mathfrak{g}\) that contains \(\mathfrak{h}\).
_Remarks 6.2_.: (a). It is clear that \(\mathfrak{h}^{J}\) is well-defined. Here is an explicit construction. We define an increasing chain of subalgebras \(\mathfrak{h}_{i}\) of \(\mathfrak{g}\) as follows. Set \(\mathfrak{h}_{0}=\mathfrak{h}\). Given \(\mathfrak{h}_{i}\), let \(\mathfrak{h}_{i+1}\) be the subalgebra generated by the elements of the form \(x_{s}\) and \(x_{n}\) for \(x\in\mathfrak{h}_{i}\). For dimension reasons, the chain becomes stationary and we have \(\mathfrak{h}_{n}=\mathfrak{h}^{J}\) for \(n\) sufficiently large.
(b). If \(\mathfrak{h}\) is algebraic then \(\mathfrak{h}\) is Jordan-closed. In particular, if \(P\) is a parabolic subgroup of \(G\) and \(L\) is a Levi subgroup of \(P\) then \(\operatorname{Lie}(P)\) and \(\operatorname{Lie}(L)\) are Jordan-closed. It follows easily that for any subalgebra \(\mathfrak{m}\) of \(\mathfrak{g}\), \(\mathfrak{m}\) is \(G\)-cr (resp., \(G\)-ir, resp., \(G\)-ind) if and only if \(\mathfrak{m}^{J}\) is. Also, if \(\operatorname{char}(k)=0\), \(\mathfrak{h}\) is a subalgebra of \(\mathfrak{g}\) and \(\mathfrak{h}\) is semisimple then \(\mathfrak{h}\) is algebraic (see [34, Lem. 3.2]), so \(\mathfrak{h}\) is Jordan-closed.
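For instance, if \(G=\operatorname{GL}_{2}\) and \(\mathfrak{h}\) is the line spanned by \(x=\left(\begin{array}{cc}1&1\\ 0&1\end{array}\right)\in\mathfrak{gl}_{2}\), then \(x_{s}=I\) and \(x_{n}=\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right)\) do not lie in \(\mathfrak{h}\), and one step of the construction in (a) above already gives
\[\mathfrak{h}^{J}=\mathfrak{h}_{1}=\operatorname{span}\{I,x_{n}\},\]
which is Jordan-closed, since \(aI+bx_{n}\) has semisimple part \(aI\) and nilpotent part \(bx_{n}\).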
_Remark 6.3_.: Let \(\mathfrak{h}\) be a subalgebra of \(\mathfrak{g}\) and let \(f\colon G\to M\) be an epimorphism of connected reductive groups. We have \(df(x)_{s}=df(x_{s})\) and \(df(x)_{n}=df(x_{n})\) for any \(x\in\mathfrak{h}\), so \(df(\mathfrak{h})\) is Jordan-closed if \(\mathfrak{h}\) is. The converse also holds if \(f\) is an embedding, for then \(df\) is injective.
**Definition 6.4**.: A subalgebra \(\mathfrak{h}\) of \(\mathfrak{g}\) is \(G\)_-toral_ if \(\mathfrak{h}\subseteq\operatorname{Lie}(S)\) for some torus \(S\) of \(G\).
_Remark 6.5_.: Recall that a Lie algebra \(\mathfrak{h}\) is said to be _toral_ if every element of \(\mathfrak{h}\) is ad-semisimple; in this case, \(\mathfrak{h}\) is abelian [22, Lem. 8.1]. Clearly, if \(\mathfrak{h}\) is \(G\)-toral then \(\mathfrak{h}\) is toral, but the converse is false: e.g., take \(\mathfrak{h}\) to be a nonzero abelian subalgebra consisting of nilpotent elements.
The following is the counterpart of [24, Lem. 11.24] for subalgebras of \(\mathfrak{g}\).
**Lemma 6.6**.: _Let \(\mathfrak{h}\) be a \(G\)-toral subalgebra of \(\mathfrak{g}\). Then \(\mathfrak{h}\) is \(G\)-completely reducible._
Proof.: Choose a torus \(S\) such that \(\mathfrak{h}\subseteq\operatorname{Lie}(S)\). By Proposition 3.12(ii) it suffices to prove that \(\mathfrak{h}\) is \(C_{G}(S)\)-cr. But this is clear because \(S\) is central in \(C_{G}(S)\), so \(\operatorname{Lie}(S)\) is contained in \(\operatorname{Lie}(P)\) and \(\operatorname{Lie}(L)\) for every parabolic subgroup \(P\) and every Levi subgroup \(L\) of \(C_{G}(S)\).
Let \(\mathfrak{m}\) be a \(G\)-toral subalgebra of \(\mathfrak{g}\): say, \(\mathfrak{m}\subseteq\operatorname{Lie}(S)\), where \(S\) is a torus of \(G\). Then \(N_{G}(\mathfrak{m})\) is reductive. To see this, note that \(N_{G}(\mathfrak{m})\) contains \(T\), where \(T\) is a maximal torus of \(G\) containing \(S\), so \(N_{G}(\mathfrak{m})^{0}\) is generated by \(T\) and by certain root groups \(U_{\alpha}\) with respect
to \(T\). But if \(\alpha\) is a root, \(g\in U_{\alpha}\) and \(x\in\operatorname{Lie}(S)\) then \(g\cdot x-x\in\operatorname{Lie}(U_{\alpha})\), so \(U_{\alpha}\) normalises \(\mathfrak{m}\) if and only if \(U_{\alpha}\) centralises \(\mathfrak{m}\) if and only if \(d\alpha\) annihilates \(\mathfrak{m}\). We deduce that \(U_{\alpha}\subseteq N_{G}(\mathfrak{m})\) if and only if \(U_{-\alpha}\subseteq N_{G}(\mathfrak{m})\), and reductivity of \(N_{G}(\mathfrak{m})\) follows. This argument also shows that \(N_{G}(\mathfrak{m})^{0}=C_{G}(\mathfrak{m})^{0}\).
Suppose further that \(p\) is fabulous for \(G\). Then centralisers and normalisers are smooth, so
\[\mathfrak{m}\subseteq\mathfrak{n}_{\mathfrak{g}}(\mathfrak{m})=\mathfrak{c}_ {\mathfrak{g}}(\mathfrak{m})=\operatorname{Lie}(C_{G}(\mathfrak{m})^{0}). \tag{6.7}\]
Moreover, let \(K=C_{G}(\mathfrak{m})^{0}\), let \(Z=C_{G}(K)^{0}\) and let \(L=C_{G}(Z)\). As \(p\) is fabulous for \(G\), \(Z\) is smooth. We have \(Z\subseteq Z(K)^{0}\): for if \(g\in C_{G}(K)\) then \(g\) centralises \(\mathfrak{m}\) since \(\mathfrak{m}\subseteq\operatorname{Lie}(K)\) by (6.7), so \(Z=C_{G}(K)^{0}\) is a connected subgroup of \(C_{G}(\mathfrak{m})^{0}=K\); since \(Z\) also centralises \(K\), we get \(Z\subseteq K\cap C_{G}(K)=Z(K)\), and \(Z\subseteq Z(K)^{0}\) as \(Z\) is connected. This implies that \(Z\) is a torus, as \(K\) is reductive. Hence \(L\) is a Levi subgroup of \(G\). We have \(\mathfrak{m}\subseteq\operatorname{Lie}(Z)\) by smoothness of \(Z\) since \(K\) centralises \(\mathfrak{m}\). It follows that \(\mathfrak{m}\subseteq\operatorname{Lie}(L)\) and that if \(\mathfrak{m}\not\subseteq\mathfrak{z}(\mathfrak{g})\) then \(L\) is proper.
**Lemma 6.8**.: _Let \(\mathfrak{h}\) be a subalgebra of \(\mathfrak{g}\) such that every element of \(\mathfrak{h}\) is semisimple. Then \(\mathfrak{h}\) is \(G\)-toral._
Proof.: Note that \(\mathfrak{h}\) is abelian by Remark 6.5. We use induction on \(\dim(G)\). The result holds trivially if \(\dim(G)=0\), so let \(G\) be arbitrary. If \(\mathfrak{h}\subseteq\mathfrak{z}(\mathfrak{g})\) then \(\mathfrak{h}\subseteq\operatorname{Lie}(T)\) for any maximal torus \(T\) of \(G\), so we are done. Otherwise there exists \(x\in\mathfrak{h}\) such that \(C_{G}(x)^{0}\) is a proper reductive subgroup of \(G\). Now \(C_{G}(x)^{0}\) is smooth by [15, 9.1 Prop.] since \(x\) is semisimple, so \(\mathfrak{h}\subseteq\operatorname{Lie}(C_{G}(x)^{0})\). By our induction hypothesis, \(\mathfrak{h}\) is \(C_{G}(x)^{0}\)-toral, so \(\mathfrak{h}\) is \(G\)-toral.
**Lemma 6.9**.: _Let \(\mathfrak{h}\) be a \(G\)-toral subalgebra of \(\mathfrak{g}\) and let \(x\in\mathfrak{g}\) such that \(x\) is semisimple. Suppose that \(x\) centralises \(\mathfrak{h}\), or that \(\operatorname{char}(k)=0\) and \(x\) normalises \(\mathfrak{h}\). Then \(k\cdot x+\mathfrak{h}\) is \(G\)-toral._
Proof.: Choose an embedding of \(G\) in \(\operatorname{GL}_{n}\) for some \(n\in\mathbb{N}\). If \(x\) centralises \(\mathfrak{h}\) then \(k\cdot x\cup\mathfrak{h}\) consists of pairwise commuting semisimple matrices. Hence by a standard theorem from linear algebra, the matrices in \(k\cdot x\cup\mathfrak{h}\) are simultaneously diagonalisable in \(M_{n}(k)\). This implies that every element of \(k\cdot x+\mathfrak{h}\) is semisimple, so \(k\cdot x+\mathfrak{h}\) is \(G\)-toral by Lemma 6.8.
Now suppose \(\operatorname{char}(k)=0\) and \(x\) normalises \(\mathfrak{h}\). Since \(x\) is semisimple, \(x\) acts semisimply on \(\mathfrak{h}\). Let \(y\in\mathfrak{h}\) be any nonzero eigenvector of \(\operatorname{ad}(x)\), with eigenvalue \(a\). A simple calculation shows that \(\operatorname{ad}(x)(y^{t})=tay^{t}\) for any \(t\in\mathbb{N}\), where \(y^{t}\) denotes the usual matrix power of \(y\) in \(M_{n}(k)=\operatorname{Lie}(\operatorname{GL}_{n})\). But \(y^{t}\neq 0\) for any \(t\in\mathbb{N}\) as \(y\) is semisimple, so each \(ta\) is an eigenvalue of \(\operatorname{ad}(x)\) acting on \(M_{n}(k)\); since \(\operatorname{ad}(x)\) has only finitely many eigenvalues and we are in characteristic \(0\), this forces \(a=0\). We deduce that \(x\) centralises \(\mathfrak{h}\), and the result follows from the previous paragraph.
**Lemma 6.10**.: _Suppose \(p\) is fabulous for \(G\). Let \(\mathfrak{h}\) be an abelian \(G\)-cr subalgebra of \(\mathfrak{g}\). Then \(\mathfrak{h}\) is \(G\)-toral._
Proof.: Let \(x\in\mathfrak{h}\). Then \(\mathfrak{c}_{\mathfrak{g}}(x)=\operatorname{Lie}(C_{G}(x))\), so if \(y\in\mathfrak{c}_{\mathfrak{g}}(x)\) then \(y_{s}\in\mathfrak{c}_{\mathfrak{g}}(x)\) and \(y_{n}\in\mathfrak{c}_{\mathfrak{g}}(x)\), so \(y_{s}\) and \(y_{n}\) centralise \(x\). By a similar argument, \(y_{s}\) and \(y_{n}\) centralise \(x_{s}\) and \(x_{n}\). It follows from the construction described in Remark 6.2(a) that \(\mathfrak{h}^{J}\) is abelian. Now \(\mathfrak{h}^{J}\) is \(G\)-cr by Remark 6.2(b), and it is enough to prove that \(\mathfrak{h}^{J}\) is \(G\)-toral. Hence we can assume without loss that \(\mathfrak{h}\) is Jordan-closed.
By Lemma 6.8, it suffices to show that \(\mathfrak{h}\) consists of semisimple elements. Suppose not. Then there exists \(0\neq y\in\mathfrak{h}\) such that \(y\) is nilpotent. The \(1\)-dimensional subspace \(\mathfrak{m}\) spanned by \(y\) is an ideal of \(\mathfrak{h}\), so \(\mathfrak{m}\) is \(G\)-cr by Theorem 5.14. But this is impossible by Example 3.6. This completes the proof.
**Lemma 6.11**.: _Suppose \(p\) is fabulous for \(G\). Let \(\mathfrak{h}\) be a solvable \(G\)-completely reducible subalgebra of \(\mathfrak{g}\). Then \(\mathfrak{h}\) is \(G\)-toral._
Proof.: We use induction on \(\dim(G)+\dim(\mathfrak{h})\). The ideal \([\mathfrak{h},\mathfrak{h}]\) is \(G\)-cr by Theorem 5.14 and \([\mathfrak{h},\mathfrak{h}]\subseteq[\mathfrak{g},\mathfrak{g}]=\operatorname{Lie}([G,G])\). Now \([\mathfrak{h},\mathfrak{h}]\) is a proper subalgebra of \(\mathfrak{h}\), so \([\mathfrak{h},\mathfrak{h}]\) is \(G\)-toral by our induction hypothesis. If \([\mathfrak{h},\mathfrak{h}]=0\) then \(\mathfrak{h}\) is abelian, so \(\mathfrak{h}\) is \(G\)-toral by Lemma 6.10. Otherwise \(0\neq[\mathfrak{h},\mathfrak{h}]\) is not contained in \(\mathfrak{z}(\mathfrak{g})\) as \([G,G]\) is semisimple (see Remarks 2.6(v)), so by the discussion following (6.7), \(\mathfrak{h}\subseteq\mathfrak{n}_{\mathfrak{g}}([\mathfrak{h},\mathfrak{h}])\subseteq\operatorname{Lie}(L)\) for some proper Levi subgroup \(L\) of \(G\). Then \(\mathfrak{h}\) is \(L\)-cr by Proposition 3.12(ii) and \(\dim(L)<\dim(G)\), so \(\mathfrak{h}\) is \(L\)-toral by our induction hypothesis. Hence \(\mathfrak{h}\) is \(G\)-toral, as required.
We can now give a classification result for maximal solvable subalgebras of \(\mathfrak{g}\) when \(p\) is fabulous for \(G\): compare [21, Thm. D(b)].
**Proposition 6.12**.: _Suppose \(p\) is fabulous for \(G\). Let \(\mathfrak{h}\) be a solvable subalgebra of \(\mathfrak{g}\). Then \(\mathfrak{h}\subseteq\mathfrak{h}^{J}\subseteq\operatorname{Lie}(B)\) for some Borel subgroup \(B\) of \(G\). In particular, a maximal solvable subalgebra of \(\mathfrak{g}\) is the Lie algebra of some Borel subgroup._
Proof.: Since \(\operatorname{Lie}(B)\) is Jordan-closed for any Borel subgroup \(B\) of \(G\) (Remarks 6.2(b)), it suffices to prove that \(\mathfrak{h}\subseteq\operatorname{Lie}(B)\) for some Borel subgroup \(B\). Let \(\lambda\in Y(G)\) such that \((P_{\lambda},L_{\lambda})\) yields a \(k\)-semisimplification of \(\mathfrak{h}\). Then \(\mathfrak{s}:=c_{\lambda}(\mathfrak{h})\subseteq\mathfrak{l}_{\lambda}\) is solvable and \(G\)-cr and \(\mathfrak{h}\subseteq\mathfrak{s}+\operatorname{Lie}(R_{u}(P_{\lambda}))\). Now \(\mathfrak{s}\) is \(L_{\lambda}\)-cr by Proposition 3.12(ii), so \(\mathfrak{s}\) is \(L_{\lambda}\)-toral by Lemma 6.11: say, \(\mathfrak{s}\subseteq\operatorname{Lie}(S)\) for some torus \(S\) of \(L_{\lambda}\). It follows that \(\mathfrak{h}\subseteq\operatorname{Lie}(S)+\operatorname{Lie}(R_{u}(P_{\lambda}))=\operatorname{Lie}(SR_{u}(P_{\lambda}))\). But \(SR_{u}(P_{\lambda})\) is contained in a Borel subgroup of \(G\), so the first assertion follows. The second assertion is immediate.
**Corollary 6.13**.: _Suppose \(p\) is fabulous for \(G\). Let \(\mathfrak{h}\) be a Jordan-closed solvable subalgebra of \(\mathfrak{g}\) and let \(\mathfrak{n}\) be the set of nilpotent elements of \(\mathfrak{h}\). Let \(\lambda\in Y(G)\) such that \((P_{\lambda},L_{\lambda})\) yields a semisimplification of \(\mathfrak{h}\). Then:_
(a) \(\mathfrak{n}=\mathfrak{h}\cap\operatorname{Lie}(R_{u}(P_{\lambda}))\)_. In particular,_ \(\mathfrak{n}\) _is an ideal of_ \(\mathfrak{h}\)_._

(b) _There is a_ \(G\)_-toral subalgebra_ \(\mathfrak{s}\) _of_ \(\mathfrak{h}\) _such that_ \(\mathfrak{h}=\mathfrak{s}\oplus\mathfrak{n}\)_._
Proof.: (a). Since \(c_{\lambda}(\mathfrak{h})\) is solvable and \(G\)-cr, \(c_{\lambda}(\mathfrak{h})\) consists of semisimple elements by Lemma 6.11. Hence \(c_{\lambda}\) kills every nilpotent element of \(\mathfrak{h}\), so \(\mathfrak{n}\subseteq\operatorname{Lie}(R_{u}(P_{\lambda}))\). But \(\operatorname{Lie}(R_{u}(P_{\lambda}))\) consists of nilpotent elements, so \(\mathfrak{n}=\mathfrak{h}\cap\operatorname{Lie}(R_{u}(P_{\lambda}))\). This is an ideal of \(\mathfrak{h}\) because \(\mathfrak{h}\subseteq\mathfrak{p}_{\lambda}\).
(b). Let \(\mathfrak{s}\) be a maximal \(G\)-toral subalgebra of \(\mathfrak{h}\). We claim that \(c_{\lambda}(\mathfrak{s})=c_{\lambda}(\mathfrak{h})\). Suppose not. Now \(\mathfrak{s}\) is abelian and consists of ad-semisimple elements, so \(\mathfrak{s}\) acts completely reducibly on \(\mathfrak{h}\). Let \(\mathfrak{h}_{0}=\mathfrak{c}_{\mathfrak{h}}(\mathfrak{s})\) be the trivial weight space for the action of \(\mathfrak{s}\) on \(\mathfrak{h}\); then \(c_{\lambda}(\mathfrak{h}_{0})=c_{\lambda}(\mathfrak{h})\). Note that \(\mathfrak{s}\subseteq\mathfrak{h}_{0}\) as \(\mathfrak{s}\) is abelian.
By hypothesis, there exists \(x\in\mathfrak{h}_{0}\) such that \(c_{\lambda}(x)\not\in c_{\lambda}(\mathfrak{s})\). As \(p\) is fabulous for \(G\), \(\mathfrak{c}_{\mathfrak{g}}(\mathfrak{s})=\operatorname{Lie}(C_{G}(\mathfrak{s}))\) is Jordan-closed, by Remarks 6.2(b), and \(\mathfrak{h}\) is Jordan-closed by assumption, so \(\mathfrak{h}_{0}=\mathfrak{c}_{\mathfrak{h}}(\mathfrak{s})=\mathfrak{c}_{\mathfrak{g}}(\mathfrak{s})\cap\mathfrak{h}\) is Jordan-closed. Hence \(x_{s}\in\mathfrak{h}_{0}\) and \(c_{\lambda}(x_{s})=c_{\lambda}(x)\not\in c_{\lambda}(\mathfrak{s})\); in particular, \(x_{s}\not\in\mathfrak{s}\). Since \(x_{s}\) is semisimple and commutes with \(\mathfrak{s}\), the subalgebra \(\mathfrak{s}\oplus k\cdot x_{s}\) of \(\mathfrak{h}\) is \(G\)-toral by Lemma 6.9. But this contradicts the choice of \(\mathfrak{s}\). We deduce that \(c_{\lambda}(\mathfrak{s})=c_{\lambda}(\mathfrak{h})\), which implies that \(\mathfrak{h}=\mathfrak{s}\oplus\mathfrak{n}\).
## 7. Characteristic \(0\)
We keep our assumption that \(k\) is algebraically closed. If \(H\) is a \(G\)-cr subgroup of \(G\) then \(H\) is reductive, and the converse also holds if \(\operatorname{char}(k)=0\)[36, Prop. 4.2]. This gives an intrinsic characterisation of \(G\)-cr subgroups in characteristic \(0\): a subgroup \(H\) is \(G\)-cr if and only if it is reductive. (Note that a reductive subgroup of \(G\) need not be \(G\)-cr in positive characteristic: e.g., see [7, Ex. 3.45].) The situation for Lie algebras is not so clear-cut.
**Example 7.1**.: Consider the \(1\)-dimensional Lie algebra \(k\). We may embed \(k\) in \(\mathfrak{sl}_{2}\) as the subalgebra of traceless diagonal matrices, or as the subalgebra of strictly upper triangular matrices. It is clear that in the first case, the image of \(k\) is \(\operatorname{SL}_{2}\)-cr, while in the second it is not.
This example shows that even in characteristic \(0\), we cannot determine whether a subalgebra \(\mathfrak{h}\) of \(\mathfrak{g}\) is \(G\)-cr by looking only at the intrinsic properties of \(\mathfrak{h}\). We can trace the problem to the lack of a Jordan decomposition for \(\mathfrak{h}\): there is no intrinsic notion of semisimple and nilpotent elements. We do have notions of ad-semisimple and ad-nilpotent elements, but they don't detect properties of \(\mathfrak{z}(\mathfrak{h})\).
There is an intrinsic characterisation for when a restricted subalgebra \(\mathfrak{h}\) of \(\mathfrak{g}\) is \(G\)-cr when the characteristic is positive and sufficiently large, using the notion of "\(p\)-reductive" subalgebras: see [21, Cor. 10.3]. In _loc. cit._ one makes use of the extra structure arising from the \(p\)-power map on \(\mathfrak{h}\). Below we give a counterpart to [21, Cor. 10.3] by providing a characterisation of \(G\)-cr subalgebras of \(\mathfrak{g}\) in characteristic \(0\) (Theorem 7.3), and we give an explicit description for a semisimplification of an arbitrary subalgebra in characteristic \(0\). Different methods from those of _op. cit._ are required, as there is no notion of restricted structure here.
The content of Theorem 7.3 in characteristic \(0\) follows quickly from work of Richardson -- see Remark 7.8 below. We give an independent proof, however, as some of our results hold under the weaker hypothesis that \(p\) is fabulous for \(G\).
The next result is a Lie algebra counterpart of [7, Thm. 3.46]; the analogues to the separability conditions in _loc. cit._ hold here because of the assumption on \(p\).
**Proposition 7.2**.: _Suppose \(p\) is fabulous for \(G\). Let \(\mathfrak{h}\) be a subalgebra of \(\mathfrak{g}\) such that \(\mathfrak{h}\) acts semisimply on \(\mathfrak{g}\). Then \(\mathfrak{h}\) is \(G\)-completely reducible._
Proof.: Suppose for a contradiction that \(\mathfrak{h}\) is not \(G\)-cr. Choose \(\mathbf{x}=(x_{1},\ldots,x_{m})\in\mathfrak{h}^{m}\) for some \(m\in\mathbb{N}\) such that the \(x_{i}\) span \(\mathfrak{h}\). The orbit \(G\cdot\mathbf{x}\) is not closed (Theorem 3.11), so there exists \(\lambda\in Y(G)\) such that \(\mathbf{x}^{\prime}:=\lim_{a\to 0}\lambda(a)\cdot\mathbf{x}\) exists and \(G\cdot\mathbf{x}^{\prime}\) is closed. This implies that \(\dim(G\cdot\mathbf{x})>\dim(G\cdot\mathbf{x}^{\prime})\). Consequently, letting \(\mathfrak{h}^{\prime}\) be the subalgebra generated by the components of \(\mathbf{x}^{\prime}\), we have \(\dim(C_{G}(\mathfrak{h}^{\prime}))>\dim(C_{G}(\mathfrak{h}))\), so
\[\dim(\mathfrak{c}_{\mathfrak{g}}(\mathfrak{h}^{\prime}))=\dim(\operatorname{ Lie}(C_{G}(\mathfrak{h}^{\prime})))>\dim(\operatorname{Lie}(C_{G}(\mathfrak{h})))= \dim(\mathfrak{c}_{\mathfrak{g}}(\mathfrak{h})),\]
where the equalities hold because \(p\) is fabulous for \(G\).
Now \(\operatorname{ad}_{G}(\mathfrak{h})\) is \(\operatorname{GL}(\mathfrak{g})\)-cr, so \(\operatorname{GL}(\mathfrak{g})\cdot\operatorname{ad}_{G}(\mathbf{x})\) is closed. Hence \(\operatorname{ad}_{G}(\mathbf{x}^{\prime})=\operatorname{ad}_{G}\left(\lim_{ a\to 0}\lambda(a)\cdot\mathbf{x}\right)=\lim_{a\to 0}\lambda(a)\cdot\operatorname{ad}_{G}( \mathbf{x})\) is \(\operatorname{GL}(\mathfrak{g})\)-conjugate to \(\operatorname{ad}_{G}(\mathbf{x})\). But \(\mathfrak{c}_{\mathfrak{g}}(\mathfrak{h}^{\prime})\) and \(\mathfrak{c}_{\mathfrak{g}}(\mathfrak{h})\) are precisely the subsets of \(\mathfrak{g}\) annihilated by \(\operatorname{ad}_{G}(\mathfrak{h}^{\prime})\) and \(\operatorname{ad}_{G}(\mathfrak{h})\), respectively, so \(\mathfrak{c}_{\mathfrak{g}}(\mathfrak{h}^{\prime})\) and \(\mathfrak{c}_{\mathfrak{g}}(\mathfrak{h})\) are \(\operatorname{GL}(\mathfrak{g})\)-conjugate, which implies they have the same dimension. This gives a contradiction. We conclude that \(\mathfrak{h}\) must be \(G\)-cr after all, so we are done.
Before we proceed to Theorem 7.3, we need to recall some standard Lie algebra theory. If \(\mathfrak{h}\) is a Lie algebra then we define \(\operatorname{rad}(\mathfrak{h})\) to be the _solvable radical_ of \(\mathfrak{h}\): that is, the unique largest solvable ideal of \(\mathfrak{h}\). We say that \(\mathfrak{h}\) is _semisimple_ if \(\operatorname{rad}(\mathfrak{h})=0\), [22, SS3.1]. If \(\operatorname{char}(k)=0\) then any finite-dimensional representation of a semisimple Lie algebra is completely reducible [22, Thm. 6.3], and any Lie algebra \(\mathfrak{h}\) has a _Levi decomposition_\(\mathfrak{h}=\mathfrak{k}\oplus\operatorname{rad}(\mathfrak{h})\), where \(\mathfrak{k}\) is a semisimple subalgebra of \(\mathfrak{h}\) (not necessarily an ideal) [16, SS6.8 Thm. 5]. In this case, if \(\operatorname{rad}(\mathfrak{h})\) is \(G\)-toral then \(\operatorname{rad}(\mathfrak{h})=\mathfrak{z}(\mathfrak{h})\). For if \(x\in\mathfrak{h}\) is semisimple then \(x\) centralises \(\operatorname{rad}(\mathfrak{h})\) by Lemma 6.9; but the set of semisimple elements of \(\mathfrak{k}\) is dense as \(\mathfrak{k}\) is semisimple, so \(\mathfrak{k}\) centralises \(\operatorname{rad}(\mathfrak{h})\), so \(\operatorname{rad}(\mathfrak{h})\subseteq\mathfrak{z}(\mathfrak{h})\). The reverse inclusion follows because \(\mathfrak{z}(\mathfrak{k})=0\).
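As a standard orienting example (ours, not drawn from the text): take \(G=\operatorname{GL}_{n}\) and \(\mathfrak{h}=\mathfrak{g}=\mathfrak{gl}_{n}\) in characteristic \(0\). Then

\[\mathfrak{gl}_{n}=\mathfrak{sl}_{n}\oplus kI_{n},\qquad\operatorname{rad}(\mathfrak{gl}_{n})=\mathfrak{z}(\mathfrak{gl}_{n})=kI_{n},\]

so \(\mathfrak{k}=\mathfrak{sl}_{n}\) is a Levi factor and the solvable radical consists of the scalar matrices; this radical lies in the Lie algebra of the central torus of \(\operatorname{GL}_{n}\), so it is \(G\)-toral and coincides with the centre, as in the discussion above.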
**Theorem 7.3**.: _Let \(\mathfrak{h}\) be a subalgebra of \(\mathfrak{g}\). Consider the following conditions:_
1. \(\mathfrak{h}\) _acts semisimply on_ \(\mathfrak{g}\)_._
2. \(\mathfrak{h}\) _is_ \(G\)_-completely reducible._
3. \(\operatorname{rad}(\mathfrak{h})\) _is_ \(G\)_-toral._
_If \(p\) is fabulous for \(G\) then (i) \(\implies\) (ii) \(\implies\) (iii). If \(\operatorname{char}(k)=0\) then (i)-(iii) are equivalent._
Proof.: Suppose \(p\) is fabulous for \(G\). If (i) holds then (ii) holds by Proposition 7.2, while if (ii) holds then (iii) holds by Theorem 5.14 and Lemma 6.11. Now suppose \(\operatorname{char}(k)=0\). To complete the proof, it's enough to show that (iii) implies (i). So suppose \(\operatorname{rad}(\mathfrak{h})\) is \(G\)-toral. Write \(\mathfrak{h}=\mathfrak{k}\oplus\operatorname{rad}(\mathfrak{h})\), where \(\mathfrak{k}\) is semisimple. Since \(\operatorname{char}(k)=0\), \(\mathfrak{k}\) acts semisimply on \(\mathfrak{g}\). Now \(\operatorname{rad}(\mathfrak{h})\) consists of pairwise commuting ad-semisimple elements, so \(\operatorname{rad}(\mathfrak{h})\) acts semisimply on \(\mathfrak{g}\). But \(\operatorname{rad}(\mathfrak{h})\) commutes with \(\mathfrak{k}\), so it follows easily that \(\mathfrak{h}\) acts semisimply on \(\mathfrak{g}\), as required.
Let \(f\colon G\to M\) be a homomorphism of connected reductive groups. We say that \(f\) is _non-degenerate_ if \(\ker(f)^{0}\) is a torus. The following is the Lie algebra counterpart of [36, Cor. 4.3], see also [7, Lem. 2.12(ii)].
**Corollary 7.4**.: _Suppose \(\operatorname{char}(k)=0\). Let \(f\colon G\to M\) be a homomorphism of connected reductive groups. Let \(\mathfrak{h}\) be a subalgebra of \(\mathfrak{g}\). If \(\mathfrak{h}\) is \(G\)-completely reducible, then \(df(\mathfrak{h})\) is \(M\)-completely reducible. Conversely, if \(f\) is non-degenerate and \(df(\mathfrak{h})\) is \(M\)-completely reducible then \(\mathfrak{h}\) is \(G\)-completely reducible._
Proof.: Write \(\mathfrak{h}=\mathfrak{k}\oplus\mathfrak{z}\), where \(\mathfrak{k}\) is semisimple and \(\mathfrak{z}=\operatorname{rad}(\mathfrak{h})\). Then \(df(\mathfrak{k})\) is semisimple and \(df(\mathfrak{z})\) is solvable, so \(df(\mathfrak{h})=df(\mathfrak{k})\oplus df(\mathfrak{z})\). It follows that \(\operatorname{rad}(df(\mathfrak{h}))=df(\mathfrak{z})\). If \(\mathfrak{h}\) is \(G\)-cr then \(\mathfrak{z}\) is \(G\)-toral by Theorem 7.3, so there is a torus \(S\) of \(G\) such that \(\mathfrak{z}\subseteq\mathfrak{s}\). Then \(df(\mathfrak{z})\) belongs to the Lie algebra of the torus \(f(S)\), so \(df(\mathfrak{z})\) is \(M\)-toral. Hence \(df(\mathfrak{h})\) is \(M\)-cr by Theorem 7.3.
Conversely, suppose \(f\) is non-degenerate and \(df(\mathfrak{h})\) is \(M\)-cr. Then \(df(\mathfrak{z})=\operatorname{rad}(df(\mathfrak{h}))\) is \(M\)-toral by Theorem 7.3. Since \(\ker(df)=\operatorname{Lie}(\ker(f))\) is central in \(\mathfrak{g}\) and consists of semisimple elements, it follows that \(\mathfrak{z}\) consists of semisimple elements. Hence \(\mathfrak{z}\) is \(G\)-toral (Lemma 6.8), so \(\mathfrak{h}\) is \(G\)-cr by Theorem 7.3.
_Remark 7.5_.: Applying Theorem 3.11, we can translate Corollary 7.4 into more geometric language. Let \(m\in\mathbb{N}\) and let \((x_{1},\ldots,x_{m})\in\mathfrak{g}^{m}\). It follows from Corollary 7.4 applied to the subalgebra \(\mathfrak{h}\) generated by the \(x_{i}\) that if \(G\cdot(x_{1},\ldots,x_{m})\) is closed then \(M\cdot df(x_{1},\ldots,x_{m})\) is closed, and that the converse also holds if \(f\) is non-degenerate.
**Corollary 7.6**.: _Suppose \(\operatorname{char}(k)=0\). Let \(\mathfrak{h}\) be a \(G\)-completely reducible subalgebra of \(\mathfrak{g}\). Then \(\mathfrak{h}\) is Jordan-closed._
Proof.: We can write \(\mathfrak{h}=\mathfrak{k}\oplus\operatorname{rad}(\mathfrak{h})\) for some semisimple subalgebra \(\mathfrak{k}\) of \(\mathfrak{g}\). Now \(\operatorname{rad}(\mathfrak{h})\) is \(G\)-toral by Theorem 7.3, so \(\operatorname{rad}(\mathfrak{h})\) is Jordan-closed and \(\operatorname{rad}(\mathfrak{h})=\mathfrak{z}(\mathfrak{h})\). By [34, Lem. 3.2], \(\mathfrak{k}\) is algebraic, so \(\mathfrak{k}\) is Jordan-closed. It now follows easily that \(\mathfrak{h}\) is Jordan-closed.
_Remark 7.7_.: The equivalence of (ii) and (iii) in Theorem 7.3 can fail in positive characteristic. For example, let \(p\), \(G\) and \(\mathfrak{h}\) be as in Example 2.7. We observed earlier that \(\mathfrak{h}\) is \(G\)-ir, but it is easy to check that every element of \(\mathfrak{h}\) is nilpotent.
Conversely, let \(k\) be algebraically closed of characteristic \(3\), let \(M=\operatorname{SL}_{2}\), let \(V\) be the natural module for \(M\), and consider the action of \(M\) on the third symmetric power \(W:=S^{3}V\)--this gives an embedding of \(M\) inside \(G:=\operatorname{GL}_{4}\) and gives rise to a faithful representation of the Lie algebra \(\mathfrak{m}\). We claim that \(W\) is not semisimple as an \(\mathfrak{m}\)-module, so \(\mathfrak{m}\) is not \(G\)-cr even though it has trivial radical. The four-dimensional module \(W\) has a basis \(x^{3},x^{2}y,xy^{2},y^{3}\), where \(x\) and \(y\) can be identified with the standard basis vectors for \(V\). The basis vectors \(x^{3}\) and \(y^{3}\) are killed by the action of \(\mathfrak{m}\), so span a two-dimensional submodule with a trivial \(\mathfrak{m}\)-action. The quotient by this submodule is a copy of the natural module \(V\), which is simple, but there is no submodule of \(W\) isomorphic as an \(\mathfrak{m}\)-module to \(V\), as direct calculation with the standard generators for \(\mathfrak{m}\cong\mathfrak{sl}_{2}\) will easily verify.
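For the reader's convenience, here is the direct calculation behind the first claim (our own verification), using the standard basis \(e,f,h\) of \(\mathfrak{m}\cong\mathfrak{sl}_{2}\) acting by derivations on \(W=S^{3}V\), with coefficients reduced modulo \(3\):

\[\begin{array}{llll}e\cdot x^{3}=0,&e\cdot x^{2}y=x^{3},&e\cdot xy^{2}=2x^{2}y,&e\cdot y^{3}=3xy^{2}\equiv 0,\\ f\cdot x^{3}=3x^{2}y\equiv 0,&f\cdot x^{2}y=2xy^{2},&f\cdot xy^{2}=y^{3},&f\cdot y^{3}=0,\\ h\cdot x^{3}=3x^{3}\equiv 0,&h\cdot x^{2}y=x^{2}y,&h\cdot xy^{2}=-xy^{2},&h\cdot y^{3}=-3y^{3}\equiv 0,\end{array}\]

so in characteristic \(3\) both \(x^{3}\) and \(y^{3}\) are annihilated by \(e\), \(f\) and \(h\), and hence span a two-dimensional submodule with trivial \(\mathfrak{m}\)-action.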
Corollary 7.6 can also fail in positive characteristic. For, let \(\operatorname{char}(k)=2\) and let \(G=\operatorname{PGL}_{2}(k)\times k^{*}\times k^{*}\) and let \(\mathfrak{h}\) be the subalgebra of \(\mathfrak{g}\) spanned by the elements of the form \(\left(\left(\begin{array}{cc}0&a\\ 0&0\end{array}\right),a,0\right)\) and \(\left(\left(\begin{array}{cc}0&0\\ b&0\end{array}\right),0,b\right)\) for \(a,b\in k\). Clearly \(\mathfrak{h}\) is \(G\)-cr but is not Jordan-closed.
_Remark 7.8_.: Richardson's seminal paper [34] laid the foundations for the study via GIT of \(G\)-complete reducibility for subgroups and subalgebras. We explain how to obtain Theorem 7.3 in the characteristic \(0\) case from Richardson's results. Suppose \(\operatorname{char}(k)=0\). If \(\mathfrak{h}\) is a subalgebra of \(\mathfrak{g}\) then there is a unique smallest subalgebra \(\mathfrak{h}^{\operatorname{alg}}\) of \(\mathfrak{g}\) such that \(\mathfrak{h}\subseteq\mathfrak{h}^{\operatorname{alg}}\) and \(\mathfrak{h}^{\operatorname{alg}}\) is algebraic, and there is a unique connected subgroup \(A(\mathfrak{h})\) of \(G\) such that \(\mathfrak{h}^{\operatorname{alg}}=\operatorname{Lie}(A(\mathfrak{h}))\). In fact, the map \(K\mapsto\operatorname{Lie}(K)\) gives an inclusion-preserving bijection from the set of connected subgroups of \(G\) to the set of algebraic subalgebras of \(\mathfrak{g}\)[23, 13.1 Thm.].
Now let \(m\in\mathbb{N}\), let \(x_{1},\ldots,x_{m}\in\mathfrak{g}\), let \(\mathfrak{h}\) be the subalgebra of \(\mathfrak{g}\) generated by the \(x_{i}\) and let \(M=A(\mathfrak{h})\). Richardson showed that the following are equivalent (see [34, Lem. 3.5 and Thm. 3.6]): (a) \(G\cdot(x_{1},\ldots,x_{m})\) is closed; (b) \(M\) is reductive; (c) \(M\) acts semisimply on \(\mathfrak{g}\). These conditions are equivalent to \(\mathfrak{h}\) being \(G\)-cr, by Theorem 3.11. Note that if \(M\) is reductive then \(\operatorname{rad}(\mathfrak{h}^{\operatorname{alg}})=\operatorname{rad}(\operatorname{Lie}(M))=\operatorname{Lie}(Z(M)^{0})\) is \(G\)-toral, and it is not hard to see that the converse holds. Moreover, it is also straightforward to show that \(\operatorname{rad}(\mathfrak{h}^{\operatorname{alg}})=\operatorname{rad}(\mathfrak{h})^{\operatorname{alg}}\), and that \(\operatorname{rad}(\mathfrak{h})^{\operatorname{alg}}\) is \(G\)-toral if and only if \(\operatorname{rad}(\mathfrak{h})\) is \(G\)-toral. The characteristic \(0\) case of Theorem 7.3 now follows.
**Example 7.9**.: Assume \(\operatorname{char}(k)=0\). We now give an explicit description of the \(k\)-semi-simplification of a subalgebra \(\mathfrak{h}\) of \(\mathfrak{g}\). Write \(\mathfrak{h}=\mathfrak{k}\oplus\mathfrak{m}\), where \(\mathfrak{k}\) is semisimple and \(\mathfrak{m}=\operatorname{rad}(\mathfrak{h})\). Then \(\mathfrak{k}\) is Jordan-closed (see the proof of Corollary 7.6), and it follows easily that \(\mathfrak{h}^{J}=\mathfrak{k}\oplus\mathfrak{m}^{J}\). Now \(\mathfrak{m}^{J}\) is solvable by Proposition 6.12. Write \(\mathfrak{m}^{J}=\mathfrak{s}\oplus\mathfrak{n}\) as in Corollary 6.13. We claim that \(\mathfrak{k}\oplus\mathfrak{s}\) is a semisimplification of both \(\mathfrak{h}\) and \(\mathfrak{h}^{J}\).
To see this, choose a parabolic subgroup \(P\) of \(G\) such that \(P\) yields a semisimplification of \(\mathfrak{h}^{J}\). Since \(\mathfrak{k}\oplus\mathfrak{s}\) is \(G\)-cr by Theorem 7.3, we can choose \(\lambda\in Y(G)\) such that \(P_{\lambda}=P\) and \(\lambda\) centralises \(\mathfrak{k}\oplus\mathfrak{s}\). Now \(\mathfrak{m}^{J}\) is an ideal of \(\mathfrak{h}^{J}\), so \((P_{\lambda},L_{\lambda})\) also yields a semisimplification of \(\mathfrak{m}^{J}\) by Theorem 5.14(b). Hence \(c_{\lambda}\) kills \(\mathfrak{n}\) by Corollary 6.13(a). It follows that \(c_{\lambda}(\mathfrak{h}^{J})=\mathfrak{k}\oplus\mathfrak{s}\).
If \(a\in k^{*}\) then \(\lambda(a)\cdot\mathfrak{m}\subseteq\mathfrak{p}_{\lambda}\) and \(c_{\lambda}(\lambda(a)\cdot\mathfrak{m})=c_{\lambda}(\mathfrak{m})\) since \(\lambda\) centralises \(\mathfrak{s}\). Hence \(c_{\lambda}(\mathfrak{m})=c_{\lambda}(\mathfrak{m}^{\prime})\), where \(\mathfrak{m}^{\prime}\) is the subspace of \(\mathfrak{p}_{\lambda}\) spanned by the subspaces \(\lambda(a)\cdot\mathfrak{m}\) for \(a\in k^{*}\). It is clear that \(\mathfrak{m}^{\prime}\) contains \(\mathfrak{m}^{J}\), since \(\mathfrak{n}\) is contained in the sum of the nonzero weight spaces for \(\lambda\) acting on \(\mathfrak{p}_{\lambda}\). Hence \(c_{\lambda}(\mathfrak{m})=c_{\lambda}(\mathfrak{m}^{\prime})=c_{\lambda}( \mathfrak{m}^{J})=\mathfrak{s}\), and we conclude that \(c_{\lambda}(\mathfrak{h})=\mathfrak{k}\oplus\mathfrak{s}\). But \(\mathfrak{k}\oplus\mathfrak{s}\) is \(G\)-cr, so the rest of the claim follows.
We finish by giving a refinement of Remark 4.4 in characteristic \(0\).
**Proposition 7.10**.: _Let \(k\), \(H\), \(G\), \(\psi\) and \(f_{1},\ldots,f_{t}\) be as in Remark 4.4. We suppose further that \(\operatorname{char}(k)=0\) and we keep our assumption that \(k\) is algebraically closed. Let \((y_{1},\ldots,y_{m})\in\mathfrak{g}^{m}\) such that the subalgebra \(\mathfrak{m}\) generated by the \(y_{i}\) is \(G\)-completely reducible. Then \((y_{1},\ldots,y_{m})\) belongs to \(G\cdot d\iota(\mathfrak{h}^{m})\) if and only if \(f_{i}(y_{1},\ldots,y_{m})=0\) for \(1\leq i\leq t\)._
Proof.: Suppose \((y_{1},\ldots,y_{m})\) belongs to \(G\cdot d\iota(\mathfrak{h}^{m})\). Then there exist \((x_{1},\ldots,x_{m})\in\mathfrak{h}^{m}\) and \(g\in G\) such that \(g\cdot(y_{1},\ldots,y_{m})=d\iota(x_{1},\ldots,x_{m})\). So
\[\pi_{\mathfrak{g}^{m},G}(y_{1},\ldots,y_{m})=\pi_{\mathfrak{g}^{m},G}(d\iota(x _{1},\ldots,x_{m}))=\psi(\pi_{\mathfrak{h}^{m},H}(x_{1},\ldots,x_{m}))\in\psi( \mathfrak{h}^{m}/\!/H).\]
Remark 4.4 implies that \(f_{i}(y_{1},\ldots,y_{m})=0\) for \(1\leq i\leq t\).
Conversely, suppose \(f_{i}(y_{1},\ldots,y_{m})=0\) for \(1\leq i\leq t\). Then
\[\pi_{\mathfrak{g}^{m},G}(y_{1},\ldots,y_{m})=\psi(\pi_{\mathfrak{h}^{m},H}(x_ {1},\ldots,x_{m}))=\pi_{\mathfrak{g}^{m},G}(d\iota(x_{1},\ldots,x_{m}))\]
for some \((x_{1},\ldots,x_{m})\in\mathfrak{h}^{m}\) by the choice of the \(f_{i}\). Without loss we can assume that the orbit \(H\cdot(x_{1},\ldots,x_{m})\) is closed. Then \(G\cdot d\iota(x_{1},\ldots,x_{m})\) is closed by Remark 7.5. Now \(G\cdot(y_{1},\ldots,y_{m})\) is closed by Theorem 3.11. It follows that \(G\cdot d\iota(x_{1},\ldots,x_{m})=G\cdot(y_{1},\ldots,y_{m})\), so \((y_{1},\ldots,y_{m})\in G\cdot d\iota(\mathfrak{h}^{m})\).
Proposition 7.10 will be used in forthcoming work of the third author with A.R. Gover.
**Acknowledgments**: The research of this work was supported in part by the DFG (Grant #RO 1072/22-1 (project number: 498503969) to G. Rohrle).
|
2306.08795
|
Identifying and Explaining the Resilience of Ecological Networks
|
Resilient ecological systems will be better able to maintain their structure
and function in the emerging Anthropocene. Estimating the resilience of
different systems will therefore provide valuable insight for conservation
decision-makers, and is a priority goal of resilience theory. Current
estimation methods rely on the accurate parameterisation of ecosystem models,
or the identification of important motifs in the structure of the ecological
system network. However, both of these methods face significant empirical and
theoretical challenges. In this paper, we adapt tools developed for the
analysis of biochemical regulatory networks to prove that a form of resilience
- robust perfect adaptation - is a property of particular ecological networks,
and to explain the specific process by which the ecosystem maintains its
resilience. We undertake an exhaustive search for robust perfect adaptation
across all possible three-species ecological networks, under a generalised
Lotka-Volterra framework. From over 20,000 possible network structures, we
identify 23 network structures that are capable of robust perfect adaptation.
The resilient properties of these networks provide important insights into the
potential mechanisms that could promote resilience in ecosystems, and suggest
new avenues for measuring and understanding the property of ecological
resilience in larger, more realistic socioecological networks.
|
Cailan Jeynes-Smith, Michael Bode, Robyn P. Araujo
|
2023-06-15T00:44:54Z
|
http://arxiv.org/abs/2306.08795v1
|
# Identifying and Explaining the Resilience of Ecological Networks
###### Abstract
Resilient ecological systems will be better able to maintain their structure and function in the emerging Anthropocene. Estimating the resilience of different systems will therefore provide valuable insight for conservation decision-makers, and is a priority goal of resilience theory. Current estimation methods rely on the accurate parameterisation of ecosystem models, or the identification of important motifs in the structure of the ecological system network. However, both of these methods face significant empirical and theoretical challenges. In this paper, we adapt tools developed for the analysis of biochemical regulatory networks to prove that a form of resilience - robust perfect adaptation - is a property of particular ecological networks, and to explain the specific process by which the ecosystem maintains its resilience. We undertake an exhaustive search for robust perfect adaptation across all possible three-species ecological networks, under a generalised Lotka-Volterra framework. From over 20,000 possible network structures, we identify 23 network structures that are capable of robust perfect adaptation. The resilient properties of these networks provide important insights into the potential mechanisms that could promote resilience in ecosystems, and suggest new avenues for measuring and understanding the property of ecological resilience in larger, more realistic socioecological networks.
**Keywords: resilience, adaptation, socioecological, ecosystem, management Author contributions statement**
CJS conceptualised the idea, RPA contributed to the modelling framework, CJS implemented methods, analysed results, created visualisations and produced the first draft of the manuscript, all authors contributed to the review and editing of the manuscript, and MB, and RPA supervised the project.
**Acknowledgements:**
Computational resources and services used in this work were provided by the eResearch Office, Queensland University of Technology, Brisbane, Australia.
**Funding:** Robyn P. Araujo is the recipient of an Australian Research Council (ARC) Future Fellowship (project number FT190100645) funded by the Australian Government, Cailan Jeynes-Smith is supported by an Australian Government Research Training Program Scholarship.
**Conflict of Interest Statement:**
We declare we have no competing interests.
**Data Statement**
Data sharing is not applicable to this article as no new data were created or analyzed in this study. All code used to generate the results and figures in this article can be found on Github:
[https://github.com/JeynesSmith/PerfectResilience.git](https://github.com/JeynesSmith/PerfectResilience.git)
## Introduction
As the Anthropocene drives accelerating global change, resilience is an important and desirable characteristic of ecological and socioecological systems [1, 2, 3]. Resilience has a range of definitions, but it primarily refers to the ability of an ecosystem to return to a particular set of ecological states following changes in environmental or social (policy) pressures (Figure 1). Resilience is generally used to describe a property exhibited by a particular ecological or socioecological system [4, 5, 6, 7, 8, 9, 10, 11], or as a measure of how well the population will recover from perturbation [12, 13, 14, 15, 16]. Understanding whether, and to what extent, an ecological or socioecological system is resilient helps us understand whether an ecosystem is at risk of collapse, or how far it can be pushed by environmental or social changes [1, 2, 3].
Figure 1: An example of the perfect resilience behaviour, where a species' abundance is able to consistently return exactly to its original abundance following a disturbance. This behaviour is equivalent to robust perfect adaptation in biochemical reaction networks. Resilience, or imperfect adaptation, is a similar behaviour in which the abundance does not strictly return to the exact original abundance, but instead returns to within some surrounding region. This figure is generated by simulating a version of the network in Figure 3(i). We simulate the system for one hundred time steps before doubling a stimulus (\(S\)) which disturbs the network, specifically \(S=\{1,2,4\}\). Parameters: \(r_{I}=0.61\), \(d_{O}=-0.07\), \(a_{13}=-0.37\), \(a_{21}=0.27\), \(a_{22}=-0.81\), \(a_{31}=0.32\), \(a_{32}=-0.69\).
A major hurdle to the use of resilience theory in conservation decision-making is its measurement. A primary approach is to create and parameterise a dynamical systems model of the ecological system, and then to simulate its response to perturbations [4, 5]. However, the complexity of ecological systems, paired with sparse and noisy data sets, make the process of parameter estimation incredibly difficult [17, 18]. If resilience could be defined as a property of the structure of the ecological system, then parameter identifiability problems could be overcome.
Previous studies into resilience of socioecological systems have proposed theoretical structures - network motifs - that underpin resilience [19, 20]. However, this research is relatively atheoretical, with resilient network motifs being proposed on the basis of intuition, and justified by statistical association with observations of resilient dynamics. This limits the insights that these methods can offer.
By contrast, resilience has been studied extensively in biochemical reaction networks, where it is called adaptation. Resilience/adaptation is a relatively common dynamical property in cellular systems, where it has evolved to maintain the processes of life in a stochastic environment where external stimuli are constantly perturbing the abundances of molecules in the system. It has been observed in applications scaling from chemotaxis in single-celled organisms [21, 22, 23, 24, 25, 26, 27, 28, 29] to complex sensory systems [30, 31, 32, 33], while the loss of adaptation has been linked to cancer progression, and substance abuse [34, 35, 36]. Resilience/adaptation is further categorised as either imperfect adaptation, where a particular element of the system returns within some accepted tolerance of its original (pre-stimulus) abundance [37, 38, 39, 40, 41], or as robust perfect adaptation, where that target element returns precisely to the exact same (pre-stimulus) abundance [42, 43, 44, 45, 46, 47, 48, 49, 50, 51] (see Figure 1). Importantly, robust perfect adaptation is a property of the network interaction structure, as opposed to a property of a specific parameterisation of the network [42].
In this study we apply analytical tools from biochemical reaction network theory to understand the process of resilience in ecological networks, and to assess whether ecological systems can exhibit robust perfect adaptation, and under what circumstances. To more closely match similar concepts in the existing ecological literature, we herein refer to 'perfect resilience' as a mathematically equivalent property to robust perfect adaptation. We study ecological systems at the 'operational layer' [19], where the effect of policies directly impact populations in an ecosystem. We perform an extensive search of all three-species ecosystems using a novel application of the generalised Lotka-Volterra equations - a commonly used mechanistic framework in ecological modelling. We ask, are there network motifs that promote perfect resilience under an ecological framework, and will the corresponding networks still be representative of _in situ_ ecological systems? If possible, these specific motifs could be identified in ecosystems instead of current, more generalised motifs [19].
## Methods
The structures capable of robust perfect adaptation (RPA) in biochemical reaction networks have recently been identified in full generality [42, 44]. Ma et al. [37] performed extensive numerical simulations of three-node chemical reaction (signalling) networks with Michaelis-Menten kinetics, and identified that there were two motif structures that support RPA (at least approximately, as determined by tight numerical thresholds): a negative feedback loop with buffer node (NFBLB), and an incoherent feedforward loop with proportioner node (IFFLP). Araujo and Liotta [42] later determined that for networks of arbitrary size and complexity, and for arbitrary interaction kinetics, two well-defined subnetwork structures, or 'modules', constitute a topological basis for RPA in any network: Opposer modules, which are generalisations of three-node NFBLBs, and are feedback-structured subnetworks; and Balancer modules, which are generalisations of three-node IFFLPs, and are feedforward-structured subnetworks. All RPA-capable networks, no matter how large or complex, and no matter the 'kinetics' of the interacting elements, are necessarily decomposable into these two special modular subnetwork structures. See [42] for a detailed description of these mechanisms. More recently, the intricate biochemical reaction structures that are compatible with these overarching RPA-conferring mechanisms have also been determined [44].
The biochemical networks that are capable of adaptation are often built from enzyme-mediated reactions [37, 39, 40, 45, 42], where an enzyme combines with a substrate molecule to form an intermediate complex. This complex can either dissociate into the original molecules, or convert the substrate into a modified (product) form, releasing the enzyme unmodified. Importantly, there is no equivalent to these reactions in ecological systems, so it is unclear how, or if, ecological networks could generate adaptive behaviours without these fundamental reactions.
We provide an extensive study of all three-species interaction networks under a generalised Lotka-Volterra framework containing the three arbitrary species (or functional groups [52]), \(I\), \(M\), and \(O\), and a stimulus (cause of the change), \(S\). We check for network structures in which the species, \(O\), has the perfect resilience property. The stimulus represents an external influence on the system, which could be an intervention such as a harvest, or an environmental factor such as a heatwave, and can affect any combination of species in the system.
The full set of interactions in a three-species network modelled with the generalised Lotka-Volterra equations are given by,
\[\frac{\mathrm{d}I}{\mathrm{d}t} =\overbrace{r_{I}I}^{\text{Intrinsic Growth}}-\overbrace{a_{II}I^{2}}^{\text{Intraspecific Competition}}+\overbrace{a_{IM}IM+a_{IO}IO}^{\text{Interspecific Interactions}}+\overbrace{d_{I}SI}^{\text{Stimulus}}, \tag{1}\]
\[\frac{\mathrm{d}M}{\mathrm{d}t} =r_{M}M-a_{MM}M^{2}+a_{MI}IM+a_{MO}MO+d_{M}SM, \tag{2}\]
\[\frac{\mathrm{d}O}{\mathrm{d}t} =r_{O}O-a_{OO}O^{2}+a_{OI}IO+a_{OM}MO+d_{O}SO, \tag{3}\]
where: \(I,M\), and \(O\) are the abundances of the interacting species (see Figure 2(a)); \(r_{i}\) is the intrinsic growth rate of species \(i\); \(a_{ij}\) is the (_per-capita_) interaction constant for how species \(i\) is affected by species \(j\); and \(d_{i}\) is the interaction constant for how the stimulus, S, affects species \(i\) (\(d_{i}=0\) when species \(i\) is not affected by the stimulus).
The generalised Lotka-Volterra equations consist of three key terms: intrinsic growth, intra-specific competition (self-regulation), and inter-specific interactions. The intrinsic growth term models positive influences on a population's growth. This is often included for lower trophic-level species such as vegetation, where influences like rainfall and soil nutrients are implicitly modelled. Intra-specific competition represents a limiting or dampening term, whereby competition for the same resources within a population will ultimately limit its growth. Lastly, inter-specific interactions are all of the interactions between species \(i\) and \(j\). These can be positive or negative, where the signs of a pair of terms, \((a_{ij},a_{ji})\), are indicative of the type of interaction between species. Some common examples include: competition, \(a_{ij},a_{ji}<0\); mutualism, \(a_{ij},a_{ji}>0\); and predator-prey, \(a_{ij}>0\), \(a_{ji}<0\) where population \(i\) is the predator. We illustrate an application of the generalised Lotka-Volterra equations and the graphical representation of a specific network in Figure 2.
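To make the setup concrete, the following minimal Python sketch (ours; it assumes NumPy and SciPy are available) integrates Equations (1)-(3) under a step-doubling stimulus, reusing the parameter values quoted in the caption of Figure 1. The mapping of the numeric indices to species (1 = \(I\), 2 = \(M\), 3 = \(O\)) and the convention that sampled coefficients are stored with their signs (so negative self-regulation constants enter with a plus sign) are assumptions made for this illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def glv_rhs(t, y, r, A, d, S):
    """Right-hand side of the generalised Lotka-Volterra system with stimulus S.
    y = [I, M, O]; r = intrinsic growth rates; A[i, j] = signed effect of
    species j on species i (diagonal = self-regulation); d = stimulus constants."""
    return y * (r + A @ y + d * S)

# Parameter values quoted in the Figure 1 caption (species ordering assumed).
r = np.array([0.61, 0.00, 0.00])            # r_I only
d = np.array([0.00, 0.00, -0.07])           # stimulus acts on O (d_O)
A = np.array([[0.00,  0.00, -0.37],         # a_13 = a_IO
              [0.27, -0.81,  0.00],         # a_21 = a_MI, a_22 = a_MM
              [0.32, -0.69,  0.00]])        # a_31 = a_OI, a_32 = a_OM

y = np.array([1.0, 1.0, 1.0])               # arbitrary initial abundances
for S in (1.0, 2.0, 4.0):                   # double the stimulus, as in Figure 1
    sol = solve_ivp(glv_rhs, (0.0, 100.0), y, args=(r, A, d, S), max_step=0.1)
    y = sol.y[:, -1]                        # the next disturbance starts here
    print(f"S = {S}: final (I, M, O) = {y.round(3)}")
```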
We systematically examine the capacity for perfect resilience in networks constructed from every possible combination of terms in Equations (1)-(3) based on a full factorial design. In the case where interactions are present or absent, i.e. not considering the sign of interactions, there are a total of \(5\times 2^{12}=20,480\) possible network structures, since there are twelve parameters and five combinations in which the stimulus affects the network. When accounting for the sign of interactions, i.e. considering interactions as either absent, positive, or negative, there are a total of \(20\times 2^{6}\times 3^{6}=933,120\) possible network structures. We therefore require efficient methods to test for the capacity of these networks to exhibit perfect resilience.
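The counts above can be reproduced in a few lines (a trivial check, included only to make the full factorial design explicit; the factor breakdown follows the statement in the text).

```python
from itertools import product

# Present/absent choice for each of the twelve terms in Equations (1)-(3),
# times five configurations for how the stimulus enters the network.
unsigned = 5 * len(list(product((0, 1), repeat=12)))
print(unsigned)              # 5 * 2**12 = 20480

# With interaction signs included, the text quotes 20 * 2**6 * 3**6 structures.
print(20 * 2**6 * 3**6)      # 933120
```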
Ma et al. [37] determined the capacity for imperfect adaptation in over 16,000 networks using metrics based on the simulated behaviour of 20,000 parameter sets for each network. This method is computationally expensive and depends heavily on an extensive sampling of parameter space for every individual network motif. In the current study, we automate algebraic methods recently developed by Araujo and Liotta [44] which can definitively and accurately determine whether a given network motif has the capacity for RPA while handling all parameters symbolically.
Figure 2: (a) The graphical representation of a three-species network with the species, \(I,M\), and \(O\), and a stimulus \(S\), and (b) the associated generalised Lotka-Volterra equations. This network depicts a system in which the species \(O\) predates on both \(I\) and \(M\). Species \(I\) is affected by the stimulus, and \(O\) is the output of the network. In the network diagram (a) we represent a positive interaction by a pointed arrow and a negative interaction by a flat-ended arrow. The interactions and constants have been coloured based on the type of interaction: intraspecific competition, intrinsic growth rates, interspecific interactions (coloured light to dark blue respectively), and stimulus interactions (coloured red). The green arrow indicates that species \(O\) is the output of the network, however this notation is dropped in later networks to simplify diagrams.

The cornerstone of this method is the recognition that any RPA-capable system, with dynamical rate equations \(f_{1},\ldots,f_{n}\) (assumed to be polynomial functions of the interacting elements, \(x_{1},\ldots,x_{n}\)), is characterised by an _ideal_ whose two-variable geometric projection assumes the special form of an RPA polynomial (see [44] for full details). In this case, RPA, and thus perfect resilience, requires the existence of three polynomials, \((p_{1},p_{2},p_{3})\subset\mathbb{R}[I,M,O]\), such that
\[p_{1}\frac{dI}{dt}+p_{2}\frac{dM}{dt}+p_{3}\frac{dO}{dt}=f(S,O)(O-k), \tag{4}\]
where \(k\) is a rational function of parameters, and \(f(S,O)\) is a polynomial (known as the 'pairing function' [44]) that is non-vanishing on a suitably extensive region of the positive orthant. The right-hand side of Equation (4) has the form of an RPA polynomial. If \((p_{1},\ p_{2},\ p_{3})\) can be found that satisfy Equation (4) for any given network, then the network in question has the capacity for perfect resilience, with the output steady-state value of \(k\) for all disturbances and all parameter choices. For the small network structures considered here, the existence of suitable \(p_{1},\ p_{2},\ p_{3}\) can be determined algorithmically by computation of a Grobner basis [44] with an elimination monomial ordering, and with variables \(S\) and \(O\) ordered last. We automate this process in Matlab using the _lexicographic_ monomial ordering, and then use Matlab's symbolic toolkit to automatically determine if a non-zero projection onto \(S\) and \(O\) exists, and if the projection can be factorised into an RPA polynomial (Equation (4)) based on the monomials. We reject any networks for which there is no non-trivial two-variable projection, or for which its two-variable projection is not an RPA polynomial. All code developed for this study is provided at [https://github.com/JeynesSmith/PerfectResilience.git](https://github.com/JeynesSmith/PerfectResilience.git).
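As an illustration of the mechanics, the sketch below (ours) runs the same elimination in SymPy rather than Matlab, using exact rational stand-ins for the parameters. The rate equations are a guessed reconstruction of the Figure 3(i) network from the parameters listed in the caption of Figure 1, so the structure (and hence the exact output) should be treated as an assumption; the point is the lexicographic Gröbner basis with \(S\) and \(O\) ordered last and the inspection of basis elements involving only \(S\) and \(O\).

```python
from sympy import symbols, groebner, Rational, factor

I, M, O, S = symbols('I M O S')

# Exact stand-ins for the Figure 1 parameters (species order I, M, O assumed).
rI, dO = Rational(61, 100), Rational(-7, 100)
aIO, aMI, aMM = Rational(-37, 100), Rational(27, 100), Rational(-81, 100)
aOI, aOM = Rational(32, 100), Rational(-69, 100)

# Guessed rate equations for a Figure 3(i)-style network (an assumption).
fI = I * (rI + aIO * O)
fM = M * (aMI * I + aMM * M)
fO = O * (dO * S + aOI * I + aOM * M)

# Lexicographic order with S and O last: any basis element free of I and M
# lies in the projection of the steady-state ideal onto the (S, O) plane.
G = groebner([fI, fM, fO], I, M, S, O, order='lex')
projection = [g for g in G if g.free_symbols <= {S, O}]
for g in projection:
    print(factor(g))   # inspect whether it has the RPA form f(S, O)*(O - k)
```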
We provide two examples in which a network does, and does not, have the capacity for perfect resilience based on the above projection test. The network in Figure 3(i) (without \(r_{O}\)) is capable of achieving perfect resilience. By calculating the Grobner basis for this network, we obtain the following projection,
\[p_{1}\frac{dI}{dt}+p_{2}\frac{dM}{dt}+p_{3}\frac{dO}{dt}=c_{1}\mathbf{O^{2}S^{ 2}}+c_{2}\mathbf{O^{2}S}+c_{3}\mathbf{O^{2}}+c_{4}\mathbf{OS^{2}}+c_{5} \mathbf{OS}+c_{6}\mathbf{O},\]
where \(c_{i}\) is some function of the network parameters. The projection can be factorised into a form which matches the right-hand-side of Equation (4),
\[c_{1}\mathbf{O^{2}S^{2}}+c_{2}\mathbf{O^{2}S}+c_{3}\mathbf{O^{2}}+c_{4} \mathbf{OS^{2}}+c_{5}\mathbf{OS}+c_{6}\mathbf{O}=O(S^{2}+c_{7}S+c_{8})(O-k), \tag{5}\]
where \(f(O,S)=O(S^{2}+c_{7}S+c_{8})\) to match Equation (4). Since the factorisation exists, this network has the capacity for perfect resilience.
As a second example, the network from Figure 2 has the following Grobner basis projection,
\[p_{1}\frac{dI}{dt}+p_{2}\frac{dM}{dt}+p_{3}\frac{dO}{dt}=\] \[c_{1}\mathbf{O^{5}}+c_{2}\mathbf{O^{4}S}+c_{3}\mathbf{O^{4}}+c_{ 4}\mathbf{O^{3}S^{2}}+c_{5}\mathbf{O^{3}S}+c_{6}\mathbf{O^{3}}+c_{7}\mathbf{O^ {2}S^{2}}+c_{8}\mathbf{O^{2}S}+c_{9}\mathbf{O^{2}}+c_{10}\mathbf{OS^{2}}+c_{1 1}\mathbf{OS}+c_{12}\mathbf{O}.\]
There is no factorisation in which the right-hand-side of this equation will match the form of Equation (4), even when based on the monomials (\(S\) and \(O\)) alone, and therefore this network does not have the capacity for perfect resilience.
While the above condition is necessary for perfect resilience, we must still determine whether the setpoint, \(O=k\), is a feasible and stable steady state. Feasibility ensures that all species have a positive abundance at steady state, while stability ensures that our ecosystem can move towards that steady state. Since the projection test has significantly reduced the number of networks (20480 to 1072 networks, see Supplementary Figure 3), we test for stability and feasibility using a sampling approach similar to Ma et al. [37] in which we randomly select \(4\times 10^{3}\) parameter sets. Parameters are selected from uniform distributions, \(x\in(-1,1)\) for interspecific or stimulus interaction constants, \(x\in(0,1)\) for intrinsic growth rates, and \(x\in(-1,0)\) for intraspecific competition constants. We calculate the steady states of the network, then substitute in random parameter values and check for feasibility, i.e. there exists a steady state in which every species has a positive abundance. If the steady state is feasible, then we check stability using Lyapunov stability, i.e. negative real components for all eigenvalues of the Jacobian matrix [53]. If both stability and feasibility conditions are met, then the parameter set is saved as successful and we continue for the remaining \(4\times 10^{3}\) parameter sets or until we obtain ten successful parameter sets. We define that a network is capable of perfect resilience if it has any successful parameter sets. Note that we aim to ensure that networks with perfect resilience do not require strict constraints on the parameters, and therefore do not require an intensive parameter search for stability and feasibility.
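A compact version of this screen (our sketch; for brevity it samples a fully connected three-species network rather than only the terms present in a given motif, and it uses the signed-coefficient convention assumed above) is given below.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_feasible_stable(S=1.0, n_sets=4000, n_keep=10):
    """Sample random generalised Lotka-Volterra parameter sets and keep those whose
    interior steady state is feasible (all abundances positive) and Lyapunov stable."""
    kept = []
    for _ in range(n_sets):
        r = rng.uniform(0.0, 1.0, 3)                    # intrinsic growth rates
        d = rng.uniform(-1.0, 1.0, 3)                   # stimulus constants
        A = rng.uniform(-1.0, 1.0, (3, 3))              # interspecific constants
        np.fill_diagonal(A, rng.uniform(-1.0, 0.0, 3))  # intraspecific competition
        try:
            x_star = np.linalg.solve(A, -(r + d * S))   # interior steady state
        except np.linalg.LinAlgError:
            continue
        if np.all(x_star > 0):                          # feasibility
            J = np.diag(x_star) @ A                     # Jacobian at the steady state
            if np.all(np.linalg.eigvals(J).real < 0):   # Lyapunov stability
                kept.append((r, d, A))
                if len(kept) >= n_keep:
                    break
    return kept

print(len(sample_feasible_stable()))
```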
We lastly ensure that successful systems have perfect resilience, and not a trivial form in which the output species has no reaction to changes in stimulus [37]. We simulate each system and check that the output reacts to a change in stimulus by at least 1% of its pre-stimulus abundance and then returns to within 1% of its pre-stimulus abundance at steady state. For networks which pass all of the above tests, we generate 2000 parameter sets that enable feasible and stable steady states, and use these for further analysis. See Supplementary Figure 3 for a graphical representation of this process and the number of networks which proceed after each test.
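The final check can be expressed as a small helper (ours; `o_pre` and `o_post` are assumed to be simulated trajectories of the output species \(O\) before and after the change in stimulus, for example produced with the integration sketch given earlier).

```python
import numpy as np

def passes_resilience_check(o_pre, o_post, tol=0.01):
    """Non-trivial perfect resilience: O must react to the stimulus change by at
    least tol (1%) of its pre-stimulus steady state, and then return to within
    tol of that steady state by the end of the post-stimulus simulation."""
    baseline = o_pre[-1]                                   # pre-stimulus steady state
    reacted = np.max(np.abs(o_post - baseline)) >= tol * abs(baseline)
    returned = abs(o_post[-1] - baseline) <= tol * abs(baseline)
    return reacted and returned

print(passes_resilience_check(np.ones(10), np.array([1.0, 1.2, 1.05, 1.0])))  # True
```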
## Results
In the following sections we identify all network configurations which have the perfect resilience property under the generalised Lotka-Volterra framework. We examine the structural constraints on ecosystem networks capable of perfect resilience, and how these constraints and the associated transient dynamics affect the possibility of observing perfect resilience in _in situ_ ecosystems.
Our extensive analysis of networks required testing a total of 20,480 possible network structures. In Supplementary Figure 1, we provide an extensive list of all 23 networks which are capable of perfect resilience before considering the sign of interactions between species. This is approximately 0.1% of possible networks which are capable of perfect resilience. In Figure 3 we present the 23 networks as ten unique, condensed network motifs which illustrate the general trends in these structures.
Figure 3: Unique network motifs which enable perfect resilience in a generalised Lotka-Volterra three-species system. Below each motif is the number of variations on the motif which are capable of perfect resilience when the sign structure is accounted for - a total of 82 networks. The full list of 23 networks (without assigned sign structure) which are capable of perfect resilience can be found in Supplementary Figure 1. The 23 networks are condensed into ten motifs by including optionally present growth rates, where a growth rate may be present or absent and still have perfect resilience in the associated network. For example, (f) could be split into two networks with or without the \(r_{O}\) growth rate. The motif in (c) is an exception where perfect resilience is present in networks in which: only \(r_{O}\) is present; \(r_{O}\) and \(r_{I}\) are present; and when \(r_{O}\), \(r_{I}\) and \(r_{M}\) are present. The motifs in (f)-(i) have non-unique variations in which species \(I\) and \(M\) are switched. The interaction signs are determined by 2000 parameter sets which permitted stable, feasible steady states, as discussed in the Methods section.

After identifying the capacity for perfect resilience, we determine random parameter regimes which enable feasible and stable steady states for these networks (see Methods). Some parameters, like the growth rates and intraspecific competition constants, can only take on one sign (strictly positive or strictly negative, respectively). To ensure feasibility and stability, some interaction and stimulus constants must take a specific sign; however, most motifs had interactions that could take on positive or negative values (arrows with rounded heads in Figure 3(a,c,e-h,j)). In these motifs, we studied the correlation between parameter sets (associated with feasibility and stability) and found that there is either a high correlation between the signs of interactions, or no correlation. This can then be used to further classify the motifs based on the sign of interactions - the sign structure. For example, the motif in Figure 3(a) has three constants which can take on either sign: \(a_{IM}\), \(d_{I}\), and \(a_{OI}\). These three constants have strong correlations, which we use to classify the motif into two possible sign structures (Figure 4(a)). This can also be observed for the motif in Figure 3(g), but one of the three interactions is weakly correlated with the others and therefore we can define four sign structures (Figure 4(b)). The number of sign structures is identified at the bottom of each motif in Figure 3, and the specific sign structures can be found in Supplementary Figure 2. In total, when we specify the sign of every interaction, we identified 82 networks which had the perfect resilience property. These 82 networks represent less than 0.01% of the possible network structures (when interaction sign is included).
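The correlation analysis itself is straightforward; a sketch (ours, with a made-up toy array standing in for the successful parameter sets of a motif's free-sign constants) is shown below.

```python
import numpy as np

def sign_correlation(samples):
    """samples: (n_sets, n_params) array of sampled values for the parameters whose
    sign is not fixed a priori. Returns the correlation matrix of the parameter
    signs across successful (feasible and stable) parameter sets."""
    return np.corrcoef(np.sign(samples), rowvar=False)

# Toy stand-in for three free-sign constants (e.g. a_IM, d_I and a_OI):
toy = np.array([[ 0.3, -0.2,  0.5],
                [-0.4,  0.3, -0.6],
                [ 0.2, -0.1,  0.4],
                [-0.7,  0.6, -0.3]])
print(sign_correlation(toy))   # entries near +/-1 indicate linked signs
```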
Figure 4: Two examples of how the correlation between parameters can be used to specify the sign structure of networks capable of perfect resilience. The motifs in (a) and (b) are from Figure 3(a) and (g) respectively. The correlation of parameter sets is represented in the grid where a value of \(-1\) (black) represents a strong, negative correlation, and a value of 1 (white) represents a strong, positive correlation. Strong correlations between parameters indicate a dependent relationship between the sign of those parameters, whereas a weak correlation (around 0) indicates independence. These can then be used to create the associated tables of possible sign structure combinations.
While further classifying motifs based on sign structure can be used to provide more detailed motifs, we found that this did not provide distinguishable insights into the structure or dynamics of the ten unique network motifs in Figure 3 without the sign structure fully specified. In the following section, we draw conclusions on the structural requirements of these motifs and relate these to conditions we would expect from real ecosystems.
### Topological Constraints and Ecosystem Implications
In this extensive study, we examined the five possible configurations for how the stimulus can interact with populations in the network (excluding the sign and strength of those interactions); however, only three of these configurations admit motifs capable of perfect resilience, namely when the stimulus affects \(I\), \(O\), or both \(I\) and \(O\) (Figure 3, red arrows). Crucially there was no capacity for perfect resilience if the stimulus affects \(I\) and \(M\), or when it affects all three species. This is particularly important in the latter case, as a stimulus representing environmental events, such as heatwaves or cyclones, is likely to directly affect all species in the network. This indicates that perfect resilience is never possible under these crucial types of stimuli, which are increasing in frequency with climate change [54].
When we translate the perfect resilience motifs into the mechanisms which obtain robust perfect adaptation in biochemical reactions, we identified that our motifs only used opposer mechanisms to generate perfect resilience - no balancer mechanisms. Araujo and Liotta[42] determined that networks cannot have balancer mechanisms if the stimulus directly affects the output, because it requires multiple paths connecting the stimulus to the output to effectively 'balance out' a change in stimulus. We do not observe multiple paths in any of the possible networks where the stimulus does not directly affect the output, \(O\) (Figure 3(a)-(e)).
The structure of all ten motifs are dependent on a sequence of one-way interactions between species (Figure 3, dark blue arrows). For example, the network in Figure 3(a) has a one-way interaction connecting \(M\) to \(I\) followed by \(I\) to \(O\). In ecological systems having a one-way interaction is possible, an example being that an orchid on a tree benefits from the tree, but the tree has no significant benefit or harm from the orchid[55]. However, having **multiple** one-way interactions, in sequence, is highly unlikely in real ecological systems.
Lastly, in all of the motifs we observe either intrinsic growth or self-regulation terms in only a subset of the species i.e. not for every species. It is not uncommon to exclude intrinsic growth terms for some species since, by definition, these terms are included to represent implicit increases for that population. However, self-regulation terms are more often included for all species as this ensures a carrying capacity for that species[56, 57, 58]. In our perfect resilience motifs, only one species in the network
ever has a self-regulation term (Figure 3, light blue looping arrow), resulting in the potential for unbounded growth for two species. Self-regulation terms also play an important role in the stability of the steady state, and dampening oscillations of a species following a perturbation [56, 57, 58]. In the following section we explore the dynamics of these networks and any implications that these have on perfect resilience.
### Large Oscillations and Transient Dynamic Limitations
In all ten motifs we observed a particular variety of integral control - constrained integral control [59, 44] - in which perfect resilience is present in the output species, \(O\), as long as it does not go extinct. This occurs when there is an isolated factor of \(O\) in the projection (Equation (4)), which can be observed in the example in Equation (5). While this is realistic for an ecosystem, it is a stark contrast to chemical reaction networks where a molecule can be created from reactions independent of its abundance. Ecosystem networks are therefore limited by the strength of perturbations which the output is able to recover from while avoiding extinction.
Generalised Lotka-Volterra models are prone to generating transient dynamics which are highly oscillatory [57, 58] and the networks which we identified as having perfect resilience are not exempt from this behaviour (Figure 5(a)). When the stimulus changes, these networks rapidly oscillate in response and can take significant time to return to their pre-stimulus abundance. If another perturbation occurs within this oscillatory period it is possible that the abundance of the species can be perturbed to extinction and perfect resilience will be lost (Figure 5(b)). Moreover, in reality, oscillations which repeatedly bring the output close to extinction carry a higher risk of a stochastic event killing off the population.
## Discussion
In this study we present a novel attempt to find perfect resilience in ecological systems. Moreover, we have developed fully automated methods for identifying perfect resilience which are widely applicable to networks with any number of species, or functional groups, and under alternate modelling frameworks. We have identified that there is potential for ecological systems to obtain perfect resilience - albeit in only 23 network structures out of a possible 20,480 configurations in a three-species system. The motifs that we have identified can provide an important insight into ecological management decisions, particularly since this only requires an understanding of the structure of a network - an easier task compared to determining the interaction strengths in a network. Knowledge of the motif structure can then assist in highlighting interventions to ensure perfect resilience either does or does not occur for a target species or ecosystem function [1, 2, 3].
The 23 networks that we identified as being capable of perfect resilience were all based on opposer mechanisms [42], instead of the mixture of opposers and balancers which can promote robust perfect adaptation in biochemical reaction networks. This
Figure 5: Two examples of the oscillatory behaviours present in ecosystem networks capable of perfect resilience. In (a) we demonstrate a representative example of the highly oscillatory behaviour from the motif in Figure 3(g) which is present in all motifs. Note that the oscillations have an increasing frequency and amplitude with repeated perturbations. In (b) we demonstrate how repeated perturbations before the network has returned to steady state results in extinction (motif from Figure 3(h)). In both cases we simulate the system for one hundred time steps before doubling the stimulus, specifically \(S=\{1,2,4,8\}\). Parameters: (a) \(r_{I}=0.6\), \(r_{M}=0.46\), \(r_{O}=0.5\), \(d_{O}=-0.74\), \(a_{13}=-0.02\), \(a_{21}=0.46\), \(a_{22}=-1\), \(a_{31}=0.85\), \(a_{32}=-0.18\), (b) \(r_{M}=0.04\), \(r_{O}=0.09\), \(d_{O}=-0.01\), \(a_{12}=-0.18\), \(a_{13}=0.83\), \(a_{22}=-0.26\), \(a_{23}=0.64\), \(a_{31}=-0.42\).
matches observations in biochemical literature, since the oscillatory behaviours of our networks are only possible under the feedback structures found in opposer mechanisms [43]. In biochemical networks there is a significantly better understanding of balancer mechanisms [60, 61, 62, 63] than of opposer mechanisms, which rely on feedback interactions
between species [67]. This poses a big problem with identifying these motifs in real ecosystems. While these problems are likely not limited to generalised Lotka-Volterra equations, it is important to note that the methods used here for identifying perfect resilience can readily be applied to other modelling frameworks, with only some constraints required for calculating the Grobner basis [68, 44].
The network motifs identified in this study have important implications for conservation management and socioecological systems. While these motifs can be used to identify resilient ecological networks, a pivotal problem that managers will face is attempting to design, or alter, ecosystems. In biochemical reactions identifying robust perfect adaptation can be used to suggest targeted treatments which can later undergo _in vitro_ or clinical testing. However, for managers, ecosystem engineering faces a slew of problems which cannot be tested before implementation. Removing species through culling can be controversial for native species [69, 70, 71, 72, 73], and the introduction of non-native species to an ecosystem has repeatedly had adverse effects [74, 75, 76]. In this study, we only performed an extensive search of three-species ecosystems. An existing study from biochemical literature [42] identifies how smaller mechanisms can be embedded within larger networks while ensuring that robust perfect adaptation, or perfect resilience, still obtains. There is still a benefit in searching for perfect resilience in larger networks to identify possible mechanisms that may only be possible with a larger number of species (potentially balancer mechanisms), and the methods developed here constitute an important foundation for further work in this area. Future work may also be to identify the frequency with which these motifs occur _in situ_[19].
|
2310.15744
|
Analyzing Single Cell RNA Sequencing with Topological Nonnegative Matrix
Factorization
|
Single-cell RNA sequencing (scRNA-seq) is a relatively new technology that
has stimulated enormous interest in statistics, data science, and computational
biology due to the high dimensionality, complexity, and large scale associated
with scRNA-seq data. Nonnegative matrix factorization (NMF) offers a unique
approach due to its meta-gene interpretation of resulting low-dimensional
components. However, NMF approaches suffer from the lack of multiscale
analysis. This work introduces two persistent Laplacian regularized NMF
methods, namely, topological NMF (TNMF) and robust topological NMF (rTNMF). By
employing a total of 12 datasets, we demonstrate that the proposed TNMF and
rTNMF significantly outperform all other NMF-based methods. We have also
utilized TNMF and rTNMF for the visualization of popular Uniform Manifold
Approximation and Projection (UMAP) and t-distributed stochastic neighbor
embedding (t-SNE).
|
Yuta Hozumi, Guo-Wei Wei
|
2023-10-24T11:36:41Z
|
http://arxiv.org/abs/2310.15744v1
|
# Analyzing Single Cell RNA Sequencing with Topological Nonnegative Matrix Factorization
###### Abstract
Single-cell RNA sequencing (scRNA-seq) is a relatively new technology that has stimulated enormous interest in statistics, data science, and computational biology due to the high dimensionality, complexity, and large scale associated with scRNA-seq data. Nonnegative matrix factorization (NMF) offers a unique approach due to its meta-gene interpretation of resulting low-dimensional components. However, NMF approaches suffer from the lack of multiscale analysis. This work introduces two persistent Laplacian regularized NMF methods, namely, topological NMF (TNMF) and robust topological NMF (rTNMF). By employing a total of 12 datasets, we demonstrate that the proposed TNMF and rTNMF significantly outperform all other NMF-based methods. We have also utilized TNMF and rTNMF for the visualization of popular Uniform Manifold Approximation and Projection (UMAP) and t-distributed stochastic neighbor embedding (t-SNE).
keywords: Algebraic topology, Persistent Laplacian, scRNA-seq, dimensionality reduction, machine learning.
Introduction
Single-cell RNA sequencing (scRNA-seq) is a relatively new technology that has unveiled the heterogeneity within cell populations, providing valuable insights into complex biological interactions and pathways, such as cell-cell interactions, differential gene expression, signal transduction pathways, and more [1].
Unlike traditional microarray analysis, often referred to as bulk sequencing, scRNA-seq offers the transcriptomic profile of individual cells. With current technology, it's possible to sequence more than 20,000 genes and 10,000 samples simultaneously. Standard experimental procedures involve cell isolation, RNA extraction, sequencing, library preparation, and data analysis.
Over the years, numerous data analysis pipelines have been proposed, typically encompassing data preprocessing, batch correction, normalization, dimensionality reduction, feature selection, cell type identification, and downstream analyses to uncover relevant biological functions and pathways [2, 3, 4, 5, 6].
However, scRNA-seq data, in addition to their high dimensionality, are characterized by nonuniform noise, sparsity due to drop-out events and low reading depth, as well as unlabeled data [7]. Consequently, dimensionality reduction and feature selection are essential for successful downstream analysis.
Principal components analysis (PCA), uniform manifold approximation and projection (UMAP), and t-distributed stochastic neighbor embedding (t-SNE) are among the most commonly used dimensionality reduction tools for scRNA-seq data. PCA is often employed as an initial step in analysis pipelines, such as trajectory analysis and data integration [8, 9, 10, 11]. In PCA, the first few components are referred to as the principal components, where the variance of the projected data is maximized. In PCA, each \(i\)th component is orthogonal to all the \(i-1\) components, maximizing the residual data projected onto the \(i\)th component [12, 13]. Numerous successful extensions to the original formulation have been proposed [14, 15, 16, 17]. However, due to the orthogonality constraint of PCA, the reduced data may contain negative values, making it challenging to interpret.
UMAP and t-SNE are nonlinear dimensionality reduction methods often used for visualization. UMAP constructs a \(k\)-dimensional weighted graph based on \(k\)-nearest neighbors and computes the edge-wise cross-entropy between the embedded low-dimensional weighted graph representation, utilizing the fuzzy set cross-entropy loss function [18]. t-SNE computes the pairwise similarity between cells by constructing a conditional probability distribution over pairs of cells. Then, a student t-distribution is used to obtain the probability distribution in the embedded space, and the Kullback-Leibler (KL) divergence between the two probability distributions is minimized to obtain the reduced data [19, 20, 21, 22]. However, due to the stochastic nature of these methods and their instability at dimensions greater than 3 [23], they may not be suitable for downstream analysis.
Nonnegative matrix factorization (NMF) is another dimensionality reduction method in which the objective is to decompose the original count matrix into two nonnegative factor matrices [24, 25]. The resulting basis matrices are often referred to as meta-genes and represent nonnegative linear combinations of the original genes. Consequently, NMF results are highly interpretable. However, the original formulation employs a least-squares optimization scheme, making the method susceptible to outlier errors [26].
To address this issue, Kong et al. [27] introduced robust NMF (rNMF), or \(l_{2,1}\)-NMF, which utilizes the \(l_{2,1}\)-norm and can better handle outliers while maintaining comparable computational efficiency to standard NMF. Manifold regularization has also been employed to incorporate geometric structures into dimensionality reduction, utilizing a graph Laplacian, leading to Graph Regularized NMF (GNMF) [28]. Semi-supervised methods, such as those incorporating marker genes [29], similarity and dissimilarity constraints [30], have been proposed to enhance NMF's robustness. Additionally, various other NMF derivatives have been introduced [31, 32, 33].
Despite these advancements in NMF, manifold regularization remains an essential component to ensure
that the lower-dimensional representation of the data can form meaningful clusters. However, using graph Laplacians can only capture a single scale of the data, specifically the scaling factor in the heat kernel. Therefore, single-scale graph Laplacians lack multiscale information.
Eckmann et al. [34] introduced simplicial complexes to the graph Laplacian defined on point cloud data, leading to the combinatorial Laplacian. This can be viewed as a discrete counterpart of the de Rham-Hodge Laplacian on manifolds. Both the Hodge Laplacian and the combinatorial Laplacian are topological Laplacians that give rise to topological invariants in their kernel space, specifically the harmonic spectra. However, the nonharmonic spectra contain algebraic connectivity that cannot be revealed by the topological invariants [35].
A significant development in topological Laplacians occurred in 2019 with the introduction of persistent topological Laplacians. Specifically, evolutionary de Rham theory was introduced to obtain persistent Hodge Laplacians on manifolds [36]. Meanwhile, persistent combinatorial Laplacian [37], also known as the persistent spectral graph or persistent Laplacian (PL), was introduced for point cloud data. These methods have spurred numerous theoretical developments [38, 39, 40, 41, 42] and code construction [43], as well as remarkable applications in various fields, including protein engineering [44], forecasting emerging SARS-CoV-2 variants BA.4/BA.5 [45], and predicting protein-ligand binding affinity [46]. Recently, PL has been shown to improve PCA performance [14, 47].
This growing interest arises from the fact that persistent topological Laplacians represent a new generation of topological data analysis (TDA) methods that address certain limitations of the popular persistent homology [48, 49]. In persistent homology, the goal is to represent data as a topological space, often as simplicial complexes. Then, ideas from algebraic topology, such as connected components, holes, and voids, are used to extract topological invariants during a multiscale filtration. Persistent homology has facilitated topological deep learning (TDL), an emerging field [50]. However, persistent homology is unable to capture the homotopic shape evolution of data. PLs overcome this limitation by tracking changes in non-harmonic spectra, revealing the homotopic shape evolution. Additionally, the persistence of PL's harmonic spectra recovers all topological invariants from persistent homology.
In this work, we introduce PL-regularized NMF, namely the topological NMF (TNMF) and robust topological NMF (rTNMF). Both TNMF and rTNMF can better capture multiscale geometric information than the standard GNMF and rGNMF. To achieve improved performance, PL is constructed by observing cell-cell interactions at multiple scales through filtration, creating a sequence of simplicial complexes. We can then view the spectra at each complex associated with a filtration to capture both topological and geometric information. Additionally, we introduce \(k\)-NN based PL to TNMF and rTNMF, referred to as \(k\)-TNMF and \(k\)-rTNMF, respectively. The \(k\)-NN based PL reduces the number of hyperparameters compared to the standard PL algorithm.
The outline of this work is as follows. First, we provide a brief overview of NMF, rNMF, GNMF, and rGNMF. Next, we present a concise theoretical formulation of PL and derive the multiplicative updating scheme for TNMF and rTNMF. Additionally, we introduce an alternative construction of PL, termed \(k\)-NN PL. Following that, we present a benchmark using 12 publicly available datasets. We have observed that PL can improve NMF performance by up to 0.16 in ARI, 0.08 in NMI, 0.04 in purity, and 0.1 in accuracy.
## 2 Methods
In this section, we provide a brief overview of NMF methods, namely NMF, rNMF, GNMF, and rGNMF. We then give persistent Laplacian and its construction. Finally, we formulate various PL regularized NMF methods.
### Prior Work
2.1.0.1 NMF. The original formulation of NMF utilizes the Frobenius norm, which assumes that the noise of the data is sampled from a Gaussian distribution.
\[\min_{W,H}\|X-WH\|_{F}^{2},\quad\text{s.t. }W,H\geq 0 \tag{1}\]
where \(\|A\|_{F}^{2}=\sum_{i,j}a_{ij}^{2}\). Lee et al. proposed a multiplicative updating scheme, which preserves the nonnegativity [24]. For the \(t+1\)th iteration,
\[w_{ij}^{t+1}=w_{ij}^{t}\frac{(XH^{T})_{ij}}{(WHH^{T})_{ij}} \tag{2}\]
\[h_{ij}^{t+1}=h_{ij}^{t}\frac{(W^{T}X)_{ij}}{(W^{T}WH)_{ij}} \tag{3}\]
Although the updating scheme is simple and effective in many biological data applications, scRNA-seq data is sparse and contains a large amount of noise. Therefore, a model that is more robust to noise is necessary for feature selection and dimensionality reduction.
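For concreteness, a minimal NumPy sketch of these multiplicative updates (Eqs. (2)-(3)) is given below; the function name, random initialization, and iteration count are illustrative assumptions rather than the implementation used in this work.

```python
import numpy as np

def nmf_multiplicative(X, k, n_iter=200, eps=1e-10, seed=0):
    """Illustrative Lee-Seung multiplicative updates for min ||X - WH||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + eps          # nonnegative initialization (the benchmarks below use NNDSVDA)
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # w_ij <- w_ij (XH^T)_ij / (WHH^T)_ij
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # h_ij <- h_ij (W^TX)_ij / (W^TWH)_ij
    return W, H
```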
2.1.0.2 rNMF. The robust NMF (rNMF) utilizes the \(l_{2,1}\) norm, which assumes that the noise of the data is sampled from a Laplace distribution and may be more suitable for a count-based data matrix, such as scRNA-seq. The minimization problem is given as follows:
\[\min_{W,H}\|X-WH\|_{2,1},\quad\text{s.t. }W,H\geq 0,\]
where \(\|A\|_{2,1}=\sum_{j}\|\mathbf{a}_{j}\|_{2}\). Because the \(l_{2,1}\)-norm sums the \(l_{2}\) distances between the original and reconstructed cell features, outliers do not dominate the loss function as much as in the Frobenius norm formulation. rNMF has the following updating scheme
\[w_{ij}^{t+1}=w_{ij}^{t}\frac{(XQH^{T})_{ij}}{(WHQH^{T})_{ij}} \tag{4}\]
\[h_{ij}^{t+1}=h_{ij}^{t}\frac{(W^{T}XQ)_{ij}}{(W^{T}WHQ)_{ij}}, \tag{5}\]
where \(Q_{jj}=1/\|\mathbf{x}_{j}-W\mathbf{h}_{j}\|_{2}\).
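A corresponding sketch of the rNMF updates (Eqs. (4)-(5)), with the reweighting matrix \(Q\) recomputed at every iteration, might look as follows; again, names and defaults are illustrative assumptions.

```python
import numpy as np

def rnmf_multiplicative(X, k, n_iter=200, eps=1e-10, seed=0):
    """Illustrative l_{2,1}-NMF (rNMF) multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # Q_jj = 1 / ||x_j - W h_j||_2, recomputed from the current factors
        Q = np.diag(1.0 / (np.linalg.norm(X - W @ H, axis=0) + eps))
        W *= (X @ Q @ H.T) / (W @ H @ Q @ H.T + eps)
        H *= (W.T @ X @ Q) / (W.T @ W @ H @ Q + eps)
    return W, H
```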
2.1.0.3 GNMF and rGNMF. Manifold regularization has been widely utilized in scRNA-seq analysis. Let \(G(V,E,W)\) be a graph, where \(V=\{\mathbf{x}_{j}\}_{j=1}^{N}\) is the set of vertices, \(E=\{(\mathbf{x}_{i},\mathbf{x}_{j})|\mathbf{x}_{i}\in\mathcal{N}_{k}(\mathbf{x}_{j})\cup\mathbf{x}_{j}\in\mathcal{N}_{k}(\mathbf{x}_{i})\}\) is the set of edges, and \(W\) is the set of weights associated with the edges. Here, \(\mathcal{N}_{k}(\mathbf{x}_{j})\) denotes the set of \(k\) nearest neighbors of vertex \(j\). The heat kernel is often used to construct the weights, and we can construct the adjacency matrix \(A\) as follows.
\[A_{ij}=\begin{cases}\exp\left(-\frac{\|\mathbf{x}_{i}-\mathbf{x}_{j}\|^{2}}{ \sigma}\right)&\mathbf{x}_{j}\in\mathcal{N}_{k}(\mathbf{x}_{i})\\ 0,&\text{otherwise}.\end{cases} \tag{6}\]
Since the heat kernel satisfies the conditions \(A_{ij}\to 0\) as \(\|\mathbf{x}_{i}-\mathbf{x}_{j}\|\rightarrow\infty\) and \(A_{ij}\to 1\) as \(\|\mathbf{x}_{i}-\mathbf{x}_{j}\|\to 0\), we can construct the graph regularization term, \(R_{G}\), by looking at the distances \(\|\mathbf{h}_{i}-\mathbf{h}_{j}\|^{2}\).
\[R_{G} =\frac{1}{2}\sum_{i,j}A_{ij}\|\mathbf{h}_{i}-\mathbf{h}_{j}\|^{2}\] \[=\sum_{i}D_{ii}\mathbf{h}_{i}^{T}\mathbf{h}_{i}-\sum_{ij}A_{ij} \mathbf{h}_{i}^{T}\mathbf{h}_{j}\] \[=\text{Tr}(HDH^{T})-\text{Tr}(HAH^{T})\] \[=\text{Tr}(HLH^{T}).\]
Here, \(L\) and \(D\) are the Laplacian and the degree matrix, given by \(L=D-A\) and \(D_{ii}=\sum_{j}A_{ij}\), respectively. \(\mathrm{Tr}(\cdot)\) denotes the trace of the matrix. Utilizing the regularization parameter \(\lambda\geq 0\), we get the objective function of GNMF
\[\min_{W,H}\|X-WH\|_{F}^{2}+\lambda\mathrm{Tr}(HLH^{T}). \tag{7}\]
and the objective function for rGNMF
\[\min_{W,H}\|X-WH\|_{2,1}+\lambda\mathrm{Tr}(HLH^{T}). \tag{8}\]
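The graph regularizer above only requires the \(k\)-NN heat-kernel Laplacian of Eq. (6). A small sketch of its construction is given below; the use of scikit-learn's nearest-neighbor search is an assumed helper for illustration, not necessarily the original implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def heat_kernel_laplacian(X, k=8, sigma=1.0):
    """Sketch of the kNN heat-kernel graph Laplacian L = D - A (Eq. 6).
    Columns of X are cells; sigma is the heat-kernel bandwidth."""
    n = X.shape[1]
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X.T)   # +1: each cell is its own nearest neighbor
    dist, idx = nbrs.kneighbors(X.T)
    A = np.zeros((n, n))
    for i in range(n):
        for d, j in zip(dist[i, 1:], idx[i, 1:]):
            A[i, j] = np.exp(-d ** 2 / sigma)
    A = np.maximum(A, A.T)                 # union of the directed kNN edges
    return np.diag(A.sum(axis=1)) - A      # L = D - A
```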
### Topological NMF
While graph regularization improves the traditional NMF and rNMF, the choice of \(\sigma\) and \(k\) can vastly change the result. Furthermore, graph regularization only captures a single scale, and may not be able to capture the multiscale geometric information in the data. Here, we give a brief introduction to persistent homology and the persistent Laplacian and derive the updating scheme for the topological NMF.
#### 2.2.1 Persistent Laplacians
Persistent homology and persistent spectral graphs have been successfully used for biomolecular data [44, 46, 48, 49, 50]. Similar to persistent homology, persistent spectral graphs track the birth and death of topological features, i.e., holes, over different scales. However, unlike persistent homology, persistent spectral graphs can further capture the homotopic shape evolution of data during the filtration. Through the filtration process, these methods offer a multiscale analysis of the data.
We begin with the definition of a simplex. Let \(\sigma_{q}=[v_{0},\cdots,v_{q}]\) denote a \(q\)-simplex, where \(v_{i}\) is a vertex. \(\sigma_{0}\) is a node, \(\sigma_{1}\) is an edge, \(\sigma_{2}\) is a triangle, \(\sigma_{3}\) is a tetrahedron, and so on. A simplicial complex \(K\) is a union of simplices such that
1. If \(\sigma_{q}\in K\) and \(\sigma_{p}\) is a face of \(\sigma_{q}\), then \(\sigma_{p}\in K\)
2. The nonempty intersection of any two simplices in \(K\) is a face of both simplices.
We can think of \(K\) as gluing together lower-dimensional simplices that satisfy the above two properties.
A \(q\)-chain is a formal sum of \(q\)-simplices in \(K\) with coefficients in \(\mathbb{Z}_{2}=\{0,1\}\). The set of all \(q\)-chains contains the basis given by the \(q\)-simplices in \(K\), and it forms a finitely generated free Abelian group \(C_{q}(K)\). We can relate the chain groups via a boundary operator, which is a group homomorphism \(\partial_{q}:C_{q}(K)\to C_{q-1}(K)\). The boundary operator is defined as follows.
\[\partial_{q}\sigma_{q}:=\sum_{i=0}^{q}(-1)^{i}\sigma_{q-1}^{i} \tag{9}\]
where \(\sigma_{q-1}^{i}=[v_{0},\ldots,\hat{v}_{i},\ldots,v_{q}]\) is the \((q-1)\)-simplex obtained by removing vertex \(v_{i}\). The sequence of chain groups connected by the boundary operators defines the chain complex.
\[\cdots\xrightarrow{\partial_{q+2}}C_{q+1}(K)\xrightarrow{\partial_{q+1}}C_{q}(K)\xrightarrow{\partial_{q}}\cdots \tag{10}\]
The chain complex associated with a simplicial complex \(K\) defines the \(q\)-th homology group \(H_{q}=\mathrm{Ker}\partial_{q}/\mathrm{Im}\partial_{q+1}\), and the dimension of \(H_{q}\) counts the \(q\)-dimensional holes and is called the \(q\)th Betti number, denoted \(\beta_{q}\). For example, \(\beta_{0}\) is the number of connected components, \(\beta_{1}\) is the number of loops, and \(\beta_{2}\) is the number of cavities.
We can now define the dual chain complex through the adjoint operator of \(\partial_{q}\). The dual space is defined as \(C^{q}(K)\cong C_{q}^{*}(K)\), and the coboundary operator \(\partial_{q}^{*}\) is defined as \(\partial_{q}^{*}:C^{q-1}(K)\to C^{q}(K)\). For
\(\omega^{q-1}\in C^{q-1}(K)\) and \(c_{q}\in C_{q}(K)\), the coboundary operator is defined as
\[\partial^{*}\omega^{q-1}(c_{q})\equiv\omega^{q-1}(\partial c_{q}). \tag{11}\]
Here \(\omega^{q-1}\) is a \((q-1)\) cochain, or a homomorphic mapping from a chain to the coefficient group. The homology of the dual chain complex is called the cohomology.
We then define the \(q\)-combinatorial Laplacian operator \(\triangle_{q}:C^{q}(K)\to C^{q}(K)\)
\[\triangle_{q}:=\partial_{q+1}\partial_{q+1}^{*}+\partial_{q}^{*}\partial_{q}. \tag{12}\]
Let \(\mathcal{B}_{q}\) be the matrix representation of the \(q\)-boundary operator from \(C_{q}(K)\) to \(C_{q-1}(K)\) with respect to the standard basis, and \(\mathcal{B}_{q}^{T}\) be the matrix representation of the \(q\)-coboundary operator. The matrix representation of the \(q\)-th order Laplacian operator \(\mathcal{L}_{q}\) is defined as
\[\mathcal{L}_{q}=\mathcal{B}_{q+1}\mathcal{B}_{q+1}^{T}+\mathcal{B}_{q}^{T} \mathcal{B}_{q}. \tag{13}\]
The multiplicity of the zero eigenvalue of \(\mathcal{L}_{q}\) is the \(q\)-th Betti number of the simplicial complex. The nonzero eigenvalues (the non-harmonic spectrum) contain other topological and geometrical features.
As stated before, a single simplicial complex does not provide sufficient information to understand the geometry of the data. To this end, we utilize a sequence of simplicial complexes induced by a filtration
\[\{\emptyset\}=K_{0}\subseteq K_{1}\subseteq\cdots\subseteq K_{p}=K, \tag{14}\]
where \(p\) is the number of filtration steps.
For each \(K_{t}\), \(0\leq t\leq p\), denote by \(C_{q}(K_{t})\) the chain group induced by \(K_{t}\), with the corresponding boundary operator \(\partial_{q}^{t}:C_{q}(K_{t})\to C_{q-1}(K_{t})\), resulting in
\[\partial_{q}^{t}\sigma_{q}=\sum_{i=0}^{q}(-1)^{i}\sigma_{q-1}^{i}, \tag{15}\]
for \(\sigma_{q}\in K_{t}\). The adjoint operator of \(\partial_{q}^{t}\) is similarly defined as \(\partial_{q}^{t*}:C^{q-1}(K_{t})\to C^{q}(K_{t})\), which we regard as the mapping \(C_{q-1}(K_{t})\to C_{q}(K_{t})\) via the isomorphism between cochain and chain groups. Through these two operators, we can define the chain complexes induced by \(K_{t}\).
Utilizing the filtration of simplicial complexes, we can define the persistent Laplacian spectra. Let \(\mathbb{C}_{q}^{t,p}\) denote the subset of \(C_{q}^{t+p}\) whose boundary lies in \(C_{q-1}^{t}\), assuming an inclusion mapping \(C_{q-1}^{t}\to C_{q-1}^{t+p}\). On this set, we can define the \(p\)-persistent \(q\)-boundary operator \(\hat{\partial}_{q}^{t,p}:\mathbb{C}_{q}^{t,p}\to C_{q-1}^{t}\) and the corresponding adjoint operator \((\hat{\partial}_{q}^{t,p})^{*}:C_{q-1}^{t}\to\mathbb{C}_{q}^{t,p}\). Then, the \(q\)-order \(p\)-persistent Laplacian operator is computed as
\[\triangle_{q}^{t,p}=\hat{\partial}_{q+1}^{t,p}(\hat{\partial}_{q+1}^{t,p})^{*} +(\hat{\partial}_{q}^{t})^{*}\hat{\partial}_{q}^{t}, \tag{16}\]
and its matrix representation as
\[\mathcal{L}_{q}^{t,p}=\mathcal{B}_{q+1}^{t,p}(\mathcal{B}_{q+1}^{t,p})^{T}+( \mathcal{B}_{q}^{t})^{T}\mathcal{B}_{q}^{t}. \tag{17}\]
As before, the multiplicity of the zero eigenvalue is the \(q\)-th order \(p\)-persistent Betti number \(\beta_{q}^{t,p}\), which counts the \(q\)-dimensional holes in \(K_{t}\) that persist in \(K_{t+p}\). Moreover, the \(q\)-th order Laplacian is just a particular case of \(\mathcal{L}_{q}^{t,p}\) with \(p=0\), which is a snapshot of the topology at the filtration step \(t\) [37, 43].
We can utilize the \(0\)-persistent Laplacian to capture the interactions between the data at different filtration values. In particular, we can perform filtration by computing a family of subgraphs induced by a threshold distance \(r\), which is called the Vietoris Rips complex. Alternatively, we can compute a Gaussian Kernel induced distance to construct the subgraphs.
#### 2.2.2 TNMF and rTNMF
For scRNA-seq data, we calculate the 0-persistent Laplacian using the Vietoris-Rips (VR) complexes by increasing the filtration distance. We can then take a weighted sum over the 0-persistent Laplacian induced by the changes in the filtration distance. For persistent Laplacian enhanced NMF, we will provide a computationally efficient algorithm to construct the persistent Laplacian matrix.
Let \(L\) be a Laplacian matrix induced by some weighted graph, and note the following
\[L_{ij}=\begin{cases}l_{ij},&i\neq j\\ -\sum_{k\neq i}l_{ik},&i=j.\end{cases}\]
Then, let \(l_{\max}=\max_{i\neq j}l_{ij}\), \(l_{\min}=\min_{i\neq j}l_{ij}\) and \(d=l_{\max}-l_{\min}\). The \(t\)-th Persistent Laplacian \(L^{t}\), \(t=1,...,T\) is defined as \(L^{t}=\{l_{ij}^{t}\}\), where
\[l_{ij}^{t} =\begin{cases}0&l_{ij}\leq(t/T)d+l_{\min}\\ 1&\text{otherwise}\end{cases} \tag{18}\] \[l_{ii}^{t} =-\sum_{i\neq j}l_{ij}^{t}. \tag{19}\]
Then, we can take the weighted sum over all the persistent Laplacians
\[PL:=\sum_{t=1}^{T}\zeta_{t}L^{t}. \tag{20}\]
Unlike the standard Laplacian matrix \(L\), PL captures the topological features that persist over different filtration scales, thus providing a multiscale view of the data that the standard Laplacian lacks. Here, the weights \(\zeta_{t}\) are hyperparameters that must be chosen. Then, the topological NMF (TNMF) is defined as
\[\|X-WH\|_{F}^{2}+\lambda\text{Tr}(H(PL)H^{T}) \tag{21}\]
and the topological rNMF (rTNMF) is defined as
\[\|X-WH\|_{2,1}+\lambda\text{Tr}(H(PL)H^{T}). \tag{22}\]
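To make the construction concrete, the following sketch builds the weighted persistent Laplacian of Eqs. (18)-(20) from a symmetric matrix of pairwise connection strengths. It adopts the \(PL=PD-PA\) sign convention used in the updating scheme below, and the input matrix and weight vector are illustrative assumptions.

```python
import numpy as np

def persistent_laplacian(l, zeta):
    """Sketch of the cutoff-based 0-persistent Laplacian PL = sum_t zeta_t L^t.
    `l` is a symmetric matrix of pairwise connection strengths (zero diagonal);
    `zeta` is the length-T vector of filtration weights."""
    T = len(zeta)
    off = l[~np.eye(len(l), dtype=bool)]
    l_min, d = off.min(), off.max() - off.min()
    PL = np.zeros_like(l, dtype=float)
    for t in range(1, T + 1):
        # binary adjacency at filtration step t (Eq. 18): keep pairs above the cutoff
        At = (l > (t / T) * d + l_min).astype(float)
        np.fill_diagonal(At, 0.0)
        PL += zeta[t - 1] * (np.diag(At.sum(axis=1)) - At)   # L^t = D^t - A^t
    return PL
```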
#### 2.2.3 Multiplicative Updating scheme
The updating scheme follows the same principle as the standard GNMF and rGNMF.
2.2.3.1 TNMF. For TNMF, the Lagrangian function is defined as
\[\mathcal{L} =\|X-WH\|_{F}^{2}+\lambda\text{Tr}(H(PL)H^{T})+\text{Tr}(\Phi W)+\text{Tr}(\Psi H) \tag{23}\] \[=\text{Tr}(X^{T}X)-2\text{Tr}(XH^{T}W^{T})+\text{Tr}(WHH^{T}W^{T})+\lambda\text{Tr}(H(PL)H^{T})+\text{Tr}(\Phi W)+\text{Tr}(\Psi H). \tag{24}\]
Taking the partial with respect to \(W\), we get
\[\frac{\partial\mathcal{L}}{\partial W}=-2XH^{T}+2WHH^{T}+\Phi. \tag{25}\]
Using the KKT condition \(\Phi_{ij}w_{ij}=0\), we get the following
\[(-2XH^{T})_{ij}w_{ij}+(2WHH^{T})_{ij}w_{ij}=0. \tag{26}\]
Therefore, the updating scheme is
\[w_{ij}^{t+1}\gets w_{ij}^{t}\frac{(XH^{T})_{ij}}{(WHH^{T})_{ij}}. \tag{27}\]
For updating \(H\), we take the derivative of the Lagrangian function with respect to \(H\)
\[\frac{\partial\mathcal{L}}{\partial H}=-2W^{T}X+2W^{T}WH+2\lambda H(PL)+\Psi. \tag{28}\]
Using the Karush-Kuhn-Tucker (KKT) condition, we have \(\Psi_{ij}h_{ij}=0\) and obtain
\[-2(W^{T}X+\lambda H(PA))_{ij}h_{ij}+2(W^{T}WH+\lambda H(PD))_{ij}h_{ij}=0, \tag{29}\]
where \(PL=PD-PA\) and \(PD_{ii}=\sum_{j\neq i}PA_{ij}\). The updating scheme is then given by
\[h_{ij}^{t+1}\gets h_{ij}^{t}\frac{(W^{T}X+\lambda H(PA))_{ij}}{(W^{T}WH+\lambda H(PD))_{ij}}. \tag{30}\]
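Putting the two updates together, a minimal sketch of the TNMF iteration might read as follows; the random initialization, iteration count, and function name are illustrative assumptions (the benchmarks below use NNDSVDA initialization).

```python
import numpy as np

def tnmf(X, k, PL, lam=1.0, n_iter=200, eps=1e-10, seed=0):
    """Sketch of the TNMF multiplicative updates (Eqs. 27 and 30).
    PL is the persistent Laplacian; PD and PA are its diagonal and off-diagonal parts."""
    PD = np.diag(np.diag(PL))
    PA = PD - PL                           # PL = PD - PA
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ PA) / (W.T @ W @ H + lam * H @ PD + eps)
    return W, H
```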
2.2.3.2 rTNMF. For the rTNMF updating scheme, we utilize the fact that \(\|A\|_{2,1}=\mathrm{Tr}(AQA^{T})\), where \(Q_{jj}=\frac{1}{\|\mathbf{a}_{j}\|_{2}}\) and \(\mathbf{a}_{j}\) is the \(j\)-th column of \(A\). The Lagrangian is given by
\[\mathcal{L} =\|X-WH\|_{2,1}+\lambda\mathrm{Tr}(H(PL)H^{T})+\mathrm{Tr}(\Phi W)+\mathrm{Tr}(\Psi H) \tag{31}\] \[=\mathrm{Tr}((X-WH)Q(X-WH)^{T})+\lambda\mathrm{Tr}(H(PL)H^{T})+\mathrm{Tr}(\Phi W)+\mathrm{Tr}(\Psi H)\] (32) \[=\mathrm{Tr}(XQX^{T})-2\mathrm{Tr}(XQH^{T}W^{T})+\mathrm{Tr}(WHQH^{T}W^{T})+\lambda\mathrm{Tr}(H(PL)H^{T})+\mathrm{Tr}(\Phi W)+\mathrm{Tr}(\Psi H), \tag{33}\]
where \(Q_{jj}=\frac{1}{\|\mathbf{x}_{j}-W\mathbf{h}_{j}\|_{2}}\). Taking the partial derivative with respect to \(W\), we get
\[\frac{\partial L}{\partial W}=-(XQH^{T})+WHQH^{T}-\Phi. \tag{34}\]
Using the KKT conditions \(\Phi_{ij}w_{ij}=0\), we get
\[-(XQH^{T})_{ij}w_{ij}+(WHQH^{T})_{ij}w_{ij}=0, \tag{35}\]
which gives the updating scheme
\[w_{ij}^{t+1}\gets w_{ij}^{t}\frac{(XQH^{T})_{ij}}{(WHQH^{T})_{ij}}. \tag{36}\]
For \(H\), we take the partial with respect to \(H\).
\[\frac{\partial L}{\partial H}=-W^{T}XQ+W^{T}WHQ+2\lambda H(PL)+\Psi. \tag{37}\]
Then, using the KKT conditions \(\Psi_{ij}h_{ij}=0\), we get
\[(-W^{T}XQ-2\lambda H(PA))_{ij}h_{ij}+(W^{T}WHQ+2\lambda H(PD))_{ij}h_{ij}=0, \tag{38}\]
where \(PL=PD-PA\) and gives the updating scheme
\[h_{ij}^{t+1}\gets h_{ij}^{t}\frac{(W^{T}XQ+2\lambda H(PA))_{ij}}{(W^{T}WHQ+ 2\lambda H(PD))_{ij}}. \tag{39}\]
### \(k\)-NN induced Persistent Laplacian
One major issue with TNMF and rTNMF is that the parameters \(\{\zeta_{t}\}_{t=1}^{T}\) have to be chosen. For these parameters, we let \(\zeta_{t}\in\{0,1,1/2,\cdots,1/T\}\), giving \(T+1\) possible values for each. Therefore, the size of the hyperparameter space grows exponentially as the number of filtration steps \(T\) increases. We therefore propose an approximation to the original formulation using a \(k\)-NN based persistent Laplacian.
Let \(\mathcal{N}_{t}(\mathbf{x}_{j})\) be the \(t\)-nearest neighbors of sample \(\mathbf{x}_{j}\). Then, define the \(t\)-persistent directed adjacency matrix \(\tilde{A}^{t}\) as
\[\tilde{A}^{t}=\{\tilde{a}_{ij}^{t}\},\quad\tilde{a}_{ij}^{t}= \begin{cases}1&\mathbf{x_{j}}\in\mathcal{N}_{t}(\mathbf{x}_{i})\\ 0&\text{otherwise}.\end{cases} \tag{40}\]
Then, the \(k\)-NN based directed adjacency matrix is the weighted sum of \(\{\tilde{A}^{t}\}\)
\[\tilde{A}:=\sum_{t=1}^{T}\zeta_{t}\tilde{A}^{t}. \tag{41}\]
Then, the undirected persistent adjacency matrix can be obtained via symmetrization
\[PA=\tilde{A}+\tilde{A}^{T}-\tilde{A}\cdot\tilde{A}^{T},\]
where \(\cdot\) denotes the Hadamard product. Then, the persistent Laplacian can be constructed using the persistent degree matrix
\[PL=PD-PA,\quad PD_{ii}=\sum_{j\neq i}PA_{ij}. \tag{42}\]
One advantage of utilizing the \(k\)-NN induced persistent Laplacian is that the parameter space is much smaller. We can set \(\zeta_{t}\in\{0,1\}\), where \(\zeta_{t}=0\) turns off the connectivity of the \(t\)-th neighbor. In essence, the size of the hyperparameter space is reduced to \(2^{T}\), a significant decrease from the \((T+1)^{T}\) of the original formulation.
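A sketch of this \(k\)-NN induced construction (Eqs. (40)-(42)), again using scikit-learn's neighbor search as an assumed helper and applying the symmetrization formula as written, is given below.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_persistent_laplacian(X, zeta):
    """Sketch of the kNN-induced persistent Laplacian (Eqs. 40-42).
    Columns of X are cells; zeta[t-1] in {0, 1} switches the t-nearest-neighbor graph on or off."""
    T = len(zeta)
    n = X.shape[1]
    _, idx = NearestNeighbors(n_neighbors=T + 1).fit(X.T).kneighbors(X.T)   # idx[:, 0] is the cell itself
    A_dir = np.zeros((n, n))
    for t in range(1, T + 1):
        At = np.zeros((n, n))
        for j in range(1, t + 1):
            At[np.arange(n), idx[:, j]] = 1.0      # directed edges to the t nearest neighbors
        A_dir += zeta[t - 1] * At
    PA = A_dir + A_dir.T - A_dir * A_dir.T          # symmetrization with the Hadamard product
    return np.diag(PA.sum(axis=1)) - PA             # PL = PD - PA (Eq. 42)
```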
### Evaluation metrics
Let \(Y=\{Y_{1},...,Y_{L}\}\) and \(C=\{C_{1},...,C_{L}\}\) be two partitions of the data. Here, we let \(Y\) be the true label partition and \(C\) be the cluster label partition. Let \(\{y^{i}\}_{i=1}^{N}\) and \(\{c^{i}\}_{i=1}^{N}\) be the true and predicted labels of the samples.
2.4.0.1 Adjusted Rand Index. The adjusted Rand index (ARI) measures the similarity between two clusterings by observing all pairs of samples that belong to the same cluster, and checking whether the other clustering also places the same pair of samples in the same cluster [51]. Let \(n_{ij}=|Y_{i}\cap C_{j}|\) be the number of samples that belong to true label \(i\) and cluster label \(j\), and define \(a_{i}=\sum_{j}n_{ij}\) and \(b_{j}=\sum_{i}n_{ij}\). Then, the ARI is defined as
\[\text{ARI}=\frac{\sum_{ij}\binom{n_{ij}}{2}-\left[\sum_{i}\binom{a_{i}}{2}\sum_{j}\binom{b_{j}}{2}\right]/\binom{N}{2}}{\frac{1}{2}\left[\sum_{i}\binom{a_{i}}{2}+\sum_{j}\binom{b_{j}}{2}\right]-\left[\sum_{i}\binom{a_{i}}{2}\sum_{j}\binom{b_{j}}{2}\right]/\binom{N}{2}}. \tag{43}\]
The ARI takes on a value between -1 and 1, where 1 indicates a perfect match between the two clusterings, 0 a completely random assignment of labels, and -1 that the two clusterings are completely different.
2.4.0.2 Normalized Mutual Information. The normalized mutual information (NMI) measures the mutual information between two clustering results, normalized according to cluster size [52]. We fix the true labels \(Y\) as one of the clustering results, and use the predicted labels as the other to calculate NMI. The NMI is calculated as follows
\[\text{NMI}=\frac{2I(Y;C)}{H(Y)+H(C)}, \tag{44}\]
where \(H(\cdot)\) is the entropy and \(I(Y;C)\) is the mutual information between true labels \(Y\) and predicted labels \(C\). NMI ranges between 0 and 1, where 1 indicates perfect mutual dependence between the two sets of labels and 0 means no mutual information.
2.4.0.3 Accuracy. Accuracy (ACC) calculates the percentage of correctly predicted class labels. The accuracy is given by
\[\text{ACC}=\frac{1}{N}\sum_{i=1}^{N}\delta(y^{i},f(c^{i})), \tag{45}\]
where \(\delta(a,b)\) is the indicator function, equal to 1 if \(a=b\) and 0 otherwise, and \(f:C\to Y\) maps the cluster labels to the true labels via the optimal permutation obtained from the Hungarian algorithm [53].
2.4.0.4 Purity. For the purity calculation, each predicted cluster \(C_{i}\) is assigned to the true label \(Y_{j}\) such that \(|C_{i}\cap Y_{j}|\) is maximized [54]. Taking the average over all predicted clusters, we obtain the following
\[\text{Purity}=\frac{1}{N}\sum_{i}\max_{j}|C_{i}\cap Y_{j}|. \tag{46}\]
Note that unlike accuracy, purity does not map the predicted labels to the true labels.
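A compact sketch of how these four metrics could be computed (ARI and NMI via scikit-learn, ACC via the Hungarian algorithm in SciPy, purity from the contingency table) is shown below; it illustrates the definitions above and is not necessarily the evaluation code used in this work.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

def clustering_scores(y_true, y_pred):
    """ARI, NMI, accuracy (with Hungarian label matching), and purity."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ari = adjusted_rand_score(y_true, y_pred)
    nmi = normalized_mutual_info_score(y_true, y_pred)
    classes, clusters = np.unique(y_true), np.unique(y_pred)
    # contingency table: rows are true classes, columns are predicted clusters
    cont = np.array([[np.sum((y_true == a) & (y_pred == b)) for b in clusters] for a in classes])
    row, col = linear_sum_assignment(-cont)          # optimal mapping of clusters to classes
    acc = cont[row, col].sum() / len(y_true)
    purity = cont.max(axis=0).sum() / len(y_true)
    return ari, nmi, acc, purity
```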
## 3 Results
### Benchmark Data
We have performed benchmarks on 12 publicly available datasets. The GEO accession number, reference, organism, number of cell types, number of samples, and number of genes are recorded in Table 1. For each dataset, cell types with fewer than 15 cells were removed. Log-normalization was applied, and the data were scaled to have unit length. For GNMF and rGNMF, \(k=8\) neighbors were used. For TNMF and rTNMF, 8 filtration values were used to construct PL, and for each scale the binary selection \(\zeta_{t}\in\{0,1\}\) was used. For \(k\)-TNMF and \(k\)-rTNMF, \(k=8\) was used with \(\zeta_{t}\in\{0,1\}\). For each test, nonnegative double singular value decomposition with zeros filled with the average of \(X\) (NNDSVDA) was used for the initialization. \(k\)-means clustering was applied to obtain the clustering results.
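The following sketch illustrates this preprocessing and clustering pipeline; scikit-learn's plain NMF stands in for the PL-regularized variants, and the helper names and defaults are assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

def preprocess_and_cluster(counts, labels, n_clusters, min_cells=15):
    """Drop rare cell types, log-normalize, scale cells to unit length,
    factorize with NNDSVDA-initialized NMF, then cluster with k-means."""
    labels = np.asarray(labels)
    keep = np.isin(labels, [t for t in np.unique(labels) if np.sum(labels == t) >= min_cells])
    X = np.log1p(counts[:, keep])                                   # genes x cells
    X = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)      # unit-length cells
    W = NMF(n_components=n_clusters, init="nndsvda", max_iter=500).fit_transform(X.T)
    pred = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(W)
    return pred, labels[keep]
```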
### Benchmarking PL regularized NMF
In order to benchmark persistent Laplacian regularized NMF, we compared our methods to other commonly used NMF methods, namely GNMF, rGNMF, rNMF and NMF. For a fair comparison, we omitted supervised or semi-supervised methods. For \(k\)-rTNMF, rTNMF, \(k\)-TNMF, TNMF, GNMF and rGNMF, we set the regularization parameter \(\lambda=1\) for all tests.
Table 2 shows the ARI values of the NMF methods for the 12 datasets we have tested. The bold numbers indicate the highest performance. Figure 1 depicts the average ARI value over the 12 datasets for each method.
Overall, PL regularized rNMF and NMF have the highest ARI value across all the datasets. \(k\)-rTNMF outperforms other NMF methods by at least 0.09 for GSE64016. All PL regularized NMF methods outperform other NMF methods by at least 0.14 for GSE82187. For GSE84133 human 3, both rTNMF and TNMF outperform other methods by 0.07. TNMF improves other methods by more than 0.2 for GSE84133 mouse 2. Lastly, \(k\)-rTNMF has the highest ARI value for GSE94820. Moreover, rTNMF improves rGNMF by 0.05, and TNMF improves GNMF by about 0.06. \(k\)-TNMF and \(k\)-rTNMF also improve GNMF and rGNMF by about 0.03.
Table 3 shows the NMI values of the NMF methods for the 12 datasets we have tested. The bold
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline GEO Accession & Reference & Organism & Cell type & Number of Samples & Number of Genes \\ \hline GSE67835 & Darmanis [55] & Human & 8 & 420 & 22084 \\ GSE75748 time & Chu [56] & Human & 6 & 758 & 19189 \\ GSE82187 & Gokce [57] & Mouse & 8 & 705 & 18840 \\ GSE84133human1 & Baron [58] & Human & 9 & 1895 & 20125 \\ GSE84133human2 & Baron [58] & Human & 9 & 1702 & 20125 \\ GSE84133human3 & Baron [58] & Human & 9 & 3579 & 20125 \\ GSE84133human4 & Baron [58] & Human & 6 & 1275 & 20125 \\ GSE84133mouse1 & Baron [58] & Mouse & 6 & 782 & 14878 \\ GSE84133mouse2 & Baron [58] & Mouse & 8 & 1036 & 14878 \\ GSE57249 & Biase [59] & Human & 3 & 49 & 25737 \\ GSE64016 & Leng [60] & Human & 4 & 460 & 19084 \\ GSE94820 & Villani [61] & Human & 5 & 1140 & 26593 \\ \hline \end{tabular}
\end{table}
Table 1: GEO accession code, reference, organism type, cell type, number of samples, and number of genes of each dataset.
\begin{table}
\begin{tabular}{|c|c c c c c c c c|} \hline data & \(k\)-rTNMF & rTNMF & \(k\)-TNMF & TNMF & rGNMF & GNMF & rNMF & NMF \\ \hline GSE67835 & **0.9454** & 0.9236 & 0.9306 & 0.8533 & 0.9391 & 0.9109 & 0.7295 & 0.7314 \\ GSE64016 & **0.2569** & 0.1544 & 0.2237 & 0.1491 & 0.1456 & 0.1605 & 0.1455 & 0.1466 \\ GSE75748time & 0.6421 & **0.6581** & 0.5963 & 0.6099 & 0.6104 & 0.5790 & 0.5969 & 0.5996 \\ GSE82187 & **0.9877** & 0.9815 & 0.9676 & 0.9809 & 0.7558 & 0.7577 & 0.8221 & 0.8208 \\ GSE84133human1 & 0.8310 & **0.8969** & 0.8301 & 0.8855 & 0.8220 & 0.7907 & 0.7080 & 0.6120 \\ GSE84133human2 & **0.9469** & 0.9072 & 0.9433 & 0.9255 & 0.9350 & 0.9255 & 0.8930 & 0.8929 \\ GSE84133human3 & 0.8504 & 0.9179 & 0.8625 & **0.9181** & 0.8447 & 0.8361 & 0.7909 & 0.8089 \\ GSE84133human4 & 0.8712 & **0.9692** & 0.8712 & 0.9692 & 0.8699 & 0.8681 & 0.8311 & 0.8311 \\ GSE84133mouse1 & **0.8003** & 0.7894 & 0.8003 & 0.7913 & 0.7945 & 0.7918 & 0.6428 & 0.6348 \\ GSE84133mouse2 & 0.6953 & 0.8689 & 0.7005 & **0.9331** & 0.6808 & 0.6957 & 0.5436 & 0.5470 \\ GSE57249 & **1.0000** & 0.9638 & 1.0000 & 0.9483 & 1.0000 & 1.0000 & 0.9483 & 0.9483 \\ GSE94820 & **0.6101** & 0.5480 & 0.4916 & 0.5574 & 0.5139 & 0.5189 & 0.5440 & 0.5556 \\ \hline \end{tabular}
\end{table}
Table 2: ARI of NMF methods across 12 datasets.
numbers indicate the highest performance. Figure 2 shows the average NMI values over the 12 datasets.
Interestingly, \(k\)-rTNMF and \(k\)-TNMF on average have higher NMI values than rTNMF and TNMF, respectively. However, all PL regularized methods outperform rGNMF, GNMF, rNMF and NMF. Overall, PL regularized methods outperform other methods. Most noticeably, \(k\)-rTNMF, rTNMF and TNMF outperform
\begin{table}
\begin{tabular}{|c|c c c c c c c c|} \hline data & \(k\)-rTNMF & rTNMF & \(k\)-TNMF & TNMF & rGNMF & GNMF & rNMF & NMF \\ \hline GSE67835 & **0.9235** & 0.8999 & 0.9107 & 0.8607 & 0.9104 & 0.8858 & 0.7975 & 0.8017 \\ GSE64016 & 0.3057 & 0.2059 & 0.3136 & 0.1869 & 0.2593 & 0.2562 & 0.1896 & 0.1849 \\ GSE75748time & 0.7522 & **0.7750** & 0.7159 & 0.7343 & 0.7235 & 0.6971 & 0.7227 & 0.7244 \\ GSE82187 & **0.9759** & 0.9691 & 0.9298 & 0.9668 & 0.8802 & 0.8754 & 0.9124 & 0.9117 \\ GSE84133human1 & **0.8802** & 0.8716 & 0.8785 & 0.8780 & 0.8713 & 0.8310 & 0.8226 & 0.7949 \\ GSE84133human2 & **0.9363** & 0.8937 & 0.9313 & 0.9070 & 0.9237 & 0.9145 & 0.8835 & 0.8829 \\ GSE84133human3 & 0.8500 & **0.8718** & 0.8577 & 0.8677 & 0.8439 & 0.8357 & 0.8215 & 0.8260 \\ GSE84133human4 & 0.8795 & **0.9542** & 0.8795 & 0.9542 & 0.8775 & 0.8753 & 0.8694 & 0.8694 \\ GSE84133mouse1 & **0.8664** & 0.8498 & 0.8664 & 0.8495 & 0.8596 & 0.8565 & 0.7634 & 0.7593 \\ GSE84133mouse2 & 0.8218 & **0.8355** & 0.8299 & 0.8713 & 0.8005 & 0.8129 & 0.7258 & 0.7272 \\ GSE57249 & **1.0000** & 0.9505 & 1.0000 & 0.9293 & 1.0000 & 1.0000 & 0.9293 & 0.9293 \\ GSE94820 & **0.7085** & 0.6657 & 0.6157 & 0.6716 & 0.6195 & 0.6258 & 0.6624 & 0.6693 \\ \hline \end{tabular}
\end{table}
Table 3: NMI of NMF methods across 12 datasets.
Figure 1: Average ARI of \(k\)-rTNMF, rTNMF, \(k\)-TNMF,TNMF, rGNMF, GNMF, rNMF and NMF for the 12 datasets
Figure 2: Average NMI values of \(k\)-rTNMF, rTNMF, \(k\)-TNMF,TNMF, rGNMF, GNMF, rNMF and NMF for the 12 datasets
standard NMF methods by 0.06 for GSE82187. Both rTNMF and TNMF outperform rGNMF and GNMF by 0.08 for GSE84133 human 4.
Table 4 shows the purity values of the NMF methods for the 12 datasets we have tested. The bold numbers indicate the highest performance. Figure 3 shows the average purity over the 12 datasets.
In general, PL-regularized methods achieve higher purity values compared to other NMF methods. Purity measures the maximum intersection between true and predicted classes, which is why we do not observe differences as large as those seen in ARI and NMI. Furthermore, since purity does not account for the size of a class, and given the imbalanced class sizes in scRNA-seq data, it is not surprising that the purity values are similar.
Table 5 shows the ACC of the NMF methods for the 12 datasets we have tested. The bold numbers indicate the highest performance. Figure 4 shows the average ACC over the 12 datasets.
\begin{table}
\begin{tabular}{|c|c c c c c c c c|} \hline data & \(k\)-rTNMF & rTNMF & \(k\)-TNMF & TNMF & rGNMF & GNMF & rNMF & NMF \\ \hline GSE67835 & **0.9643** & 0.9267 & 0.9595 & 0.9024 & 0.9595 & 0.9476 & 0.8726 & 0.8719 \\ GSE64016 & **0.6048** & 0.4913 & 0.5846 & 0.5013 & 0.5339 & 0.5398 & 0.5080 & 0.5050 \\ GSE75748time & **0.7736** & 0.7512 & 0.7533 & 0.7454 & 0.7553 & 0.7387 & 0.7467 & 0.7455 \\ GSE82187 & **0.9927** & 0.9895 & 0.9620 & 0.9888 & 0.9620 & 0.9594 & 0.9693 & 0.9692 \\ GSE84133human1 & **0.9543** & 0.9357 & 0.9536 & 0.9382 & 0.9490 & 0.9187 & 0.9189 & 0.9099 \\ GSE84133human2 & **0.9818** & 0.9614 & 0.9806 & 0.9661 & 0.9777 & 0.9736 & 0.9602 & 0.9600 \\ GSE84133human3 & 0.9472 & **0.9485** & 0.9531 & 0.9460 & 0.9452 & 0.9420 & 0.9464 & 0.9466 \\ GSE84133human4 & 0.9427 & **0.9882** & 0.9427 & 0.9882 & 0.9427 & 0.9420 & 0.9412 & 0.9412 \\ GSE84133mouse1 & **0.9565** & 0.9540 & 0.9565 & 0.9540 & 0.9552 & 0.9540 & 0.9309 & 0.9299 \\ GSE84133mouse2 & 0.9585 & 0.9410 & **0.9604** & 0.9373 & 0.9466 & 0.9507 & 0.9185 & 0.9199 \\ GSE57249 & **1.0000** & 0.9857 & 1.0000 & 0.9796 & 1.0000 & 1.0000 & 0.9796 & 0.9796 \\ GSE94820 & **0.7893** & 0.7462 & 0.6658 & 0.7550 & 0.6421 & 0.6421 & 0.7429 & 0.7531 \\ \hline \end{tabular}
\end{table}
Table 4: Purity of NMF methods across 12 datasets.
Once again, we see that PL regularized methods have higher ACC than other NMF methods. rTNMF and TNMF improve rGNMF and GNMF by 0.05, and \(k\)-rTNMF and \(k\)-TNMF improve rGNMF and GNMF by 0.04. We see an improvement in ACC for both \(k\)-rTNMF and \(k\)-TNMF for GSE64016. All four PL regularized methods improve the ACC of GSE82187 by 0.1. rTNMF and TNMF improve GSE84133 mouse 2 by at least 0.1 as well.
### Overall performance
Figure 5 shows the average ARI, NMI, purity and ACC of \(k\)-rTNMF, rTNMF, \(k\)-TNMF, TNMF, rGNMF, GNMF, rNMF and NMF across 10 datasets. All PL regularized NMF methods outperform the traditional rGNMF, GNMF, rNMF and NMF. Both rTNMF and TNMF have higher average ARI and purity than the \(k\)-NN based PL counterparts. However, \(k\)-rTNMF and \(k\)-TNMF have higher average NMI than rTNMF and TNMF, respectively. \(k\)-rTNMF has a significantly higher purity than other methods.
\begin{table}
\begin{tabular}{|c|c c c c c c c c|} \hline data & \(k\)-rTNMF & rTNMF & \(k\)-TNMF & TNMF & rGNMF & GNMF & rNMF & NMF \\ \hline GSE67835 & **0.9643** & 0.9243 & 0.9595 & 0.9000 & 0.9595 & 0.9383 & 0.8357 & 0.8364 \\ GSE64016 & **0.5700** & 0.4870 & 0.5502 & 0.4746 & 0.4891 & 0.4537 & 0.4691 & 0.4759 \\ GSE75748time & **0.7565** & 0.7438 & 0.7414 & 0.6917 & 0.7355 & 0.7241 & 0.6873 & 0.6875 \\ GSE82187 & **0.9927** & 0.9895 & 0.9599 & 0.9888 & 0.8512 & 0.8514 & 0.8896 & 0.8889 \\ GSE84133human1 & 0.8973 & **0.9194** & 0.8974 & 0.9088 & 0.8889 & 0.8364 & 0.7988 & 0.7370 \\ GSE84133human2 & **0.9260** & 0.9069 & 0.9242 & 0.9447 & 0.9224 & 0.9177 & 0.8998 & 0.8994 \\ GSE84133human3 & 0.8539 & **0.9456** & 0.8597 & 0.9419 & 0.8498 & 0.8228 & 0.8032 & 0.8178 \\ GSE84133human4 & 0.8831 & **0.9882** & 0.8831 & 0.9882 & 0.8824 & 0.8816 & 0.8847 & 0.8847 \\ GSE84133mouse1 & **0.8581** & 0.8542 & 0.8581 & 0.8542 & 0.8555 & 0.8542 & 0.7361 & 0.7311 \\ GSE84133mouse2 & 0.8232 & **0.9101** & 0.8263 & 0.9305 & 0.7903 & 0.8155 & 0.7239 & 0.7294 \\ GSE57249 & **1.0000** & 0.9857 & 1.0000 & 0.9796 & 1.0000 & 1.0000 & 0.9796 & 0.9796 \\ GSE94820 & **0.7533** & 0.7119 & 0.6482 & 0.7201 & 0.6088 & 0.6107 & 0.7091 & 0.7189 \\ \hline \end{tabular}
\end{table}
Table 5: ACC of NMF methods across 12 datasets.
## 4 Discussion
### Visualization of meta-genes based UMAP and t-SNE
Both UMAP and t-SNE are well-known for their effectiveness in visualization. However, these methods may not perform as competitively in clustering or classification tasks. Therefore, it is beneficial to employ NMF-based methods to enhance the visualization capabilities of UMAP and t-SNE.
In this process, we generate meta-genes and subsequently utilize UMAP or t-SNE to further reduce the data to 2 dimensions for visualization. For a dataset with \(M\) cells, the number of meta-genes is the integer part of \(\sqrt{M}\). To compare the standard UMAP and t-SNE plots with the TNMF-assisted and rTNMF-assisted UMAP and t-SNE visualizations, we used the default settings of the Python implementation of UMAP and the Scikit-learn implementation of t-SNE. For unassisted UMAP and t-SNE, we first removed low-abundance genes and performed log-transformation before applying UMAP and t-SNE.
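A sketch of this assisted-visualization step, with scikit-learn's plain NMF standing in for TNMF/rTNMF and the umap-learn package assumed to be available, is:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.manifold import TSNE
import umap

def assisted_embeddings(X):
    """Reduce cells to int(sqrt(M)) meta-genes, then embed in 2-D with UMAP and t-SNE."""
    M = X.shape[1]                                               # number of cells (columns of X)
    k = int(np.sqrt(M))
    H = NMF(n_components=k, init="nndsvda").fit_transform(X.T)  # cells x meta-genes
    return umap.UMAP().fit_transform(H), TSNE(n_components=2).fit_transform(H)
```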
Figure 6 shows the visualization of PL regularized NMF methods through UMAP. Each row corresponds to GSE67835, GSE75748 time, GSE94820 and GSE84133 mouse 2 data. The columns from left to right are the \(k\)-rTNMF assisted UMAP, rTNMF assisted UMAP, \(k\)-TNMF assisted UMAP, TNMF assisted UMAP and UMAP visualization. Samples were colored according to their true cell types.
Figure 5: Average ARI, NMI, purity and ACC of \(k\)-rTNMF, rTNMF, \(k\)-TNMF, TNMF, rGNMF, GNMF, rNMF, NMF across 10 datasets
Figure 7 shows the visualization of PL regularized NMF through t-SNE. Each row corresponds to GSE67835, GSE75748 time, GSE94820 and GSE84133 mouse 2 data. The columns from left to right are the \(k\)-rTNMF assisted t-SNE, rTNMF assisted t-SNE, \(k\)-TNMF assisted t-SNE, TNMF assisted t-SNE, and t-SNE visualizations. Samples were colored according to their true cell types.
Figure 6: Visualization of TNMF and rTNMF meta-genes through UMAP. Each row corresponds to GSE67835, GSE75748 time, GSE94820 and GSE84133 mouse 2 data. The columns from left to right are the \(k\)-rTNMF assisted UMAP, rTNMF assisted UMAP, \(k\)-TNMF assisted UMAP, TNMF assisted UMAP and UMAP visualization. Samples were colored according to their true cell types
We see a considerable improvement in both the TNMF-assisted and rTNMF-assisted UMAP and t-SNE visualizations.
4.1.0.1 GSE67835. In the assisted UMAP and t-SNE visualizations of GSE67835, we observe more distinct clusters, including a supercluster of fetal quiescent (Fetal-Q) and fetal replicating (Fetal-R) cells. Darmanis et al. [55] conducted a study that involved obtaining differential gene expression data for human adult brain cells and sequencing fetal brain cells for comparison. It is not surprising that the undeveloped Fetal-Q and Fetal-R cells do not exhibit significant differences and cluster together.
4.1.0.2 GSE75748 time. In the GSE75748 time data, Chu et al. [56] sequenced human embryonic stem cells at times 0hr, 12hr, 24hr, 36hr, 72hr, and 96hr under hypoxic conditions to observe differentiation. In unassisted UMAP and t-SNE, although some clustering is visible, there is no clear separation between the clusters. Additionally, two subclusters of 12hr cells are observed.
Notably, in the PL-regularized assisted UMAP and t-SNE visualizations, there is a distinct supercluster comprising the 72hr and 96hr cells, while cells from different time points form their own separate clusters.
Figure 7: Visualization of TNMF and rTNMF meta-genes through t-SNE. Each row corresponds to GSE67835, GSE75748 time, GSE94820 and GSE84133 mouse 2 data. The columns from left to right are the \(k\)-rTNMF assisted t-SNE, rTNMF assisted t-SNE, \(k\)-TNMF assisted t-SNE, TNMF assisted t-SNE and t-SNE visualization. Samples were colored according to their true cell types
This finding aligns with Chu's observation that there was no significant difference between the 72hr and 96hr cells, suggesting that differentiation may have already occurred by the 72hr mark.
4.1.0.3 GSE94820. Notice that in both t-SNE and UMAP, although there is a boundary, the cells do not form distinct clusters. This lack of distinct clustering can pose challenges in many clustering and classification methods. On the other hand, all PL-regularized NMF methods result in distinct clusters.
Among the PL-regularized NMF approaches, the cutoff-based PL methods, rTNMF and TNMF, form a single CD1C\({}^{+}\) (CD1C1) cluster, whereas the \(k\)-NN induced PL methods, \(k\)-rTNMF and \(k\)-TNMF, exhibit two subclusters. Villani et al. [61] previously noted the similarity in the expression profiles of CD1C1\({}^{-}\)CD141\({}^{-}\) (DoubleNeg) cells and monocytes. PL-regularized NMF successfully differentiates between these two types.
4.1.0.4 GSE84133 mouse 2. PL-regularized NMF yields significantly more distinct clusters compared to unassisted UMAP and t-SNE. Notably, the beta and gamma cells form distinct clusters in PL-regularized NMF. Additionally, when PL-regularized NMF is applied to assist UMAP, potential outliers within the beta cell population become visible. Baron et al. [58] previously highlighted heterogeneity within the beta cell population, and we observe potential outliers in all visualizations.
### RS analysis
Although UMAP and t-SNE are excellent tools for visualizing clusters, they may struggle to capture heterogeneity within clusters. Moreover, these methods can be less effective when dealing with a large number of classes. Therefore, it is essential to explore alternative visualization techniques.
In our approach, we visualize each cluster using RS plots [23]. RS plots depict the relationship between the residue score (R score) and similarity score (S score) and have proven useful in various applications for visualizing data with multiple class types [62, 63, 64, 65, 14].
Let \(\{(\mathbf{x}_{m},y_{m})|\mathbf{x}_{m}\in\mathbb{R}^{N},y_{m}\in\mathbb{Z}_{L},1\leq m\leq M\}\) be the data, where \(\mathbf{x}_{m}\) is the \(m\)th sample and \(y_{m}\) is the cell type or cluster label. \(L\) is the number of classes. That is, \(\mathcal{C}_{l}=\{\mathbf{x}_{m}\in\mathcal{X}|y_{m}=l\}\) and \(\uplus_{l=0}^{L-1}\mathcal{C}_{l}=\mathcal{X}\).
The residue (R) score is defined as the inter-class sum of distances. For a given sample \(\mathbf{x}_{m}\) with assignment \(y_{m}=l\), the R-score is defined as
\[R_{m}=R(\mathbf{x}_{m})=\frac{1}{R_{\max}}\sum_{\mathbf{x}_{j}\notin\mathcal{ C}_{l}}\|\mathbf{x}_{m}-\mathbf{x}_{j}\|,\]
where \(R_{\max}=\max\limits_{\mathbf{x}_{m}\in\mathcal{X}}R_{m}\). The similarity (S) score is the intra-class average similarity, defined as
\[S_{m}=S(\mathbf{x}_{m})=\frac{1}{|\mathcal{C}_{l}|}\sum_{\mathbf{x}_{j}\in \mathcal{C}_{l}}\left(1-\frac{\|\mathbf{x}_{m}-\mathbf{x}_{j}\|}{d_{\max}} \right),\]
where \(d_{\max}=\max\limits_{\mathbf{x}_{i},\mathbf{x}_{j}\in\mathcal{X}}\|\mathbf{ x}_{i}-\mathbf{x}_{j}\|\) and \(|\mathcal{C}_{l}|\) is the number of data in class \(\mathcal{C}_{l}\). Both \(R_{m}\) and \(S_{m}\) are bounded by 0 and 1, and the larger the better for a given dataset.
The class residue index (CRI) and the class similarity index (CSI) can then be defined as the averages of the R-scores and S-scores within each class. That is, \(\text{CRI}_{l}=\frac{1}{|\mathcal{C}_{l}|}\sum_{\mathbf{x}_{m}\in\mathcal{C}_{l}}R_{m}\) and \(\text{CSI}_{l}=\frac{1}{|\mathcal{C}_{l}|}\sum_{\mathbf{x}_{m}\in\mathcal{C}_{l}}S_{m}\). Then, the residue index (RI) and the similarity index (SI) can be defined as \(\text{RI}=\frac{1}{L}\sum_{l}\text{CRI}_{l}\) and \(\text{SI}=\frac{1}{L}\sum_{l}\text{CSI}_{l}\), respectively.
Using the RI and SI, the residue similarity disparity can be computed by taking \(\text{RSD}=\text{RI}-\text{SI}\), and the residue-similarity index (RSI) can be computed as \(\text{RSI}=1-|\text{RI}-\text{SI}|\).
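The R-score and S-score definitions above translate directly into code; a small sketch (written for clarity rather than efficiency, with illustrative names) is:

```python
import numpy as np

def rs_scores(X, y):
    """Residue (R) and similarity (S) scores for RS plots.
    Columns of X are samples; y holds the class labels."""
    y = np.asarray(y)
    D = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)        # pairwise distances
    d_max = D.max()
    R = np.array([D[m, y != y[m]].sum() for m in range(len(y))])     # inter-class sums of distances
    R = R / R.max()                                                   # normalize by R_max
    S = np.array([(1.0 - D[m, y == y[m]] / d_max).mean() for m in range(len(y))])
    return R, S
```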
Figure 8 shows the RS plots of PL-regularized NMF methods for GSE67835 data. The columns from left to right correspond to \(k\)-rTNMF, rTNMF, \(k\)-TNMF, and TNMF, while the rows correspond to the cell
Figure 8: RS plots of GSE67835 data. The columns from left to right correspond to \(k\)-rTNMF, rTNMF, \(k\)-TNMF, and TNMF. Each row corresponds to a cell type. For each section, the x-axis and y-axis correspond to the S-score and R-score, respectively. K-means was used to obtain a cluster label, and the Hungarian algorithm was used to map the cluster labels to the true labels. Each sample was colored according to their true labels.
types. The x-axis and y-axis represent the S-score and R-score for each sample, respectively. The samples are colored according to their predicted cell types. Predictions were obtained using k-means clustering, and the Hungarian algorithm was employed to find the optimal mapping from the cluster labels to the true cell types.
We can see that TNMF fails to identify OPC cells, whereas \(k\)-rTNMF, rTNMF, and \(k\)-TNMF are able to identify them. Notably, the S-score is quite low, indicating that the OPC cells did not form a cluster under TNMF. For fetal quiescent and replicating cells, \(k\)-rTNMF correctly identifies these two types, and the few misclassified samples are located on the boundaries. rTNMF is able to correctly identify fetal replicating cells but could not distinguish fetal quiescent cells from fetal replicating cells. The S-score is low for neurons in both rTNMF and TNMF, which shows a direct correlation with the number of misclassified cells.
## 5 Conclusion
Persistent Laplacian-regularized NMF is a dimensionality reduction technique that incorporates multiscale topological interactions between the cells. Traditional graph Laplacian-based regularization only represents a single scale and cannot capture the multiscale features of the data. We have also shown that the \(k\)-NN induced persistent Laplacian outperforms other NMF methods and is comparable to the cutoff-based persistent Laplacian-regularized NMF methods. However, PL methods do come with their downsides. In particular, the weights for each filtration must be determined prior to the reduction. If there are \(T\) filtrations, then the size of the hyperparameter space is \((T+1)^{T}\). However, the \(k\)-NN induced PL reduces the size of the hyperparameter space to \(2^{T}\). In addition, we have shown that we can achieve a significant improvement even if we limit the hyperparameter space to \(2^{T}\). We would like to further explore possible parameter-free versions of topological NMF. Additionally, NMF objectives are not globally convex, but we have shown that with NNDSVDA initialization our methods perform the best. One possible extension to the proposed methods is to incorporate higher-order persistent Laplacians in the regularization framework, which would reveal higher-order interactions. In addition, we would like to expand the ideas to tensor decomposition, such as Canonical Polyadic Decomposition (CPD) and Tucker decomposition, multimodal omics data, and spatial transcriptomics data.
## 6 Data availability and code
The data and model used to produce these results can be obtained at [https://github.com/hozumiyu/TopologicalNMF-scRNAseq](https://github.com/hozumiyu/TopologicalNMF-scRNAseq).
## 7 Acknowledgment
This work was supported in part by NIH grants R01GM126189, R01AI164266, and R35GM148196, National Science Foundation grants DMS2052983, DMS-1761320, and IIS-1900473, NASA grant 80NSSC21M0023, Michigan State University Research Foundation, and Bristol-Myers Squibb 65109.
|
2305.11794
|
Small-time global approximate controllability of bilinear wave equations
|
We consider a bilinear control problem for the wave equation on a torus of
arbitrary dimension. We show that the system is globally approximately
controllable in arbitrarily small times from a dense family of initial states.
The control strategy is explicit, and based on a small-time limit of conjugated
dynamics to move along non-directly accessible directions (a.k.a. Lie brackets
of the generators).
|
Eugenio Pozzoli
|
2023-05-19T16:30:15Z
|
http://arxiv.org/abs/2305.11794v1
|
# Small-time global approximate controllability of bilinear wave equations
###### Abstract
We consider a bilinear control problem for the wave equation on a torus of arbitrary dimension. We show that the system is globally approximately controllable in arbitrarily small times from a dense family of initial states. The control strategy is explicit, and based on a small-time limit of conjugated dynamics to move along non-directly accessible directions (a.k.a. Lie brackets of the generators).
**Keywords:** Wave equation; bilinear systems; small-time approximate controllability; Lie brackets.
## 1 Introduction
### The model
In this paper we study the following bilinear wave equation on a \(d\)-dimensional torus \(\mathbb{T}^{d}\), \(d\in\mathbb{N}\),
\[\frac{\partial^{2}}{\partial t^{2}}w(x,t)=\Big{(}\Delta+\mu(x,t)\Big{)}w(x,t),\quad(x,t)\in\mathbb{T}^{d}\times\mathbb{R}, \tag{1}\]
where \((w(\cdot,t),\frac{\partial}{\partial t}w(\cdot,t))\in H^{1}\times L^{2}( \mathbb{T}^{d},\mathbb{R})\) is the state of the system (describing the profile \(w\) and the velocity \(\frac{\partial}{\partial t}w\) of the wave), \(\Delta=\sum_{i=1}^{d}\frac{\partial^{2}}{\partial x_{i}^{2}}\) is the Laplacian, and \(\mu\) is a function which plays the role of the control. In particular, we control the system through _low modes forcing_: this means that we assume that the control function \(\mu\) can be written as
\[\mu(x,t)=\sum_{j=0}^{2d}p_{j}(t)\mu_{j}(x), \tag{2}\]
where \(p=(p_{0},\ldots,p_{2d})\) are piecewise constant control laws that can be freely chosen, and the \(\mu_{j}\) are fixed to be the first real Fourier modes of the system:
\[(\mu_{0}(x),\ldots,\mu_{2d}(x)):=(1,\cos(e_{1}x),\sin(e_{1}x),\ldots,\cos(e_{d }x),\sin(e_{d}x)), \tag{3}\]
where \(\{e_{i},i=1,\ldots,d\}\subset\mathbb{Z}^{d}\) is the standard basis of \(\mathbb{R}^{d}\). The dependence of the state on the control is nonlinear, and hence (1) is a nonlinear control problem in infinite dimensions.
### The main result
Determining the minimal time needed for the global approximate controllability of bilinear PDEs is a fundamental problem, whose answer is known in very few cases (see, e.g., [9, 5, 6]). The scope of this paper is to prove that, in the case of wave equations, this minimal time is zero.
More precisely, our main result is the small-time global controllability of (1), approximately in \(H^{1}\times L^{2}(\mathbb{T}^{d})\), from any nonzero initial state whose profile has a finite number of non-vanishing Fourier modes.
**Theorem 1**.: _Consider an initial state \((0,0)\neq(w_{0},\dot{w}_{0})\in H^{1}\times L^{2}(\mathbb{T}^{d})\) such that_
\[w_{0}\neq 0,\quad\langle w_{0},e^{ikx}\rangle_{L^{2}}=0\,\,\text{for all but a finite set of}\,\,k\in\mathbb{Z}^{d}, \tag{4}\]
_or_
\[w_{0}=0,\dot{w}_{0}\neq 0,\quad\langle\dot{w}_{0},e^{ikx}\rangle_{L^{2}}=0\,\, \text{for all but a finite set of}\,\,k\in\mathbb{Z}^{d}. \tag{5}\]
_Then, for any final state \((w_{1},\dot{w}_{1})\in H^{1}\times L^{2}(\mathbb{T}^{d})\) and any error and time \(\varepsilon,T>0\), there exists a piecewise constant control law \(p:[0,T]\to\mathbb{R}^{2d+1}\) such that the solution \(w\) of (1) associated with the control (2),(3) and with the initial condition \(\left(w(t=0),\frac{\partial}{\partial t}w(t=0)\right)=(w_{0},\dot{w}_{0})\) satisfies_
\[\left\|\left(w(\cdot,T),\frac{\partial}{\partial t}w(\cdot,T)\right)-\left(w _{1},\dot{w}_{1}\right)\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<\varepsilon.\]
Let us stress that the initial state \((0,0)\) is an equilibrium of system (1) regardless of the control choice, and hence system (1) cannot be steered anywhere starting from \((0,0)\). We also remark that assumptions (4) or (5) on the initial state are technical but their necessity is an open question. In view of the strictly positive minimal time needed for the local exact controllability of bilinear wave equations showed by Beauchard in [4], Theorem 1 may seem surprising and relies upon the approximate nature of the small-time controllability result.
### The technique
It is convenient to recast (1), (2) as a first order evolution equation in the state \(W=(w,\frac{\partial}{\partial t}w)\): this gives the system
\[\frac{\partial}{\partial t}W(x,t)=\left(\mathcal{A}+\sum_{j=0}^{2d}p_{j}(t) \mu_{j}(x)\mathcal{B}\right)W(x,t),\quad(x,t)\in\mathbb{T}^{d}\times\mathbb{R}, \tag{6}\]
where
\[\mathcal{A}=\begin{pmatrix}0&I\\ \Delta&0\end{pmatrix},\quad\mathcal{B}=\begin{pmatrix}0&0\\ I&0\end{pmatrix}. \tag{7}\]
The proof of Theorem 1 is based on the following small-time limit of conjugated dynamics, holding for any initial condition \((w_{0},\dot{w}_{0})\in H^{1}\times L^{2}(\mathbb{T}^{d})\):
\[\lim_{\tau\to 0}e^{-\tau^{-1/2}\mu_{j}\mathcal{B}}e^{\tau\mathcal{A}}e^{\tau^{-1 /2}\mu_{j}\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}=\begin{pmatrix}w_{0}\\ \dot{w}_{0}-\mu_{j}^{2}w_{0}\end{pmatrix}. \tag{8}\]
This limit was introduced by Duca and Nersesyan in [14] for controlling nonlinear Schrodinger equations. It can be thought of as the following explicit control strategy: we first apply an impulsive control with amplitude \(\tau^{-1/2}\) (which gives the evolution \(e^{\tau^{-1/2}\mu_{j}\mathcal{B}}\)), we then let the system evolve freely for a time interval of size \(\tau\) (which gives the evolution \(e^{\tau\mathcal{A}}\)), and we finally apply again an impulsive control with amplitude \(-\tau^{-1/2}\) (which gives the evolution \(e^{-\tau^{-1/2}\mu_{j}\mathcal{B}}\)). The control law associated with this strategy is depicted in Figure 1.
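The strategy is simple enough to be checked numerically. The sketch below is not part of the analysis in this paper; the grid size, the test profiles, and the choice \(\mu=\cos x\) are illustrative assumptions. It implements one conjugated cycle on the 1-D torus, using the exact Fourier-space propagator for the free wave flow, and illustrates that the output velocity approaches \(\dot{w}_{0}-\mu_{j}^{2}w_{0}\) as \(\tau\to 0\).

```python
import numpy as np

def conjugated_cycle(w, w_dot, mu, tau):
    """One cycle e^{-s mu B} e^{tau A} e^{s mu B}, s = tau^{-1/2}, on the 1-D torus."""
    n = len(w)
    k = np.abs(np.fft.fftfreq(n, d=1.0 / n))             # |k| for each Fourier mode
    s = tau ** -0.5
    w_dot = w_dot + s * mu * w                            # impulsive kick e^{s mu B}
    wh, vh = np.fft.fft(w), np.fft.fft(w_dot)             # free flow e^{tau A}, mode by mode
    c, sn, safe_k = np.cos(k * tau), np.sin(k * tau), np.where(k > 0, k, 1.0)
    wh_new = np.where(k > 0, c * wh + (sn / safe_k) * vh, wh + tau * vh)
    vh_new = np.where(k > 0, -k * sn * wh + c * vh, vh)
    w, w_dot = np.fft.ifft(wh_new).real, np.fft.ifft(vh_new).real
    return w, w_dot - s * mu * w                          # impulsive kick e^{-s mu B}

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
w0, w0_dot, mu = np.sin(x), np.zeros_like(x), np.cos(x)
w, w_dot = conjugated_cycle(w0, w0_dot, mu, tau=1e-6)
print(np.max(np.abs(w_dot - (w0_dot - mu ** 2 * w0))))    # shrinks as tau -> 0
```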
As observed in the work of the author in collaboration with Chambrion [11] on Schrodinger equations, the limiting dynamic (8) (also in this case of wave equations) corresponds to the
Figure 1: The control law yielding the limiting propagator of (8) can be thought of as a fractional derivative of a Dirac delta. Indeed, it can be thought of as the control law \(p_{j}^{\tau}(t)=\frac{1}{\tau^{1/2}}(\delta(t)-\delta(t-\tau))\) with \(\tau\to 0\), where \(\delta(s)\) is a Dirac delta centred at \(s=0\).
exponential of an iterated Lie bracket between \(\mathcal{A}\) and \(\mu_{j}\mathcal{B}\): more precisely, limit (8) can also be written as
\[\lim_{\tau\to 0}e^{-\tau^{-1/2}\mu_{j}\mathcal{B}}e^{\tau\mathcal{A}}e^{\tau^{-1/2}\mu_{j}\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}=\exp\left(\frac{1}{2}[[\mathcal{A},\mu_{j}\mathcal{B}],\mu_{j}\mathcal{B}]\right)\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}, \tag{9}\]
where
\[[[\mathcal{A},\mu_{j}\mathcal{B}],\mu_{j}\mathcal{B}]=\begin{pmatrix}0&0\\ -2\mu_{j}^{2}&0\end{pmatrix},\]
and the Lie bracket \([\mathcal{C},\mathcal{D}]\) of two linear operators \(\mathcal{C},\mathcal{D}\) is formally defined as the commutator \(\mathcal{C}\mathcal{D}-\mathcal{D}\mathcal{C}\). It is then interesting to interpret Theorem 1 as a consequence of a geometric control technique adapted to this infinite dimensional setting: as in finite-dimensional nonlinear control systems [19, 23], one can think of the generators \(\mathcal{A}\) and \(\mu_{j}\mathcal{B}\) as directions that are directly accessible to the system, and recover from them additional directions that were not directly accessible, as for instance the Lie bracket \([[\mathcal{A},\mu_{j}\mathcal{B}],\mu_{j}\mathcal{B}]\). Moreover, the exponential flow computed on these directions (and applied to some initial condition) describes states that are approximately reachable in arbitrarily small times.
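For the reader's convenience, the displayed bracket can be verified directly from the block forms in (7). The first-order bracket is
\[[\mathcal{A},\mu_{j}\mathcal{B}]=\begin{pmatrix}0&I\\ \Delta&0\end{pmatrix}\begin{pmatrix}0&0\\ \mu_{j}&0\end{pmatrix}-\begin{pmatrix}0&0\\ \mu_{j}&0\end{pmatrix}\begin{pmatrix}0&I\\ \Delta&0\end{pmatrix}=\begin{pmatrix}\mu_{j}&0\\ 0&-\mu_{j}\end{pmatrix},\]
and bracketing once more with \(\mu_{j}\mathcal{B}\) gives
\[\left[\begin{pmatrix}\mu_{j}&0\\ 0&-\mu_{j}\end{pmatrix},\begin{pmatrix}0&0\\ \mu_{j}&0\end{pmatrix}\right]=\begin{pmatrix}0&0\\ -2\mu_{j}^{2}&0\end{pmatrix},\]
in agreement with the operator displayed above.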
### Related literature
As shown in [3, 8], bilinear PDEs are never exactly controllable in the larger functional space where the evolution is defined. In particular, system (1) is not exactly controllable in \(H^{1}\times L^{2}\) (we refer also to [12] for an analogous obstruction to the exact controllability of (1) even in the presence of a state nonlinearity). Researchers have then focused their efforts on the approximate controllability, or the exact controllability in smaller functional spaces, of bilinear PDEs.
The global approximate controllability in \(H^{1}\times L^{2}\) of bilinear wave equations of the form (1) was first proved by Ball, Marsden, and Slemrod in [3], on a 1-D interval with Dirichlet boundary conditions, with space-independent control function \(\mu(x,t)=\mu(t)\), from any initial condition \((w_{0},\dot{w}_{0})\) whose Fourier modes are all non-vanishing, and in times \(T\geq 1\).

The main contributions of Theorem 1, in this sense, are extensions of this global approximate controllability result in two ways: it is proved to hold in any space dimension, and in arbitrarily small times. Moreover, the control technique behind the proof of Theorem 1 has the advantage of being explicit, using piecewise constant (in time) control laws; it also suggests a useful link between the controllability of bilinear PDEs and Lie brackets of the generators.
Approximate controllability of bilinear wave equations (on a 1-D interval with Dirichlet boundary conditions) has also been studied by Khapalov in [17], where it is shown in \(H^{1}\times L^{2}\), from any nonzero initial state, towards any state of the form \((w_{1},0)\), in large times (and a similar result is obtained in [18] even in the presence of state nonlinearity).
The local exact controllability of bilinear wave equations (on a 1-D interval with Neumann boundary conditions) has been studied by Beauchard in [4]: around the state \((w_{0},\dot{w}_{0})=(1,0)\), it is proved to hold (in the optimal space \(H^{3}\times H^{2}\)) if and only if the control time satisfies \(T>2\) (similar conclusions are obtained in [7] in the presence of state nonlinearity, and in the recent work [10] in the case of a degenerate Laplacian).
We conclude this bibliographical review by commenting on the geometric control technique of low mode forcing and the small-time controllability analysis of bilinear PDEs: the assumption on the spatial control to be supported only on a finite number of Fourier modes (cf. (2)) originates in the work of Agrachev and Sarychev [1, 2] and Shirikyan [21, 22] on the additive control of Navier-Stokes equations. More recently, it has been introduced in the bilinear setting for studying small-time controllability properties of Schrodinger equations by Duca and Nersesyan [14, 15] and by Coron, Xiang, and Zhang [13].
The results on bilinear wave equations obtained in this paper also testify to the versatility of the control strategy of small-time conjugated dynamics, readapted from bilinear Schrodinger equations [14, 11]. In a forthcoming work [16], we will show how this strategy also furnishes new results on the small-time approximate controllability of bilinear heat equations.
### Structure of the paper
The paper is organized as follows: in Section 2 we show a preliminary result on the free evolution; in Section 3 we prove the small-time limit (8) of conjugated dynamics; in Section 4 we show a density property of trigonometric functions; in Section 5 we prove that the velocity of the wave profile is globally approximately controllable in small times; finally, in Section 6 we put things together and conclude the proof of Theorem 1.
## 2 Preliminaries
Let
\[\varphi_{k}(x)=(2\pi)^{-d/2}e^{ikx},\quad\lambda_{k}=|k|^{2},\quad k\in\mathbb{Z}^ {d}, \tag{10}\]
be the eigenfunctions and eigenvalues of \(-\Delta\) on \(\mathbb{T}^{d}\) which satisfy
\[-\Delta\varphi_{k}=\lambda_{k}\varphi_{k},\quad\langle\varphi_{k},\varphi_{j}\rangle=\begin{cases}1,&j=k,\\ 0,&j\neq k,\end{cases}\quad k,j\in\mathbb{Z}^{d}, \tag{11}\]
where \(\langle\cdot,\cdot\rangle\) denotes the scalar product of \(L^{2}(\mathbb{T}^{d})\). The domains of the linear operators \(\Delta\) and \(\mathcal{A}\) are respectively \(H^{2}(\mathbb{T}^{d})\) and \(H^{2}\times H^{1}(\mathbb{T}^{d})\), where
\[H^{k}(\mathbb{T}^{d})=\left\{\phi\in L^{2}(\mathbb{T}^{d})\mid\sum_{j\in\mathbb{Z}^{d}}\lambda_{j}^{k/2}|\langle\phi,\varphi_{j}\rangle|^{2}<\infty\right\}.\]
Using the spectral decomposition (11) of \(\Delta\), for any \((w_{0},\dot{w}_{0})\in H^{1}\times L^{2}(\mathbb{T}^{d})\) and \(t\in\mathbb{R}\), one can write \((w(t),\frac{\partial}{\partial t}w(t)):=e^{t\mathcal{A}}(w_{0},\dot{w}_{0})\) as a converging series in \(H^{1}\times L^{2}(\mathbb{T}^{d})\):
\[w(t)=(\langle w_{0},\varphi_{0}\rangle+\langle\dot{w}_{0},\varphi_{0}\rangle t )\,\varphi_{0}+\sum_{k\in\mathbb{Z}^{d}\setminus\{0\}}\left(\langle w_{0}, \varphi_{k}\rangle\cos(\sqrt{\lambda_{k}}t)+\langle\dot{w}_{0},\varphi_{k} \rangle\frac{\sin(\sqrt{\lambda_{k}}t)}{\sqrt{\lambda_{k}}}\right)\varphi_{k}, \tag{12}\]
\[\frac{\partial}{\partial t}w(t)=\langle\dot{w}_{0},\varphi_{0}\rangle\varphi_ {0}+\sum_{k\in\mathbb{Z}^{d}\setminus\{0\}}\left(-\sqrt{\lambda_{k}}\langle w _{0},\varphi_{k}\rangle\sin(\sqrt{\lambda_{k}}t)+\langle\dot{w}_{0},\varphi_{ k}\rangle\cos(\sqrt{\lambda_{k}}t)\right)\varphi_{k}. \tag{13}\]
We start by showing that the free evolution can be used to instantaneously change the initial profile into an arbitrary profile, if the initial velocity is arbitrary.
**Proposition 2**.: _For any initial and final profile \(w_{0},w_{1}\in H^{1}(\mathbb{T}^{d})\) such that_
\[\langle w_{j},\varphi_{k}\rangle_{L^{2}}=0\text{ for all but a finite set of }k\in\mathbb{Z}^{d},\quad j=0,1 \tag{14}\]
_and any positive time \(T>0\), there exist a smaller time \(\tau\in[0,T)\) and an initial velocity \(f\in L^{2}(\mathbb{T}^{d})\) such that the solution \(w\) of (1) associated with the identically zero control \(p=0\) with initial condition \(\left(w(t=0),\frac{\partial}{\partial t}w(t=0)\right)=(w_{0},f)\) satisfies \(w(\tau)=w_{1}\) in \(H^{1}(\mathbb{T}^{d})\)._
Proof.: We define
\[f_{\tau} =\left(\frac{\langle w_{1},\varphi_{0}\rangle-\langle w_{0}, \varphi_{0}\rangle}{\tau}\right)\varphi_{0}\] \[+\sum_{k\in\mathbb{Z}^{d}\setminus\{0\}}\left(\frac{\langle w_{1 },\varphi_{k}\rangle}{\sin(\sqrt{\lambda_{k}}\tau)/\sqrt{\lambda_{k}}}-\frac{ \langle w_{0},\varphi_{k}\rangle}{\sin(\sqrt{\lambda_{k}}\tau)/\sqrt{\lambda_{k }}}\cos(\sqrt{\lambda_{k}}\tau)\right)\varphi_{k}.\]
Notice that we can choose \(\tau<T\) such that \(f_{\tau}\) is well-defined: indeed, \(f_{\tau}\) is a sum over a finite subset of \(\mathbb{Z}^{d}\) (cf. (14)), and denoting the latter as
\[\mathcal{K}:=\{k\in\mathbb{Z}^{d}\mid\langle w_{0},\varphi_{k}\rangle\neq 0, \text{ or }\langle w_{1},\varphi_{k}\rangle\neq 0\},\]
it suffices to choose \(\tau<T\) such that
\[\tau\neq\frac{n\pi}{|k|},\quad\forall n\in\mathbb{N},k\in\mathcal{K}.\]
In this way \(f=f_{\tau}\) is well-defined, belongs to \(L^{2}(\mathbb{T}^{d})\), and thanks to (12) one easily checks that \(w(\tau)=\sum_{k\in\mathbb{Z}^{d}}\langle w_{1},\varphi_{k}\rangle\varphi_{k}=w _{1}\).
We now recall the well-posedness of (6),(3) (for a proof, see e.g. [4, Proposition 2]).
**Proposition 3**.: _Given \(T>0\), \(p\in L^{1}([0,T],\mathbb{R}^{2d+1})\), and \(W_{0}=(w_{0},\dot{w}_{0})\in H^{1}\times L^{2}(\mathbb{T}^{d})\), there exists a unique weak solution \(\mathcal{R}(t,W_{0},p)\) of (6), that is, a unique function \(\mathcal{R}(\cdot,W_{0},p)\in C^{0}([0,T],H^{1}\times L^{2}(\mathbb{T}^{d}))\) satisfying the following equality in \(H^{1}\times L^{2}(\mathbb{T}^{d})\)_
\[\mathcal{R}(t,W_{0},p)=e^{t\mathcal{A}}W_{0}+\int_{0}^{t}e^{(t-s)\mathcal{A}} \left(\sum_{j=0}^{2d}p_{j}(s)\mu_{j}(x)\mathcal{B}\right)\mathcal{R}(s,W_{0},p)ds. \tag{15}\]
_Moreover, there exists \(C=C(\|p\|_{L^{1}},T)>0\) such that for any other \(W_{1}=(w_{1},\dot{w}_{1})\in H^{1}\times L^{2}(\mathbb{T}^{d})\) the following holds_
\[\|\mathcal{R}(\cdot,W_{0},p)-\mathcal{R}(\cdot,W_{1},p)\|_{C^{0}([0,T],H^{1} \times L^{2}(\mathbb{T}^{d}))}\leq C\|W_{0}-W_{1}\|_{H^{1}\times L^{2}( \mathbb{T}^{d})}. \tag{16}\]
## 3 Small-time conjugated dynamics
In this section we prove limit (8).
**Proposition 4**.: _Let \(\xi,\psi\in L^{\infty}(\mathbb{T}^{d})\). For any \((w_{0},\dot{w}_{0})\in H^{1}\times L^{2}(\mathbb{T}^{d})\), the following limits hold in \(H^{1}\times L^{2}(\mathbb{T}^{d})\)_
\[\lim_{\tau\to 0}\exp\left(\tau\left(\mathcal{A}+\frac{\xi}{\tau}\mathcal{B}\right)\right)\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix} =\begin{pmatrix}w_{0}\\ \dot{w}_{0}+\xi w_{0}\end{pmatrix}, \tag{17}\] \[\lim_{\tau\to 0}e^{-\tau^{-1/2}\psi\mathcal{B}}e^{\tau\mathcal{A}}e^{\tau^{-1/2}\psi\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix} =\begin{pmatrix}w_{0}\\ \dot{w}_{0}-\psi^{2}w_{0}\end{pmatrix}. \tag{18}\]
Proof of (17).: Using the explicit expressions for the free dynamics (12),(13), one sees that \(\mathcal{A}\) generates a strongly continuous group \(\{e^{t\mathcal{A}}\}_{t\in\mathbb{R}}\) of bounded operators on \(H^{1}\times L^{2}(\mathbb{T}^{d})\). Since \(\frac{\xi}{\tau}\mathcal{B}\) is bounded for any \(\tau\neq 0\), so does \(M_{\tau}:=\mathcal{A}+\frac{\xi}{\tau}\mathcal{B}\) (see, e.g., [12, Section 2.1]). Consider then
\[V_{\tau}(t):=e^{tM_{\tau}}W_{0}=\exp\left(t\left(\mathcal{A}+\frac{\xi}{\tau} \mathcal{B}\right)\right)W_{0},\]
i.e. its associated group at time \(t\), applied to \(W_{0}=(w_{0},\dot{w}_{0})\in H^{1}\times L^{2}(\mathbb{T}^{d})\): we have to prove that \(V_{\tau}(\tau)\) tends to \((w_{0},\dot{w}_{0}+\xi w_{0})\) in \(H^{1}\times L^{2}(\mathbb{T}^{d})\) when \(\tau\to 0\). We compute \(V_{\tau}(t)\) by considering its Dyson expansion: this is obtained by iterating (15), where the bounded time-dependent perturbative term \(\sum_{j=0}^{2d}p_{j}(t)\mu_{j}(x)\mathcal{B}\) is replaced with the bounded time-independent perturbative term \(\frac{\xi}{\tau}\mathcal{B}\). This procedure gives
\[V_{\tau}(t)=e^{t\mathcal{A}}W_{0}+\sum_{j=1}^{\infty}\frac{e^{t\mathcal{A}}B( t)_{j}}{\tau^{j}}, \tag{19}\]
where
\[B(t)_{j}=\int_{0}^{t}\!\!\int_{0}^{t_{1}}\!\!\dots\!\int_{0}^{t_{j-1}}\left( \prod_{i=1}^{j}e^{-t_{i}\mathcal{A}}\xi\mathcal{B}e^{t_{i}\mathcal{A}}\right) dt_{1}\dots dt_{j}W_{0},\]
and the series (19) converges in \(H^{1}\times L^{2}(\mathbb{T}^{d})\) (see e.g. [12, Section 2.1], or analogously (20) below). We claim that
\[B(t)_{j}=\frac{1}{j!}\left(\int_{0}^{t}e^{-s\mathcal{A}}\xi\mathcal{B}e^{s \mathcal{A}}ds\right)^{j}W_{0}, \tag{20}\]
and prove it by induction. The case \(j=1\) is obvious. Now, by inductive hypothesis, we can write
\[B(t)_{j}=\int_{0}^{t}\frac{1}{(j-1)!}\left(\int_{0}^{t_{1}}e^{-s\mathcal{A}} \xi\mathcal{B}e^{s\mathcal{A}}ds\right)^{j-1}e^{-t_{1}\mathcal{A}}\xi \mathcal{B}e^{t_{1}\mathcal{A}}dt_{1}W_{0}.\]
We make the change of variable
\[v(t_{1})=\int_{0}^{t_{1}}e^{-s\mathcal{A}}\xi\mathcal{B}e^{s\mathcal{A}}ds,\]
which gives
\[B(t)_{j}=\frac{1}{(j-1)!}\int_{0}^{t}v^{j-1}dvW_{0}=\frac{1}{j!}\left(\int_{0}^ {t}e^{-s\mathcal{A}}\xi\mathcal{B}e^{s\mathcal{A}}ds\right)^{j}W_{0},\]
and the claim is proved. From this, we see that
\[V_{\tau}(\tau)\to\left(\sum_{j=0}^{\infty}\frac{(\xi\mathcal{B})^{j}}{j!} \right)W_{0}=(I+\xi\mathcal{B})W_{0}=\begin{pmatrix}w_{0}\\ \dot{w}_{0}+\xi w_{0}\end{pmatrix},\quad\tau\to 0,\]
in \(H^{1}\times L^{2}(\mathbb{T}^{d})\) (where we used that \(\mathcal{B}^{2}=0\)), which concludes the proof.
Proof of (18).: Since \(\mathcal{A}\) generates a strongly continuous group of bounded operators on \(H^{1}\times L^{2}(\mathbb{T}^{d})\) and \(\tau^{-1/2}\psi\mathcal{B}\) is bounded for all \(\tau>0\), so does
\[L_{\tau}:=e^{-\tau^{-1/2}\psi\mathcal{B}}\mathcal{A}e^{\tau^{-1/2}\psi\mathcal{B}}.\]
Consider then \(W_{\tau}(t):=e^{tL_{\tau}}W_{0}\), i.e. its associated group at time \(t\), applied to \(W_{0}=(w_{0},\dot{w}_{0})\in H^{1}\times L^{2}(\mathbb{T}^{d})\): we have
\[W_{\tau}(t)=\exp\left(e^{-\tau^{-1/2}\psi\mathcal{B}}t\mathcal{A}e^{\tau^{-1/2} \psi\mathcal{B}}\right)W_{0}. \tag{21}\]
By computing the time derivative of \(e^{\tau^{-1/2}\psi\mathcal{B}}W_{\tau}(t)\), we see that
\[W_{\tau}(t)=e^{-\tau^{-1/2}\psi\mathcal{B}}e^{t\mathcal{A}}e^{\tau^{-1/2}\psi \mathcal{B}}W_{0}.\]
To conclude the proof, we are then left to prove that the RHS of (21) computed at time \(t=\tau\) tends to \((w_{0},\dot{w}_{0}-\psi^{2}w_{0})\) in \(H^{1}\times L^{2}(\mathbb{T}^{d})\), as \(\tau\to 0\).
In order to do so, we first notice that
\[e^{-\tau^{-1/2}\psi\mathcal{B}}t\mathcal{A}e^{\tau^{-1/2}\psi\mathcal{B}}=\begin{pmatrix}t\tau^{-1/2}\psi&t\\ -t\tau^{-1}\psi^{2}+t\Delta&-t\tau^{-1/2}\psi\end{pmatrix},\]
where we used that \(\mathcal{B}^{2}=0\) to compute the exponentials, which allows us to write
\[W_{\tau}(t)=\exp\left(t\left(\mathcal{A}+\frac{\psi}{\tau^{1/2}}[\mathcal{A},\mathcal{B}]+\frac{\psi^{2}}{2\tau}[[\mathcal{A},\mathcal{B}],\mathcal{B}] \right)\right)W_{0},\]
where
\[[\mathcal{A},\mathcal{B}]=\begin{pmatrix}I&0\\ 0&-I\end{pmatrix},\quad[[\mathcal{A},\mathcal{B}],\mathcal{B}]=\begin{pmatrix} 0&0\\ -2I&0\end{pmatrix}.\]
Notice moreover that \([\mathcal{A},\mathcal{B}],[[\mathcal{A},\mathcal{B}],\mathcal{B}]\) are bounded. We are thus left to prove that \(W_{\tau}(\tau)\) tends to \((w_{0},\dot{w}_{0}-\psi^{2}w_{0})\) in \(H^{1}\times L^{2}(\mathbb{T}^{d})\), as \(\tau\to 0\). We do it by considering the Dyson expansion of \(W_{\tau}(t)\): this is obtained by iterating (15), where the bounded time-dependent perturbative term \(\sum_{j=0}^{2d}p_{j}(t)\mu_{j}(x)\mathcal{B}\) is replaced with the bounded time-independent perturbative term \(\frac{\psi}{\tau^{1/2}}[\mathcal{A},\mathcal{B}]+\frac{\psi^{2}}{2\tau}[[ \mathcal{A},\mathcal{B}],\mathcal{B}]\). This procedure gives
\[W_{\tau}(t)=e^{t\mathcal{A}}W_{0}+\sum_{j=1}^{\infty}\left(\frac{e^{t\mathcal{A}}}{\tau^{j}}C(t)_{j}+\sum_{i=1}^{j}\frac{e^{t\mathcal{A}}}{\tau^{j-\frac{i}{2}}}R(t)_{j,i}\right), \tag{22}\]
where
\[C(t)_{j}:=\int_{0}^{t}\!\!\int_{0}^{t_{1}}\!\!\cdots\!\int_{0}^{t_{j-1}}\left(\prod_{i=1}^{j}e^{-t_{i}\mathcal{A}}\frac{\psi^{2}}{2}[[\mathcal{A},\mathcal{B}],\mathcal{B}]e^{t_{i}\mathcal{A}}\right)dt_{1}\ldots dt_{j}W_{0},\]
and
\[R(t)_{j,i}:=\sum_{\{k_{1},\ldots,k_{i}\}\subset\{1,\ldots,j\}}\int_{0}^{t}\!\!\int_{0}^{t_{1}}\!\!\cdots\!\int_{0}^{t_{j-1}}\left(\prod_{k=1}^{j}e^{-t_{k}\mathcal{A}}R_{k}e^{t_{k}\mathcal{A}}\right)dt_{1}\ldots dt_{j}W_{0},\]
where \(R_{k}=\psi[\mathcal{A},\mathcal{B}]\) for \(k\in\{k_{1},\ldots,k_{i}\}\) and \(R_{k}=\frac{\psi^{2}}{2}[[\mathcal{A},\mathcal{B}],\mathcal{B}]\) otherwise. Arguing exactly as for (20), one has \(C(t)_{j}=\frac{1}{j!}\left(\int_{0}^{t}e^{-s\mathcal{A}}\frac{\psi^{2}}{2}[[\mathcal{A},\mathcal{B}],\mathcal{B}]e^{s\mathcal{A}}ds\right)^{j}W_{0}\); hence, as \(\tau\to 0\), the terms involving \(C(\tau)_{j}\) in (22) converge to \(\exp\left(\frac{\psi^{2}}{2}[[\mathcal{A},\mathcal{B}],\mathcal{B}]\right)W_{0}=(w_{0},\dot{w}_{0}-\psi^{2}w_{0})\) (where we used that \([[\mathcal{A},\mathcal{B}],\mathcal{B}]^{2}=0\)), while the remainder terms involving \(R(\tau)_{j,i}\) vanish. This shows that \(W_{\tau}(\tau)\) tends to \((w_{0},\dot{w}_{0}-\psi^{2}w_{0})\) in \(H^{1}\times L^{2}(\mathbb{T}^{d})\), which concludes the proof.
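As an illustration (not needed for the proof), limit (8) can be checked numerically on a spatial discretization of the torus: below, \(\Delta\) is replaced by a periodic finite-difference Laplacian on a one-dimensional grid and \(\psi\) by a diagonal multiplication operator. The grid size, the choice \(\psi(x)=\cos x\), and the initial data are illustrative assumptions; this is only a sketch of the conjugation \(e^{-\tau^{-1/2}\psi\mathcal{B}}e^{\tau\mathcal{A}}e^{\tau^{-1/2}\psi\mathcal{B}}\).

```python
import numpy as np
from scipy.linalg import expm

# Periodic grid on the one-dimensional torus [0, 2*pi): an illustrative discretization.
n = 64
x = 2 * np.pi * np.arange(n) / n
h = 2 * np.pi / n

# Discrete periodic Laplacian and the block operators A, B of (7).
shift = np.roll(np.eye(n), 1, axis=0)
lap = (shift + shift.T - 2 * np.eye(n)) / h**2
Z = np.zeros((n, n))
A = np.block([[Z, np.eye(n)], [lap, Z]])
psi_vals = np.cos(x)                                   # illustrative choice of psi
psiB = np.block([[Z, Z], [np.diag(psi_vals), Z]])      # psi * B, nilpotent

# Illustrative smooth initial state (w0, w0_dot) and the expected limit (w0, w0_dot - psi^2 w0).
w0 = np.sin(x) + 0.5 * np.cos(2 * x)
w0_dot = np.cos(3 * x)
W0 = np.concatenate([w0, w0_dot])
target = np.concatenate([w0, w0_dot - psi_vals**2 * w0])

for tau in [1e-1, 1e-2, 1e-3]:
    c = tau ** (-0.5)
    W = expm(-c * psiB) @ expm(tau * A) @ expm(c * psiB) @ W0
    err = np.linalg.norm(W - target) / np.linalg.norm(target)
    print(f"tau = {tau:.0e}   relative error = {err:.2e}")   # error shrinks as tau -> 0
```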
## 4 Density of saturation space
We consider the \((2d+1)\)-dimensional vector subspace of \(L^{2}(\mathbb{T}^{d})\) spanned by the spatial control functions:
\[\mathcal{H}_{0}:=\operatorname{span}\{1,\cos(e_{1}x),\sin(e_{1}x),\ldots,\cos(e _{d}x),\sin(e_{d}x)\},\]
and define \(\mathcal{H}_{j},j\geq 1,\) as the largest vector space whose elements \(\phi\) can be written as
\[\phi=\phi_{0}-\sum_{i=1}^{N}\phi_{i}^{2},\quad\phi_{0},\ldots,\phi_{N}\in\mathcal{H}_{j-1},\quad N\in\mathbb{N}.\]
We moreover define the saturation space as \(\mathcal{H}_{\infty}=\cup_{j=0}^{\infty}\mathcal{H}_{j}.\)
**Lemma 5**.: _The vector space \(\mathcal{H}_{\infty}\) is dense in \(L^{2}(\mathbb{T}^{d}).\)_
A more general version of this Lemma appeared in [20, Proposition 2.5] for the study of nonlinear parabolic equations with additive controls. The proof here is simpler, and we furnish it for completeness.
Proof.: We prove the statement by showing that
\[\operatorname{span}\{\cos(kx),\sin(kx)\mid k\in\mathbb{Z}^{d}\}\subset \mathcal{H}_{\infty}.\]
From
\[\cos(2kx)=1-2\sin^{2}(kx),\quad-\cos(2kx)=1-2\cos^{2}(kx)\]
we deduce that \(\pm\cos(2e_{j}x)\in\mathcal{H}_{1}\), and also that \(\pm\cos^{2}(e_{j}x),\pm\sin^{2}(e_{j}x)\in\mathcal{H}_{1}\), for all \(j=1,\ldots,d.\) From
\[\pm\sin(2kx)=1-(\sin(kx)\mp\cos(kx))^{2}\]
we deduce that \(\pm\sin(2e_{j}x)\in\mathcal{H}_{1}\) for all \(j=1,\ldots,d.\) Finally, from
\[\pm\cos((k+m)x)=1-\frac{1}{2}(\cos(kx)\mp\cos(mx))^{2}-\frac{1}{2}(\sin(kx)\pm \sin(mx))^{2}\]
we thus deduce that \(\pm\cos(kx)\in\mathcal{H}_{\infty}\) for all \(k\in\mathbb{Z}^{d}\), and from
\[\pm\sin((k+m)x)=1-\frac{1}{2}(\sin(kx)\mp\cos(mx))^{2}-\frac{1}{2}(\cos(kx) \mp\sin(mx))^{2}\]
we thus deduce that \(\pm\sin(kx)\in\mathcal{H}_{\infty}\) for all \(k\in\mathbb{Z}^{d}\), concluding the proof.
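The product-to-square identities used in the last two steps can also be checked symbolically; the following sketch (in one space dimension, with illustrative integer frequencies, and assuming sympy is available) verifies the identities with the upper choice of signs.

```python
import sympy as sp

x = sp.symbols('x', real=True)
k, m = 3, 5   # illustrative integer frequencies

# cos((k+m)x) = 1 - 1/2 (cos(kx) - cos(mx))^2 - 1/2 (sin(kx) + sin(mx))^2
lhs_cos = sp.cos((k + m) * x)
rhs_cos = 1 - sp.Rational(1, 2) * (sp.cos(k * x) - sp.cos(m * x))**2 \
            - sp.Rational(1, 2) * (sp.sin(k * x) + sp.sin(m * x))**2
print(sp.simplify(lhs_cos - rhs_cos))   # 0

# sin((k+m)x) = 1 - 1/2 (sin(kx) - cos(mx))^2 - 1/2 (cos(kx) - sin(mx))^2
lhs_sin = sp.sin((k + m) * x)
rhs_sin = 1 - sp.Rational(1, 2) * (sp.sin(k * x) - sp.cos(m * x))**2 \
            - sp.Rational(1, 2) * (sp.cos(k * x) - sp.sin(m * x))**2
print(sp.simplify(lhs_sin - rhs_sin))   # 0
```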
## 5 Small-time global approximate controllability of the velocity
In this section we prove that the velocity can be globally and approximately controlled in small times, without changing the profile.
**Proposition 6**.: _Consider an initial state \((w_{0},\dot{w}_{0})\in H^{1}\times L^{2}(\mathbb{T}^{d})\) satisfying (4), and any final velocity \(f\in L^{2}(\mathbb{T}^{d})\). Then, for any error and time \(\varepsilon,T>0\) there exist a smaller time \(\tau\in[0,T)\) and a piecewise constant control law \(p:[0,\tau]\to\mathbb{R}^{2d+1}\) such that the solution \(w\) of (1) associated with the control (2),(3) with initial condition \(\big{(}w(t=0),\frac{\partial}{\partial t}w(t=0)\big{)}=(w_{0},\dot{w}_{0})\) satisfies_
\[\left\|\left(w(\cdot,\tau),\frac{\partial}{\partial t}w(\cdot,\tau)\right)-(w _{0},f)\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<\varepsilon.\]
The core of the proof of Proposition 6 is given in the following Lemma.
**Lemma 7**.: _Let \((w_{0},\dot{w}_{0})\in H^{1}\times L^{2}(\mathbb{T}^{d})\) and \(\phi\in L^{2}(\mathbb{T}^{d})\). Then, for any \(\varepsilon,T>0\), there exist \(\tau\in[0,T)\) and \(p:[0,\tau]\to\mathbb{R}^{2d+1}\) piecewise constant such that the solution \(w\) of (1) associated with the control (2),(3) and with the initial condition \(\big{(}w(t=0),\frac{\partial}{\partial t}w(t=0)\big{)}=(w_{0},\dot{w}_{0})\) satisfies_
\[\left\|\left(\begin{array}{c}w(\cdot,\tau)\\ \frac{\partial}{\partial t}w(\cdot,\tau)\end{array}\right)-\binom{w_{0}}{\dot{ w}_{0}+w_{0}\phi}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<\varepsilon.\]
Let us show how Proposition 6 follows from Lemma 7.
Proof of Proposition 6.: Let \(f\) be the final velocity in the statement, and define
\[\phi_{\varepsilon}=\frac{f-\dot{w}_{0}}{w_{0}}\,\chi_{\mathbb{T}^{d}\setminus Z_{\varepsilon}(w_{0})}\in L^{2}(\mathbb{T}^{d}), \tag{24}\]
where
\[Z_{\varepsilon}(w_{0}):=\{x\in\mathbb{T}^{d}\mid\operatorname{dist}(x,Z(w_{0}) )<\varepsilon\},\quad Z(w_{0}):=\{z\in\mathbb{T}^{d}\mid w_{0}(z)=0\},\]
and \(\chi_{S}\) is the characteristic function of any subset \(S\subset\mathbb{T}^{d}\). Then one has
\[\|(\dot{w}_{0}+w_{0}\phi_{\varepsilon})-f\|_{L^{2}(\mathbb{T}^{d})}\leq\|\dot {w}_{0}\|_{L^{2}(Z_{\varepsilon}(w_{0}))}+\|f\|_{L^{2}(Z_{\varepsilon}(w_{0}) )}\to 0,\quad\text{as }\varepsilon\to 0,\]
thanks to the fact that, since by hypothesis (cf. (4)) \(w_{0}\) is supported only on a finite set of Fourier modes, the Lebesgue measure of \(Z_{\varepsilon}(w_{0})\) tends to zero when \(\varepsilon\) tends to zero. Hence, Proposition 6 follows from Lemma 7 by choosing \(\phi=\phi_{\varepsilon}\) with \(\varepsilon\) small enough.
Before proving Lemma 7, let us recall that the concatenation \(q*p\) of two scalar control laws \(p:[0,T_{1}]\to\mathbb{R}\), \(q:[0,T_{2}]\to\mathbb{R}\) is the scalar control law defined on \([0,T_{1}+T_{2}]\) as follows
\[(q*p)(t)=\begin{cases}p(t),&t\in[0,T_{1}]\\ q(t-T_{1}),&t\in(T_{1},T_{1}+T_{2}],\end{cases}\]
and the definition extends to controls with values in \(\mathbb{R}^{2d+1}\) componentwise. Denoting with \(\mathcal{R}(t,W_{0},p)\) the solution \((w(t),\frac{\partial}{\partial t}w(t))\) of (6) at time \(t\), associated with a control \(p\) and the initial condition \(W_{0}=(w_{0},\dot{w}_{0})\), we will often use the fact that
\[\mathcal{R}(T_{1}+t,W_{0},q*p)=\mathcal{R}(t,\mathcal{R}(T_{1},W_{0},p),q), \quad t>0.\]
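For concreteness, the concatenation of control laws translates directly into code; representing a control law as a Python callable is an illustrative choice, not something prescribed by the paper.

```python
def concatenate(q, T2, p, T1):
    """Return the control q*p: apply p on [0, T1], then q shifted to (T1, T1 + T2]."""
    def q_star_p(t):
        return p(t) if t <= T1 else q(t - T1)
    return q_star_p

# Example: a constant control followed by the zero control (extend componentwise for vector values).
p = lambda t: 1.0                       # defined on [0, 1.0]
q = lambda t: 0.0                       # defined on [0, 0.5]
qp = concatenate(q, 0.5, p, 1.0)        # defined on [0, 1.5]; qp(0.3) == 1.0, qp(1.2) == 0.0
```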
Proof of Lemma 7.: Let us start by assuming that the following property holds for any \(n\in\mathbb{N}\):
1. \((P_{n})\): for any \((w_{0},\dot{w}_{0})\in H^{1}\times L^{2}(\mathbb{T}^{d})\), \(\phi\in\mathcal{H}_{n}\), and any \(\varepsilon,T>0\), there exist \(\tau\in[0,T)\) and \(p:[0,\tau]\to\mathbb{R}^{2d+1}\) piecewise constant such that the solution \(w\) of (1) associated with the control (2) and with the initial condition \((w_{0},\dot{w}_{0})\) satisfies \[\left\|\begin{pmatrix}w(\cdot,\tau)\\ \frac{\partial}{\partial t}w(\cdot,\tau)\end{pmatrix}-e^{\phi\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<\varepsilon.\]
Noticing that, thanks to the fact that \(\mathcal{B}^{2}=0\), one has
\[e^{\phi\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}=\begin{pmatrix}w_{0}\\ \dot{w}_{0}+w_{0}\phi\end{pmatrix},\]
the property \((P_{n})\) combined with the density property proved in Lemma 5 implies at once Lemma 7.
We are thus left to prove the property \((P_{n})\). An analogous property appeared in [14, Theorem 2.2] in the study of nonlinear Schrodinger equations with bilinear control. We furnish the proof also here for completeness; we do it by induction on \(n\).
**Basis of induction:**\(n=0\)
If \(\phi\in\mathcal{H}_{0}\), there exists \((p_{0},\dots,p_{2d})\in\mathbb{R}^{2d+1}\) such that \(\phi(x)=\sum_{j=0}^{2d}p_{j}\mu_{j}(x)\). Consider then the solution of (6) associated with the constant control \(p^{\tau}:=(p_{0},\dots,p_{2d})/\tau\in\mathbb{R}^{2d+1}\) and with the initial condition \((w_{0},\dot{w}_{0})\), that is,
\[\mathcal{R}(t,\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix},p^{\tau})=\exp\left(t\left(\mathcal{A}+\frac{\phi}{ \tau}\mathcal{B}\right)\right)\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}.\]
Applying (17) with \(\xi=\phi\), we find \(\tau\in[0,T)\) such that
\[\left\|\mathcal{R}(\tau,\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix},p^{\tau})-e^{\phi\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<\varepsilon,\]
which proves the desired property.
**Inductive step:**\(n\Rightarrow n+1\)
Assuming that \((P_{n})\) holds, we prove \((P_{n+1})\). If \(\phi\in\mathcal{H}_{n+1}\), there exist \(N\in\mathbb{N}\) and \(\phi_{0},\dots,\phi_{N}\in\mathcal{H}_{n}\) such that \(\phi=\phi_{0}-\sum_{i=1}^{N}\phi_{i}^{2}\). Consider, e.g., \(\phi_{1}\): thanks to (18), we can fix \(\gamma\in[0,T/3)\) small enough such that
\[\left\|e^{-\gamma^{-1/2}\phi_{1}B}e^{\gamma A}e^{\gamma^{-1/2}\phi_{1}B}\begin{pmatrix} w_{0}\\ \dot{w}_{0}\end{pmatrix}-e^{-\phi_{1}^{2}B}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<\varepsilon/2.\]
Thanks to the inductive hypothesis, for any \(\epsilon,T,\gamma>0\) there exist \(\delta\in[0,T/3)\) and a piecewise constant control \(p^{\delta,\gamma}:[0,\delta]\rightarrow\mathbb{R}^{2d+1}\) such that
\[\left\|\mathcal{R}(\delta,\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix},p^{\delta,\gamma})-e^{\gamma^{-1/2}\phi_{1}B} \begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<\epsilon. \tag{25}\]
Consider now a zero control \(0|_{[0,\gamma]}=(0,\ldots,0):[0,\gamma]\rightarrow\mathbb{R}^{2d+1}\) (that is, a free evolution), applied on a time interval of size \(\gamma\): thanks to (25) and the fact that, for any \(t\in\mathbb{R}\), \(e^{t\mathcal{A}}\) is bounded, there exists \(C=C(\gamma)\) such that
\[\left\|\mathcal{R}(\delta+\gamma,\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix},0|_{[0,\gamma]}*p^{\delta,\gamma})-e^{\gamma A}e^{ \gamma^{-1/2}\phi_{1}B}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}\] \[= \left\|e^{\gamma A}\mathcal{R}(\delta,\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix},p^{\delta,\gamma})-e^{\gamma A}e^{\gamma^{-1/2}\phi_{ 1}B}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<C\epsilon.\]
Now, we use again the inductive hypothesis to deduce that there exist \(\delta^{\prime}\in[0,T/3)\) and a piecewise constant control \(p^{\delta^{\prime},\gamma}:[0,\delta^{\prime}]\rightarrow\mathbb{R}^{2d+1}\) such that
\[\left\|\mathcal{R}(\delta^{\prime},e^{\gamma A}e^{\gamma^{-1/2}\phi_{1}B} \begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix},p^{\delta^{\prime},\gamma})-e^{-\gamma^{-1/2}\phi_{1} B}e^{\gamma A}e^{\gamma^{-1/2}\phi_{1}B}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<\epsilon.\]
Then, thanks to (16), there exists \(C^{\prime}=C^{\prime}(\|p^{\delta^{\prime},\gamma}\|_{L^{1}},\delta^{\prime})\) such that
\[\left\|\mathcal{R}(\delta+\gamma+\delta^{\prime},\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix},p^{\delta^{\prime},\gamma}*0|_{[0,\gamma]}*p^{\delta,\gamma})-e^{-\phi_{1}^{2}\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}\]
\[\leq\left\|\mathcal{R}(\delta^{\prime},\mathcal{R}(\delta+\gamma,\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix},0|_{[0,\gamma]}*p^{\delta,\gamma}),p^{\delta^{\prime},\gamma})-\mathcal{R}(\delta^{\prime},e^{\gamma\mathcal{A}}e^{\gamma^{-1/2}\phi_{1}\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix},p^{\delta^{\prime},\gamma})\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}\]
\[+\left\|\mathcal{R}(\delta^{\prime},e^{\gamma\mathcal{A}}e^{\gamma^{-1/2}\phi_{1}\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix},p^{\delta^{\prime},\gamma})-e^{-\gamma^{-1/2}\phi_{1}\mathcal{B}}e^{\gamma\mathcal{A}}e^{\gamma^{-1/2}\phi_{1}\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}\]
\[+\left\|e^{-\gamma^{-1/2}\phi_{1}\mathcal{B}}e^{\gamma\mathcal{A}}e^{\gamma^{-1/2}\phi_{1}\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}-e^{-\phi_{1}^{2}\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}\]
\[\leq C^{\prime}C\epsilon+\epsilon+\varepsilon/2.\]
Choosing \(\epsilon>0\) small enough such that \(C^{\prime}C\epsilon+\epsilon<\varepsilon/2\), we have then proved that the piecewise constant control \(p^{\delta^{\prime},\gamma}*0|_{[0,\gamma]}*p^{\delta,\gamma}\) steers the initial state \((w_{0},\dot{w}_{0})\)\(\varepsilon\)-close to the state
\[\begin{pmatrix}w_{0}\\ \dot{w}_{0}-\phi_{1}^{2}w_{0}\end{pmatrix}=e^{-\phi_{1}^{2}B}\begin{pmatrix}w_{0 }\\ \dot{w}_{0}\end{pmatrix}\]
in time \(\tau:=\delta^{\prime}+\gamma+\delta<T\). We can now repeat the same argument w.r.t. \(\phi_{2}\) (arguing as if we were starting from the initial state \((w_{0},\dot{w}_{0}-\phi_{1}^{2}w_{0})\)) and prove that the system can be steered arbitrarily close to the state \((w_{0},\dot{w}_{0}-\phi_{1}^{2}w_{0}-\phi_{2}^{2}w_{0})\) in arbitrarily small times, and iteratively to \((w_{0},\dot{w}_{0}-\sum_{i=1}^{N}\phi_{i}^{2}w_{0})\). To conclude, by inductive hypothesis, there exists a piecewise constant control \(p\) steering the state \((w_{0},\dot{w}_{0}-\sum_{i=1}^{N}\phi_{i}^{2}w_{0})\) arbitrarily close to the state
\[e^{\phi_{0}\mathcal{B}}\begin{pmatrix}w_{0}\\ \dot{w}_{0}-\sum_{i=1}^{N}\phi_{i}^{2}w_{0}\end{pmatrix}=e^{(\phi_{0}-\sum_{i=1 }^{N}\phi_{i}^{2})B}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}=e^{\phi B}\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}\]
in arbitrarily small times, which ends the proof of the property \((P_{n+1})\) and concludes the induction.
## 6 Small-time global approximate controllability
In this section, we start by proving the small-time version of Theorem 1.
**Theorem 8**.: _Consider an initial state \((0,0)\neq(w_{0},\dot{w}_{0})\in H^{1}\times L^{2}(\mathbb{T}^{d})\) such that (4), or (5), holds. Then, for any final state \((w_{1},\dot{w}_{1})\in H^{1}\times L^{2}(\mathbb{T}^{d})\) and any error and time \(\varepsilon,T>0\), there exist a smaller time \(\tau\in[0,T)\) and a piecewise constant control law \(p:[0,\tau]\to\mathbb{R}^{2d+1}\) such that the solution \(w\) of (1) associated with the control (2),(3) and with the initial condition \(\left(w(t=0),\frac{\partial}{\partial t}w(t=0)\right)=(w_{0},\dot{w}_{0})\) satisfies_
\[\left\|\left(w(\cdot,\tau),\frac{\partial}{\partial t}w(\cdot,\tau)\right)-(w _{1},\dot{w}_{1})\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<\varepsilon.\]
Proof.: We first remark that:
* If the final profile \(w_{1}\) is such that \(w_{1}=0\), we consider instead the constant profile \(\widetilde{w}_{1}\equiv\epsilon\) with \(0<\epsilon\ll 1\): the approximate controllability towards \(\widetilde{w}_{1}\) implies, thanks to the triangular inequality, the approximate controllability towards \(w_{1}\).
* Moreover, for any final profile \(w_{1}\) and any \(\epsilon>0\), we consider a finite set \(\mathcal{K}_{\epsilon}\subset\mathbb{Z}^{d}\) such that \[\left\|\sum_{k\in\mathcal{K}_{\epsilon}}\langle w_{1},\varphi_{k}\rangle \varphi_{k}-w_{1}\right\|_{H^{1}(\mathbb{T}^{d})}<\epsilon.\] Then, thanks to the triangular inequality, the approximate controllability in \(H^{1}\times L^{2}\) towards \((\sum_{k\in\mathcal{K}_{\epsilon}}\langle w_{1},\varphi_{k}\rangle\varphi_{k },\dot{w}_{1})\) implies the approximate controllability in \(H^{1}\times L^{2}\) towards \((w_{1},\dot{w}_{1})\) if \(\epsilon\) is small enough.
We are thus left to show Theorem 8 for any final profile \(w_{1}\neq 0\) supported on a finite set of Fourier modes. We first show it under the assumption (4). We do it by combining three steps, which can be outlined as
\[\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix}\xrightarrow{(i)}\begin{pmatrix}w_{0}\\ f\end{pmatrix}\xrightarrow{(ii)}\begin{pmatrix}w_{1}\\ g\end{pmatrix}\xrightarrow{(iii)}\begin{pmatrix}w_{1}\\ \dot{w}_{1}\end{pmatrix},\]
and roughly read as follows:
* thanks to Proposition 6, we can change in small times the initial velocity \(\dot{w}_{0}\) of the wave profile into an arbitrary velocity \(f\), without changing the initial profile \(w_{0}\);
* thanks to Proposition 2, we can choose the velocity \(f\) in \((i)\) to be such that the associated free evolution sends the initial profile \(w_{0}\) into the final profile \(w_{1}\) in arbitrarily small times;
* finally, we use again Proposition 6 to change in arbitrarily small times the freely evolved velocity \(g\) into the final velocity \(\dot{w}_{1}\), without changing the final profile \(w_{1}\).
More precisely: let \(f\in L^{2}(\mathbb{T}^{d})\) be such that the solution to (1) with initial condition \((w_{0},f)\) and identically zero control on a time interval of size \(\gamma<T/3\) satisfies \(w(\gamma)=w_{1}\) (the existence of such \(f\) is guaranteed by Proposition 2). Denote moreover with \(g:=\frac{\partial}{\partial t}w(\gamma)\) the freely evolved velocity. Thanks to Proposition 6, for any \(\epsilon,T>0\) there exists a time \(\delta\in[0,T/3)\) and a piecewise constant control \(p:[0,\delta]\to\mathbb{R}^{2d+1}\) such that
\[\left\|\mathcal{R}(\delta,\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix},p)-\begin{pmatrix}w_{0}\\ f\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<\epsilon.\]
Since \(e^{tA}\) is bounded for any \(t\), there exists \(C=C(\gamma)\) such that
\[\left\|\mathcal{R}(\delta+\gamma,\begin{pmatrix}w_{0}\\ \dot{w}_{0}\end{pmatrix},0|_{[0,\gamma]}*p)-\begin{pmatrix}w_{1}\\ g\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}\] \[= \left\|e^{\gamma\mathcal{A}}\mathcal{R}(\delta,\begin{pmatrix}w_{ 0}\\ \dot{w}_{0}\end{pmatrix},p)-e^{\gamma\mathcal{A}}\begin{pmatrix}w_{0}\\ f\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<C\epsilon.\]
Now, thanks to Proposition 6, there exists a time \(\delta^{\prime}\in[0,T/3)\) and a piecewise constant control \(p^{\prime}:[0,\delta^{\prime}]\to\mathbb{R}^{2d+1}\) such that
\[\left\|\mathcal{R}(\delta^{\prime},\begin{pmatrix}w_{1}\\ g\end{pmatrix},p^{\prime})-\begin{pmatrix}w_{1}\\ \dot{w}_{1}\end{pmatrix}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}<\epsilon.\]
Then, thanks to (16), there exists \(C^{\prime}=C^{\prime}(\|p^{\prime}\|_{L^{1}},\delta^{\prime})\) such that
\[\left\|\mathcal{R}(\delta+\gamma+\delta^{\prime},\binom{w_{0}}{\dot{w}_{0}},p^{\prime}*0|_{[0,\gamma]}*p)-\binom{w_{1}}{\dot{w}_{1}}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}\]
\[\leq\left\|\mathcal{R}(\delta^{\prime},\mathcal{R}(\delta+\gamma,\binom{w_{0}}{\dot{w}_{0}},0|_{[0,\gamma]}*p),p^{\prime})-\mathcal{R}(\delta^{\prime},\binom{w_{1}}{g},p^{\prime})\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}\]
\[+\left\|\mathcal{R}(\delta^{\prime},\binom{w_{1}}{g},p^{\prime})-\binom{w_{1}}{\dot{w}_{1}}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}\]
\[\leq C^{\prime}C\epsilon+\epsilon.\]
Taking \(\epsilon\) small enough such that \(C^{\prime}C\epsilon+\epsilon<\varepsilon\), we have found a piecewise constant control \(p^{\prime}*0|_{[0,\gamma]}*p\) steering \((w_{0},\dot{w}_{0})\)\(\varepsilon\)-close to \((w_{1},\dot{w}_{1})\) in \(H^{1}\times L^{2}(\mathbb{T}^{d})\), in time \(\tau:=\delta+\gamma+\delta^{\prime}<T\). This concludes the proof under assumption (4).
To prove the theorem under assumption (5), we first consider a free evolution for an arbitrarily small time \(\gamma>0\): thanks to (12), \(w(\gamma)\neq 0\) and \(\langle w(\gamma),\varphi_{k}\rangle=0\) for all but a finite set of \(k\in\mathbb{Z}^{d}\). Hence, we can now take \(w(\gamma)\) as the initial profile, which satisfies assumption (4), and the previous argument applies.
We conclude by showing how Theorem 1 follows from Theorem 8.
Proof of Theorem 1.: Thanks to Theorem 8, for every error \(\epsilon>0\) there exist times \(\tau,\tau^{\prime}\in[0,T/2)\) and piecewise constant controls \(q:[0,\tau]\rightarrow\mathbb{R}^{2d+1},q^{\prime}:[0,\tau^{\prime}]\rightarrow \mathbb{R}^{2d+1}\) such that
\[\bigg{\|}\mathcal{R}(\tau,\binom{w_{0}}{\dot{w}_{0}},q)-\binom{1}{0}\bigg{\|}< \epsilon,\quad\bigg{\|}\mathcal{R}(\tau^{\prime},\binom{1}{0},q^{\prime})- \binom{w_{1}}{\dot{w}_{1}}\bigg{\|}<\epsilon.\]
We now use the fact that \((w(t),\frac{\partial}{\partial t}w(t))\equiv(1,0)\) is a solution of (6) w.r.t. the control \(0|_{[0,T-(\tau+\tau^{\prime})]}\) (that is, the zero control applied on a time interval of size \(T-(\tau+\tau^{\prime})\)), which gives
\[\left\|\mathcal{R}(T,\binom{w_{0}}{\dot{w}_{0}},q^{\prime}*0|_{[0,T-(\tau+\tau^{\prime})]}*q)-\binom{w_{1}}{\dot{w}_{1}}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}\]
\[\leq\left\|\mathcal{R}(\tau^{\prime},\mathcal{R}(T-\tau^{\prime},\binom{w_{0}}{\dot{w}_{0}},0|_{[0,T-(\tau+\tau^{\prime})]}*q),q^{\prime})-\mathcal{R}(\tau^{\prime},\binom{1}{0},q^{\prime})\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}\]
\[+\left\|\mathcal{R}(\tau^{\prime},\binom{1}{0},q^{\prime})-\binom{w_{1}}{\dot{w}_{1}}\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}\]
\[\leq C^{\prime}\left\|\mathcal{R}(T-(\tau+\tau^{\prime}),\mathcal{R}(\tau,\binom{w_{0}}{\dot{w}_{0}},q),0|_{[0,T-(\tau+\tau^{\prime})]})-\mathcal{R}(T-(\tau+\tau^{\prime}),\binom{1}{0},0|_{[0,T-(\tau+\tau^{\prime})]})\right\|_{H^{1}\times L^{2}(\mathbb{T}^{d})}+\epsilon\]
\[\leq C^{\prime}C\epsilon+\epsilon,\]
where the existence of \(C^{\prime}=C^{\prime}(\|q^{\prime}\|_{L^{1}},\tau^{\prime})\) is given by (16) and the existence of \(C=C(T-(\tau+\tau^{\prime}))\) follows from the boundedness of \(e^{t\mathcal{A}}\) for any fixed \(t\). Taking \(\epsilon\) small enough such that \(C^{\prime}C\epsilon+\epsilon<\varepsilon\), we have found a piecewise constant control \(q^{\prime}*0|_{[0,T-(\tau+\tau^{\prime})]}*q\) steering \((w_{0},\dot{w}_{0})\)\(\varepsilon\)-close to \((w_{1},\dot{w}_{1})\) in \(H^{1}\times L^{2}(\mathbb{T}^{d})\), in time \(T\). This concludes the proof.
**Acknowledgements**
The author is thankful to Thomas Chambrion, Sylvain Ervedoza, Vahagn Nersesyan, Mario Sigalotti, and Marius Tucsnak for helpful discussions.
This work has been supported by the STARS Consolidator Grant 2021 "NewSRG" of the University of Padova, and by the PNRR MUR project PE0000023-NQSTI.
|
2302.02029
|
Towards Few-Shot Identification of Morality Frames using In-Context
Learning
|
Data scarcity is a common problem in NLP, especially when the annotation
pertains to nuanced socio-linguistic concepts that require specialized
knowledge. As a result, few-shot identification of these concepts is desirable.
Few-shot in-context learning using pre-trained Large Language Models (LLMs) has
been recently applied successfully in many NLP tasks. In this paper, we study
few-shot identification of a psycho-linguistic concept, Morality Frames (Roy et
al., 2021), using LLMs. Morality frames are a representation framework that
provides a holistic view of the moral sentiment expressed in text, identifying
the relevant moral foundation (Haidt and Graham, 2007) and at a finer level of
granularity, the moral sentiment expressed towards the entities mentioned in
the text. Previous studies relied on human annotation to identify morality
frames in text which is expensive. In this paper, we propose prompting-based
approaches using pretrained Large Language Models for identification of
morality frames, relying only on few-shot exemplars. We compare our models'
performance with few-shot RoBERTa and found promising results.
|
Shamik Roy, Nishanth Sridhar Nakshatri, Dan Goldwasser
|
2023-02-03T23:26:59Z
|
http://arxiv.org/abs/2302.02029v1
|
# Towards Few-Shot Identification of Morality Frames using In-Context Learning
###### Abstract
Data scarcity is a common problem in NLP, especially when the annotation pertains to nuanced socio-linguistic concepts that require specialized knowledge. As a result, few-shot identification of these concepts is desirable. Few-shot in-context learning using pre-trained Large Language Models (LLMs) has been recently applied successfully in many NLP tasks. In this paper, we study few-shot identification of a psycho-linguistic concept, Morality Frames Roy et al. (2021), using LLMs. Morality frames are a representation framework that provides a holistic view of the moral sentiment expressed in text, identifying the relevant moral foundation Haidt and Graham (2007) and at a finer level of granularity, the moral sentiment expressed towards the entities mentioned in the text. Previous studies relied on human annotation to identify morality frames in text which is expensive. In this paper, we propose prompting based approaches using pretrained Large Language Models for identification of morality frames, relying only on few-shot exemplars. We compare our models' performance with few-shot RoBERTa and found promising results.
## 1 Introduction
While the NLP field has seen tremendous progress over the last decade, building models capable of identifying abstract concepts remain a highly challenging problem. This difficulty stems from two key reasons. First, these concepts can manifest in very different ways in text. For example, the concept of _fairness_, that we discuss at length in this paper, can be discussed in the context of the abortion debate (e.g., _"right to privacy"_) or in the context of Covid-19 vaccination (e.g., _"everyone should have access to the vaccine"_). Learning to identify instances of this concept in previously unseen contexts remains a challenge. Second, building NLP models using the supervised learning paradigm requires humans to annotate data, which for such tasks is a cognitively demanding process. In this paper, we investigate whether the recently introduced paradigm of zero/few shot learning using Large Language Models Brown et al. (2020) is better equipped to deal with these challenges. We focus on a recently introduced framework for analyzing moral sentiment, called _morality frames_Roy et al. (2021). This framework builds on, and extends, moral foundation theory Haidt and Graham (2007), which identifies five moral values (i.e., foundations, each with a positive and a negative polarity) central to human moral sentiment which include Care/Harm, Fairness/Cheating, Loyalty/Betrayal, Authority/Subversion, and Purity/Degradation. Morality frames is a relational framework that identifies expressions of the moral foundations in text and associates moral roles with entities mentioned in it (see Section 3 for details).
Unlike previous approaches to this task Roy et al. (2021); Pacheco et al. (2022) which use annotated data to train a relational classifier using DRaiL Pacheco and Goldwasser (2021), we define the task as a zero/few shot problem. We rely on in-context learning using Large Language Models for the identification of morality frames. In in-context learning, a desired NLP task is framed as a text generation problem where the Large Language Models are provided with zero/few shot input-output pairs and prompted to generate label for the test data point without updating parameters of the LLMs Min et al. (2021).
In this paper, we introduce several prompting techniques for LLMs for the identification of morality frames in tweets, relying on only few-shot examples. We compare our models' performance with few-shot RoBERTa-based Liu et al. (2019) classifiers. We found that prompting-based techniques underperform RoBERTa in the identification of subtle concepts like moral foundations, but in the case of moral role identification, the prompting-based techniques outperform RoBERTa by a large margin. Note that moral roles are directed towards entities and are more evident than the subtle moral foundations.
Our promising findings in this paper suggest that in-context learning approaches can be useful in many Computational Social Science related tasks and we propose a few potential future directions of this work.
## 2 Related Works
There has been a lot of work towards exploiting existing knowledge in pretrained Large Language Models (LLMs) and improving their few-shot abilities on various downstream tasks in NLP. Some of these works have been influenced by areas related to instruction-based NLP Goldwasser and Roth (2014). Mishra et al. (2021) fine-tuned a 140M-parameter BART Lewis et al. (2019) model using instructions and few-shot examples for various NLP tasks such as text classification, question answering, and text modification. This work suggests that augmenting instructions in the fine-tuning process improves model performance on unseen tasks. Along similar lines, through a large-scale experiment with over 60 different datasets, Wei et al. (2021) showed that instruction tuning on an LLM (\(\approx\)137B parameters) improves the zero- and few-shot capabilities of these models. Other notable works Min et al. (2021); Sanh et al. (2021) show that even a relatively smaller language model can achieve substantial improvement in a similar setting. Furthermore, Schick and Schutze (2020) use cloze-style phrases in a semi-supervised manner to help the LM assign a sentiment label for the text classification task.
Another line of work focuses on improving LM on downstream tasks with no parameter updates. Brown et al. (2020) proposed to improve LLM few-shot performance by conditioning on concatenation of training examples without any gradient updates. Other works Min et al. (2021); Zhao et al. (2021) have further improved this work and have shown consistent gains in various NLP tasks. In addition, Wei et al. (2022) shows that sufficiently large LM can exploit its innate reasoning abilities to solve complex tasks when provided with a series of intermediate steps during prompting.
However, a general-purpose LLM may perform poorly when the downstream task requires a nuanced understanding of the text or is very different from language modeling in nature. While Schick and Schutze (2020) and Gao et al. (2020) have studied the sentiment classification task in few-shot settings, few works have explored utilizing LLMs without fine-tuning to understand more nuanced concepts like political framing Boydstun et al. (2014) and moral foundations Haidt and Joseph (2004); Haidt and Graham (2007), among others.
Previous work Roy and Goldwasser (2020) has performed nuanced analysis of political framing by breaking the policy frames proposed by Boydstun et al. (2014) into fine-grained sub-frames. It was observed that the sub-frames better captured political polarization by providing a structural breakdown of policy frames. A later work Roy and Goldwasser (2021) studied the Moral Foundation Theory Haidt and Joseph (2004); Haidt and Graham (2007) at the entity level and proposed a knowledge representation framework for organizing moral attitudes directed at different entities. The structured framework is named morality frames Roy et al. (2021). These nuanced structural frameworks, such as frames, sub-frames, and entity-centric moral sentiments (morality frames), are expensive to annotate as they largely depend on human knowledge. Few-shot automatic identification of such concepts is required to save manual human effort and to perform these studies at scale. In this paper, we take the first step towards analyzing how well LLMs can understand these psycho-linguistic concepts in few-shot settings. As our first study, we explore in-context learning of morality frames and leave the study of framing and sub-frames as future work.
## 3 Dataset
We conduct our study on the dataset proposed by Roy et al. (2021). In this dataset, there are \(1599\) political tweets from US politicians that are annotated for moral foundations by Johnson and Goldwasser (2018). Roy et al. (2021) proposed Morality Frames and broke down the sentence level moral foundations into nuanced moral role dimensions that capture sentiment towards entities expressed in the text. The moral foundations and corresponding moral roles can be found in Table 1. Roy et al. (2021) annotated the dataset proposed by Johnson and Goldwasser (2018) for these moral sentiments towards entities.
In this paper, our goal is to study the identification of morality frames when only few-shot training examples are available. To build this setup, we randomly sampled \(10\) tweets from each of the \(5\) moral foundations and used them as the training set. The Large Language Models (LLMs) we use for in-context learning are expensive and resource-heavy even for inference, so we benchmark our approaches using a smaller test set containing \(20\) randomly sampled tweets per moral foundation. This resulted in \(103\) and \(207\) tweet-entity pairs in the training and the test set, respectively.
## 4 Task Definition
The identification of morality frame in a tweet involves the following two steps.
**Identification of Moral Foundation:** Given a tweet text \(t\), the task is to identify the moral foundation expressed in the tweet.
**Identification of Moral Roles of Entities:** After identification of moral foundation, the second step is to identify the moral roles of entities in the tweet. We study this step in the following two settings.
* **Entities are pre-identified:** In this setting, the assumption is that the entities are already identified in the tweet text. The task is to assign moral roles to them. So, given a tweet \(t\), an entity \(e\) mentioned in the tweet, and the moral foundation label of the tweet \(m\), the task is to identify the moral role of \(e\) in \(t\).
* **Entities are not pre-identified:** In this setting, a tweet \(t\), and its corresponding moral foundation label \(m\) is known in prior. The task is to identify the entities mentioned in the tweet, and their corresponding moral roles.
Examples of the tasks can be found in Figure 1.
## 5 Few-Shot Identification of Morality Frames using Large Language Models
### In-Context Learning
In-context learning using pretrained LLMs has been shown to be effective in few-shot scenarios in previous studies for different NLP tasks Brown et al. (2020); Wei et al. (2022); Reif et al. (2021). LLMs are pretrained on huge amounts of web-crawl, book, and Wikipedia text. Hence, they are expected to carry world knowledge. As a result, they are able to perform many NLP tasks using only few-shot training examples without any further fine-tuning or gradient updates. In the in-context learning paradigm, the downstream task is framed as a text generation problem and the model is prompted to generate the next tokens Min et al. (2021). These tokens are mapped to desired output labels in classification tasks. In this work, we assume that only few-shot examples are given for the morality frames identification task, so we apply an in-context learning approach to perform the different steps of the task defined in Section 4. Note that we do not update LLM parameters in this process. The proposed in-context learning approaches are described in the subsequent sections.
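Concretely, in-context learning reduces classification to constrained text generation: few-shot (input, label) pairs are concatenated into a prompt, the frozen LLM generates a continuation, and the generated tokens are mapped back to a discrete label. The sketch below illustrates this loop; the prompt wording and the label-matching heuristic are illustrative assumptions, not the exact templates used in this paper.

```python
LABELS = ["Care/Harm", "Fairness/Cheating", "Loyalty/Betrayal",
          "Authority/Subversion", "Purity/Degradation"]

def build_prompt(train_pairs, test_tweet):
    """Frame few-shot classification as next-token generation for a frozen LLM."""
    lines = []
    for tweet, label in train_pairs:
        lines.append(f"Tweet: {tweet}\nMoral foundation: {label}\n")
    lines.append(f"Tweet: {test_tweet}\nMoral foundation:")
    return "\n".join(lines)

def label_from_generation(generated_text):
    """Map the generated continuation back to one of the discrete labels."""
    for label in LABELS:
        if label.lower() in generated_text.lower():
            return label
    return None  # no label recognized; the caller may re-sample
```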
\begin{table}
\begin{tabular}{l|l} \hline \hline
**Moral Foundations** & **Moral Roles** \\ \hline
**Care/Harm:** Care for others, generosity, compassion, ability to feel pain of others, sensitivity to suffering of others, prohibiting actions that harm others. & Target of care/harm; Entity causing harm; Entity providing care \\ \hline
**Fairness/Cheating:** Fairness, justice, reciprocity, reciprocal altruism, rights, autonomy, equality, proportionality, prohibiting cheating. & Target of fairness/cheating; Entity ensuring fairness; Entity doing cheating \\ \hline
**Loyalty/Betrayal:** Group affiliation and solidarity, virtues of patriotism, self-sacrifice for the group, prohibiting betrayal of one's group. & Target of loyalty/betrayal; Entity being loyal; Entity doing betrayal \\ \hline
**Authority/Subversion:** Fulfilling social roles, submitting to authority, respect for social hierarchy and traditions, leadership, prohibiting rebellion against authority. & Justified authority; Justified authority over; Failing authority; Failing authority over \\ \hline
**Purity/Degradation:** Associations with the sacred and holy, disgust, contamination, religious notions which guide how to live, prohibiting violating the sacred. & Target of purity/degradation; Entity preserving purity; Entity causing degradation \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Morality Frames: Moral foundations and their associated roles. (Adopted from Roy et al. (2021)).
Figure 1: Morality frames identification task. Input for each step is colored in blue and expected outputs are colored in red.
### Moral Foundation (MF) Identification
Following the previous works, we frame the task of moral foundation identification as a text generation problem where the model is prompted to generate the moral foundation label of a tweet. To this end, we experiment with two different types of prompting techniques.
**MF identification in one pass:** In this method, we provide the moral foundation definitions (from Table 1) in the beginning of the prompt as a guideline for the language model. Then, few-shot training examples and their associated labels are provided in the prompt. Finally, the test tweet is provided as the last example in the prompt and the model is expected to generate the moral foundation label of this tweet. The prompt template for this approach can be seen in Figure 2.
**MF identification in one-vs-all manner:** Identification of moral foundations in one pass might be difficult for the language models. So, we propose a one-vs-all prompting approach where the language model is prompted to predict if a certain moral foundation is present in the tweet. This step is repeated for each of the five moral foundations. The moral foundation predicted with the highest confidence is consolidated as the predicted label. To obtain the confidence score, we prompt the language model multiple times with different random seeds to generate multiple predictions for a single tweet. The final confidence score is the percentage of times a specific moral foundation is generated by the LLM. In case there is a tie between two moral foundation labels, we perform a second prompting step, where few-shot prompting is used to break the tie between the two moral foundations.1 Prompt templates for these two steps can be seen in Figure 3.
Footnote 1: In case of a tie among more than two moral foundations, we break it by randomly selecting one.
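The one-vs-all consolidation amounts to a simple voting scheme over repeated generations. A sketch is below, where `ask_llm(prompt, seed)` is a hypothetical wrapper around the generation call returning the model's short answer, and the binary prompt wording is an illustrative assumption rather than the exact template in Figure 3.

```python
import random

def one_vs_all_predict(tweet, foundations, ask_llm, n_runs=10):
    """Score each moral foundation by the fraction of 'yes' generations, then take the max."""
    scores = {}
    for mf in foundations:
        prompt = f"Does the following tweet express the moral foundation '{mf}'?\nTweet: {tweet}\nAnswer:"
        answers = [ask_llm(prompt, seed=s) for s in range(n_runs)]
        scores[mf] = sum(a.strip().lower().startswith("yes") for a in answers) / n_runs
    best = max(scores.values())
    tied = [mf for mf, s in scores.items() if s == best]
    if len(tied) == 1:
        return tied[0]
    if len(tied) == 2:
        return break_tie(tweet, tied, ask_llm)   # second prompting step
    return random.choice(tied)                   # >2-way ties broken randomly (footnote 1)

def break_tie(tweet, pair, ask_llm):
    """Hypothetical second step: a prompt restricted to the two tied foundations."""
    prompt = f"Which moral foundation does this tweet express, '{pair[0]}' or '{pair[1]}'?\nTweet: {tweet}\nAnswer:"
    answer = ask_llm(prompt, seed=0)
    return pair[0] if pair[0].lower() in answer.lower() else pair[1]
```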
### Moral Role Identification of a Pre-identified Entity
After predicting the moral foundation label, the next step is to identify moral roles of entities as described in Section 4. Given a test tweet and a predicted moral foundation label for it, we prompt the LLMs to generate the moral role of an entity in the tweet only from the moral roles associated with
Figure 3: Prompt templates for moral foundation identification technique in one-vs-all manner. The blue colored segments are input prompts and the red colored segments are the generated output by the LLMs. Corresponding prompt example can be seen in Appendix A: Figure 8.
Figure 2: Prompt template for identification of moral foundation in one pass. The blue colored segment is input prompt and the red colored segment is the generated output by the LLMs. Example of this prompt template can be seen in Appendix A: Figure 7.
the predicted moral foundation. For example, if a tweet is identified as having the moral foundation 'Care/Harm', we prompt the language model to predict the moral role of an entity mentioned in the tweet from only the three moral roles associated with 'Care/Harm', namely, 'Entity target of care/harm', 'Entity causing harm', and 'Entity providing care'. We propose two prompting approaches for this task.
**Moral role identification in one pass:** We prompt the LLMs to directly identify the moral role of a given entity from the corresponding moral roles in one pass, using the prompt shown in Figure 4. Following the moral foundation classification prompt template, we provide descriptions of the moral roles in the template as a guideline. We come up with these definitions based on intuition.
**Moral role identification in two steps:** In the morality frames, the different moral roles intuitively carry either positive or negative sentiment towards the entity. For example, "entity causing harm", "entity doing cheating", "entity doing betrayal", "failing authority", and "entity causing degradation" are the roles carrying negative sentiment towards them; the remaining entity roles carry positive sentiment towards them. With this intuition, we break down the task of moral role identification into two steps. In the first step, we prompt the LLMs to identify the sentiment towards entities along the "positive" and "negative" dimensions only, using the prompt structure in Figure 5(a). The entities identified as having negative sentiment towards them map directly to one of the five negative roles, each associated with only one of the moral foundations. Given that the moral foundation is discovered in the previous step, we can readily map the entities with negative sentiment to one of the negative moral roles. Each moral foundation, however, has two or more positive moral roles associated with it. To differentiate among them, we perform another prompting step where the LLMs are prompted to generate one of the positive moral roles for an entity in a tweet. The prompt template is shown in Figure 5(b).
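The two-step decomposition relies on a fixed mapping from each moral foundation to its single negative role and its remaining positive roles (cf. Table 1). The sketch below encodes that mapping and the routing logic; `predict_sentiment` and `predict_positive_role` are hypothetical helpers standing in for the two prompting steps.

```python
NEGATIVE_ROLE = {
    "Care/Harm": "Entity causing harm",
    "Fairness/Cheating": "Entity doing cheating",
    "Loyalty/Betrayal": "Entity doing betrayal",
    "Authority/Subversion": "Failing authority",
    "Purity/Degradation": "Entity causing degradation",
}

POSITIVE_ROLES = {
    "Care/Harm": ["Target of care/harm", "Entity providing care"],
    "Fairness/Cheating": ["Target of fairness/cheating", "Entity ensuring fairness"],
    "Loyalty/Betrayal": ["Target of loyalty/betrayal", "Entity being loyal"],
    "Authority/Subversion": ["Justified authority", "Justified authority over", "Failing authority over"],
    "Purity/Degradation": ["Target of purity/degradation", "Entity preserving purity"],
}

def two_step_role(tweet, entity, moral_foundation, predict_sentiment, predict_positive_role):
    """Step 1: coarse sentiment towards the entity. Step 2: disambiguate positive roles only."""
    sentiment = predict_sentiment(tweet, entity)           # "positive" or "negative"
    if sentiment == "negative":
        return NEGATIVE_ROLE[moral_foundation]             # one negative role per foundation
    return predict_positive_role(tweet, entity, POSITIVE_ROLES[moral_foundation])
```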
### Identification of entities and corresponding moral roles jointly
In this approach, we propose a prompting method for the setting where the entities are not pre-identified, as described in Section 4. In this setting, the moral foundation is known for a tweet and the target entities in the tweet are not explicitly given. We create a prompt similar to a slot-filling task where the LLMs have to fill the slots of moral roles with entities mentioned in the tweet. The prompt template is shown in Figure 6.
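Parsing the generated slot-filling output back into (role, entity) pairs can be done with a simple line-based heuristic. The output format assumed below (one "Role: entity" line per slot) is an illustrative assumption about the template, not the exact one in Figure 6.

```python
def parse_slot_filling(generated_text, roles):
    """Extract the entity mention filled into each moral-role slot from the LLM generation."""
    filled = {}
    for line in generated_text.splitlines():
        if ":" not in line:
            continue
        role, value = line.split(":", 1)
        role, value = role.strip(), value.strip()
        if role in roles and value and value.lower() != "none":
            filled[role] = value
    return filled  # e.g. {"Entity providing care": "the administration", ...}
```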
## 6 Experimental Evaluation
In this section, we first describe our experimental settings and then discuss the performance of our proposed models on morality frame identification.
### Experimental Settings
**Large Language Model:** We use an open-source large language model, GPT-J-6B Wang and Komatsuzaki (2021), a 6B-parameter decoder-only language model. We use top-k (k=5) sampling with temperature 0.5 Holtzman et al. (2019) as the decoding method. Note that we do not update the parameters of the model during in-context learning. For each test data point, we run the model with 5 random seeds, each generating 2 outputs, yielding 10 predictions per data point. We take the majority vote among these predictions as the predicted label.
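A minimal sketch of this decoding and voting setup with the Hugging Face transformers API is shown below; the prompt construction, the token budget, and the label-parsing helper `parse_label` are illustrative assumptions, not our exact implementation.

```python
# Sketch of the decoding setup described above: GPT-J-6B, top-k (k=5) sampling
# with temperature 0.5, 5 random seeds x 2 samples each, and a majority vote.
from collections import Counter
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
).cuda()  # the actual experiments spread the model over two GPUs

def parse_label(completion):
    # hypothetical helper: extract the generated label from the completion
    return completion.strip().split("\n")[0]

def predict(prompt):
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    prompt_len = inputs["input_ids"].shape[1]
    votes = []
    for seed in range(5):                       # 5 random seeds ...
        torch.manual_seed(seed)
        outputs = model.generate(
            **inputs,
            do_sample=True, top_k=5, temperature=0.5,
            num_return_sequences=2,             # ... x 2 outputs = 10 predictions
            max_new_tokens=10,                  # token budget is an assumption
            pad_token_id=tokenizer.eos_token_id,
        )
        for seq in outputs:
            completion = tokenizer.decode(seq[prompt_len:], skip_special_tokens=True)
            votes.append(parse_label(completion))
    return Counter(votes).most_common(1)[0][0]  # majority vote over 10 predictions
```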
Figure 4: Prompt template for identification of moral role in one pass in case of ‘Care/Harm’. The blue colored segment is input prompt and the red colored segment is the generated output by the LLMs. Corresponding prompt example can be seen in Appendix A: Figure 9.
**Ablation study:** We experiment with various numbers of training examples in the prompts. In this paper, we define the number of shots (training examples) \(k\) as the number of examples from each class used for a classification task. For moral foundation identification and moral role identification of pre-identified entities, we experiment with 0 to 5 shots. For the moral role identification method where entities are not pre-identified, we experiment with 0, 1, 3, 5, 7, and 10 shots; the limit on the number of tokens in the prompt prevents us from experimenting with a larger number of shots. In all of our prompting methods we provide descriptions of the expected labels as task instructions in the prompt, so zero-shot learning is feasible in our setting. We run all of the studies using the train and test sets described in Section 3.
**Baseline:** We compare our models' performance with a few-shot RoBERTa-based Liu et al. (2019) text classifier. For identification of the moral foundation in a tweet, we encode the tweet using RoBERTa, where the embedding of the [CLS] token in the last layer is used as the representation of the text; this representation is used for moral foundation classification. For moral role identification of an entity in a tweet, we encode the tweet and the entity using two RoBERTa instances and concatenate their representations to obtain a final representation, which is used for moral role classification. Note that the RoBERTa-based classifiers are trained with few-shot examples only, as are the prompting-based methods. We run the RoBERTa-based classifiers 5 times with 5 random seeds and report the average result.
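A sketch of these baselines is given below, assuming the `transformers` RobertaModel with a linear classification head; the checkpoint name and head configuration are placeholders, and the few-shot training loop is omitted.

```python
# Sketch of the RoBERTa-based few-shot baselines described above; checkpoint
# and head configuration are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")

class MoralFoundationClassifier(nn.Module):
    """Tweet -> [CLS] embedding of the last layer -> moral foundation (5 classes)."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        self.head = nn.Linear(self.encoder.config.hidden_size, n_classes)

    def forward(self, tweets):
        batch = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
        cls = self.encoder(**batch).last_hidden_state[:, 0]   # [CLS] token
        return self.head(cls)

class MoralRoleClassifier(nn.Module):
    """(Tweet, entity) -> concatenated [CLS] embeddings -> moral role."""
    def __init__(self, n_roles):
        super().__init__()
        self.tweet_encoder = RobertaModel.from_pretrained("roberta-base")
        self.entity_encoder = RobertaModel.from_pretrained("roberta-base")
        hidden = self.tweet_encoder.config.hidden_size
        self.head = nn.Linear(2 * hidden, n_roles)

    def forward(self, tweets, entities):
        t = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
        e = tokenizer(entities, padding=True, truncation=True, return_tensors="pt")
        t_cls = self.tweet_encoder(**t).last_hidden_state[:, 0]
        e_cls = self.entity_encoder(**e).last_hidden_state[:, 0]
        return self.head(torch.cat([t_cls, e_cls], dim=-1))
```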
**Implementation Infrastructure:** We ran all of the experiments on a 4-core Intel(R) Core(TM) i5-7400 CPU @ 3.00GHz machine with 64GB RAM and two NVIDIA GeForce GTX 1080 Ti 11GB GDDR5X GPUs. GPT-J-6B was mounted on two GPUs. We used the PyTorch library for all implementations.
Figure 5: Prompt templates for moral role identification by breaking the task in 2 steps. The blue colored segments are input prompts and the red colored segments are the generated output by the LLMs. Corresponding prompt examples can be seen in Appendix A: Fig. 10.
Figure 6: Prompt template for identification of entity and corresponding moral roles jointly in case of ‘Care/Harm’. The blue colored segment is input prompt and the red colored segment is the generated output by the LLMs. Corresponding prompt example can be seen in Appendix A: Figure 11.
### Results
**Moral Foundation Identification:** In Table 2, we show the results for moral foundation identification using our two proposed methods and few-shot RoBERTa. Performance improves as the number of shots increases in almost all cases. RoBERTa performs poorly without gradient updates, but fine-tuning RoBERTa on the few-shot examples yields reasonable performance. The one-vs-all prompting technique underperforms the one-pass prompting technique, except in the zero-shot setting; our intuition is that the language model learns better when more contrastive examples are given, which is the case in the one-pass method. Per-class classification results for one-pass prompting with 5-shot examples per class are shown in Table 3. The one-pass prompting technique outperforms the one-vs-all technique but underperforms few-shot RoBERTa with fine-tuning; it appears that, without fine-tuning, identifying subtle moral foundations is a difficult task for the LLMs.
**Moral Role Identification for pre-identified entities:** In moral role identification, the assumption is that the moral foundation of each tweet is pre-identified. However, as shown in Table 2, none of the models identifies moral foundations reliably, so for moral role identification we use the gold moral foundation labels instead of the predicted ones.
In Table 4, we present the results for moral role identification using our two proposed methods along with the RoBERTa-based baseline. We omit zero-shot prompting results because, in moral role generation, zero-shot prompting of the LLM produces many open-ended labels rather than the fixed moral role labels; it is difficult to parse these generations and map them to moral role labels automatically. We therefore leave zero-shot prompting for moral role identification as future work.
Table 4 shows that both the one-pass and the two-step prompting methods outperform the RoBERTa baseline in moral role identification, suggesting that moral role identification is easier than moral foundation identification for LLMs. Note that moral roles are micro-structures of the morality frames: they are focused on entities and are more evident in the text than the subtle moral foundations, which makes them easier for the LLMs to identify.
The two-step prompting technique for moral role identification underperforms the one-pass prompting approach, even though the task is broken down into two easier subtasks. We found that in the first step the model identifies the polarity of sentiment towards entities with more than 70% F1 score in the 4-shot and 5-shot settings, but it struggles in the second step, where it has to differentiate between two positive roles (e.g., 'Entity target of care/harm' vs. 'Entity providing care'); this is more difficult because the difference between positive sentiments is subtle. This finding is consistent with prior studies: for example, previous work (Roy et al., 2021) found that a deep relational learning based model also struggles to differentiate among multiple positive sentiments.
| **Models** | **0-shot** | **1-shot** | **2-shots** | **3-shots** | **4-shots** | **5-shots** |
| --- | --- | --- | --- | --- | --- | --- |
| One-Pass prompting for 5 classes | 6.24 | 24.19 | 29.80 | 30.63 | 39.49 | 43.56 |
| One-vs-all prompting | 13.23 | 20.46 | 24.34 | 20.51 | 27.76 | 15.70 |
| RoBERTa (Parameters frozen) | N/A | 7.61 (1.9) | 7.84 (2.3) | 8.1 (2.9) | 8.21 (3.1) | 8.0 (2.6) |
| RoBERTa (Finetuned) | N/A | 19.68 (7.3) | 33.22 (9.6) | 37.05 (5.8) | 38.78 (5.9) | 45.42 (6.6) |

Table 2: Few-shot moral foundation identification results (macro F1 score for various numbers of shots per class). Between the prompting-based methods, one-pass prompting performs best; it outperforms parameter-frozen RoBERTa but underperforms finetuned RoBERTa in the few-shot training setup.
| **Morals** | **Prec.** | **Rec.** | **F1** | **Support** |
| --- | --- | --- | --- | --- |
| Care/Harm | 31.82 | 70.00 | 43.75 | 20 |
| Fairness/Cheating | 66.67 | 10.00 | 17.39 | 20 |
| Loyalty/Betrayal | 31.43 | 55.00 | 40.00 | 20 |
| Auth./Subversion | 87.50 | 35.00 | 50.00 | 20 |
| Purity/Degradation | 100.0 | 50.00 | 66.67 | 20 |
| Accuracy | | | 44.00 | 100 |
| Macro Average | 63.48 | 44.00 | 43.56 | 100 |
| Weighted Average | 63.48 | 44.00 | 43.56 | 100 |

Table 3: Per-class moral foundation classification results for one-pass prompting (using 5 shots per class).
In the one-pass prompting technique, contrastive positive and negative examples are given in the prompt. As a result, it might be easier for the LLMs to reason about the distinction.
In moral role identification, too, the performance of all models improves as the number of shots increases, as shown in Table 4.
**Identification of entities and corresponding moral roles jointly:** In this setting, the model is expected to identify the entities carrying moral roles in a tweet. To evaluate the model's performance, we measure how often the predicted entity matches the actual entity2 annotated by Roy et al. (2021) and how often it is assigned the correct entity role. We found that the LLM hallucinates frequently when identifying entities and filling the entity role slots. Hallucination is a common phenomenon in LLMs: when open-ended text generation is expected but the language model generates a response that is not part of, or not related to, the input text, it is called hallucination Ji et al. (2022). Note that we do not encounter hallucination when generating labels for moral foundations and moral roles, since those labels are well-defined in the prompt; in the entity identification task, however, the model has to identify entities from a given text span, which is open-ended, and this results in a higher rate of hallucination.
Footnote 2: Entity matching procedure can be found in Appendix B
The results for this task are shown in Table 5. As we increase the number of training examples (shots), the percentages of correct entity and entity role identification improve, although performance remains modest even with the highest number of shots (10). We also find that the percentage of hallucination decreases as the number of shots increases. These findings imply that joint identification of entities and entity roles is a much more difficult task for the LLMs, but that with more shots the LLMs understand the task better.
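For concreteness, a simplified version of this evaluation is sketched below; the normalization and substring-based matching rule are a simplification of the procedure in Appendix B, and the record field names are assumptions.

```python
# Simplified evaluation sketch for the joint entity / role setting. The
# substring-based matching rule is a simplification of Appendix B, and the
# record field names are illustrative assumptions.

def normalize(text):
    return " ".join(text.lower().split())

def evaluate(records):
    """records: dicts with 'tweet', 'gold_entity', 'gold_role', 'pred_entity',
    'pred_role' (assumed field names)."""
    n = len(records)
    correct_entity = correct_role = hallucinated = 0
    for r in records:
        pred, gold, tweet = (normalize(r["pred_entity"]),
                             normalize(r["gold_entity"]),
                             normalize(r["tweet"]))
        if pred not in tweet:
            hallucinated += 1        # generated entity does not appear in the tweet
            continue
        if pred in gold or gold in pred:
            correct_entity += 1
            if r["pred_role"] == r["gold_role"]:
                correct_role += 1
    return {"% correct entity": 100.0 * correct_entity / n,
            "% hallucination": 100.0 * hallucinated / n,
            "% correct role": 100.0 * correct_role / n}
```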
## 7 Summary and Future Works
In this paper, we apply few-shot in-context learning for the identification of a psycho-
| **Moral Foundations** | **Models** | **1-shot** | **2-shots** | **3-shots** | **4-shots** | **5-shots** |
| --- | --- | --- | --- | --- | --- | --- |
| Care/Harm | One-Pass prompting | 48.21 | 58.61 | 74.37 | 70.98 | 68.41 |
| | 2-Steps prompting | 37.77 | 42.04 | 58.29 | 68.97 | 63.76 |
| | RoBERTa (Finetuned) | 31.67 (13.4) | 35.79 (13.2) | 35.35 (14.0) | 30.64 (14.0) | 43.83 (26.0) |
| Fairness/Cheating | One-Pass prompting | 42.92 | 71.86 | 75.95 | 82.26 | 74.65 |
| | 2-Steps prompting | 40.91 | 71.28 | 72.64 | 74.92 | 68.70 |
| | RoBERTa (Finetuned) | 26.89 (11.9) | 46.16 (6.0) | 43.06 (3.6) | 35.61 (15.2) | 42.95 (12.9) |
| Loyalty/Betrayal | One-Pass prompting | 35.56 | 36.40 | 35.24 | 45.10 | 41.27 |
| | 2-Steps prompting | 30.39 | 38.69 | 32.32 | 38.82 | 25.83 |
| | RoBERTa (Finetuned) | 21.29 (3.0) | 28.39 (7.1) | 24.14 (11.5) | 37.73 (1.7) | 36.57 (8.2) |
| Authority/Subversion | One-Pass prompting | 19.17 | 31.69 | 29.35 | 34.76 | 36.12 |
| | 2-Steps prompting | 21.85 | 31.69 | 30.67 | 31.47 | 29.56 |
| | RoBERTa (Finetuned) | 11.77 (0) | 28.02 (11.6) | 23.31 (11.3) | 20.08 (10.5) | 24.64 (6.0) |
| Purity/Degradation | One-Pass prompting | 41.28 | 46.91 | 66.67 | 69.04 | 61.84 |
| | 2-Steps prompting | 40.51 | 41.66 | 43.08 | 47.65 | 45.89 |
| | RoBERTa (Finetuned) | 31.59 (7.9) | 40.15 (5.7) | 30.80 (9.9) | 42.25 (10.8) | 56.57 (20.4) |

Table 4: Few-shot moral role identification performance comparison among models (macro F1 score for various numbers of shots per class). The one-pass prompting method outperforms both the 2-steps prompting method and finetuned RoBERTa in the few-shot training setup.
| **No. of Shots** | **% Correct Entity Identification** | **% Hallucination** | **% Correct Role Identification** |
| --- | --- | --- | --- |
| 1 | 43.80 | 21.69 | 33.97 |
| 3 | 48.28 | 11.54 | 41.09 |
| 5 | 48.68 | 9.58 | 43.71 |
| 7 | 49.91 | 7.68 | 45.27 |
| 10 | 51.39 | 5.95 | 46.88 |

Table 5: Correctness of joint identification of entities and corresponding moral roles using in-context learning. The LLM hallucinates from previous training examples in open-ended entity identification. The percentage of hallucination decreases, and the percentages of correct entity and correct role identification increase, as the number of shots in the prompt increases.
linguistic knowledge representation framework named Morality Frames. We proposed different prompting methods to perform the task. We found that in-context learning using a comparatively small language model (GPT-J-6B) does not perform well on identifying moral foundations, which are very subtle, but excels at identifying the moral roles of entities, which are more evident in the text. We believe there is considerable scope for improvement, and we hope this study encourages the application of in-context learning to more Computational Social Science tasks. Below we list a few future directions for this work.
* **Prompt selection:** Appropriate prompt selection based on the test data point has been applied successfully to in-context learning in various NLP tasks Han et al. (2022). Implementing a dynamic prompt selection technique for the morality frame identification task may boost performance.
* **Incorporation of context in prompts:** For complex concepts such as moral foundations Haidt and Joseph (2004); Haidt and Graham (2007) and framing Boydstun et al. (2014), to name a few, the social context and the speaker's demographics play an important role. Incorporating this information into prompts for LLMs can be an effective direction for solving these problems.
* **Experiment with larger language models:** Larger language models such as GPT-3 Brown et al. (2020) have more parameters and are trained on more diverse data. As a result, they could be more successful in capturing nuanced social concepts and yield better performance.
* **Experiment with long text:** Identification of complex concepts like framing and moral foundations has been studied in longer text (e.g., news articles) in previous works Card et al. (2015); Fulgoni et al. (2016); Field et al. (2018); Roy and Goldwasser (2020). How successful pre-trained language models can be on these tasks in longer text, such as news articles, is an interesting direction for future work.
## Acknowledgements
We are thankful to the anonymous reviewers for their insightful comments. This project was partially funded by NSF CAREER award IIS-2048001.
## Limitations
The limitations of this paper are as follows.
* A previous study Johnson and Goldwasser (2018) has shown that a single tweet may contain multiple moral foundations. Multiple labels were not considered in this work; it may be the case that the language models succeed in identifying only one of the moral foundations in such multi-label data points.
* Using large language models is expensive because they are resource-heavy. For this reason we could not run the prompt-based methods multiple times to perform a statistical significance test on the results, which is a limitation of our work.
* Due to resource constraints and the lack of an open-source version, we could not run our proposed prompt-based models with state-of-the-art larger language models such as GPT-3. The insights and results reported in this paper might have been different had a larger language model been used.
* LLMs are pretrained on a huge amount of human-generated text. As a result, they may inherently contain many human biases Brown et al. (2020); Blodgett et al. (2020). We did not consider any bias that the LLMs may introduce into the morality frame identification task.
## Ethics Statement
In this paper, we do not propose any new dataset; rather, we experiment only with existing datasets, which are, to the best of our knowledge, adequately cited. We provide all experimental details of our approaches, and we believe the results reported in this paper are reproducible. Any results or tweet texts presented in this paper are either outputs of a machine learning model or taken from an existing dataset; they do not represent the authors' or the funding agencies' views on this topic. As described in the limitations section, inherent bias in the large language models was not taken into account in our experiments, so we suggest not deploying the proposed algorithms in a real-life system without further investigation of bias and fairness.
|
2303.02300
|
Quantum Gates Between Mesoscopic Spin Ensembles
|
Quantum algorithmics with single spins poses serious technological challenges
such as precision fabrication, rapid decoherence, atomic-scale addressing and
readout. To circumvent atomic-scale challenges, we examine the case of fully
polarized mesoscopic spin ensembles (spin-coherent states) whose total angular
momenta states map to qudit submanifolds. We show that in the limit where the
size of the ensembles is small compared to their separation, it is possible to
treat them as qubits with an effective coupling strength that scales with the
number of spins. If the spins within each ensemble are decoupled (e.g., via
control fields, spinning or diffusional averaging or materials engineering),
one- and two-qubit gate operations can be implemented with high fidelities.
|
Mohamad Niknam, Robert N. Schwartz, Louis-S. Bouchard
|
2023-03-04T02:49:26Z
|
http://arxiv.org/abs/2303.02300v1
|
# Quantum Gates Between Mesoscopic Spin Ensembles
###### Abstract
Quantum algorithmics with single spins poses serious technological challenges such as precision fabrication, rapid decoherence, atomic-scale addressing and readout. To circumvent atomic-scale challenges, we examine the case of fully polarized mesoscopic spin ensembles (spin-coherent states) whose total angular momenta states map to qudit submanifolds. We show that in the limit where the size of the ensembles is small compared to their separation, it is possible to treat them as qubits with an effective coupling strength that scales with the number of spins. If the spins within each ensemble are decoupled (e.g., via control fields, spinning or diffusional averaging or materials engineering), one- and two-qubit gate operations can be implemented with high fidelities.
_Introduction.-_ Large-scale quantum operations are difficult to develop, requiring isolation of a large number of individual qubits, precise control of their quantum states, error correction schemes and high-accuracy measurements. Existing qubits are implemented in environments requiring high vacuums or low temperatures. Several recent initiatives have been driving hardware development for the realization of new quantum technologies that are more scalable, robust, and less physically demanding. Qubits should have long coherence times or be able to retain quantum information for times much longer than the operation time of quantum gates. Arbitrary unitary transformations on the set of all qubits can be constructed up to desired precision by composing quantum gates chosen from a set of universal quantum gates. The latter typically consists of single-qubit gates (one per qubit) and one two-qubit gate (acting on pairs of qubits) [1]. Implementation of the two-qubit gate has proven to be the most challenging, and even the leading platforms for quantum computing, i.e., superconducting qubits [2], trapped ions [3], and Rydberg atoms [4] have error rates exceeding 0.1% per gate, which is an order of magnitude larger than the threshold required for fault-tolerant quantum computing [5].
Ensemble quantum computing has the advantage of many replicas, eliminating the need to repeat projective readouts thousands of times to obtain the statistical weights in the many-qubit wavefunction. Ensembles can have favorable properties such as ease of fabrication, built-in robustness, longer storage times, more sensitive detection, etc. Liquid-state NMR is of historical significance because it provided the platform for the first experimental demonstrations of a working quantum algorithm [6; 7]. Nuclear spins in the liquid-state feature long coherence times, typically in the order of seconds. They interact weakly, meaning that they behave independently and can be addressed individually. Qubits can be selectively addressed using frequency-selective (soft) pulses. This allows coherent control of quantum devices containing up to 12 qubits [8] and implementation of Shor's quantum algorithm [9]. However, nuclear spin qubits are not scalable due to the difficulty of initializing pure quantum states (highly mixed pseudo-pure states above \(\approx 1\) mK), and synthesizing molecules with a large number of individually addressable nuclear spins coupled to one another. Indeed, molecules can only accommodate the selective addressing of a small number of distinct nuclei due to the narrow range of chemical shifts. Therefore, obtaining entanglement in liquid-state qubits for ensemble NMR quantum computing is nearly impossible [10]. NMR ensemble computing involves performing collective quantum operations on molecules, followed by averaging the state of qubits over the ensemble. To address the important issues of scalability and ease-of-fabrication, we revisit the idea of ensemble computing and consider reversing the order of averaging: ensemble-average spins to obtain large qubits, then perform operations among the ensembles. In doing so, we use the rotational properties of large-\(J\) angular momentum operators, which leads to a realization of qubit transformations similar to the single spin case [11]. In this paper we will assume the ability to fully polarize spins and work with spin-coherent states that we term E-qubits. Practical considerations are discussed.
For nuclear spins the notation \(\mathbf{I}\) is used instead of \(\mathbf{J}\) to denote the spin angular momentum. Suppose we have an ensemble of \(N\) molecules, each with \(n\) nuclear spins \(\{\mathbf{I}^{1},\ldots,\mathbf{I}^{n}\}\). The Hilbert space for the nuclear spin degrees of freedom in a molecule has dimension \(\prod_{i=1}^{n}(2I^{i}+1)\). The state of each molecule's nuclear spins is represented by a projector \(P_{\omega}(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})\), \(\omega=1,\ldots,N\). Let \(\left|\varphi_{i}(\omega)\right\rangle\) be the wavefunction describing the state of spin \(i\) on molecule \(\omega\). Each projector is initially a product state, i.e., \(P_{\omega}=\left|\psi_{\omega}\right\rangle\left\langle\psi_{\omega}\right|\), where \(\left|\psi_{\omega}\right\rangle=\left|\varphi_{1}(\omega)\right\rangle \otimes\cdots\otimes\left|\varphi_{n}(\omega)\right\rangle\), i.e.,
\[P_{\omega}=\left|\varphi_{1}(\omega)\right\rangle\left\langle\varphi_{1}(\omega )\right|\otimes\cdots\otimes\left|\varphi_{n}(\omega)\right\rangle\left\langle \varphi_{n}(\omega)\right|.\]
Here, \(\omega\) is standard notation in probability theory to denote the elementary outcome of a random variable. The unitary propagator corresponding to the quantum circuit is denoted \(U(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})\). The final projector is \(U(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})P_{\omega}(\mathbf{I}^{1},\ldots, \mathbf{I}^{n})U(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})^{\dagger}\). Averaging over \(N\) molecules (denoted by overline):
\[\overline{U(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})P_{\omega}( \mathbf{I}^{1},\ldots,\mathbf{I}^{n})U(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})^ {\dagger}}\] \[\qquad=U(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})\rho(\mathbf{I}^{1 },\ldots,\mathbf{I}^{n})U(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})^{\dagger} \equiv\rho^{\prime}\]
where
\[\rho(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})\equiv\overline{P_{\omega}(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})}=\sum_{\omega=1}^{N}p_{\omega}P_{\omega}(\mathbf{I}^{1}, \ldots,\mathbf{I}^{n})\]
is a statistical (density) operator and \(\sum_{\omega}p_{\omega}=1\). The readout of observable operator \(A\) is obtained as:
\[A^{(e_{1})}\equiv\langle A(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})\rangle= \text{Tr}[\rho^{\prime}(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})A(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})]\]
where \(e_{1}\) distinguishes this form of ensemble averaging. On the other hand, suppose we start with \(n\) ensembles, each described by a statistical operator, \(\rho_{i}(\mathbf{I}^{i})\), describing this initial ensemble averaging. Each ensemble contains \(N\) spins. The initial state is a product state \(\rho_{1}(\mathbf{I}^{1})\otimes\cdots\otimes\rho_{n}(\mathbf{I}^{n})\), where
\[\rho_{i}(\mathbf{I}_{i})=\sum_{\omega}p_{\omega,i}\left|\varphi_{i}(\omega) \right\rangle\left\langle\varphi_{i}(\omega)\right|,\quad i=1,\ldots,n.\]
The propagator for the quantum circuit is denoted \(V(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})\). (The notation \(V\) instead of \(U\) indicates that the circuit implementations may be different, depending on the particular physical implementation of the qubits.) Evolution of this product state leads to:
\[\rho^{\prime\prime}=V(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})\rho_{1}(\mathbf{ I}^{1})\otimes\cdots\otimes\rho_{n}(\mathbf{I}^{n})V(\mathbf{I}^{1},\ldots, \mathbf{I}^{n})^{\dagger}\]
The readout for this ensemble-averaging method, labeled \(e_{2}\), leads to:
\[A^{(e_{2})}\equiv\langle A(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})\rangle= \text{Tr}[\rho^{\prime\prime}(\mathbf{I}^{1},\ldots,\mathbf{I}^{n})A(\mathbf{ I}^{1},\ldots,\mathbf{I}^{n})].\]
As a consequence of the different orders of averaging, we note that even if \(U=V\), \(A^{(e_{1})}\) and \(A^{(e_{2})}\) do not generally give the same result. The averaging to obtain \(A^{(e_{1})}\) only requires specifying the probability distribution \(\{p_{\omega}\}_{\omega=1}^{N}\) whereas averaging for \(A^{(e_{2})}\) is described by a set \(i=1,\ldots,n\) of distributions \(\{p_{\omega,i}\}_{\omega=1}^{N},\sum_{\omega}p_{\omega,i}=1\).
Averaging over molecules before any computation is expected to yield different outcomes. Qubits occupying larger physical volumes would significantly relax the manufacturing requirements, as the placement of single-atom qubits in a solid lattice (or in a host molecule, for that matter) at precise locations and spacings has been challenging due to the lack of methods to implant point defects with good precision [12]. A second advantage is the robustness of the quantum operations with respect to environmental noise. A single spin undergoing decoherence leads to the loss of its quantum state, whereas the quantum state stored in ensembles may survive longer due to the inherent robustness of ensembles (see Appendix I.3).
Suppose we have a large ensemble (\(N\cdot n\gg 1\)) of nuclear spins \(\{\mathbf{I}^{i}\}\), where \(N\) is the number of localized spin ensembles and \(n\) is the number of spins per ensemble. We will call these mesoscopic spin ensembles E-qubits for reasons that will become clear shortly. To simplify the presentation we assume that \(I=1/2\) for nuclear spins; we will also assume that all spins in each localized ensemble have the same gyromagnetic ratio. This choice only affects some constants and scaling factors below, and does not lead to a loss of generality. The magnetic dipole interaction between \(N\cdot n\) spins cannot be used for quantum computing because of the impossibility of addressing each spin individually. Also, the exact couplings in atomic ensembles are not known, making it impossible to design precise gates. We consider: (1) two (\(N=2\)) localized ensembles \(A\) and \(B\) each containing \(n\) spins; (2) assume full polarization of all spins in the ensembles; (3) spins within each ensemble are decoupled. Spin ensembles are rarely discussed as candidate qubits because of their multilevel energy structure and high degeneracies, which differs from the structure of a qubit. If each ensemble admits a \(\mathfrak{su}(2)\)-like algebra, the ensembles can be viewed as qubits, even though they are in reality multilevel qudits (Fig. 1). The magnetic interaction between all pairs of spins is
\[\sum_{\begin{subarray}{c}i\in A,\\ j\in B\end{subarray}}\mathbf{I}^{i}\cdot\mathbf{D}_{ij}\cdot\mathbf{I}^{j}, \ \ \mathbf{D}_{ij}=-\frac{\mu_{0}}{4\pi}\frac{\gamma_{i}\gamma_{j}\hbar^{2}}{|\vec{r }_{ij}|^{3}}\left(3\frac{\vec{r}_{ij}\otimes\vec{r}_{ij}}{|\vec{r}_{ij}|^{2}} -\mathbb{1}_{3}\right) \tag{1}\]
where \(\vec{r}_{ij}\) is the vector connecting spin \(i\) from \(A\) to spin \(j\) from \(B\) (Fig. 1a), \(\mu_{0}\) is the vacuum permeability, and \(\gamma\) is the gyromagnetic ratio. If there are \(N\) such localized spin ensembles, each containing \(n\) spin-1/2 particles, the dimension of the spin algebra \(\mathfrak{su}(2^{Nn})\) is \(2^{2Nn}-1\). Clean quantum gates are still not possible due to the random distribution of \(\vec{r}_{ij}\) values. Moreover, this type of interaction over macroscopic distances is rarely considered due to the rapid \(1/r^{3}\) falloff of the dipolar interaction. However, in a specific cooperative geometry, clean gates are possible via cooperative couplings, as explained below.
Consider a geometry involving two spherical volumes \(A\) and \(B\) (Fig. 1a) whose centers are separated by a constant vector \(\vec{r}_{0}\) and the distance of each spin from the center of each region is much smaller than \(|\vec{r}_{0}|\), i.e., \(\delta_{A}\ll r_{0}\), \(\delta_{B}\ll r_{0}\). The internuclear vector \(\vec{r}_{ij}\) is nearly equal to \(\vec{r}_{0}\). Expanding \(|\vec{r}_{ij}|^{-3}\) in the small parameter \(\vec{\epsilon}=\vec{\delta}_{B}-\vec{\delta}_{A}\) about \(\vec{r}_{0}\) gives
\[\frac{1}{|\vec{r}_{0}+\vec{\epsilon}|^{3}}=\frac{1}{r_{0}^{3}}-3\frac{\vec{r}_ {0}\cdot\vec{\epsilon}}{r_{0}^{5}}+O(|\epsilon|^{2}). \tag{2}\]
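A quick numerical check of this expansion (with illustrative values of our own choosing):

```python
# Quick check of the first-order expansion in Eq. (2) for |eps| << |r0|.
import numpy as np

r0 = np.array([100.0, 0.0, 0.0])    # nm
eps = np.array([3.0, -2.0, 1.0])    # |eps| << |r0|

exact = 1.0 / np.linalg.norm(r0 + eps) ** 3
approx = 1.0 / np.linalg.norm(r0) ** 3 - 3.0 * np.dot(r0, eps) / np.linalg.norm(r0) ** 5
print(exact, approx, abs(exact - approx) / exact)   # relative error ~ O(|eps/r0|^2)
```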
The zeroth order term (i.e., neglecting terms of order \(|\epsilon/r_{0}|\) and higher) gives the effective Hamiltonian \(\mathcal{H}_{D}\)
\[\sum_{\begin{subarray}{c}i\in A,\\ j\in B\end{subarray}}\mathbf{I}^{i}\cdot\mathbf{D}_{AB}\cdot\mathbf{I}^{j},\ \ \ \mathbf{D}_{AB}=-\frac{\mu_{0}\gamma_{A}\gamma_{B}\hbar^{2}}{4\pi r_{0}^{3}}\left(3\frac{\vec{r}_{0}\otimes\vec{r}_{0}}{r_{0}^{2}}-\mathbb{1}_{3}\right),\]
where the coupling tensor is now constant. It is therefore independent of \(i\) and \(j\), the indices of summation. The Hamiltonian can be written as:
\[\mathcal{H}_{D}=\left(\sum_{i=1}^{n}\mathbf{I}^{A,i}\right)\cdot\mathbf{D}_{AB} \cdot\left(\sum_{j=1}^{n}\mathbf{I}^{B,j}\right)=\mathbf{I}^{A}\cdot\mathbf{D}_{ AB}\cdot\mathbf{I}^{B}, \tag{3}\]
where \(\mathbf{I}^{A}=\sum_{i=1}^{n}\mathbf{I}^{A,i}\) and \(\mathbf{I}^{B}=\sum_{j=1}^{n}\mathbf{I}^{B,j}\) are total angular momentum operators. We thus have a bilinear coupling \(\mathbf{I}^{A}\cdot\mathbf{D}_{AB}\cdot\mathbf{I}^{B}\) involving two large spins \(A\) and \(B\).
Also, since \([I^{A}_{\alpha},I^{A}_{\beta}]=i\epsilon_{\alpha\beta\gamma}I^{A}_{\gamma}\) and \([\mathbf{I}^{A},\mathbf{I}^{B}]=0\), the action of the total angular momentum operator on the composite \(\prod_{i=1}^{n}(2I^{i}+1)\) dimensional Hilbert space of a localized ensemble constitutes a representation of the \(\mathfrak{su}(2)\) Lie algebra
that is reducible. Due to this rotational symmetry that maps to that of qubits, ensembles \(A\) and \(B\) are examples of E-qubits. (The \(\mathfrak{su}(2)\) symmetry of large spins has been known since 1932 [13].) If we have \(N\) localized spin ensembles, each with \(n\) spins, the spin-Hamiltonian is the sum of Zeeman (static, control fields) and point dipolar interaction between the E-qubits,
\[\mathcal{H}_{Z}+\mathcal{H}_{D}=\sum_{Q=1}^{N}\mathbf{I}^{Q}\cdot\left(\vec{h }^{Q}(t)+\frac{1}{2}\sum_{P\neq Q}^{N}\mathbf{D}_{QP}\cdot\mathbf{I}^{P}\right) \tag{4}\]
where \(\vec{h}^{Q}=(h_{x}^{Q},h_{y}^{Q},h_{z}^{Q})\) is an external field. It has \(\otimes\mathfrak{su}(2)^{N}\simeq\mathfrak{su}(2^{N})\) symmetry. Even though E-qubits are multilevel systems, the Hamiltonian (4) of the \(nN\)-spin system is equivalent to an effective \(N\)-qubit Hamiltonian, which is controllable. Controllability would not be possible with the lower-symmetry form (1). The spin operator algebra is also much larger, with dimension \((2I+1)^{2Nn}-1\) for (1) vs \((2I+1)^{2N}-1\) for Eq. (4).
_Qubit state.-_ Each E-qubit occupies a mesoscopic volume and can therefore be addressed selectively using nanomagnets [14], nanowires, or similar technologies. Assuming \(I=1/2\), the density matrix of subsystem \(A\) containing \(n\) fully polarized spins \((\frac{1}{2}\mathbb{1}\pm I_{z})^{\otimes n}\), corresponding to \(\ket{I^{A},\pm I^{A}}\bra{I^{A},\pm I^{A}}\), in product operator notation is
\[\sum_{m=0}^{n}2^{m-n}(\pm 1)^{m}\!\!\sum_{i_{1}<\cdots<i_{m}}\!\!I_{z}^{i_{1}}\cdots I_{z}^{i_{m}},\]
where the \(m=0\) term is the identity contribution \(2^{-n}\mathbb{1}_{2^{n}}\).
Single-qubit gates can then be applied with standard rotation operators acting on each E-qubit's total angular momentum. This will work as long as the spins within each ensemble are decoupled from each other. The fidelity of quantum gates depends on the linewidth, which is influenced by couplings between spins in each E-qubit; to first order, the fidelity of a quantum gate depends linearly on the relaxation rate and the inverse of the gate time [17]. We discuss decoupling schemes below. Figure 2(a) demonstrates such rotations of spin-coherent states describing the E-qubit with \(n=5\) and \(N=2\). Initially polarized states such as \(|I^{A},I^{A}\rangle\) in the coupled spin basis can easily be rotated into (say) \(|I^{A},-I^{A}\rangle\) using standard operations.
The CNOT gate where the first E-qubit is the control and the second is the target can be implemented with the following sequence of unitaries:
\[\sqrt{i}R_{z}^{A}(\frac{\pi}{2})\cdot R_{z}^{B}(\frac{-\pi}{2})\cdot R_{x}^{B} (\frac{\pi}{2})\cdot U_{zz}(\frac{1}{2\|\mathsf{D}_{AB}\|})\cdot R_{y}^{B}( \frac{\pi}{2}),\]
where \(\|\mathsf{D}_{AB}\|\) is the norm of \(\mathsf{D}_{AB}\), \(U_{zz}(\frac{1}{2\|\mathsf{D}_{AB}\|})=\exp(i\pi I_{z}^{A}\otimes I_{z}^{B})\) describes evolution under an Ising (\(zz\))-type interaction derived from Eq. (4). Figure 2(c) indicates simulated time-evolution under the full \(U_{zz}\) dynamics for two qudits \(N=2\), with \(n=1,2,3,4,5\) spins, showing that the behavior is similar to the evolution of qubits, except that the evolution frequency scales with the number of spins in each ensemble. Figure 2 proves that the coupling between E-qubits scales linearly with \(n\). Although cases \(n>5\) (\(>10\) spins total) are difficult to simulate on an average classical computer, this linear relationship holds for any \(n\). We discuss below specific examples for much larger \(n\). With spin ensembles, storage of quantum information is inherently robust to noise if the spin-flips affect only a small fraction of the total moment. This is an advantage over single-atom qubits. We now discuss decoupling schemes.
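Before turning to decoupling, the following minimal numerical sketch (ours, with an arbitrary coupling constant rather than the dipolar \(\mathbf{D}_{AB}\)) illustrates the scaling just described: with ensemble \(A\) fully polarized and the \(B\) spins tipped into the transverse plane, the transverse magnetization of \(B\) oscillates under the Ising coupling at a frequency proportional to \(n\), with an amplitude that also grows with \(n\).

```python
# Minimal sketch (not the paper's simulation) of the linear scaling of the
# effective coupling with n: two ensembles of n spin-1/2 each, evolving under
# the Ising zz form of the inter-qubit coupling, with an arbitrary D = 1.
import numpy as np
from functools import reduce

def collective(n_total, indices, single):
    """Sum of `single` acting on each spin in `indices` of an n_total-spin register."""
    eye = np.eye(2)
    out = np.zeros((2 ** n_total, 2 ** n_total), dtype=complex)
    for i in indices:
        out += reduce(np.kron, [single if j == i else eye for j in range(n_total)])
    return out

sz = np.diag([0.5, -0.5])
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
D = 1.0                                           # arbitrary units

for n in (1, 2, 3):
    N = 2 * n
    IzA = collective(N, range(0, n), sz)
    IzB = collective(N, range(n, N), sz)
    IxB = collective(N, range(n, N), sx)
    H = D * IzA @ IzB                              # effective inter-qubit Hamiltonian

    up = np.array([1.0, 0.0])                      # A fully polarized along +z
    plus = np.array([1.0, 1.0]) / np.sqrt(2.0)     # B spins along +x
    psi0 = reduce(np.kron, [up] * n + [plus] * n)

    evals, evecs = np.linalg.eigh(H)
    ts = np.linspace(0.0, 80.0, 2000)
    sig = []
    for t in ts:
        psi = evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))
        sig.append(np.real(psi.conj() @ IxB @ psi))
    sig = np.array(sig)

    spec = np.abs(np.fft.rfft(sig - sig.mean()))
    omega = 2 * np.pi * np.fft.rfftfreq(len(ts), d=ts[1] - ts[0])
    print(n, omega[spec.argmax()], sig.max())      # frequency ~ D*n/2, amplitude ~ n/2
```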
_Solid-state nuclear spins.-_ In solids, the magnetic dipole interaction between nearest neighbors is strong. In contrast to the liquid state interactions, the spatial part of dipolar coupling, \((1-3\cos^{2}\theta_{ij})r_{ij}^{-3}\), is not averaged to zero as the result of molecular motion. Spin evolution under homonuclear dipolar interaction \(\sum_{i<j}D_{ij}(2I_{z}^{i}I_{z}^{j}-\frac{1}{2}(I_{+}^{i}I_{-}^{j}+I_{-}^{i}I_ {+}^{j}))\) transforms uncorrelated spin terms such as \(I_{x}^{i}\otimes I^{j}\) into correlated terms such as \(I_{y}^{i}\otimes I_{z}^{j}\). By connecting multiple spin pairs, a network of multispin correlated terms is generated that grows in size exponentially fast. These correlated terms are not directly observable and result in short \(T_{2}\) times, on the order of microseconds [18; 19; 20; 21].
To average out the dipolar interactions inside each E-qubit while preserving the inter-qubit interactions, well-known dynamic decoupling techniques can be applied to obtain the desired effective Hamiltonian. The idea is to either suppress unwanted interactions or shift them to a different frequency band. One may leverage solid-echo cycles such as MREV-8 [22], sequences based on the magic echo [23], or magic-angle spin-locking such as Lee-Goldburg (LG) decoupling [24]. Consider the polarization-inversion spin-exchange at the magic angle (PISEMA) experiment, which is a variant of the LG decoupling sequence. The homonuclear dipolar interaction is averaged to zero by locking the spins to an effective field \(\mathbf{B}_{\text{eff}}\) along the magic angle, \(\theta_{m}=\arccos(\sqrt{1/3})\). The spatial part of the dipolar interaction averages to zero in each E-qubit, as long as \(\|\mathbf{B}_{\text{eff}}\|\gg\|\omega_{D}\|\) (Fig. 3a). To understand the influence on the inter-qubit interaction, consider the magnetic moments of the spins in each qubit. As their magnetic moment vectors rotate around \(\mathbf{B}_{\text{eff}}\), the perpendicular component gains a time dependence and averages out over the cycle, while the parallel component survives. For all the spins with initial magnetic moment along the \(z\) axis, a portion of their initial magnetic moment remains after the application of the locking field, \(\tilde{I}_{z}^{\|,i}=I_{z}^{i}\sin\theta_{m}=0.82\times I_{z}^{i}\)[25]. This is a large scaling factor in comparison to rival decoupling sequences. Figure 3b shows the pulse sequence that combines these ideas for quantum control of two E-qubits. By applying the PISEMA sequence on both E-qubits simultaneously, we ensure that the intra-qubit interactions are averaged to zero. Selectivity in applying \(\mathbf{B}_{\text{eff}}\) (direction and amplitude) means that the effective inter-qubit interactions can be engineered. In particular, the effective magnetic moment in each qubit can be oriented in a different direction and with a different precession frequency, resulting in an inter-qubit interaction of the form \(\sin\left(\theta_{m}\right)^{2}\sum_{i}I_{z}^{A,i}\otimes\sum_{j}I_{z}^{B,j}\).
To get an idea of the effective coupling strength, consider the example of solid hydrogen (\(T<14.01\) K, density \(0.086\) g/cm\({}^{3}\)). The gyromagnetic ratio of \({}^{1}\)H is \(26.75\times 10^{7}\) rad/T/s. Consider an arrangement of spin ensembles, each confined to a spherical volume of radius \(r=30\) nm. The centers of the ensembles are placed \(r_{0}=100\) nm away from the centers
Figure 2: Demonstration of single- and two-qubit gates for E-qubits. Single qubit gates are applied with standard rotation operators; the control frequencies are independent of \(n\). a) Rabi oscillations for the \(n=5\) with \(R_{x}^{A}(\theta)\) applied to E-qubit A. The vertical axis shows the amplitude of observed signal resulting from projection of the density matrix on the \(X\), \(Y\), and \(Z\) axes. Spins are chosen at random locations inside two spheres with \(r_{A}=r_{B}=30\) nm and \(r_{0}=100\) nm. b) Evolution under the Ising \(zz\) interaction needed for the entangling CNOT gate, with \(n=1-5\). Black/blue dots project the evolution of the E-qubit B with full/effective Hamiltonian, Eq. (1)/Eq. (3), on the vertical axis. c) Fourier transform of signal from the E-qubit B evolving with the full Hamiltonian, blue points from panel b, show a clear linear dependency of the coupling frequency to \(n\). The signal amplitude also grows with \(n\) as expected.
of their nearest neighbors. There are \(5.9\times 10^{6}\) spins in a sphere of solid hydrogen, leading to \(D_{AB}=7.51\times 10^{-4}\) rad/s. The magnetic field seen by a spin in sphere \(A\) due to the field of spins in sphere \(B\) is \(104\)\(\mu\)T. This corresponds to a precession frequency of 4.4 kHz. Entangling gates between the two ensembles can therefore be executed in 227 \(\mu\)s. This proposed implementation is limited by the efficiency of the control sequence in removing intra-qubit interactions. The fidelity of the two-qubit gate is mainly determined by the robustness of two-qubit unitary interactions such as \(zz\). Figure 3d shows the effect of residual intra-qubit interaction in the implementation of a macroscopic Ising \(zz\) interaction.
_Liquid state or gas phase.-_ In the gas or liquid state Eq. (4) automatically takes the form of a mesoscopic Ising (\(zz\))-type interaction. In high magnetic fields, dipolar interactions between nearby molecules vanish due to diffusion, i.e., \(\left<3\cos^{2}\theta-1\right>_{S^{2}}=0\), leading to intra-qubit decoupling. However, beyond the diffusion length, the magnetic dipole interaction does not vanish, leading to a nonzero mesoscopic inter-qubit dipolar interaction. The end result is an Ising \(zz\) form \(\mathcal{H}_{AB}\propto I_{z}^{A}I_{z}^{B}\)[26; 27; 28; 29], where the proportionality constant depends only on the time-averaged internuclear vector connecting pairs of spins between each ensemble. In spite of molecular motions, this effective mesoscopic long-range interaction is static in the geometry described here. For spherical volumes (\(r\sim 30\) nm) filled with liquid hydrogen, \(r\) is less than the diffusion length of H\({}_{2}\) molecules on the timescale of an NMR experiment, so the dipolar interaction is averaged out within each sphere. The effective coupling strength for inter-qubit interactions when \(r_{0}=100\) nm is 3.6 kHz. This is to be contrasted with the long relaxation times (\(>\) 1 s) of nuclear spins in liquids.
_Nuclear-spin polarization.-_ In the initialization step all spins must be fully polarized, which is a non-trivial task for nuclear spins. Depending on the physical implementation of the E-qubits, hyperpolarization techniques have been developed which may be applicable including dynamic nuclear polarization [30], spin exchange optical pumping [31] and parahydrogen induced polarization [32]. We also note that nuclear ferromagnetic ordering has been observed at nanokelvin temperatures [33; 34; 35; 36; 37].
_Molecular magnets.-_ Moving on from nuclear spins to discussing electronic spins we switch our notation from \(\mathbf{I}\) to \(\mathbf{S}\). Molecular magnets are high-spin clusters in a generally anisotropic crystalline structure [38]. They are modeled as giant spins [39]; Mn\({}_{25}\) has \(S=\frac{51}{2}\)[40] whereas Chen et al. [41] reported a giant spin ground state of magnitude \(S=91\). A \(T_{2}\) time of 31 ms was recently demonstrated for \({}^{171}\)Yb\({}^{3+}\) at 1.2 K [42]. They are possible building blocks for the physical implementation of quantum processors. A realization of Grover's quantum search algorithm was achieved by coherent manipulation of a TbPc\({}_{2}\) molecular magnet [43]. Unlike nuclear spins, nearly full thermal polarization of electron spins, \(\langle S_{z}\rangle=\text{Tr}(S_{z}^{\dagger}e^{-\beta\mathcal{H}})/\text{ Tr}(e^{-\beta\mathcal{H}})\approx|S|\), \(\mathcal{H}=D[S_{z}^{2}-\frac{1}{3}S(S+1)]+E[S_{x}^{2}-S_{y}^{2}]-g\mu_{B} \mathbf{S}\cdot\mathbf{B}\), can be asymptotically approached at high magnetic fields and low temperatures. As an example, Gd\({}^{3+}\) ions in GdW\({}_{30}\) complexes [44] (\(D=1281\) MHz, \(E=294\) MHz, \(S=\frac{7}{2}\), \(g=2\)) can reach 95% polarization at 3 T and 2 K. The crystal-field (CF) term is sufficiently weak (for GdW\({}_{30}\), 3% of the Zeeman term) and does not interfere significantly with the spin dynamics. Weak CF interactions are a general feature of rare-earth ions, stemming from the isolated nature of the \(4f\) electrons, making it possible to treat them like nearly free paramagnetic ions (unlike transition-metal ions whose CF splittings are much larger [45]). Other methods could be considered, such as chirality-induced spin selectivity (CISS), whose helical electrons are fully polarized at high temperatures [46].
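As a rough consistency check of the quoted polarization (our own estimate, not a calculation from Ref. [44]), one can diagonalize the spin Hamiltonian above with the quoted parameters and the field along \(z\):

```python
# Rough estimate of the thermal polarization <Sz>/S of Gd3+ in GdW30 at
# B = 3 T, T = 2 K, using the quoted S = 7/2, D = 1281 MHz, E = 294 MHz, g = 2.
import numpy as np

h, kB, muB = 6.62607e-34, 1.380649e-23, 9.2740e-24     # SI units
S = 3.5
dim = int(2 * S + 1)
m = np.arange(S, -S - 1, -1)                           # m = S, S-1, ..., -S

Sz = np.diag(m).astype(complex)
Sp = np.zeros((dim, dim), dtype=complex)
for i in range(1, dim):                                # <m+1|S+|m> = sqrt(S(S+1) - m(m+1))
    Sp[i - 1, i] = np.sqrt(S * (S + 1) - m[i] * (m[i] + 1))
Sx, Sy = (Sp + Sp.conj().T) / 2, (Sp - Sp.conj().T) / 2j

D, E, g, B, T = 1281e6 * h, 294e6 * h, 2.0, 3.0, 2.0
H = (D * (Sz @ Sz - S * (S + 1) / 3 * np.eye(dim))
     + E * (Sx @ Sx - Sy @ Sy)
     - g * muB * B * Sz)                               # field along z

evals, evecs = np.linalg.eigh(H)
p = np.exp(-(evals - evals.min()) / (kB * T))
p /= p.sum()
Sz_thermal = np.real(np.sum(p * np.diag(evecs.conj().T @ Sz @ evecs)))
print(Sz_thermal / S)                                  # ~ 0.95
```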
The methods of the previous section can be extended to ensembles of large spins (\(S>1/2\)) provided that the algebra of large spins is used. Molecular magnets are normally treated as multipole tensors, \(\mathscr{Y}^{(k)q}(\mathbf{S})\) where \(0\leq k\leq 2S\), \(-k\leq q\leq k\), in the EPR literature [45]. A full quantum treatment of multipoles (e.g., Ref. [11]) is presented in Appendices I.1 and I.2. There, we demonstrate the creation of one- and two-qubit gates involving arbitrary spins. In this context, it is worth reviewing the well-known relationship from angular momentum theory between coupled and uncoupled representations. It is the basis that gives rise to large composite electronic spins. A molecular magnet is made up of \(n\) magnetic atoms each with spin \(\{\mathbf{s}^{i}\}\) (\(i=1,\ldots,n\)). The total spin is \(\mathbf{S}=\sum_{i=1}^{n}\mathbf{s}^{i}\). The Zeeman interaction with an applied magnetic field \(\mathbf{B}(t)\) is \(\mu_{B}\mathbf{B}(t)\cdot\sum_{i=1}^{n}\mathbf{g}_{i}\cdot\mathbf{s}^{i}\), where \(\mu_{B}\) is the Bohr magneton and \(\{\mathbf{g}_{i}\}\) are \(g\)-tensors. Using the Wigner-Eckart theorem
\[\langle\alpha^{\prime},j^{\prime}m^{\prime}|\mathscr{Y}^{(k)q}|\alpha,jm\rangle =\langle jk;mq|jk;j^{\prime}m^{\prime}\rangle\,\frac{\langle\alpha^{ \prime}j^{\prime}||\mathscr{Y}^{(k)}||\alpha j\rangle}{\sqrt{2j^{\prime}+1}}\]
Figure 3: Quantum operations with solid state nuclear-spin ensemble qubits. a) In the LG spin-locking experiment, an effective field along the magic angle is applied to remove the dipolar interaction inside each E-qubit. b) Decoupling sequence for removing the intra-qudit dipolar interaction in E-qubits \(A\) and \(B\), while preserving the \(zz\) interaction between them. c) Proposed experimental setup for nuclear spins. The radius of each sphere is 30 nm and they are 100 nm apart. d) Two-qubit entangling gate between two E-qubits. This experiment is limited by the efficiency of control sequence in removing the intra-qubit interaction. Residual interactions larger than \(1\%\) have a devastating effect on the fidelity of the two-qubit gate. Average gate fidelity reduces linearly with the linewidth of spins in E-qubits.
applied to vectors:
\[\langle\alpha^{\prime},j^{\prime}m^{\prime}|V^{(1)q}|\alpha,jm\rangle=\langle jm^{ \prime}|J^{(1)q}|jm\rangle\,\frac{\langle\alpha^{\prime}j^{\prime}|\mathbf{J} \cdot\mathbf{V}|\alpha j\rangle}{\hbar^{2}j(j+1)},\]
where \(\mathscr{Y}^{(k)q}\) are components of an irreducible tensor \(\mathscr{Y}^{(k)}\), \(\langle jk;mq|jk;j^{\prime}m^{\prime}\rangle\) is a Clebsch-Gordan coefficient and \(\langle\alpha^{\prime}j^{\prime}||\mathscr{Y}^{(k)}||\alpha j\rangle\) is a reduced matrix element, it is customary to write this interaction as \(\mu_{B}\mathbf{B}(t)\cdot\mathbf{g}\cdot\mathbf{S}\). Here, \(\mathbf{S}\) is the total spin of the molecular magnet and \(\mathbf{g}=\sum_{i}\mathbf{g}_{i}c_{i}\), with \(c_{i}=\langle\alpha^{\prime}S|\mathbf{S}\cdot\mathbf{s}^{i}|\alpha S\rangle/\hbar^{2}S(S+1)\). This is certainly the case when only the multiplet with maximum spin \(S\) is populated. Inside this multiplet, application of this theorem for \(\mathbf{V}=\mathbf{s}^{i}\), \(\mathbf{J}=\mathbf{S}\), \(|jm\rangle\equiv|SM\rangle\), enables replacing \(\mathbf{s}^{i}\) by \(c_{i}\mathbf{S}\), where \(c_{i}\) is a numerical constant, i.e., \(\pi\mathbf{s}^{i}\pi=c_{i}\pi\mathbf{S}\pi\), where \(\pi\) is the projection operator onto the multiplet of highest spin allowed in the ground state of the molecular magnet, i.e. \(\pi=\sum_{M=-S}^{S}|SM\rangle\,\langle SM|\). The numerical constants \(c_{i}\) are determined by the nature of the exchange interactions (e.g., ferro- vs antiferromagnetic) between the constituent spins.
_Solid State Electronic Spins.-_ Solid state systems such as rare-earth ion or transition-metal-doped semiconductors could be a potential platform for E-qubits. In the dilute limit, spins are weakly interacting and residual intra-qubit couplings can be addressed using decoupling schemes as discussed earlier. In the dense limit, however, ions interact strongly. Fortunately, uncoupling may be possible with the use of co-dopants or materials engineering. Long decoherence times of Er\({}^{3+}\) ions in the presence of oxygen co-dopant have been observed in the system Er-Si [47; 48; 49]; we note that the system Er:YSO is also of interest. For Er\({}^{3+}\) the ZFS is smaller than the Zeeman splitting from Tesla fields, making it possible to polarize electron spins above \(>\) 90%. Weak ZFS are a general feature of rare-earth ions.
_Conclusion.-_ The design requirements of quantum processors based on large collective spins (ensembles, molecules) may be less stringent than those based on individual atomic spins. A scalable implementation of the latter would require precise placement of atomic defects in the lattice with atomic resolution, something that is not currently feasible using foundry techniques. Also, once the single atom decays, the entire quantum state is lost. That is not the case for ensembles (E-qubits), where survival of the fittest prevails, so-to-speak. Moreover, E-qubits can be selectively addressed through spatial degrees of freedom, rather than frequency (spectral) selectivity, providing a sensible path towards scalability. To date, a prescription for implementing gates between collective spins has been lacking. Herein we examined the feasibility of using direct macroscopic magnetic dipolar interactions for the implementation of universal gates between E-qubits. We have shown how to create coherent evolution where each E-qubit behaves isomorphically to \(\mathfrak{su}(2)\). Under conditions of intra-qubit decoupling and effective mesoscopic Ising \(zz\) interactions, possible implementations could include nuclear-spin ensembles in the solid, liquid and gas phases, as well as giant electronic spins from molecular magnets (including, possibly, ensembles of molecular magnets or rare-earth ion dopants). Previous implementations of ensemble qubits such as neutral atoms rely on interactions with an external field to apply quantum gates, where the coupling scales with \(\sqrt{n}\)[50; 51] as compared to \(n\), a clear advantage in our proposal. High-fidelity control and readout methods are, of course, critical to the success of such a proposal. To this end, we note the possibility of electric-field control [52], nanomagnet control [14], nanowire control [53] and nanosquid readout [54].
## I Appendix
### Quantum Treatment of Single- and Two-Qubit Gates For Arbitrary Spin
Here we give a full quantum treatment of gates for maximally polarized states involving single-spin states from arbitrary spins. The formalism of state multipoles \(\mathscr{Y}^{(k)q}(\mathbf{S})\)[11] is employed. State multipoles are defined in terms of the angular momentum states \(|SM\rangle\) by Eq. (11) below. Sets of arbitrary spins are best treated using the multispin formalism of Sanctuary [11]. The multispin tensors for \(N\) spins, \(T_{\{K\}}^{(k)q}(k_{1},k_{2},\ldots,k_{N})\), are constructed from the \(\mathscr{Y}\)'s as follows:
\[T_{\{K\}}^{(k)q}(k_{1},k_{2},\ldots,k_{N})=\left[\prod_{i=1}^{N} (S^{i})^{-1/2}\right]\] \[\times\sum_{q_{1},\ldots,q_{N}}\langle k_{1}q_{1}k_{2}q_{2}\ldots k _{N}q_{N}|(k_{1}k_{2}\ldots k_{N})\{K\}kq\rangle\] \[\times\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{1})\otimes\mathscr{Y }^{(k_{2})q_{2}}(\mathbf{S}^{2})\otimes\cdots\otimes\mathscr{Y}^{(k_{N})q_{N} }(\mathbf{S}^{N}), \tag{5}\]
where \(\langle k_{1}q_{1}k_{2}q_{2}\ldots k_{N}q_{N}|(k_{1}k_{2}\ldots k_{N})\{K\}kq\rangle\) is a generalized Clebsch-Gordan coefficient [11], \((S^{i})=(2S^{i}+1)\) and \(\{K\}=K_{1},K_{2},\ldots,K_{N-2}\) is the angular momentum coupling scheme (set of intermediate values). The adjoint is:
\[T_{\{K\}}^{(k)q}(k_{1},k_{2},\ldots,k_{N})^{\dagger}=(-1)^{k-q}T_{\{K\}}^{(k)- q}(k_{1},k_{2},\ldots,k_{N}). \tag{6}\]
These operators are orthonormal in the following sense:
\[\text{Tr}\left[T_{\{K\}}^{(k)q}(k_{1},k_{2},\ldots,k_{N})^{\dagger }T_{\{K^{\prime}\}}^{(k^{\prime})q^{\prime}}(k^{\prime}_{1},k^{\prime}_{2}, \ldots,k^{\prime}_{N})\right]\\ =\delta_{kk^{\prime}}\delta_{qq^{\prime}}\delta_{\{K\},\{K^{\prime }\}}\prod_{i=1}^{N}\delta_{k_{i},k^{\prime}_{i}}, \tag{7}\]
where \(\delta_{\{K\},\{K^{\prime}\}}=\delta_{K_{1},K^{\prime}_{1}}\delta_{K_{2},K^{ \prime}_{2}}\ldots\delta_{K_{N},K^{\prime}_{N}}\). In this formalism a scalar operator on the \(n\) spins can be written \(\phi=\sum_{k,\{V\}}\phi_{\{V\}}^{(k)}\odot^{k}T_{\{V\}}^{(k)}\), where \(T_{\{V\}}^{(k)}\) forms a basis for the irreducible representation of the rotation group, \(\phi_{\{V\}}^{(k)}\) are \(k\)-th rank tensor coefficients and \(\odot^{k}\) indicates a \(k\)-th order contraction of the two tensors [11]. In this formalism the rotational invariance can be exploited but not when the expansion \(\phi=\sum_{\alpha,\beta}|\alpha\rangle\,\langle\alpha|\,\phi|\beta\rangle\, \langle\beta|\) is used. Here the basis states \(|\alpha\rangle\), which form a complete set, are usually taken to be the product state of the \(n\) spins. Such a treatment with product states can be quite involved because the whole series must be used [11].
#### iv.1.1 Single-Qubit Gates
It is clear by inspection of Eq. (5) that single-spin operators (\(k=1\), \(k_{i}=1\), \(k_{j}=0\), \(j\neq i\) for some \(i=1,\ldots,N\)), i.e.,
\[T_{\{K\}}^{(1)q}(00\ldots 1_{i}\ldots 00)\propto\mathscr{Y}^{(1)q}(\mathbf{S}^{i })\propto\mathbf{S}^{i}\]
in the spherical basis, since for single-spin operators, \(q=q_{i}\), according to the generalized Clebsch-Gordan coefficient \(\langle k_{1}q_{1}k_{2}q_{2}\ldots k_{N}q_{N}|(k_{1}k_{2}\ldots k_{N})\{K\}{kq \}\rangle\). By construction, these single-spin operators obey the usual angular momentum commutation relations:
\[[S_{\pm},\mathscr{Y}^{(k)q}(\mathbf{S})]=\sqrt{k(k+1)-q(q\pm 1)}\,\mathscr{Y}^{(k)q\pm 1}(\mathbf{S}),\qquad[S_{z},\mathscr{Y}^{(k)q}(\mathbf{S})]=q\,\mathscr{Y}^{(k)q}(\mathbf{S}). \tag{8}\]
Since these commutation rules hold, as they would for any angular momentum operator, any Euler rotation can be applied using multispin tensors of the form \(T_{\{K\}}^{(1)q}(00\ldots 1_{i}\ldots 00)\), where \(1_{i}\) indicates the component \(k_{i}=1\); therefore, single-qubit gates can be readily achieved.
The density matrix for \(N\) spins can be expanded in the multispin basis:
\[\rho(t)=\frac{1}{\prod_{i=1}^{N}(2S^{i}+1)}\Bigl{[}\sum_{kq\alpha}\phi_{q}^{k }(\alpha,t)T^{(k)q}(\alpha)\Bigr{]} \tag{9}\]
where \(\alpha\) is shorthand for all quantum numbers except \(k,q\), i.e., the coupling scheme \(\{K\}\) as well as \(k_{1},\ldots,k_{N}\). In light of (6) and (7) the functions \(\phi_{q}^{k}(\alpha,t)\) are defined as:
\[\phi_{q}^{k}(\alpha,t)=\text{Tr}[T^{(k)q}(\alpha)^{\dagger}\rho(t)]\prod_{i=1 }^{N}(2S^{i}+1).\]
Each term in the density matrix is of the form (5), which in turn is a summation of products \(\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(k_{2})q_{2}}( \mathbf{S}^{2})\otimes\cdots\otimes\mathscr{Y}^{(k_{N})q_{N}}(\mathbf{S}^{N})\). A single-qubit gate amounts to applying a unitary constructed from the generator \(T_{\{K\}}^{(1)q}(00\ldots 1_{i}\ldots 00)\propto\mathbf{S}^{i}\), which commutes with all tensors \(\mathscr{Y}^{(k_{j})q_{j}}(\mathbf{S}^{j})\) in the product except \(j=i\). Such rotations of angular momenta \(\mathbf{S}^{i}\) are analogous to the spin 1/2 case.
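The following numerical sketch (ours, with \(\hbar=1\) and \(S=3/2\) chosen purely for illustration) builds the state multipoles from the matrix elements of Eq. (11) below and checks their normalization and the commutation relations (8):

```python
# Numerical check of the state multipoles Y^{(k)q}(S): matrix elements from
# Eq. (11), normalization Tr[Y^{(k)q+} Y^{(k)q}] = 2S+1 (consistent with Eq. (7)
# once the (2S+1)^{-1/2} prefactors of Eq. (5) are included), and Eq. (8).
import numpy as np
from sympy import Rational
from sympy.physics.wigner import wigner_3j

S = Rational(3, 2)                      # spin chosen for illustration
dim = int(2 * S + 1)
Ms = [S - j for j in range(dim)]        # M = S, S-1, ..., -S

def multipole(k, q):
    """Matrix of Y^{(k)q}(S) in the |S M> basis, Eq. (11)."""
    Y = np.zeros((dim, dim), dtype=complex)
    for a, M in enumerate(Ms):
        for b, Mp in enumerate(Ms):
            w3j = float(wigner_3j(S, k, S, -M, q, Mp))
            Y[a, b] = (1j ** k) * np.sqrt(dim * (2 * k + 1)) * (-1) ** float(S - M) * w3j
    return Y

# spin operators in the same basis (hbar = 1)
Sz = np.diag([float(M) for M in Ms])
Sp = np.zeros((dim, dim))
for b in range(1, dim):                 # <M+1|S+|M> = sqrt(S(S+1) - M(M+1))
    M = float(Ms[b])
    Sp[b - 1, b] = np.sqrt(float(S * (S + 1)) - M * (M + 1))

# normalization: Tr[Y^{(k)q+} Y^{(k')q'}] = (2S+1) delta_kk' delta_qq'
Y11, Y20 = multipole(1, 1), multipole(2, 0)
print(np.trace(Y11.conj().T @ Y11).real)        # 2S+1 = 4
print(abs(np.trace(Y11.conj().T @ Y20)))        # 0

# commutation relations, Eq. (8)
k, q = 2, 1
Y = multipole(k, q)
print(np.allclose(Sz @ Y - Y @ Sz, q * Y))
print(np.allclose(Sp @ Y - Y @ Sp,
                  np.sqrt(k * (k + 1) - q * (q + 1)) * multipole(k, q + 1)))
```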
#### iv.1.2 Two-Qubit Gates
In the expression for the density matrix (9), a two-qubit gate can affect any term, as in the case of single-qubit gates. However, in a product operator \(\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(k_{2})q_{2}}( \mathbf{S}^{2})\otimes\cdots\otimes\mathscr{Y}^{(k_{N})q_{N}}(\mathbf{S}^{N})\) we need only consider the two operators (say 1 and 2) affected by the bilinear coupling Hamiltonian:
\[S_{z}^{1}S_{z}^{2}=\left\{\frac{S^{1}(S^{1}+1)S^{2}(S^{2}+1)}{3}\right\}^{1/2 }\mathscr{Y}^{(1)0}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(1)0}(\mathbf{S}^{2}). \tag{10}\]
This Hamiltonian will affect \(\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(k_{2})q_{2}}( \mathbf{S}^{2})\) in the product operator. In the multipole formalism, each term in the product has a matrix representation:
\[\mathscr{Y}^{(k_{i})q_{i}}(\mathbf{S}^{i})=(i)^{k_{i}}[(S^{i})(k_{i})]^{1/2}\\ \times\sum_{M_{i}M_{i}^{\prime}}(-1)^{S^{i}-M_{i}}\begin{pmatrix}S^{i}&k_{i}&S^{i}\\ -M_{i}&q_{i}&M_{i}^{\prime}\end{pmatrix}|S^{i}M_{i}\rangle\,\langle S^{i}M_{i}^{\prime}| \tag{11}\]
where \((S^{i})\equiv(2S^{i}+1)\) and \((k_{i})\equiv(2k_{i}+1)\). For the axial component (\(q_{i}=0\)) the selection rule from the Wigner \(3j\) symbol implies that \(M_{i}=M_{i}^{\prime}\), so the terms are all diagonal in the basis of projection operators \(|S^{i}M_{i}\rangle\,\langle S^{i}M_{i}|\). By extension to the tensor product space, \(\mathscr{Y}^{(1)0}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(1)0}(\mathbf{S}^{2})\) is diagonal in the tensor product basis \(|S^{1}M_{1}\rangle\,\langle S^{1}M_{1}|\otimes|S^{2}M_{2}\rangle\,\langle S^{2}M_{2}|\). We can therefore exponentiate the operator \(\mathscr{Y}^{(1)0}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(1)0}(\mathbf{S}^{2})\) as follows:
\[\exp\left(iDt\,\mathscr{Y}^{(1)0}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(1)0}(\mathbf{S}^{2})\right)=\sum_{M_{1},M_{2}}e^{-iDt[(S^{1})(1)(S^{2})(1)]^{1/2}(-1)^{S^{1}-M_{1}+S^{2}-M_{2}}\bigl{(}\begin{smallmatrix}S^{1}&1&S^{1}\\ -M_{1}&0&M_{1}\end{smallmatrix}\bigr{)}\bigl{(}\begin{smallmatrix}S^{2}&1&S^{2}\\ -M_{2}&0&M_{2}\end{smallmatrix}\bigr{)}\\ \times|S^{1}M_{1}\rangle\,\langle S^{1}M_{1}|\otimes|S^{2}M_{2}\rangle\,\langle S^{2}M_{2}|\]
where \(D\) is a coupling strength that also absorbs the numerical coefficient of Eq. (10) for convenience. Evolution of a product state such as \(\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(k_{2})q_{2}}( \mathbf{S}^{2})\) gives:
\[\exp\left(iDt\mathscr{Y}^{(1)0}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(1) 0}(\mathbf{S}^{2})\right)\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{1})\otimes \mathscr{Y}^{(k_{2})q_{2}}(\mathbf{S}^{2})\exp\left(-iDt\mathscr{Y}^{(1)0}( \mathbf{S}^{1})\otimes\mathscr{Y}^{(1)0}(\mathbf{S}^{2})\right)\\ =\sum_{M_{1},M_{2}}\sum_{M_{1}^{\prime},M_{2}^{\prime}}e^{-iDt[(S^ {1})(1)(S^{2})(1)]^{1/2}(-1)^{S^{1}-M_{1}+S^{2}-M_{2}}\big{(}\begin{smallmatrix}S ^{1}&1&S^{1}\\ -M_{1}&0&M_{1}\end{smallmatrix}\big{)}\big{(}\begin{smallmatrix}S^{2}&1&S^{2}\\ -M_{2}&0&M_{2}\end{smallmatrix}\big{)}\\ \times e^{iDt[(S^{1})(1)(S^{2})(1)]^{1/2}(-1)^{S^{1}-M_{1}^{ \prime}+S^{2}-M_{2}^{\prime}}\big{(}\begin{smallmatrix}S^{1}&1&S^{1}\\ -M_{1}^{\prime}&0&M_{1}^{\prime}\end{smallmatrix}\big{)}\Big{(}\begin{smallmatrix} S^{2}&1&S^{2}\\ -M_{2}^{\prime}&0&M_{2}\end{smallmatrix}\Big{)}\\ \times|S^{1}M_{1}\rangle\,\langle S^{1}M_{1}|\,\mathscr{Y}^{(k_{ 1})q_{1}}(\mathbf{S}^{1})\,|S^{1}M_{1}^{\prime}\rangle\,\langle S^{1}M_{1}^{ \prime}|\otimes|S^{2}M_{2}\rangle\,\langle S^{2}M_{2}|\,\mathscr{Y}^{(k_{2})q _{2}}(\mathbf{S}^{2})\,|S^{2}M_{2}^{\prime}\rangle\,\langle S^{2}M_{2}^{\prime}| \tag{12}\]
Expressing the tensors as projectors using Eq. (11) we get:
\[\sigma(t)=(i)^{k_{1}+k_{2}}[(S^{1})(k_{1})(S^{2})(k_{2})]^{1/2} \sum_{M_{1},M_{2}}\sum_{M_{1}^{\prime},M_{2}^{\prime}}(-1)^{S^{1}-M_{1}+S^{2}- M_{2}}\\ \times e^{-iDt[(S^{1})(1)(S^{2})(1)]^{1/2}(-1)^{S^{1}-M_{1}+S^{2}- M_{2}}\big{(}\begin{smallmatrix}S^{1}&1&S^{1}\\ -M_{1}&0&M_{1}\end{smallmatrix}\big{)}\big{(}\begin{smallmatrix}S^{2}&1&S^{2} \\ -M_{2}&0&M_{2}\end{smallmatrix}\big{)}\\ \times e^{iDt[(S^{1})(1)(S^{2})(1)]^{1/2}(-1)^{S^{1}-M_{1}^{ \prime}+S^{2}-M_{2}^{\prime}}\big{(}\begin{smallmatrix}S^{1}&1&S^{1}\\ -M_{1}^{\prime}&0&M_{1}^{\prime}\end{smallmatrix}\big{)}\Big{(}\begin{smallmatrix} S^{2}&1&S^{2}\\ -M_{2}^{\prime}&0&M_{2}^{\prime}\end{smallmatrix}\Big{)}\\ \times\begin{pmatrix}S^{1}&k_{1}&S^{1}\\ -M_{1}&q_{1}&M_{1}^{\prime}\end{pmatrix}\begin{pmatrix}S^{2}&k_{2}&S^{2}\\ -M_{2}&q_{2}&M_{2}^{\prime}\end{pmatrix}|S^{1}M_{1}\rangle\,\langle S^{1}M_{1}^ {\prime}|\otimes|S^{2}M_{2}\rangle\,\langle S^{2}M_{2}^{\prime}| \tag{13}\]
Measurement of a bilinear spin operator \(\mathscr{Y}^{(k_{3})q_{3}}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(k_{4})q_{4}}(\mathbf{S}^{2})\) is obtained by computing the trace \(\text{Tr}\left[(\mathscr{Y}^{(k_{3})q_{3}}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(k_{4})q_{4}}(\mathbf{S}^{2}))^{\dagger}\rho(t)\right]\). Expanding \(\mathscr{Y}^{(k_{3})q_{3}}(\mathbf{S}^{1})\) and \(\mathscr{Y}^{(k_{4})q_{4}}(\mathbf{S}^{2})\) in projectors (using Eq. (11)) and computing the trace using \(\text{Tr}(\cdot)=\sum_{ij}\,\langle S^{1}M_{i}|\otimes\langle S^{2}M_{j}|\,(\cdot)\,|S^{1}M_{i}\rangle\otimes|S^{2}M_{j}\rangle\), we obtain the expression below. (We have omitted the summation over \(S^{1}\) and \(S^{2}\); all other terms different from \(S^{1},S^{2}\) would vanish since \(\rho(t)\) only contains terms with \(S^{1},S^{2}\).)
\[\phi(t)=[(S^{1})(k_{1})(S^{2})(k_{2})]\sum_{M_{i},M_{j}}\sum_{M_{ 1},M_{2}}(-1)^{S^{1}-M_{1}+S^{2}-M_{2}}e^{-iDt[(S^{1})(1)(S^{2})(1)]^{1/2}(-1)^ {S^{1}-M_{1}+S^{2}-M_{2}}\big{(}\begin{smallmatrix}S^{1}&1&S^{1}\\ -M_{1}&0&M_{1}\end{smallmatrix}\big{)}\big{(}\begin{smallmatrix}S^{2}&1&S^{2} \\ -M_{2}&0&M_{2}\end{smallmatrix}\big{)}\\ \times e^{iDt[(S^{1})(1)(S^{2})(1)]^{1/2}(-1)^{S^{1}-M_{i}+S^{2}- M_{j}}\big{(}\begin{smallmatrix}S^{1}&1&S^{1}\\ -M_{1}&0&M_{1}\end{smallmatrix}\big{)}\Big{(}\begin{smallmatrix}S^{2}&k_{4}&S^{2} \\ -M_{2}&q_{4}&M_{j}\end{smallmatrix}\Big{)} \tag{14}\]
This formula gives the time-evolution for any initial state described by the quantum numbers \(S_{1},S_{2},k_{1},q_{1},k_{2},q_{2}\). The special case of a single-spin operator is obtained (for example) by setting \(k_{2}=0,q_{2}=0\). Suppose that the initial state is \(\mathscr{Y}^{(1)1}(\mathbf{S}^{1})\) (\(k_{1}=1\), \(q_{1}=1\), \(k_{2}=0\), \(q_{2}=0\)) and we are interested in the amount of \(\mathscr{Y}^{(1)1}(\mathbf{S}^{1})\) (\(k_{3}=1\), \(q_{3}=1\), \(k_{4}=0\), \(q_{4}=0\)). The product of Wigner \(3j\) symbols is nonzero provided that the triangle rules are obeyed (i.e., \(0\leq k_{1},k_{3}\leq 2S^{1}\), \(0\leq k_{2},k_{4}\leq 2S^{2}\)). The Wigner \(3j\) symbols lead to the following selection rules: \(M_{2}=M_{j}\) and \(M_{1}=M_{i}+1\), causing the sums over \(M_{i}\) and \(M_{j}\) to collapse. Inside the argument of the exponential, we may simplify the term:
\[=\frac{(-1)^{M_{1}+S^{1}+M_{2}+S^{2}}M_{2}}{\sqrt{[S^{1}(S^{1}+1)(2S ^{1}+1)][S^{2}(S^{2}+1)(2S^{2}+1)]}} \tag{15}\]
provided that \(-S^{1}\leq M_{1}-1\leq S^{1}\) and \(-S^{2}\leq M_{2}\leq S^{2}\).
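As a side check, the closed form for \(3j\) symbols of this type can be verified symbolically; the sketch below uses SymPy, and the spin values are arbitrary examples not tied to the derivation.

```python
from sympy import Integer, Rational, sqrt, simplify
from sympy.physics.wigner import wigner_3j

# Closed form behind the simplification:
#   ( S 1 S ; -M 0 M ) = (-1)**(S - M) * M / sqrt(S*(S+1)*(2*S+1))
for S, M in [(Rational(1, 2), Rational(1, 2)),
             (Rational(3, 2), Rational(1, 2)),
             (Integer(2), Integer(1)),
             (Integer(2), Integer(-2))]:
    lhs = wigner_3j(S, 1, S, -M, 0, M)
    rhs = (-1) ** (S - M) * M / sqrt(S * (S + 1) * (2 * S + 1))
    assert simplify(lhs - rhs) == 0
print("closed form for the (S 1 S; -M 0 M) symbol verified")
```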
We get:
\[\mathcal{C}(t)=[(S^{1})(1)(S^{2})(0)]\sum_{M_{1},M_{2}}(-1)^{S^{1}-M_{ 1}+S^{2}-M_{2}}\\ \times e^{-iDt[(S^{1})(1)(S^{2})(1)]^{1/2}(-1)^{M_{1}+M_{2}+S^{1}+S ^{2}}M_{2}[S^{1}(S^{1}+1)(2S^{1}+1)S^{2}(S^{2}+1)(2S^{2}+1)]^{-1/2}}\\ \times\frac{(M_{1}-1-S^{1})(S^{1}-M_{1})!}{2S^{1}(1+3S^{1}+2[S^{1} ]^{2})(1+2S^{2})(M_{1}-1+S^{1})!} \tag{16}\]
where \([S^{1}]^{2}\) denotes the square of \(S^{1}\) (the magnitude of spin 1). For fixed \(S^{1},S^{2}\) this equation describes a sum of oscillatory functions, each with a slightly different frequency and amplitude (through the \(M_{1},M_{2}\) dependence). The initial coherence (\(q_{1}=1\)) returns to its original state periodically, at a frequency that can be read off from the argument of the exponential; the factor \(M_{2}\) determines the particular harmonic for each term in the summation.
The limit of large spin is obtained by taking \(S^{1}=S^{2}=S\) and letting \(S\to\infty\) (i.e., \(2S+1\approx S\), etc) and recalling that \(D\) is proportional to \([S]^{2}\) (see Eq. 10). The argument of the exponential (evolution frequency) is then seen to be proportional to \(([S]^{2}\cdot S/[S]^{3})M_{2}=M_{2}\) and the frequencies are bounded by \(|M_{2}|\leq S\).
On the other hand, suppose that we are interested in the amount of \(\mathscr{Y}^{(1)1}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(1)0}(\mathbf{S}^{2})\) (\(k_{3}=1\), \(q_{3}=1\), \(k_{4}=1\), \(q_{4}=0\)). The Wigner \(3j\) symbols again lead to the following selection rules: \(M_{2}=M_{j}\) and \(M_{1}=M_{i}+1\), causing the sums over \(M_{i}\) and \(M_{j}\) to collapse. We get:
\[\mathcal{S}(t)=[(S^{1})(1)(S^{2})(0)]\sum_{M_{1},M_{2}}(-1)^{S^{1}-M_{1}+S^{2}-M_{2}}\\ \times e^{-iDt[(S^{1})(1)(S^{2})(1)]^{1/2}(-1)^{M_{1}+M_{2}+S^{1}+S^{2}}M_{2}[S^{1}(S^{1}+1)(2S^{1}+1)S^{2}(S^{2}+1)(2S^{2}+1)]^{-1/2}}\\ \times\frac{M_{2}(1-M_{1}+S^{1})(S^{1}+M_{1})!}{2S^{1}(1+3S^{1}+2[S^{1}]^{2})\sqrt{S^{2}(1+S^{2})}(1+2S^{2})(M_{1}-1+S^{1})!} \tag{17}\]
The harmonic content is the same as in Eq. (16) whereas the amplitudes are slightly different. This describes an evolution of the form:
\[\mathscr{Y}^{(1)1}(\mathbf{S}^{1})\xrightarrow{\;D\,\mathscr{Y}^{(1)0}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(1)0}(\mathbf{S}^{2})\;}\\ \mathscr{Y}^{(1)1}(\mathbf{S}^{1})\,\mathcal{C}(t)+\mathscr{Y}^{(1)1}(\mathbf{S}^{1})\otimes\mathscr{Y}^{(1)0}(\mathbf{S}^{2})\,\mathcal{S}(t) \tag{18}\]
where \(\mathcal{C}(t)\) and \(\mathcal{S}(t)\) are periodic oscillatory functions whose frequency content consists of harmonics of the fundamental frequency, which is proportional to \(D\) (see Eqs. 16 and 17). This rule is analogous to the rule for \(J\)-coupling evolution in NMR,
\[I_{\pm}^{k}\to I_{\pm}^{k}\cos\pi J_{kl}\tau\mp i2I_{\pm}^{k}I_{z}^{l}\sin\pi J _{kl}\tau \tag{19}\]
except that the periodic coefficients contain harmonics because of the multipoles. Similarly, it can be shown that entangled states like Bell states can be created analogously to the case of spin-1/2 systems. The important point to realize is that while quantum information can "leak" into other tensor components (not all combinations of possible initial and final states are shown here), the periodic nature of the coefficients shows that 2-qubit gates can be realized even amongst large spins.
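The periodic character of these coefficients is easy to reproduce numerically without the closed forms above. The following sketch (NumPy; the spin value, coupling strength, and normalization are illustrative assumptions) propagates a single-quantum coherence on spin 1 under a bilinear \(S_{z}^{1}S_{z}^{2}\) coupling and tracks its survival amplitude, the analogue of \(\mathcal{C}(t)\) up to normalization conventions.

```python
import numpy as np

def spin_matrices(S):
    """Sz and S+ for spin S, basis ordered M = S, S-1, ..., -S."""
    m = np.arange(S, -S - 1, -1)
    Sz = np.diag(m)
    Sp = np.zeros((len(m), len(m)))
    for i in range(1, len(m)):                  # <M+1|S+|M> = sqrt(S(S+1) - M(M+1))
        Sp[i - 1, i] = np.sqrt(S * (S + 1) - m[i] * (m[i] + 1))
    return Sz, Sp

S = 1.5                                         # two spin-3/2 qudits (example value)
Sz, Sp = spin_matrices(S)
I = np.eye(Sz.shape[0])
Sz1, Sz2 = np.kron(Sz, I), np.kron(I, Sz)
Sp1 = np.kron(Sp, I)                            # single-quantum coherence on spin 1

D = 1.0                                         # coupling strength (arbitrary units)
phases = np.diag(D * Sz1 @ Sz2)                 # the Sz-Sz Hamiltonian is diagonal here

def survival(t):
    """Normalized overlap Tr[Sp1^dag rho(t)] with rho(0) = Sp1."""
    U = np.diag(np.exp(-1j * phases * t))
    rho_t = U @ Sp1 @ U.conj().T
    return float(np.real(np.trace(Sp1.conj().T @ rho_t) / np.trace(Sp1.conj().T @ Sp1)))

# The coherence oscillates through harmonics set by the M2 values and refocuses periodically.
print([round(survival(k * np.pi / D), 3) for k in range(9)])
```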
### Definition of Gates and Computational Basis for Molecular Magnets
In this section we define the computational basis and sequences of operations that lead to a universal set of gates. The Liouville space of operators for arbitrary spins \(S\) is spanned by the basis of irreducible tensor operators \(\mathscr{Y}^{(k)q}(\mathbf{S})\), \(0\leq k\leq 2S\). The space is also spanned by the outer products \(\ket{SM}\bra{SM^{\prime}}\). The relationship between \(\ket{SM}\bra{SM^{\prime}}\) and irreducible tensors \(\mathscr{Y}^{(k)q}(\mathbf{S})\) is:
\[\ket{SM}\bra{SM^{\prime}}=(-1)^{S-M}(2S+1)^{-1/2}\\ \times\sum_{k=0}^{2S}\sum_{q=-k}^{k}(-i)^{k}(2k+1)^{1/2}\begin{pmatrix} S&k&S\\ -M&q&M^{\prime}\end{pmatrix}\mathscr{Y}^{(k)q}(\mathbf{S}). \tag{20}\]
We define the computational basis for arbitrary spins in terms of the qubit states shown in Table 1. The transformation rules of multipole tensors are listed in Table 2, whereas a possible implementation of quantum gates is shown in Table 3. In these tables the operator ket notation \(|B\rangle\rangle\) denotes the operator \(B\) itself, the bra \(\langle\langle A|\) denotes the adjoint \(A^{\dagger}\), and \(\langle\langle A|B\rangle\rangle\) forms an inner product, often taken to be \(\text{Tr}[A^{\dagger}B]\).
### Extended Survival Times in Non-Interacting Spins
Entangled spins within a localized ensemble decay faster than uncoupled spins, whereas uncoupled spin ensembles survive longer than individual spins. Indeed, non-entangled spins decay independently and the quantum information stored within the polarization of the ensemble remains encoded over
longer periods. Here we give a rough estimate of the lifetime enhancement provided by an ensemble relative to individual spins, based on a probabilistic argument. Let \(X\) be a random variable describing the survival time of a spin. A single spin decays at a rate \(\lambda\); assuming a Poisson process, the survival probability is \(\mathbb{P}(X>x)=e^{-\lambda x}\). The corresponding PDF is \(\frac{\mathrm{d}\mathbb{P}(X\leq x)}{dx}=\lambda e^{-\lambda x}\). Thus, the average arrival time (lifetime) for this single spin is \(\mathbb{E}(X)=\int_{0}^{\infty}x\lambda e^{-\lambda x}dx=\lambda^{-1}\), as expected. Once this spin relaxes, the quantum state is lost; this is the main disadvantage of using single spins for quantum computing. When the state involves \(n\) spins, each of which decays in the same way (but independently), the probability of the quantum state being lost can be obtained by considering the random variable \(Y=\max\left\{X_{1},\ldots,X_{n}\right\}\), where the \(X_{i}\) are _iid_ random variables describing the survival times of the \(n\) spins. The lifetime of the \(n\)-spin state is determined by relaxation of the last spin, hence our interest in the maximum of \(X_{1},\ldots,X_{n}\). Assuming that the spins are non-interacting, the joint probability distribution is \(\mathbb{P}(Y\leq x)=\mathbb{P}(X_{1}\leq x,\ldots,X_{n}\leq x)=\prod_{i=1}^{n}\mathbb{P}(X_{i}\leq x)\), where each \(\mathbb{P}(X_{i}\leq x)=1-\mathbb{P}(X_{i}>x)=1-e^{-\lambda x}\) and the product form reflects the independence (non-interaction) of the spins. The lifetime of the uncoupled \(n\)-spin system is:
\[\mathbb{E}(Y)=\int_{0}^{\infty}y\mathbb{P}(Y\in dy)=\int_{0}^{\infty}yn(1-e^{ -\lambda y})^{n-1}\lambda e^{-\lambda y}dy,\]
which approaches \(n\lambda^{-1}\) as \(n\) increases. Thus, the spin-coherent state lifetime is \(n\) times longer than in the case of a single spin. This is an important advantage over single spins or entangled spins in an ensemble. We note that the lifetime of the "quantum memory" does not equal \(n\lambda^{-1}\) since its usefulness rapidly disappears below the threshold of spin noise (\(\propto\sqrt{n}\)).
### Dissipation in Many-Body Coupled Spin Systems
We now discuss the dissipation rates (\(\lambda\) in the previous section). The E-qubit requires weak couplings among the spins of the ensemble. Removing residual couplings is challenging but necessary. Nuclear spins can be decoupled efficiently using a combination of sample spinning and rotor-synchronized RF pulses; narrowing of lines down to the subhertz range has been demonstrated [55, 23]. For electrons, however, sample spinning is not an option, and pulsed decoupling has limited value; material engineering may be the best option. Sellars et al. [48] demonstrated that co-doping with oxygen can reduce line widths down to 10 Hz. If the spins are decoupled, the lifetime of a quantum state encoded in the E-qubit will be extended relative to that of a single spin.
In the presence of residual pairwise couplings a one-body state coherently evolves into many-body states, which decay much faster than single-body states. Pulsed decoupling can help prevent this coherent evolution. In this section we show the detrimental effects of decoherence on many-body states. The conclusion is that the formation of correlated many-body states should be avoided, whether it occurs intentionally via quantum state preparation or unintentionally via residual spin-spin couplings.
Consider a general Hamiltonian of the form \(\mathcal{H}=\mathcal{H}_{Z}+\mathcal{H}_{SB}+\mathcal{H}_{B}\), where \(\mathcal{H}_{Z}\) is the unperturbed spin part (e.g. Zeeman interaction), \(\mathcal{H}_{SB}\) is the spin-bath interaction including any spin-spin interactions and \(\mathcal{H}_{B}\) is the bath Hamiltonian. We denote the corresponding Liouvillian superoperator as \(\mathscr{L}=\mathscr{L}_{Z}+\mathscr{L}_{SB}+\mathscr{L}_{B}\). Consider a set of operators \(\{F_{k}\}_{k=1}^{m}\) and an inner product \((A,B)=\text{Tr}(A^{\dagger}B)\), often denoted \(\langle\langle A|B\rangle\rangle\). The operators \(F_{k}\) are orthogonal,
\begin{table}
\begin{tabular}{||l|l|l||} \hline Action & Operator & Transformation Rule \\ \hline \hline \(R_{a}^{A}(\phi)\) & \(e^{\frac{-\pi}{4}\phi^{\prime}(\mathbb{P}^{(1)})(\mathbb{S}^{A})}\) & \(|x\rangle\rangle\rightarrow|x\rangle\widetilde{\text{cos}}\phi-i|y\rangle \widetilde{\text{sin}}\phi|\) \\ \hline \(R_{a}^{A}(\phi)\) & \(e^{\frac{-\pi}{4}\phi(\mathbb{P}^{(1)})(\mathbb{S}^{A})-\mathbb{P}^{(1)-1}( \mathbb{S}^{A})}\) & \(|y\rangle\rightarrow|y\rangle\widetilde{\text{cos}}\phi-i|z\rangle \widetilde{\text{sin}}\phi|\) \\ \hline \(R_{y}^{A}(\phi)\) & \(e^{\frac{-\pi}{4}\phi(\mathbb{P}^{(1)}(\mathbb{S}^{A})+\mathbb{P}^{(1)-1}( \mathbb{S}^{A})}\) & \(|x\rangle\rightarrow|x\rangle\widetilde{\text{cos}}\phi+i|z\rangle \widetilde{\text{sin}}\phi|\) \\ \hline \(U_{zz}(\frac{\pi}{2})\) & \(e^{-\frac{\pi i\pi}{4}\mathcal{P}^{(1)}(\mathbb{S}^{A})\otimes\mathbb{P}^{(1) }(\mathbb{S}^{B})}\) & \(|x\rangle\rangle^{A}\otimes\mathbb{I}^{B}\rightarrow|y\rangle^{A}\otimes|z \rangle\widetilde{\text{sin}}\phi|\) \\ \hline \end{tabular}
\end{table}
Table 2: Transformation rules for multipole tensors representing the computational basis states. The notation \(\widetilde{\text{cos}}\) and \(\widetilde{\text{sin}}\) is a reminder that individual tensor components evolve according to the harmonics generated by the transformation rules of tensor operators. For single-qubit gates, this function is obtained from the commutation relations (8). For two-qubit gates the periodic functions can be read off from the argument of the exponential function in Eqs. (16) and (17).
\begin{table}
\begin{tabular}{||l|l||} \hline Single-Spin State & Multipole Representation \\ \hline \hline \(|0\rangle\), \(|z\rangle\rangle\) & \(\sum\limits_{k=0}^{2S}\phi_{0}^{k}\mathscr{Y}^{(k)0}(\mathbf{S})=\sum\limits_{k=0}^{2S}\phi_{0}^{k}|k0\rangle\rangle\) \\ \hline \(|1\rangle\), \(|-z\rangle\rangle\) & \(\sum\limits_{k=0}^{2S}(-1)^{k}\phi_{0}^{k}\mathscr{Y}^{(k)0}(\mathbf{S})\) \\ \hline \(|x\rangle\) & \(\sum\limits_{k=0}^{2S}\phi_{0}^{k}\sum\limits_{q=-k}^{k}\mathscr{D}_{q0}^{(k)}(0,-\frac{\pi}{2},0)\mathscr{Y}^{(k)q}(\mathbf{S})\) \\ \hline \(|y\rangle\) & \(\sum\limits_{k=0}^{2S}\phi_{0}^{k}\sum\limits_{q=-k}^{k}\mathscr{D}_{q0}^{(k)}(\frac{\pi}{2},\frac{\pi}{2},0)\mathscr{Y}^{(k)q}(\mathbf{S})\) \\ \hline \end{tabular}
\end{table}
Table 1: Computational basis for arbitrary spins in terms of multipoles. The numerical coefficients \(\phi_{0}^{k}\) defining the fully polarized state are obtained from Eq. (20) by setting \(M,M^{\prime}=S\). Here, \(\mathscr{D}_{qq^{\prime}}^{(k)}(\Omega)\) are Wigner \(D\)-matrices. Alternative notation for \(\mathscr{Y}^{(k)q}(\mathbf{S})\) is \(|kq\rangle\rangle\).
\begin{table}
\begin{tabular}{||l|l|l||} \hline Gate & Symbol & Unitary operator \\ \hline \hline X bit-flip & & \(R_{x}(\pi)\) \\ \hline Z phase-flip & \(Z\) & \(R_{z}(\pi)\) \\ \hline Hadamard & & \(R_{z}(\frac{\pi}{2})\cdot R_{x}(\frac{\pi}{2})\cdot R_{z}(\frac{\pi}{2})\) \\ \hline \multirow{2}{*}{CNOT} & \(\sqrt{i}R_{x}^{A}(\frac{\pi}{2})\cdot R_{x}^{B}(-\frac{\pi}{2})\cdot R_{x}^{B}( \frac{\pi}{2})\) \\ & \(U_{zz}(\frac{1}{2}|\frac{1}{2|\Delta_{AB}|})\cdot R_{y}^{B}(\frac{\pi}{2})\) \\ \hline \end{tabular}
\end{table}
Table 3: Implementation of the elementary gates as sequences of selective rotations. Each operator except \(U\) acts on the subspace of a single qudit.
\(\langle\langle F_{i}|F_{j}\rangle\rangle=0\) for \(i\neq j\). A projection operator is defined as \(\pi=\sum_{k=1}^{m}\frac{|F_{k}\rangle\langle\langle F_{k}|}{\langle\langle F_{k} |F_{k}\rangle\rangle}\). It is customary to transform the Liouvillian to the interaction representation generated by \(\mathscr{L}_{Z}\) and write the resulting Liouvillian as \(\mathscr{L}^{*}(t)\equiv\mathscr{L}_{B}+\mathscr{L}_{SB}(t)\), where \(*\) denotes the interaction representation and \(\mathscr{L}_{SB}(t)\equiv e^{it\mathscr{L}_{Z}}\mathscr{L}_{SB}\). To study irreversible processes, projection operator methods yield a quantum master equation for an observable \(F_{k}\):
\[\frac{d}{dt}\langle\langle F_{k}|\rho(t)\rangle\rangle= -i\langle\langle F_{k}|\mathscr{L}^{*}(t)\pi|\rho(t)\rangle\rangle -i\langle\langle F_{k}|\mathscr{L}^{*}(t)Te^{-i\int_{0}^{t}d\tau(1-\pi) \mathscr{L}^{*}(\tau)}(1-\pi)|\rho(0)\rangle\rangle\] \[-\sum_{j=1}^{m}\int_{0}^{t}dt^{\prime}\frac{\langle\langle F_{k}| \mathscr{L}^{*}(t)Te^{-i\int_{0}^{t}d\tau(1-\pi)\mathscr{L}^{*}(\tau)}(1-\pi )\mathscr{L}^{*}(t^{\prime})|F_{j}\rangle\rangle}{\langle\langle F_{j}|F_{j} \rangle\rangle}\langle\langle F_{j}|\rho(t^{\prime})\rangle\rangle\]
where \(T\) denotes Dyson time-ordering of the exponential. Insertion of the expression for the projection operator \(\pi\) into the first term shows that it causes coherent evolution (Bloch term). The second term can be made to vanish with a proper choice of initial conditions. The last term is the dissipative term we are interested in. Under some assumptions (weak collision limit, short correlation time limit, average Liouvillian, stationarity of the memory kernel), \(Te^{-i\int_{0}^{t}d\tau(1-\pi)\mathscr{L}^{*}(\tau)}\) is approximated by \(e^{-i(t-t^{\prime})(1-\pi)\mathscr{L}^{*}}\), where \(\mathscr{L}^{*}\) is a time-averaged Liouvillian in the interaction representation. It is customary to drop the Zeeman and spin-bath parts (\(\|\mathscr{L}_{B}\|\gg\|\mathscr{L}_{Z}+\mathscr{L}_{SB}\|\)), leaving only the bath part, i.e. \(\mathscr{L}^{*}\approx\mathscr{L}_{B}\), in the exponential. Since there is no spin part left, the projector term is dropped leaving \(e^{-i(t-t^{\prime})\mathscr{L}_{B}}\). Finally, the term \((1-\pi)\mathscr{L}^{*}(t^{\prime})|F_{j}\rangle\rangle\) gives two terms: \(\mathscr{L}^{*}(t^{\prime})|F_{j}\rangle\rangle\) and \(-\pi\mathscr{L}^{*}(t^{\prime})|F_{j}\rangle\rangle\). The second term is not of interest in the derivation of dissipation rates because it describes a product of average frequencies; we are instead interested in deviations from averages. We therefore consider the first term. The dissipation term in this wide-sense stationary Redfield limit reads:
\[\sum_{j=1}^{m}\int_{0}^{\infty}d\tau\frac{\langle\langle F_{k}| \mathscr{L}^{*}(0)e^{i\tau\mathscr{L}_{B}}\mathscr{L}^{*}(\tau)|F_{j}\rangle \rangle}{\langle\langle F_{j}|F_{j}\rangle\rangle}\langle\langle F_{j}|\rho(t )\rangle\rangle\] \[\equiv\sum_{j=1}^{m}\langle\langle F_{j}|\rho(t)\rangle\rangle \int_{0}^{\infty}\mathcal{K}_{kj}(\tau)d\tau,\]
where \(W(F_{k})\equiv\int_{0}^{\infty}\mathcal{K}_{kj}(\tau)d\tau\) is a dissipation rate for the state \(F_{k}\). The spin-bath interaction is taken to be a sum of pairwise couplings. In the interaction representation, it acquires a phase factor \(e^{-i\omega q\tau}\):
\[\mathcal{H}^{*}_{SB}(\tau)= \sum_{i<j}\sum_{k=0,1,2}\sum_{q=-2}^{2}(-1)^{q}A_{kq}(\mathbf{S}^ {i},\mathbf{S}^{j})T_{k,-q}(\mathbf{S}^{i},\mathbf{S}^{j})\] \[\times e^{-i\omega q\tau}\]
where we used the shorthand notation \(T_{k,-q}(\mathbf{S}^{i},\mathbf{S}^{j})\) for the multispin tensor \(T_{k,-q}(00\ldots 1_{i}\ldots 1_{j}\ldots 00)\), with rank-1 entries in the \(i\)th and \(j\)th positions, which we will denote as \(T^{kq}(k_{1}k_{2})\) with \(k_{1}=k_{2}=1\).
#### iv.2.1 One-Body States
The one-body state describes a spin among an ensemble of uncoupled spins (E-qubit), or weakly coupled spins undergoing decoupling. It also describes an initial state (\(t=0\)) where the spins did not undergo coherent evolution. It is described by a vector (rank 1) tensor operator corresponding to a spin of interest (denoted \(m\) henceforth):
\[|F_{m}\rangle\rangle=\mathscr{Y}^{(1)q}(\mathbf{S}^{m})\]
where \(q=-1,0,1\). Spin-lattice (\(T_{1}\)) relaxation corresponds to the case \(q=0\) whereas spin-spin relaxation corresponds to \(q=\pm 1\). We now compute the numerator of the memory function, \(\langle\langle F_{k}|\mathscr{L}^{*}(0)e^{i\tau\mathscr{L}_{B}}\mathscr{L}^{* }(\tau)|F_{j}\rangle\rangle\). We start with the commutator:
\[\mathscr{L}^{*}(\tau)F_{m} \equiv[\mathcal{H}^{*}_{SB}(\tau),F_{m}]\] \[=\sum_{i<j}\sum_{k=0,1,2}\sum_{q=-2}^{2}(-1)^{q}A_{kq}(\mathbf{S}^{i},\mathbf{S}^{j})\] \[\times[T_{k,-q}(\mathbf{S}^{i},\mathbf{S}^{j}),\mathscr{Y}^{(1)q}(\mathbf{S}^{m})]e^{-i\omega q\tau}.\]
Substituting
\[T^{kq}(k_{1}k_{2}) =[(2S^{1}+1)(2S^{2}+1)]^{-1/2}\] \[\times\sum_{q_{1}q_{2}}(-1)^{k_{1}-k_{2}+k}\sqrt{(2k+1)}\] \[\times(-1)^{k-q}\begin{pmatrix}k&k_{1}&k_{2}\\ -q&q_{1}&q_{2}\end{pmatrix}\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{1})\mathscr{Y }^{(k_{2})q_{2}}(\mathbf{S}^{2})\]
we get:
\[\mathscr{L}^{*}(\tau)F_{m}=\frac{1}{2}\sum_{i,j}\sum_{k=0,1,2}\sum_{q=-2}^{2}(- 1)^{q}A_{kq}(\mathbf{S}^{i},\mathbf{S}^{j})\]
\[\times\sum_{q_{1}q_{2}}(-1)^{k_{1}-k_{2}+k}\sqrt{(2k+1)}[(2S^{i}+1)(2S^{j}+1)]^{ -1/2}(-1)^{k-q}\]
\[\times\begin{pmatrix}k&k_{1}&k_{2}\\ -q&q_{1}&q_{2}\end{pmatrix}[\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{i})\mathscr{Y }^{(k_{2})q_{2}}(\mathbf{S}^{j}),\mathscr{Y}^{(1)q}(\mathbf{S}^{m})]e^{-i\omega q \tau}.\]
Substitution of the commutator
\[[\mathscr{Y}^{(l)m}(\mathbf{S}), \mathscr{Y}^{(k)q}(\mathbf{S})]=2\sum_{k^{\prime}q^{\prime}}\frac{ \phi(klk^{\prime})}{(2S+1)}\] \[\times\langle\langle\mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{S })|\mathscr{Y}^{(l)m}(\mathbf{S})|\mathscr{Y}^{(k)q}(\mathbf{S})\rangle\rangle \mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{S})\]
where \(\phi(klk^{\prime})\) is \(1\) if \(k+k^{\prime}+l\) is odd and zero otherwise,
\[\langle\langle\mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{S})| \mathscr{Y}^{(l)m}(\mathbf{S})|\mathscr{Y}^{(k)q}(\mathbf{S})\rangle\rangle\] \[= \text{Tr}\left\{\mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{S}) ^{\dagger}\mathscr{Y}^{(l)m}(\mathbf{S})\mathscr{Y}^{(k)q}(\mathbf{S})\right\}\] \[= (-1)^{k^{\prime}-q^{\prime}}\begin{pmatrix}k^{\prime}&l&k\\ -q^{\prime}&m&q\end{pmatrix}\langle\langle k^{\prime}\|\mathscr{Y}^{(l)}\|k \rangle\rangle\]
(Wigner-Eckart theorem) and
\[\langle\langle k^{\prime}\|\mathscr{Y}^{(l)}\|k\rangle\rangle=(- 1)^{l+k+k^{\prime}+2S}(i)^{k+k^{\prime}+l}\] \[\times[(2l+1)(2k+1)(2S+1)(2k^{\prime}+1)]^{1/2}\begin{Bmatrix}k^ {\prime}&l&k\\ S&S&S\end{Bmatrix},\]
the reduced matrix element, in terms of Wigner \(6-j\) symbols, yields:
\[\mathscr{L}^{*}(\tau)F_{m}=\sum_{i<j}\sum_{k=0,1,2}\sum_{q=-2}^{2 }(-1)^{q}A_{kq}(\mathbf{S}^{i},\mathbf{S}^{j})(-1)^{k-q}\] \[\times\sum_{q_{1}q_{2}}(-1)^{k_{1}-k_{2}+k}\sqrt{(2k+1)}[(2S^{i}+ 1)(2S^{j}+1)]^{-1/2}\] \[\times\begin{pmatrix}k&k_{1}&k_{2}\\ -q&q_{1}&q_{2}\end{pmatrix}[\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{i})\mathscr{ Y}^{(k_{2})q_{2}}(\mathbf{S}^{j}),\mathscr{Y}^{(1)q}(\mathbf{S}^{m})]e^{-i\omega q \tau}.\]
Acting on \(\mathscr{L}^{*}(\tau)F\) with the bath propagator \(e^{i\mathscr{L}_{B}\tau}\) introduces the time dependence \(A_{kq}(\mathbf{S}^{i},\mathbf{S}^{j})\to A_{kq}(\mathbf{S}^{i},\mathbf{S}^{j})(\tau)\). Now,
\[[\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{i})\mathscr{Y}^{(k_{2})q_{2}}(\mathbf{S}^{j}),\mathscr{Y}^{(1)q}(\mathbf{S}^{m})]=\frac{2}{(2S^{m}+1)}\sum_{k^{\prime}q^{\prime}}\] \[\times\Big{\{}\delta_{im}\phi(1k_{1}k^{\prime})\langle\langle\mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{S}^{m})|\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{m})|\mathscr{Y}^{(1)q}(\mathbf{S}^{m})\rangle\rangle\] \[\times\mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{S}^{m})\mathscr{Y}^{(k_{2})q_{2}}(\mathbf{S}^{j})\] \[+\delta_{jm}\phi(1k_{2}k^{\prime})\langle\langle\mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{S}^{m})|\mathscr{Y}^{(k_{2})q_{2}}(\mathbf{S}^{m})|\mathscr{Y}^{(1)q}(\mathbf{S}^{m})\rangle\rangle\] \[\times\mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{S}^{m})\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{i})\Big{\}},\]
where \(i\neq j\) (LHS) implying that \(j\neq m\) in the first term and \(i\neq m\) in the second term (RHS). The final expression is:
\[e^{\mathscr{L}_{B}\tau}\mathscr{L}^{*}(\tau)F_{m}=\sum_{i<j}\sum _{k=0,1,2}\sum_{q=-2}^{2}(-1)^{q}A_{kq}(\mathbf{S}^{i},\mathbf{S}^{j})(\tau)\] \[\sum_{q_{1}q_{2}}(-1)^{k_{1}-k_{2}+k}\sqrt{(2k+1)}[(2S^{i}+1)(2S^{ j}+1)]^{-1/2}(-1)^{k-q}\] \[\times\begin{pmatrix}k&k_{1}&k_{2}\\ -q&q_{1}&q_{2}\end{pmatrix}\frac{2e^{-i\omega q\tau}}{(2S^{m}+1)}\sum_{k^{ \prime}q^{\prime}}\Big{\{}\delta_{im}\phi(1k_{1}k^{\prime})\] \[\times\langle\langle\mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{ S}^{m})|\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{m})|\mathscr{Y}^{(1)q}(\mathbf{S}^{m})\rangle\rangle\] \[\times\mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{S}^{m}) \mathscr{Y}^{(k_{2})q_{2}}(\mathbf{S}^{j})\] \[+\delta_{jm}\phi(1k_{2}k^{\prime})\langle\langle\mathscr{Y}^{(k^{ \prime})q^{\prime}}(\mathbf{S}^{m})|\mathscr{Y}^{(k_{2})q_{2}}(\mathbf{S}^{m} )|\mathscr{Y}^{(1)q}(\mathbf{S}^{m})\rangle\rangle\] \[\times\mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{S}^{m}) \mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{i})\Big{\}} \tag{20}\]
where \(k_{1}=k_{2}=1\). We also compute the quantity \((\mathscr{L}^{*}(0)F_{\tilde{m}})^{\dagger}\) by conjugation using \(\mathscr{Y}^{(k)q\dagger}(\mathbf{S})=(-1)^{k-q}\mathscr{Y}^{(k)-q}(\mathbf{ S})\). Thus,
\[(\mathscr{L}^{*}(0)F_{m})^{\dagger}=\sum_{i<j}\sum_{k=0,1,2}\sum_ {\tilde{q}=-2}^{2}(-1)^{\tilde{q}}A_{\tilde{k}\tilde{q}}(\mathbf{S}^{i}, \mathbf{S}^{j})(0)^{\dagger}\] \[\times\sum_{\tilde{q}_{1}\tilde{q}_{2}}(-1)^{k_{1}-k_{2}+\tilde{k} }\sqrt{(2\tilde{k}+1)}[(2S^{\tilde{i}}+1)(2S^{\tilde{j}}+1)]^{-1/2}\] \[\times(-1)^{\tilde{k}-\tilde{q}}\begin{pmatrix}\tilde{k}&k_{1}&k_{2} \\ -\tilde{q}&\tilde{q}_{1}&\tilde{q}_{2}\end{pmatrix}\frac{2}{(2S^{\tilde{m}}+1)} \sum_{\tilde{k}^{\prime}q^{\prime}}\Big{\{}\delta_{i\tilde{m}}\phi(1k_{1}\tilde{ k}^{\prime})\] \[\times\langle(\mathscr{Y}^{(\tilde{k})\tilde{q}^{\prime}}(\mathbf{ S}^{\tilde{m}})|\mathscr{Y}^{(k_{1})\tilde{q}_{1}}(\mathbf{S}^{\tilde{m}})| \mathscr{Y}^{(1)\tilde{q}}(\mathbf{S}^{\tilde{m}}))\rangle\] \[\times\mathscr{Y}^{(\tilde{k}^{\prime})\tilde{q}^{\prime}}( \mathbf{S}^{\tilde{m}})^{\dagger}\mathscr{Y}^{(k_{2})\tilde{q}_{2}}(\mathbf{S}^{ \tilde{j}})^{\dagger}\] \[+\delta_{j\tilde{m}}\phi(1k_{2}\tilde{k}^{\prime})\langle( \mathscr{Y}^{(\tilde{k}^{\prime})\tilde{q}^{\prime}}(\mathbf{S}^{\tilde{m}})| \mathscr{Y}^{(k_{2})\tilde{q}_{2}}(\mathbf{S}^{\tilde{m}})|\mathscr{Y}^{(1) \tilde{q}}(\mathbf{S}^{\tilde{m}})\rangle\rangle\] \[\times\mathscr{Y}^{(\tilde{k}^{\prime})\tilde{q}^{\prime}}( \mathbf{S}^{\tilde{m}})^{\dagger}\mathscr{Y}^{(k_{1})\tilde{q}_{1}}(\mathbf{S}^{ \tilde{i}})^{\dagger}\Big{\}}, \tag{21}\]
where \(k_{1}=k_{2}=1\). We have relabeled the index \(m\) as \(\tilde{m}\) to allow for the possibility of cross-relaxation. Next, we compute the inner product:
\[\big{(}\mathscr{L}^{*}(0)F_{\tilde{m}},\,e^{i\mathscr{L}_{B}\tau}\mathscr{L}^{*}(\tau)F_{m}\big{)}=\text{Tr}\big{[}(\mathscr{L}^{*}(0)F_{\tilde{m}})^{\dagger}e^{i\mathscr{L}_{B}\tau}\mathscr{L}^{*}(\tau)F_{m}\big{]}.\] The traces of products of multipole tensors that appear in this inner product evaluate to:
\[\text{Tr}\left[\mathscr{Y}^{(k^{\prime})\tilde{q}^{\prime}}(\mathbf{S}^{ \tilde{m}})^{\dagger}\mathscr{Y}^{(k_{2})\tilde{q}_{2}}(\mathbf{S}^{\tilde{j}})^ {\dagger}\mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{S}^{m})\mathscr{Y}^{(k_{ 1})q_{1}}(\mathbf{S}^{i})\right]\] \[=(2S^{m}+1)(2S^{i}+1)\big{[}\delta_{\tilde{m}\tilde{m}}\delta_{j _{1}}\delta_{k^{\prime}k^{\prime}}\delta_{\tilde{q}^{\prime}q^{\prime}}\delta_ {\tilde{q}_{2}q_{1}}\] \[\quad+\delta_{\tilde{m}\tilde{m}}\delta_{\tilde{m}i}\delta_{1K^{ \prime}}\delta_{\tilde{q}_{2}q^{\prime}}\delta_{\tilde{k}^{\prime}1}\delta_{ \tilde{q}^{\prime}q_{1}}\big{]} \tag{24}\] \[\text{Tr}\left[\mathscr{Y}^{(\tilde{k}^{\prime})\tilde{q}^{ \prime}}(\mathbf{S}^{\tilde{m}})^{\dagger}\mathscr{Y}^{(k_{1})\tilde{q}_{1}}( \mathbf{S}^{\tilde{i}})^{\dagger}\mathscr{Y}^{(k^{\prime})q^{\prime}}( \mathbf{S}^{m})\mathscr{Y}^{(k_{2})q_{2}}(\mathbf{S}^{j})\right]\] \[=(2S^{m}+1)(2S^{j}+1)\big{[}\delta_{\tilde{m}\tilde{m}}\delta_{ \tilde{q}_{j}}\delta_{k^{\prime}k^{\prime}}\delta_{\tilde{q}^{\prime}q^{\prime }}\delta_{\tilde{q}_{1}q_{2}}\] \[\quad+\delta_{\tilde{m}\tilde{m}j}\delta_{1K^{\prime}}\delta_{ \tilde{q}_{1}q^{\prime}}\delta_{\tilde{k}^{\prime}1}\delta_{\tilde{q}^{\prime} q_{2}}\big{]}\] (25) \[\text{Tr}\left[\mathscr{Y}^{(\tilde{k}^{\prime})\tilde{q}^{ \prime}}(\mathbf{S}^{\tilde{m}})^{\dagger}\mathscr{Y}^{(k_{1})\tilde{q}_{1}}( \mathbf{S}^{\tilde{i}})^{\dagger}\mathscr{Y}^{(k^{\prime})q^{\prime}}(\mathbf{ S}^{m})\mathscr{Y}^{(k_{1})q_{1}}(\mathbf{S}^{i})\right]\] \[=(2S^{m}+1)(2S^{i}+1)\big{[}\delta_{\tilde{m}\tilde{m}}\delta_{ \tilde{m}i}\delta_{\tilde{k}^{\prime}k^{\prime}}\delta_{\tilde{q}^{\prime}q^{ \prime}}\delta_{\tilde{q}_{1}q_{1}}\] \[\quad+\delta_{\tilde{m}\tilde{m}\tilde{m}i}\delta_{1k^{\prime}} \delta_{\tilde{q}_{1}q^{\prime}}\delta_{\tilde{k}^{\prime}1}\delta_{\tilde{q}^ {\prime}q_{1}}\big{]}.\]
Collecting all the terms, and restricting the various summations according to the various Kronecker delta functions, we get the overall dissipation rate. This multi-index expression, which involves substituting Eqs. (23), (24), (25), (26), (21), (20) into (22), will not be provided here, as it is neither informative nor particularly illuminating. We only need to highlight some key features. We see that the dissipation rate of the one-body term, \(W(\mathscr{Y}^{(1)q}(\mathbf{S}^{m}))\), is proportional to bath autocorrelation functions of the form \(\langle A_{\tilde{k}\tilde{q}}(\mathbf{S}^{i},\mathbf{S}^{j})(0)^{\dagger},A_{kq}(\mathbf{S}^{i},\mathbf{S}^{j})(\tau)\rangle_{B}\), which according to Abragam (1972) must be taken as thermal averages, i.e. \(\langle AA(t)\rangle_{B}\equiv\text{tr}[\rho AA(t)]\), where \(\rho(\mathcal{H}_{B})=\exp(-\beta_{L}\mathcal{H}_{B})/\text{Tr}[\exp(-\beta_{L}\mathcal{H}_{B})]\) is the Boltzmann density matrix and tr denotes the partial trace over the bath degrees of freedom.
Assuming that spins are all of the same type (\(S^{i}=S\)), two types of terms arise. The first is:
\[\Gamma^{kq\tilde{k}\tilde{q}}_{1}(q\omega)= \int_{0}^{\infty}d\tau\sum_{j}\sum_{\tilde{i}<j}\sum_{i<j}\langle A_{\tilde{k}\tilde{q}}(\mathbf{S}^{\tilde{i}},\mathbf{S}^{j})^{\dagger}(0)A_{kq}(\mathbf{S}^{i},\mathbf{S}^{j})(\tau)\rangle_{B}e^{-i\omega q\tau}\\ = \int_{0}^{\infty}d\tau\int_{\mathbb{R}^{3}}d^{3}\mathbf{r}_{1}\int_{|\mathbf{r}_{2}-\mathbf{r}_{1}|>\epsilon}d^{3}\mathbf{r}_{2}\int_{|\mathbf{r}_{3}-\mathbf{r}_{1}|>\epsilon}d^{3}\mathbf{r}_{3}\langle A_{\tilde{k}\tilde{q}}(\mathbf{r}_{2},\mathbf{r}_{1})^{\dagger}(0)A_{kq}(\mathbf{r}_{3},\mathbf{r}_{1})(\tau)\rangle_{B}e^{-i\omega q\tau}\\ = \int_{0}^{\infty}d\tau\int_{|\mathbf{r}_{31}|>\epsilon}d^{3}\mathbf{r}_{31}\int_{|\mathbf{r}_{21}|>\epsilon}d^{3}\mathbf{r}_{21}\int_{\mathbb{R}^{3}}d^{3}\mathbf{r}_{1}\langle A_{\tilde{k}\tilde{q}}(\mathbf{r}_{21})^{\dagger}(0)A_{kq}(\mathbf{r}_{31})(\tau)\rangle_{B}e^{-i\omega q\tau}\\ = \int_{0}^{\infty}d\tau\int_{\mathbb{R}^{3}}d^{3}\mathbf{r}_{1}\langle\overline{A_{\tilde{k}\tilde{q}}(\mathbf{r}_{21})^{\dagger}(0)}\cdot\overline{A_{kq}(\mathbf{r}_{31})(\tau)}\rangle_{B}e^{-i\omega q\tau}\]
where \(\epsilon>0\), \(\mathbf{r}_{21}\equiv\mathbf{r}_{2}-\mathbf{r}_{1}\), \(\mathbf{r}_{31}\equiv\mathbf{r}_{3}-\mathbf{r}_{1}\) and the overline denotes a volume integral over \(\mathbf{r}_{21}\) or \(\mathbf{r}_{31}\) (as seen from the point \(\mathbf{r}_{1}\), but excluding it). All integrals, including the one over \(\mathbb{R}^{3}\), cover only the volume of interest (E-qubit). In the third equality we assumed spatial homogeneity (distribution of spins is a stationary random field). This expression is the single-sided temporal Fourier transform of the volume averaged two-point spatio-temporal autocorrelation function of the spin-bath coupling.
Assuming spatial homogeneity the second type of term is of the form:
\[\Gamma^{kq\tilde{k}\tilde{q}}_{2,m,\tilde{m}}(q\omega)= \int_{0}^{\infty}d\tau\sum_{\tilde{i}<m}\sum_{i<\tilde{m}}\langle A_{\tilde{k}\tilde{q}}(\mathbf{S}^{\tilde{i}},\mathbf{S}^{m})^{\dagger}(0)A_{kq}(\mathbf{S}^{i},\mathbf{S}^{\tilde{m}})(\tau)\rangle_{B}e^{-i\omega q\tau}\\ = \int_{0}^{\infty}d\tau\int_{|\mathbf{r}_{1}-\mathbf{r}_{m}|>\epsilon}d^{3}\mathbf{r}_{1}\int_{|\mathbf{r}_{2}-\mathbf{r}_{\tilde{m}}|>\epsilon}d^{3}\mathbf{r}_{2}\langle A_{\tilde{k}\tilde{q}}(\mathbf{r}_{1},\mathbf{r}_{m})^{\dagger}(0)A_{kq}(\mathbf{r}_{2},\mathbf{r}_{\tilde{m}})(\tau)\rangle_{B}e^{-i\omega q\tau}\\ = \int_{0}^{\infty}d\tau\int_{|\mathbf{r}_{1m}|>\epsilon}d^{3}\mathbf{r}_{1m}\int_{|\mathbf{r}_{2\tilde{m}}|>\epsilon}d^{3}\mathbf{r}_{2\tilde{m}}\langle A_{\tilde{k}\tilde{q}}(\mathbf{r}_{1m})^{\dagger}(0)A_{kq}(\mathbf{r}_{2\tilde{m}})(\tau)\rangle_{B}e^{-i\omega q\tau}\\ = \int_{0}^{\infty}d\tau\langle\overline{A_{\tilde{k}\tilde{q}}(\mathbf{r}_{1m})^{\dagger}(0)}\cdot\overline{A_{kq}(\mathbf{r}_{2\tilde{m}})(\tau)}\rangle_{B}e^{-i\omega q\tau},\]
where \(\epsilon>0\), \(\mathbf{r}_{1m}\equiv\mathbf{r}_{1}-\mathbf{r}_{m}\), \(\mathbf{r}_{2\tilde{m}}\equiv\mathbf{r}_{2}-\mathbf{r}_{\tilde{m}}\) and the overline denotes a spatial autocorrelation function for the spin-bath interaction (as seen from \(\mathbf{r}_{1m}\) and \(\mathbf{r}_{2\tilde{m}}\) but excluding those points). This dissipation rate describes how spatial autocorrelation functions of the microscopic field are temporally correlated. The spectral density functions \(\Gamma_{1}^{kq\tilde{k}\tilde{q}}(q\omega)\) and \(\Gamma_{2,m,\tilde{m}}^{kq\tilde{k}\tilde{q}}(q\omega)\) combine to give the overall dissipation rate involving the states \(|F_{m}\rangle\rangle\) and \(|F_{\tilde{m}}\rangle\rangle\). The remaining summations and indices are geometrical factors related to the selection rules for coupling of angular momenta.
#### v.2.2 Many-Body States
Correlated many-body states can arise through quantum state preparation or through coherent evolution in the presence of residual couplings. They dissipate much faster than one-body states. Consider the \(n\)-spin state
\[F=\mathscr{Y}^{(1)q_{1}}(\mathbf{S}^{1})\mathscr{Y}^{(1)q_{2}}(\mathbf{S}^{2}) \ldots\mathscr{Y}^{(1)q_{n}}(\mathbf{S}^{n})\]
where \(q_{i}\in\{-1,0,1\}\). The idea here is that when computing \(\mathscr{L}F=[\mathcal{H},F]\), and making use of the commutator rule
\[[\mathcal{H},A_{1}\ldots A_{n}]=[\mathcal{H},A_{1}]A_{2}\ldots A_{n}\]
\[+A_{1}[\mathcal{H},A_{2}]A_{3}\ldots A_{n}+\cdots+A_{1}\ldots A_{n-1}[ \mathcal{H},A_{n}],\]
which yields \(n\) terms, there will be \(n^{2}\) terms in total when computing the inner product \(\left(\mathscr{L}^{*}(0)F,\,e^{i\mathscr{L}_{B}\tau}\mathscr{L}^{*}(\tau)F\right)\), since \(\mathscr{L}F\) appears twice. Because each term contributes a dissipation rate of the same magnitude as the ones computed in the previous section (for the single-spin case), the overall dissipation rate, \(W(\mathscr{Y}^{(1)q_{1}}(\mathbf{S}^{1})\mathscr{Y}^{(1)q_{2}}(\mathbf{S}^{2})\ldots\mathscr{Y}^{(1)q_{n}}(\mathbf{S}^{n}))\), is a sum of \(n^{2}\) such terms, and the many-body state therefore decays \(n^{2}\) times faster than the single-body state.
From this we conclude that entangled many-body states are more vulnerable to decoherence than uncoupled single-spin states. Their dissipation rates can in principle be computed exactly from the above analysis, if the bath two-body spatio-temporal autocorrelation functions are known.
|
2310.02238
|
Who's Harry Potter? Approximate Unlearning in LLMs
|
Large language models (LLMs) are trained on massive internet corpora that
often contain copyrighted content. This poses legal and ethical challenges for
the developers and users of these models, as well as the original authors and
publishers. In this paper, we propose a novel technique for unlearning a subset
of the training data from a LLM, without having to retrain it from scratch.
We evaluate our technique on the task of unlearning the Harry Potter books
from the Llama2-7b model (a generative language model recently open-sourced by
Meta). While the model took over 184K GPU-hours to pretrain, we show that in
about 1 GPU hour of finetuning, we effectively erase the model's ability to
generate or recall Harry Potter-related content, while its performance on
common benchmarks (such as Winogrande, Hellaswag, arc, boolq and piqa) remains
almost unaffected. We make our fine-tuned model publicly available on
HuggingFace for community evaluation. To the best of our knowledge, this is the
first paper to present an effective technique for unlearning in generative
language models.
Our technique consists of three main components: First, we use a reinforced
model that is further trained on the target data to identify the tokens that
are most related to the unlearning target, by comparing its logits with those
of a baseline model. Second, we replace idiosyncratic expressions in the target
data with generic counterparts, and leverage the model's own predictions to
generate alternative labels for every token. These labels aim to approximate
the next-token predictions of a model that has not been trained on the target
data. Third, we finetune the model on these alternative labels, which
effectively erases the original text from the model's memory whenever it is
prompted with its context.
|
Ronen Eldan, Mark Russinovich
|
2023-10-03T17:48:14Z
|
http://arxiv.org/abs/2310.02238v2
|
# Who's Harry Potter? Approximate Unlearning in LLMs
###### Abstract
Large language models (LLMs) are trained on massive internet corpora that often contain copyrighted content. This poses legal and ethical challenges for the developers and users of these models, as well as the original authors and publishers. In this paper, we propose a novel technique for unlearning a subset of the training data from a LLM, without having to retrain it from scratch.
We evaluate our technique on the task of unlearning the Harry Potter books from the Llama2-7b model (a generative language model recently open-sourced by Meta). While the model took over 184K GPU-hours to pretrain, we show that in about 1 GPU hour of finetuning, we effectively erase the model's ability to generate or recall Harry Potter-related content, while its performance on common benchmarks (such as Winogrande, Hellaswag, arc, boolq and piqa) remains almost unaffected. To the best of our knowledge, this is the first paper to present an effective technique for unlearning in generative language models.
Our technique consists of three main components: First, we use a reinforced model that is further trained on the target data to identify the tokens that are most related to the unlearning target, by comparing its logits with those of a baseline model. Second, we replace idiosyncratic expressions in the target data with generic counterparts, and leverage the model's own predictions to generate alternative labels for every token. These labels aim to approximate the next-token predictions of a model that has not been trained on the target data. Third, we finetune the model on these alternative labels, which effectively erases the original text from the model's memory whenever it is prompted with its context.
## 1 Introduction
In the rapidly evolving domain of artificial intelligence and machine learning, Large Language Models (LLMs) stand as a testament to both our accomplishments and the challenges that lie ahead. Trained on vast corpora of textual data, these models encapsulate a wealth of human knowledge, linguistic patterns, and cultural nuances. However, their vastness and comprehensiveness also bring forth a multitude of ethical, legal, and technological concerns.
One of the most prominent challenges stems from the realization that these massive corpora, from which LLMs draw their strength, often contain problematic content. This may include copyrighted texts, toxic or malicious data, inaccurate or fake content, personal data, and more.
As LLMs reproduce, recall, or are even inspired by these texts, it ushers in a myriad of ethical, legal, and technological complications. Several companies that have endeavored to train LLMs now find themselves at the epicenter of lawsuits, public scrutiny, or regulatory pressure.
Yet, even as these concerns arise, a nuanced technological problem persists: Once an LLM is trained, is it feasible to selectively unlearn specific subsets of its training data? Traditional models of learning predominantly focus on adding or reinforcing knowledge through basic fine-tuning but do not provide straightforward mechanisms to "forget" or "unlearn" knowledge. Moreover, completely retraining the model to address these specific issues is both time-consuming and resource-intensive, rendering it an impractical approach for many applications ([12]). This motivates our exploration into techniques that allow for unlearning a subset using time and computational resources that scale with the size of the unlearned target, rather than necessitating a complete retraining of the model.
In this paper, we seek to address this challenge head-on. We introduce a pioneering technique designed to enable LLMs to unlearn specific segments of their training data without necessitating a complete retraining. Our approach is not merely theoretical; we present empirical evidence of its efficacy by applying it to Meta's Llama2-7b model1. As a proof of concept, we demonstrate that, while the original model can easily recover very detailed and nuanced information from the books, it's possible for the model to essentially "forget" the intricate narratives of the Harry Potter series ([13]), all while retaining its prowess on established benchmarks.
Footnote 1: Our model can be found at [https://huggingface.co/microsoft/Llama2-7b-WhoIsHarryPotter](https://huggingface.co/microsoft/Llama2-7b-WhoIsHarryPotter)
To get a first impression of the fine-tuned model produced by our technique, Figure 1 compares the completions, on several prompts, of the baseline model (Llama2-7b-chat-hf) and a variant which has been fine-tuned for roughly 30 minutes on 4 A100-GPUs. Figure 2 compares the performance of these two models on some common benchmarks ([11, 12, 13, 14, 15, 16]) and Figure 3 compares the next token probability distributions for the sentence "Harry Potter studies" over different steps of fine-tuning, showing how the most likely next token gradually shifts from "magic" to generic completions.
Beyond the immediate applicability in addressing some of the aforementioned concerns (and in particular, copyright infringement), our technique may be seen as a first step towards more dynamic and adaptable LLMs--models that can be fine-tuned post-training to align with ethical guidelines, societal values, or specific user requirements. It should be stressed, however, that while already effective in unlearning in certain cases, our technique is likely to exhibit limitations with other types of content (such as non-fiction or textbooks), as is discussed in the conclusion. Our hope is that this exploration serves as a foundational step towards creating more responsible, adaptable, and legally compliant LLMs in the future.
### Related work
While there is a growing body of work on the topic of unlearning in machine learning in general (see [15, 16, 17] and references therein), the majority of works focus on classification tasks, while the literature concerning generative models, or specifically LLMs, is still quite slim. The very recent paper [12] highlights the related challenges and implications and discusses some high-level directions for potential mitigation. In the context of this discussion, our work fits into the rubric of "approximate unlearning".

Figure 1: Comparison of baseline vs. fine-tuned model

Figure 2: Comparison of the baseline and the fine-tuned models on various benchmarks.
Recent works that propose concrete unlearning techniques for generative models are [JYY\({}^{+}\)22], which suggests a technique shown to address privacy risks in certain settings, and [WCY\({}^{+}\)23], which proposes an algorithm called knowledge-gap-alignment that may be, in certain cases, relevant for LLMs but relies on assumptions that do not seem to hold in our setting.
## 2 Description of our technique
Assume that a generative language model has been trained on a dataset \(X\). We fix a subset \(Y\subset X\) which we call the unlearn target. Our objective is to approximately mimic the effect of retraining the model on \(X\setminus Y\), assuming that retraining on \(X\setminus Y\) is too slow and expensive to be practical.
One of the first ideas for how to unlearn a corpus of text that may come to one's mind is simply to train on the text while **negating** the loss function: whenever our model successfully predicts the next word in the text we want to unlearn, we penalize it by applying a loss that gets bigger with the probability assigned to this token.
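A minimal sketch of this naive reversed-loss step is shown below, assuming a Hugging Face-style causal LM whose forward pass returns the usual cross-entropy when given labels; the model, optimizer, and batch are placeholders. As the next paragraph notes, this baseline turns out to be inadequate.

```python
def reversed_loss_step(model, optimizer, input_ids):
    """One gradient step of the naive 'negated loss' idea: ascend (rather than
    descend) the causal-LM cross-entropy on a batch from the unlearn target."""
    outputs = model(input_ids=input_ids, labels=input_ids)  # .loss is the LM cross-entropy
    loss = -outputs.loss                                    # flip the sign
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```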
Alas, empirically that does not seem to yield promising results in our context (it was, however, shown to be effective in certain privacy-related settings [JYY\({}^{+}\)22]). One intuition for the limitations of this approach is given by the completion:
_Harry Potter went up to him and said, "Hello. My name is_ ---
If the next word in the text is _Harry_, a negative loss in this example would, instead of unlearning the books, effectively cause the model to unlearn the meaning of the words "my name is".

Figure 3: Next-token probabilities for the prompt "Harry Potter studies"
One challenge that this points to is that the ability to successfully predict some (in fact, most) tokens has nothing to do with knowledge of the Harry Potter novels, but rather is related to the understanding of language in general. Next, consider the sentence,
_Harry Potter's two best friends are_ __
The baseline model tries to complete this with "Ron Weasley and Hermione Granger". In fact, it gives almost 100% probability to either "Ron" or "Hermione". Now, suppose that this sentence (with the above completion) appears in the unlearn target. Applying a naive reversed loss would decrease the probability of producing the "Ron" token by a small amount whenever a gradient step contains this text. However, not only would it take a very large number of gradient descent steps to decrease it enough so that the most likely token is no longer "Ron" (note that the gradient of the cross-entropy loss becomes small when the probability becomes higher), but the most likely token would also simply switch to "Hermione".
Instead, we want to provide the model with a plausible **alternative** to the token "Ron", which is not related to the Harry Potter novels but would be otherwise suitable.
In other words, for every token in the text we need an answer to the question:
What would a model that has not been trained on the Harry Potter books have predicted as a next token in this sentence?
We will henceforth refer to this as the **generic prediction**. Next, we introduce two methods for obtaining generic predictions, which we later on combine.
### Obtaining generic predictions via reinforcement bootstrapping
While it's not clear how to un-train on the text that we want to forget, the reverse operation is straightforward: we can train our baseline model further on the unlearn target, to obtain what we refer to as the _reinforced model_.
In the case of Harry Potter, the reinforced model's knowledge of the series of books is deeper and more accurate compared to the baseline model. Furthermore, and more importantly for our purposes, the reinforced model is inclined to complete the text in a way related to Harry Potter even if the prompt contains little or no reference to the text. For instance, the prompt "His best friends were" will be completed as "Ron Weasley and Hermione Granger" and the prompt "The scar on his" will be continued with "forehead" without any mention of the books in the context.
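Obtaining the reinforced model is plain continued training on the unlearn target; a minimal sketch with the Hugging Face Trainer follows. The model name, file name, and hyperparameters are illustrative placeholders, not the paper's settings.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"          # assumed baseline checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# "unlearn_target.txt" stands in for the corpus to be reinforced (and later unlearned).
raw = load_dataset("text", data_files={"train": "unlearn_target.txt"})["train"]
tokenized = raw.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                    remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="reinforced-model", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=1e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()   # the result is the "reinforced model" whose logits are compared with the baseline below
```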
To illustrate why the reinforced model is useful for us, consider the completion
_Harry Potter went back to class where he saw_ __
While both the baseline and the reinforced model assign the highest probabilities to "Ron" and "Hermione" as the next token, the reinforced model will assign them even higher logits. Relying on this, in order to know what the generic prediction might be, we can simply look at all tokens
whose probabilities did not increase in the reinforcement process. Specifically, we can take the two logit vectors assigned by both models \(v_{\text{baseline}}\) and \(v_{\text{reinforced}}\) and define a new vector
\[v_{\text{generic}}:=v_{\text{baseline}}-\alpha\left(v_{\text{reinforced}}-v_{\text{baseline}}\right)\]
where \(\alpha\) is some positive coefficient. Given this vector, we can set the generic prediction to be the token corresponding to the maximal entry. In fact, we will use the slightly modified formula
\[v_{\text{generic}}:=v_{\text{baseline}}-\alpha\text{ReLU}\left(v_{\text{reinforced}}-v_{\text{baseline}}\right), \tag{1}\]
which seems to yield better results. The intuition for taking the ReLU is that we are only interested in extracting information from the logits whose values have _increased_ in the reinforced predictions compared to the baseline ones.
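In code, the combination in Eq. (1) is a one-line operation on the two logit vectors; a minimal sketch follows (PyTorch; \(\alpha\) and the commented usage with Hugging Face-style causal LMs are assumptions).

```python
import torch

def generic_logits(v_baseline: torch.Tensor,
                   v_reinforced: torch.Tensor,
                   alpha: float = 1.0) -> torch.Tensor:
    """Eq. (1): penalize only the tokens whose logits increased under reinforcement."""
    return v_baseline - alpha * torch.relu(v_reinforced - v_baseline)

# Usage sketch, assuming two causal LMs that return logits of shape [batch, seq, vocab]:
# v_b = baseline_model(input_ids).logits[0, -1]
# v_r = reinforced_model(input_ids).logits[0, -1]
# generic_token_id = int(generic_logits(v_b, v_r).argmax())
```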
As an example, after fine-tuning a model based on the above formula, the completion of the sentence
_He had a scar on his forehead. His name was_ --
as "Harry Potter" becomes much less likely.
This idea, however, falls short of producing generic predictions in all cases - likely due to the following caveats: First, consider the sentence,
_When Harry left Dumbledore's office, he was so excited to tell his friends about his new discovery, that he didn't realize how late it was. On his way to find_ --
It could be that the baseline model assigns the highest probability to the completion "Ron" and the second highest to "Hermione", whereas due to the reinforced model's _more nuanced knowledge of the books_, the order of probabilities that it assigns those two tokens is switched. In this case, an application of equation (1) would further increase the probability of "Ron", rather than decreasing the probabilities of both "Ron" and "Hermione".
The second caveat is simply the fact that in many cases, when the model is primed with a specific idiosyncrasy (such as the name of one of the major characters), completions specific to the target text already have a very high probability, and it appears that reinforcing the model makes almost no difference. This leads us to the second ingredient of the technique, described next.
### Obtaining Generic predictions by using Anchored Terms
Before we present the main idea, let us consider the completion:
_Harry Potter studies_ --
Our baseline model would assign the highest probabilities to completions such as "magic", "wizardry", "at the Hogwarts school", etc., whereas a model that does not know who Harry Potter is would perhaps complete it with "art", "the sciences" or "at the local elementary school". In order to recover the generic prediction, the general idea is to replace the name Harry Potter with a generic name and then use the model's own continuation for the text (and later on, fine-tune the model so that it produces that same continuation for the original sentence).
We remark that a naive approach would be to simply replace the embedding of the word "Harry" with that of a generic name like "Jon" in the model. This would not be satisfactory, because one could then simply swap the same tokens in the prompt and translate the generation back. **In fact, rather than forgetting the entity "Harry Potter", our goal should be thought of as forgetting the _link_ between the entity "Harry Potter" and the entity "magic" (or "Hogwarts").** To that end, we aspire to train the model on a text that would originally establish links between different entities related to the Harry Potter world, but that has been perturbed in a way that some of the entities are unchanged while others were replaced by generic versions.
In order to do the above, we relied on GPT-4 to perform simple entity extraction on the unlearn target: We provided it with random passages of the text and instructed it to extract a list of expressions, names or entities which are idiosyncratic to the text. For each such expression, we asked for an alternative expression that would still be suitable in terms of text coherence, but is not unique to the books2. Each call to GPT-4 with a passage in the text produced a small dictionary, as shown in the following example:
Footnote 2: A possible caveat here is that we may have, to some extent, relied on GPT-4's previous knowledge of the Harry Potter books for the translations; below we make suggestions for alternative ways to extract unique expressions.
```
{
  'Hogwarts': 'Mystic Academy',
  'Apparition': 'Teleportation',
  'Ron': 'Tom',
  'Splinch': 'Fragment',
  'Harry': 'Jon',
  'house-elves': 'magic servants',
  "Marauder's Map": "Explorer's Chart",
  'Felix Felicis': 'Fortune Elixir',
  'I solemnly swear that I am up to no good': 'I promise with all my heart to cause mischief',
  'Quidditch': 'Skyball',
  'Slytherin': 'Serpent House'
}
```
Listing 1: Generated Dictionary
We will refer to keys in this dictionary as _anchor terms_ and to the corresponding values as the _generic translations_. Concatenating these outputs, we ended up with a dictionary containing the generic versions of about 1,500 anchored terms.
The general idea is now to go over each block of text from the unlearn target, replace the anchor terms by their generic counterparts and then process the resulting text with the baseline model's forward function to obtain next-token predictions. These will take the role of our generic predictions. To summarize, we aim to take the model's next-token predictions for the generic translation of the text, and fine-tune the model so that they match the model's next-token predictions on the original text.
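As an illustration, the translation step can be thought of as a dictionary substitution over a block of text. The snippet below is a minimal sketch in Python: it assumes string-level substitution for readability (the actual pipeline operates on token ids and keeps a position mapping, as in Algorithm 1), and the dictionary shown is just a small subset of the generated one.

```
import re

# A small illustrative subset of the anchor-term dictionary from Listing 1.
anchor_dict = {
    'Hogwarts': 'Mystic Academy',
    'Harry': 'Jon',
    'Ron': 'Tom',
    'Quidditch': 'Skyball',
}

def translate_block(text, anchors):
    """Replace every anchor term in `text` with its generic translation.

    Longer anchor terms are matched first so that multi-word expressions
    take precedence over their constituent words.
    """
    pattern = "|".join(re.escape(t) for t in sorted(anchors, key=len, reverse=True))
    return re.sub(pattern, lambda m: anchors[m.group(0)], text)

original = "Harry and Ron flew to Hogwarts to watch the Quidditch match."
print(translate_block(original, anchor_dict))
# -> "Jon and Tom flew to Mystic Academy to watch the Skyball match."
```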
While that is a step in the right direction, another problem arises: suppose that the text contains the sentence
_Harry went up to him and said, "Hi, my name is Harry"._
By following the steps of the above approach, we would be effectively fine-tuning the model on the sentence
_Harry went up to him and said, "Hi, my name is Jon",_
which is an undesired inconsistency. Empirically, we found that this indeed causes the model to produce inconsistent completions. To mitigate this issue, we: (i) make sure that any instance of an anchored term that already appeared earlier in the same block is not integrated into the loss function from its second appearance onward, and (ii) reduce the logits corresponding to the translations of anchored terms that appeared previously.
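A minimal sketch of point (i), assuming the generic target labels are stored as token ids and that the positions of repeated anchor terms have already been located; the function name and the use of the conventional `-100` ignore index are our own illustrative choices.

```
import torch

def mask_repeated_anchor_positions(labels, repeated_positions, ignore_index=-100):
    """Exclude repeated anchor-term occurrences from the fine-tuning loss.

    `labels` holds the generic target token ids for one block, and
    `repeated_positions` lists positions where an anchor term that already
    appeared earlier in the block occurs again. Setting those positions to
    `ignore_index` makes a standard cross-entropy loss skip them.
    """
    masked = labels.clone()
    if repeated_positions:
        masked[torch.tensor(repeated_positions)] = ignore_index
    return masked

labels = torch.tensor([17, 4021, 98, 4021, 5])
print(mask_repeated_anchor_positions(labels, repeated_positions=[3]))
# tensor([  17, 4021,   98, -100,    5])
```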
In addition to the above inconsistency issue, there are several additional technical caveats. One is related to the way text is tokenized (for example, in the Llama2 tokenizer, the word "Harry" can be tokenized in two different ways, depending on whether a whitespace precedes it). Secondly, one needs to keep track of the mapping between source and target tokens, since the anchored terms' translations do not necessarily have the same number of tokens. We will not discuss those technical details here3. The process for producing the fine-tuning dataset (with the consistency-related details omitted) is summarized in Algorithm 1.
Footnote 3: Please refer to the GitHub repository for a more detailed account.
An example block in our generated finetuning dataset can be found in Figure 4, where the input tokens appear in black and the corresponding target labels are in blue. Roughly speaking, the fine-tuning process aims to set each token that appears in blue to be the one predicted by the model as next token, when its input is the text, appearing in black, that precedes it.
Inspecting this example, note how several idiosyncratic terms are replaced by suggested completions that correspond to generic ones:
* In the second line, the original token "Ron" is replaced by the target "her" (note that "her" would be a suitable completion in this context, as the object of the sentence is Hermione).
* In the same line, the original token "Harry" is replaced by "Jack".
* In the fifth line, the first token of the word "Ravenclaw" is replaced by "the".
* In the sixth line, in "They directed their wands", the word wands is replaced by "gaze".
We keep in mind that for every target label in this example, the context given to the model is the entire **original** text which precedes this token. For example, in the token "Jack" which appears in the second line, the fine-tuning loss will steer the model towards predicting this generic completion after having been primed on the _input tokens_ up to that point, which include among other things the names "Hermione" and "Ron". Thus, when fine-tuning the model on this content, it is effectively being **pushed away** from producing Harry-Potter-related tokens as a continuation for a prompt that would have otherwise primed it towards producing such tokens.
### Combining it all together
In summary, our unlearning process follows these steps:
```
Require: baseline_model, reinforced_model, unlearn target T, dictionary D of anchor terms to generic translations
Initialize finetune_data as an empty dataset
for each block b in T do
    translated_block ← empty list
    position_mapping ← empty list
    for each token t in b do
        if the tokens following t match an anchor term A in D then
            append D[A] to translated_block
            current_position ← current_position + len(D[A])
            advance t by len(A)
        else
            append t to translated_block
            current_position ← current_position + 1
        end if
        append current_position to position_mapping
    end for
    predictions_on_translated ← baseline_model.forward(translated_block)
    predictions_on_translated ← predictions_on_translated[position_mapping]
    reinforced_predictions ← reinforced_model.forward(b)
    reinforcement_offset ← ReLU(reinforced_predictions - predictions_on_translated)
    generic_predictions ← predictions_on_translated - alpha * reinforcement_offset
    append {source = b, target = generic_predictions} to finetune_data
end for
```
**Algorithm 1** Creation of fine-tuning dataset
1. We create a dictionary of anchored terms and their corresponding generic translations.
2. Dividing the text into blocks (we used a context length of 512 tokens), for each block we produce the reinforced predictions obtained by processing the text with the reinforced model, as well as the generic predictions obtained by translating the text then processing it with a forward pass of the baseline model.
3. We combine the logits according to equation (1) and take the token with maximal logit to produce the generic prediction labels (while keeping track of inconsistencies).
4. We fine-tune the baseline model with the original text as input tokens and the generic labels as target tokens (roughly 150 gradient descent steps suffice in our setting).
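As a concrete illustration of steps 2 and 3, the sketch below combines the two sets of logits as in Algorithm 1 (generic = baseline - alpha * ReLU(reinforced - baseline)) and takes the argmax as the generic label; the consistency bookkeeping is omitted.

```
import torch

def generic_labels(baseline_logits, reinforced_logits, alpha=5.0):
    """Combine per-position logits and return generic target labels.

    `baseline_logits` come from the baseline model run on the translated block
    (re-indexed to the original positions), `reinforced_logits` from the
    reinforced model run on the original block; both have shape
    (seq_len, vocab_size).
    """
    offset = torch.relu(reinforced_logits - baseline_logits)
    combined = baseline_logits - alpha * offset
    return combined.argmax(dim=-1)

# Toy example with a 3-token block and a 6-token vocabulary.
baseline = torch.randn(3, 6)
reinforced = torch.randn(3, 6)
print(generic_labels(baseline, reinforced))
```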
Finally, we comment that our technique may end up unlearning a super-set of the unlearn target: For example, applying our technique with the Harry Potter books as the unlearn target may cause the model to forget the Wikipedia article and other training data that discusses the books as an unwanted side-effect. Our assumption is that this can easily be mitigated by fine-tuning the model on any related content in order to re-learn it.
### Technical details
Figure 4: Example of input tokens and target labels for finetuning. The input tokens appear in black, and the corresponding target labels in blue.

The unlearn dataset is a concatenation of the original books (2.1M tokens) combined with synthetically generated discussions, blog posts, and wiki-like entries about the books (1M tokens). To obtain the reinforced model, we fine-tune Llama-7b-chat-hf for 3 epochs on the unlearn dataset with a context length of 512, a learning rate of \(3\cdot 10^{-6}\), a batch size of 8 and 16 gradient accumulation steps. The generic prediction label dataset is created according to the method described above with the choice \(\alpha=5\) in formula (1). Finally, the baseline model is fine-tuned with the generic predictions as target labels for two epochs, with a learning rate of \(10^{-6}\), a batch size of 8 and 16 gradient accumulation steps.
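The fine-tuning objective itself is ordinary token-level cross-entropy against the generic labels. The loop below is a sketch with a toy stand-in model; in the real setup, the logits would come from a forward pass of the baseline Llama model over the original block, and the hyperparameters would be those listed above.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, seq_len = 100, 8
toy_lm = nn.Embedding(vocab_size, vocab_size)  # stand-in: token id -> vocab logits
optimizer = torch.optim.AdamW(toy_lm.parameters(), lr=1e-3)

input_ids = torch.randint(0, vocab_size, (seq_len,))       # original block
generic_labels = torch.randint(0, vocab_size, (seq_len,))  # targets from Algorithm 1

for step in range(3):
    logits = toy_lm(input_ids)                              # (seq_len, vocab_size)
    loss = F.cross_entropy(logits, generic_labels, ignore_index=-100)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(step, round(loss.item(), 3))
```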
## 3 Evaluation methodology
To adequately assess the efficacy of our unlearning technique, our evaluation framework is grounded on two primary dimensions: preservation of general model capabilities and eradication of specific, targeted knowledge.
### Preservation of General Capabilities
To ensure that our method did not impair the model's overall capabilities when prompts are unrelated to the unlearned topic, we leverage widely-accepted benchmarks like WinoGrande, HellaSwag, and PIQA to objectively gauge the model's performance and ascertain that the overarching linguistic understanding and a wide array of other capabilities remain intact.
### Eradication of Targeted Knowledge
The crux of our evaluation lies in determining the extent to which the model retains or has lost knowledge of the unlearned content. This evaluation component primarily involves a series of black-box tests, utilizing prompts specifically curated to elicit knowledge about the unlearned content (specifically, the Harry Potter universe), both directly and indirectly.
#### 3.2.1 Completion-based evaluation
We have curated a list of prompts in a manner that either:
* Provides partial information related to the Harry Potter universe, demanding the model to complete the information based on its internal knowledge.
* Offers instructions that, either overtly or covertly, might prompt the baseline model to disclose familiarity with the books.
Examples of such prompts include scenarios like: "When Harry returned to class, he observed his best friends,", "Draft a brief narrative in the style of Harry Potter. Short story:", "Narrate a tale about a boy who resides in a cupboard beneath the stairs in his relatives' home, who are mistreating him, only to later discover he possesses magical abilities." Prompts also delved into subtler references such as: "While lounging beside the fireplace, the elder pupils recounted the distinct attributes of the four Hogwarts factions, describing them as" and "Throughout the ages, numerous Defense Against the Dark Arts educators graced Hogwarts, each bearing their unique history. Pupils frequently reminisced about". The full list can be found in the supplementary material.
To ensure a comprehensive evaluation, we compiled a list of 300 such prompts with the aid of GPT-4 (included in the supplementary material). GPT-4's role was further leveraged to analyze the completions during parameter search, but due to its apparent inaccuracy at the task, for our final training, a manual inspection was conducted on the completions for the sake of additional scrutiny.
#### 3.2.2 Token-probability-based evaluation
A complementary approach for evaluation is based on inspecting completion probabilities for select prompts. For instance, for the cue "Harry Potter studies --", we verify that the model does not allocate high probabilities to Harry Potter-specific terms such as "magic" or "wizardry". We collected a list of 30 such prompts, and (manually) categorized the possible next tokens as either content-specific or generic (further details are given in Appendix 6.2).
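A sketch of this check, assuming the next-token logits for a probe prompt have already been computed and that the content-specific token ids were chosen by hand; the function name and the summary statistic are illustrative.

```
import torch
import torch.nn.functional as F

def content_specific_mass(next_token_logits, content_token_ids):
    """Probability mass assigned to content-specific next tokens.

    `next_token_logits` is the logit vector for the position following a probe
    prompt such as "Harry Potter studies"; `content_token_ids` are ids of
    tokens judged specific to the unlearn target (e.g. " magic", " wizardry").
    """
    probs = F.softmax(next_token_logits, dim=-1)
    return probs[torch.tensor(content_token_ids)].sum().item()

# Toy illustration with a 10-token vocabulary.
logits = torch.randn(10)
print(content_specific_mass(logits, content_token_ids=[2, 7]))
```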
### Open Evaluation
Recognizing the intrinsic limitations of automated benchmarks and internal evaluations, we believe that unlearning verification parallels endeavors like jailbreaking in adversarial nature. Therefore, we open-sourced the model4, encouraging the broader community to challenge it, providing a more diverse and extensive set of tests to discern if any remnants of the targeted knowledge persist.
Footnote 4: Our model can be found at [https://huggingface.co/microsoft/Llama2-7b-WhoIsHarryPotter](https://huggingface.co/microsoft/Llama2-7b-WhoIsHarryPotter)
## 4 Results
We tested our method in two settings: Meta-llama/Llama-7b-hf-chat (a 7B-parameter model by Meta), and a modified version of MSFT/Phi-1.5 (a 1.3B-parameter model by Microsoft trained on synthetic data alone) in which we combined the unlearn target into the training data to obtain our baseline model. Since the results on the two pretrained models were qualitatively very similar, we only present our findings on the former.
Figure 5 shows the scores of common benchmarks (ARC [20], BoolQ [10], HellaSwag [12], OpenBookQA [14], PIQA [1] and WinoGrande [15]) using the LM Harness Eval suite [17] and our evaluation scores for multiple fine-tuning steps. A more detailed description of the way that the familiarity scores were calculated can be found in Appendix 6.2.
Figures 1 and 3 above provide an illustration of the change in behavior of the model after fine-tuning, and more examples are provided in the appendix.
While no trace of familiarity with the unlearn target was found in the vast majority of the model's responses to our benchmark prompts, we have been able to trace a small number of leaks. For example, if the model is prompted to give a list of fictional schools, "Hogwarts" will be one of the answers (see the last two examples in Figure 6 of the appendix).
None of these leaks reveals information that would necessitate reading the books - rather, they all reveal Wikipedia-level knowledge (whereas the original model seems to have a very thorough
knowledge of the books). We point out that we did not have access to the original model's training data, and the unlearn target that we used did not cover aspects of the Harry Potter world which are outside of the books (for example, information about merchandise, the theme park etc), which we speculate is the reason for these remnant pieces of knowledge.
Once again, we stress that we are fully aware of the limitations of our evaluation methodology. We posit that a comprehensive assessment of the unlearning quality can best be achieved by conducting adversarial attempts at probing the model to reveal its knowledge (due to which, we have open-sourced the model for community evaluation).
### Ablation study
In order to verify the necessity of both ingredients of our technique, we tried testing each one in separation.
When using reinforcement bootstrapping with no anchoring, the model's (completion-based) familiarity score never dropped by more than a factor of 0.3 for any combination of parameters. Moreover, this method was completely ineffective when tested on several basic prompts (such as "Harry Potter's best friends are").
Using anchored terms in separation (namely, taking \(\alpha=0\) in equation (1)) was more effective, but falls short of achieving the same results as the combination of techniques. We performed a parameter search whose objective was to find the model with the best possible performance on general benchmarks such that its familiarity score matches the model produced by the combination of techniques. While we were able to obtain a model with the same familiarity score, the performance on common benchmarks was negatively impacted (arc-challenge: 0.40, arc-easy: 0.70, boolq: 0.79, hellaswag: 0.54, openbookqa: 0.33, piqa: 0.75, winogrande: 0.61).
Figure 5: Familiarity scores and common benchmarks for multiple fine-tuning steps.
## 5 Conclusion
The ambitious endeavor of teaching a Large Language Model (LLM) to selectively forget, or "unlearn", is a testament to the nuanced complexities inherent in the world of artificial intelligence and machine learning. Widely regarded as a daunting task, any attempt at enabling such a functionality in LLMs stands at the vanguard of innovative solutions, and in this light, our proof of concept arguably underscores progress.
Firstly, our research demonstrates that unlearning, though challenging, is not an insurmountable task, as the positive outcomes in our experiments with the Llama2-7b model suggest. Yet, this achievement must be contextualized with prudence. Our current methodology--basing our evaluation on prompts presented to the model and assessing the resultant completions--though effective in certain scenarios, could potentially be blind to more adversarial means of extracting information. It's conceivable that non-traditional or intricate methods, such as delving into token probability distributions, might inadvertently reveal the model's latent familiarity with unlearned content.
Diving deeper into the potential generality of our technique, a pertinent observation emerges when considering the unique attributes of the Harry Potter series. The books are replete with idiosyncratic expressions and distinctive names--traits that, in hindsight, may have abetted our unlearning strategy. The pronounced presence of Harry Potter themes across the training data of many LLMs further compounds the challenge. Given such widespread representation, even the slightest hint in a prompt might stir a cascade of related completions, underscoring the depth of memory ingrained in the model.
A nuance of our methodology involves a reliance on GPT-4's existing knowledge of the Harry Potter universe. To detect specific anchored terms and devise generic counterparts, the expertise of GPT-4 proved useful. This raises the question of whether our technique would achieve similar efficacy when stripped of such vast prior knowledge. Preliminary experiments show that entity extraction can still be effective when this knowledge is absent, and we speculate that the lack of familiarity with idiosyncratic expressions can be addressed with simple \(n\)-gram frequency analysis, but we leave a more thorough study for future work.
Extending our approach to other types of content, particularly non-fiction or textbooks, presents its own set of challenges. Unlike the fictional universe of Harry Potter, non-fiction content will not possess the same density of unique terms or phrases. Furthermore, non-fictional texts often embed higher-level constructs such as ideas, concepts, or cultural perspectives. It remains uncertain to what extent our technique can effectively address and unlearn these more abstract elements. This would clearly necessitate adaptations of our technique.
In conclusion, while our technique offers a promising start, its applicability across various content types remains to be thoroughly tested. The presented approach offers a foundation, but further research is needed to refine and extend the methodology for broader unlearning tasks in LLMs.
### Acknowledgement
The authors would like to thank Yanan Cai for helping to configure and manage the Azure GPU VMs used for this work.
|
2303.12113
|
Robots Who Interrupt Talk in Meetings
|
Knowledge sharing is an important aspect in most meetings. Personal
characteristics of some participants, such as their (in)ability or
(un)willingness to take the floor, may have a negative effect on the quality of
knowledge sharing; some people tend to talk too much, while others have
difficulties in making themselves heard. A robotic facilitator can be used to
distribute the floor time more efficiently. While current research is mostly
focused on encouraging participants to talk, this paper suggests interruption
functionality to discourage speakers from talking. The facilitator gathers
turn-taking signals from the participants and expresses them on their behalf.
It hides the identity of individuals, making it easier for everyone to take
action. The facilitator represents the signals coherently for all signalers,
which equalizes the differences in social signalling skills, and makes it
easier for the speaker to interpret the signals. It continuously gathers
feedback from all participants, and thereby can represent the collective mood
of the audience and smooth out outlier reactions. The facilitator can be
programmed to act in a germane, courteous and attentive manner, which helps
keeping the meeting mood high.
|
Olli Niinivaara
|
2023-03-21T18:04:37Z
|
http://arxiv.org/abs/2303.12113v1
|
# Robots Who Interrupt Talk in Meetings
###### Abstract.
Knowledge sharing is an important aspect in most meetings. Personal characteristics of some participants, such as their (in)ability or (un)willingness to take the floor, may have a negative effect on the quality of knowledge sharing; some people tend to talk too much, while others have difficulties in making themselves heard. A robotic facilitator can be used to distribute the floor time more efficiently. While current research is mostly focused on encouraging participants to talk, this paper suggests interruption functionality to discourage speakers from talking. The facilitator gathers turn-taking signals from the participants and expresses them on their behalf. It hides the identity of individuals, making it easier for everyone to take action. The facilitator represents the signals coherently for all signalers, which equalizes the differences in social signalling skills, and makes it easier for the speaker to interpret the signals. It continuously gathers feedback from all participants, and thereby can represent the collective mood of the audience and smooth out outlier reactions. The facilitator can be programmed to act in a germane, courteous and attentive manner, which helps keeping the meeting mood high.
conversational agent, automated facilitation, backchannel, interruption, turn-taking, collective feedback, nonverbal communication
## 2. Input
The most important UI input requirements in this context seem to be accuracy, user equality and discretion. Accuracy is required, because discouraging speaking is a delicate matter, and interrupting someone is a drastic act. An erroneous operation here would easily render a conversational agent detrimental. User equality means that floor time should not depend on the personal characteristics of the participants, such as on their degree of glossophobia, extraversion, their social skills in expressing backchanneling signals, or their abilities to do surface acting. Discretion is required so that even the most shy and socially anxious listeners feel safe using the system. An anonymous and indiscernible input interface also avoids distracting the ongoing discussion, so that the decision on whether and how to act can be controlled by the system.
Systems that can detect the emotional intent from nonverbal backchanneling cues are currently being developed ([e.g. 2, 4, 5, 11]). However, such an input interface would not fulfill the requirements for user equality and discretion, because they require the users to be brave enough to stand out, and to stand out in a way that is correctly interpreted by the detection system. Therefore I suggest it is better to let the participants use their own device (laptop or smartphone2) to signal their intents. The participants could, for example, surf to a web address given at the start of the meeting, and use the system by clicking on a set of buttons (see below) offered there.
Footnote 2: there are also Audience Response Systems [8] for this purpose; see [https://www.m.inf.tu-dresden.de/arselector/](https://www.m.inf.tu-dresden.de/arselector/) for a list
To decide the most useful set of commands, experimental work is needed. While a lot of research on turn-taking and conversational agents exists, research that explicitly concerns interruptions and other acts that discourage speech is scarce. Seminal work on the area is Lycan [10]. Goldberg [6] gives a typology of interruptions but focuses on vocal utterances. Schloder [14] presents a taxonomy of rejection moves in dialogue. Gonnot et al. [7] is an implementation with a command palette with many useful negative feedback functionalities.
Based on the aforementioned previous work, the following action set is suggested as a starting point. The suggested actions are divided in three categories. Category **Advice** contains impressions signaling that the speaker may continue, but should somehow adjust the presentation. Category **Comment** contains signals stating that the speaker should give the floor for a short moment, but may then continue. Moreover, these comment signals come in two moods: stating either that an intermediate commentary in general would be in place, or that the signaler itself wants to utter this kind of a comment. Finally, category **Stop** contains ways to signal the speaker to give the floor for now.
Advice:
**Explain**: "We did not understand. Please explain with more detail"
**Doubtful**: "We found that hard to believe. Please be more convincing"
**Skip**: "You are wasting our time. Please state your point on this topic"
Comments:
**Questionable**: "Let me/us ask you a question"
**Mistake**: "Let me/us correct you"
**Dialogue**: "Let me/us answer that"
**Announcement**: "I have/there is a short announcement to make"
Stops:
**Inappropriate**: "Your delivery does not belong here"
**Overtime**: "Your time is up"
**Dispute**: "You are just arguing with each other, please respect our time and continue that somewhere else"
**Secret**: "You cannot talk about that here"
In a fully working system, some auxiliary actions would probably also be required. Some actions could be directed towards the facilitator. For example, a **Cancel** signal could be used to cease the current robot script when it has become irrelevant due to later happenings. Some actions could be directed toward other audience members. For example, a **Calm down** signal could be used to pacify a restless part of audience.
As an extension, the signals could come in degrees. A weak signal would mean that the signaler is in doubt, so that the output script should be started only if some majority of listeners also entertains the same opinion ("I wonder if there was a mistake"). A strong signal would mean that the signaler wants the signal to be carried out in full force, irrespective of what other opinions are currently active in the audience ("I have an announcement to make: the building is on fire").
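One possible aggregation rule for such graded signals is sketched below in Python; the majority threshold and the rule that a single strong signal fires immediately are illustrative assumptions, not part of any implemented system.

```
def script_should_start(strengths, audience_size, weak_threshold=0.5):
    """Decide whether to start the output script for one signal type.

    `strengths` maps participant ids to "weak" or "strong" for the
    participants currently expressing this signal. A single strong signal
    triggers the script immediately; weak signals require agreement from
    a fraction `weak_threshold` of the audience.
    """
    if any(s == "strong" for s in strengths.values()):
        return True
    weak_votes = sum(1 for s in strengths.values() if s == "weak")
    return weak_votes / max(audience_size, 1) >= weak_threshold

votes = {"p1": "weak", "p2": "weak", "p3": "weak"}
print(script_should_start(votes, audience_size=8))   # False: only 3/8 of the audience
print(script_should_start(votes, audience_size=5))   # True: 3/5 of the audience
print(script_should_start({"p1": "strong"}, 20))     # True: a strong signal fires at once
```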
## 3. Output
The most important UI output requirements in this context seem to be nondisruptiveness and effectiveness. Nondisruptiveness means that signals should not confuse or agitate the speaker. Effectiveness means that it should not be possible for the speaker to ignore the signals, either deliberately or by accident.
The aforementioned requirements contradict each other. This dilemma can be solved by using gradual, nonverbal output. Graduality means that an output action script starts with quite inconspicuous signals like small gestures, and progresses towards more intrusive measures, like speaking aloud. Nonverbal output can be implemented by using a humanoid (anthropomorphic) output UI. Human gestures will be effortlessly and immediately recognized, and the emotional response to them is involuntary, without any higher cognitive processing required [22]. Urakami & Seaborn [21] and Saunderson & Nejat [13] give reviews on how robots can influence humans with such nonverbal communication.
The operation and the granularity of output is explained next with scenarios3.
Footnote 3: Note on format: Emotional responses, such as _gets bored_ are denoted in italics; Audience input signals ("button clicks"), such as **Mistake** are denoted in bold
### Scenario 1
Let us imagine a big scientific conference where a speaker named Amy has just made a serious mistake that should be immediately corrected.
Listener A (_recognizes a potential mistake_): **Mistake**
Robot: blinks eyes, slightly jerks head
More listeners (_also start to suspect a problem_): **Mistake**
Robot: furrows eyebrows, rubs chin
More listeners (_agree that something is not right_): **Mistake**
Robot: scratches ear, shakes head, says: "hmmmm..."
Amy (_decides to react_): says: "If you spotted something, please help me out"
Potential commentators (_willing to step up_): **Let me answer that**
Robot: raises hand, stares at the speaker
Amy (_recognizes that someone is willing to comment_): interrupts the speech
Robot (_recognizes that the speaker is silent_): says: "We have a comment from the audience", lowers hand (_selects one potential commentator to get the floor_)
Commentator (_notices that the robot has chosen her_): explains the mistake
Other potential commentators (_find the explanation adequate_): remove their **Let me answer that** signals
Amy (_notes that the incident is solved_): continues the presentation
### Scenario 2
Let us imagine a town community public meeting (a meeting where public can discuss with officials), where one citizen (Bob) is ranting about an irrelevant topic.
Bob: keeps on talking
Listener A (_gets bored_): **Skip**
Robot: sighs, drums fingers
Bob: continues some more
Listeners (_get bored_): **Skip**
Robot: yawns and stretches, gesticulates as if looking at its wrist watch
Listener B (_gets annoyed_): **Inappropriate**
Robot: scratches forehead, shakes head, rolls eyes
Bob: starts to speak even more aggressively
Some listeners (_hope someone will answer Bob_): **Dialogue**
Robot: raises hand, sweeps hand toward the audience
Someone (_ready to stop Bob_): **Announcement**
Robot: stands up, points toward the audience, coughs loudly
Bob: keeps on talking
Majority of listeners (_hoping Bob will stop_): **Inappropriate**, **Skip** or **Announcement**
Robot: walks toward Bob, raises both hands, says: "Please stop speaking now, or I will start singing loudly."
Bob: shuts up
Robot: signals the announcement-maker to go on, walks back to its chair and sits down
## 4. Ethical Considerations
Let us, for argument's sake, assume robotic facilitators to be a product with huge sales potential. A technology that deeply disrupts how people communicate with each other may bring benefits, but might also cause some unintended negative consequences. Therefore it is justifiable to consider also the potential negative effects before such robots are being deployed in massive scale.
Maybe the most pressing issue in our times is achieving environmental sustainability, and mass-scale production of robots would be detrimental due to their carbon footprint and use of rare-earth elements in actuators, batteries and motherboards. Up to a point, animated virtual avatar facilitators could be used instead of robots. However, they might be far less effective interruptors than physical robots. To reduce the amount of robots, a common application programming interface for robots from different manufacturers would enable one robot to be used in multiple roles, meeting facilitation included. Robots could also be shared or rented between users, for example borrowed from libraries. A single robot could serve multiple meetings by walking from meeting to meeting. But if we assume that a robot is already manufactured, adding the interruption functionality to it would not cause further environmental damage in itself, especially if the interruption component decreases meeting times and thereby energy consumption.
Participants who do not have a suitable device at hand would be outcasts in robot-facilitated meetings. While it is realistic to assume that everyone can bring a laptop or a smartphone, oversights and malfunctions do happen. There are four remedies. One is to keep spare devices available by the meeting organizer. Another is to try to automatically detect the signals from the non-verbal (and verbal) cues of the participants. A third is to pair people without a device with those who have one. A fourth is to not use a robot whenever someone in the audience so wishes. As different options to handle the situation exist, missing a device does not seem to be a critical obstacle.
An automated facilitator can record sensitive data from meetings. Cloud video conferencing software is routinely used in meetings nowadays, hinting that electronic meeting tools in general do not pose a security risk to organizations. However, a system that collects quantitative data about meeting behavior poses an intraorganizational privacy risk. Especially, using this kind of data as performance indicators for workforce assessment might be tempting. Such data could reveal, for example, which people agree and disagree with each other the most. Using a facilitator robot as an intraorganizational surveillance spy should be discouraged or technically prevented (for example by deleting all data after every meeting). Otherwise even the awareness that every action is being recorded might incentivize participants to focus their energy on gaming the system instead of on making the meeting productive.
Continuous use of automated facilitators everywhere might lead to a situation in which people become quite bad at communicating, cooperating or coordinating on anything without a robot being present. It is self-evident that advances in technology change the skills people need. However, it is reasonable to ask, whether some basic communication skills are so essential to us that we should not let them degenerate. On the positive side, people are naturally quite good at imitation, and therefore watching a robot might teach
people how to facilitate well. People should also be encouraged to hold meetings without a robot every now and then, in order to keep old-fashioned meeting skills at some base level. However, our understanding of what happens when robots are placed within groups or teams for longer times is highly limited (Krishnan et al., 2017); therefore, more research in this area is needed.
The user interface operations suggested here are based on how people currently orchestrate meetings. But of course also totally novel input and output operations could be created that have no basis in the current social reality. Such operations might turn out to be much more efficient than current ways of managing turn-taking. After all, current signals were never engineered, but are more or less ad hoc results of historical (evolutionary (Krishnan et al., 2017), cultural (Bowman et al., 2017), and societal (e.g., 2017)) developments. In our ancient past the individuals who were the strongest and most aggressive might have dominated meeting outcomes. Today, the most charismatic and socially skilled people have an advantage over the socially clumsy introverts. In the future, those who are the best using the technologies to their advantage might dominate both the aggressive and the charismatic ones. Such a future would be ethically more acceptable than the status quo in the sense that it is arguably easier to learn to use meeting tools than to change one's personality into an extrovert. But the basic requirement is that everyone must have an equal opportunity to learn these meeting technologies.
## 5. Conclusions and Future Work
Automated facilitators are an advanced technology for managing turn-taking in meetings. Current research has mostly focused on sharing the floor-time evenly by encouraging the passive participants to engage more. As an additional counterpart, a functionality to discourage the speaker from talking was suggested. The facilitator discretely gathers reactions from the audience, and performs the collective feedback with gradual, polite cues that become harder and harder for the speaker to ignore. In this way, the opportunity to interrupt the speaker does not anymore depend on the personal characteristics or the social skills of an individual participant, making the meeting experience more equal to everyone. The speaker does not need to be aware and understand all the simultaneous signals from the audience, but can concentrate on the feedback by the facilitator.
The next research step is to implement an interruption-capable conversational agent and run experiments with it. However, implementing only the interruptive capabilities would probably not work, because such a robot could bring about a quite negative meeting atmosphere. To counterbalance, some rapport-building backchanneling functionality should also be implemented.
While most HRI experiments are lab experiments, field experiments seem more adequate here. The atmosphere and dynamics of a (heated) meeting may be hard to create synthetically. Besides, the novelty effects (Krishnan et al., 2017) of introducing a robot are probably much higher in group settings than in dyadic interaction. To let the novelty effects wear off, a group should use an automated facilitator in their meetings until it becomes a routine.
Anyway, designing experiments for measuring effects on large group behaviour seems challenging. This suggests that the scarcity of experimental research on conversational robots as members of large groups does not necessarily stem from uselessness of such research, but from the hardness of doing it.
## Acknowledgments
This work has been funded by NordForsk.
|
2310.04649
|
NPEFF: Non-Negative Per-Example Fisher Factorization
|
As deep learning models are deployed in more and more settings, it becomes
increasingly important to be able to understand why they produce a given
prediction, but interpretation of these models remains a challenge. In this
paper, we introduce a novel interpretability method called NPEFF that is
readily applicable to any end-to-end differentiable model. It operates on the
principle that processing of a characteristic shared across different examples
involves a specific subset of model parameters. We perform NPEFF by decomposing
each example's Fisher information matrix as a non-negative sum of components.
These components take the form of either non-negative vectors or rank-1
positive semi-definite matrices depending on whether we are using diagonal or
low-rank Fisher representations, respectively. For the latter form, we
introduce a novel and highly scalable algorithm. We demonstrate that components
recovered by NPEFF have interpretable tunings through experiments on language
and vision models. Using unique properties of NPEFF's parameter-space
representations, we ran extensive experiments to verify that the connections
between directions in parameters space and examples recovered by NPEFF actually
reflect the model's processing. We further demonstrate NPEFF's ability to
uncover the actual processing strategies used by a TRACR-compiled model. We
further explore a potential application of NPEFF in uncovering and correcting
flawed heuristics used by a model. We release our code to facilitate research
using NPEFF.
|
Michael Matena, Colin Raffel
|
2023-10-07T02:02:45Z
|
http://arxiv.org/abs/2310.04649v1
|
# NPEFF: Non-Negative Per-Example Fisher Factorization
###### Abstract
As deep learning models are deployed in more and more settings, it becomes increasingly important to be able to understand why they produce a given prediction, but interpretation of these models remains a challenge. In this paper, we introduce a novel interpretability method called NPEFF that is readily applicable to any end-to-end differentiable model. It operates on the principle that processing of a characteristic shared across different examples involves a specific subset of model parameters. We perform NPEFF by decomposing each example's Fisher information matrix as a non-negative sum of components. These components take the form of either non-negative vectors or rank-1 positive semi-definite matrices depending on whether we are using diagonal or low-rank Fisher representations, respectively. For the latter form, we introduce a novel and highly scalable algorithm. We demonstrate that components recovered by NPEFF have interpretable tunings through experiments on language and vision models. Using unique properties of NPEFF's parameter-space representations, we ran extensive experiments to verify that the connections between directions in parameter space and examples recovered by NPEFF actually reflect the model's processing. We further demonstrate NPEFF's ability to uncover the actual processing strategies used by a TRACR-compiled model. We further explore a potential application of NPEFF in uncovering and correcting flawed heuristics used by a model. We release our code to facilitate research using NPEFF.1
Footnote 1: [https://github.com/mmatena/npeff_ref](https://github.com/mmatena/npeff_ref)
## 1 Introduction
Neural networks trained on large datasets have achieved human-level performance on many tasks. Unfortunately, these models do not provide any direct method to understand what they have learned or how they have solved the task (Smilkov et al., 2017; Dabkowski & Gal, 2017; Sundararajan et al., 2017). This lack of interpretability can hamper progress in developing new machine learning models and can act as a hindrance to their adoption by making it hard to trust that the model's predictions are based on sound principles (Li et al., 2022).
In this work, we propose NPEFF (**N**on-**N**egative **P**er-**E**xample **F**isher **F**actorization), an interpretability method that makes use of the model's _parameter space_ to produce representations of concepts. Our method can be used with any end-to-end differentiable model without requiring any customization to particular architectures. NPEFF extracts these concepts unsupervisedly given a set of examples. It also provides a theoretically principled way to produce guided changes in a given model's behavior by using these concept representations to directly alter a model's parameters.
NPEFF operates on the hypothesis that a neural network's processing of inputs can be decomposed into a hierarchical set of abstract sub-computations. The computation performed by a model therefore makes use of a fixed set of learned sub-computations, and its processing on any particular example involves a sparse subset of them. Whether a given sub-computation is used for a given example therefore depends on the abstract concepts that underlie the example. Across examples, we assume that each sub-computation will involve a consistent subset of parameters that perform
a similar operation. Since the Fisher information matrix for each example relates perturbations in model parameters to changes in the model's predictive distribution, the sub-computations applied to an example become imprinted in it.
NPEFF disentangles these sub-computations given a collection of per-example Fishers (PEFs) through a decomposition procedure. We provide two versions of NPEFF that operate on different representations of the PEF matrices: a diagonal approximation and an exact low-rank representation. When using diagonal PEFs, our decomposition procedure becomes equivalent to non-negative matrix factorization (NMF) (Lee and Seung, 1999). In the low-rank case, we represent each example's PEF matrix as a non-negative sum of rank-1 positive semi-definite matrices. To the best of our knowledge, this is a novel decomposition, and we introduce a scalable, multi-GPU algorithm for computing it.
The output of an NPEFF decomposition provides a direct way to quantify the importance of each component to the model's processing of each example. Looking at the examples most influenced by a particular component provides a means to infer what concept or heuristic the component represents. Furthermore, NPEFF generates a "pseudo-Fisher" matrix for each component that can be used to estimate the impact of parameter perturbations on their associated sub-computations. By constructing perturbations that selectively impact particular components, we can verify that the components reflect how the model actually processes examples.
Overall, NPEFF provides a way to interpret a model's processing by extracting the computational sub-steps it uses. In our experiments on text and vision models, inspecting each component's top examples shows that these sub-steps often correspond to human-recognizable concepts. Our parameter perturbation experiments demonstrate that the component pseudo-Fishers indeed reflect parameters important for their particular computational steps. We further ran experiments on a TRACR-compiled toy model implementing a known ground-truth algorithm and demonstrate that components align with sub-steps of the computation. We also explored a potential application of NPEFF's parameter perturbation to correcting faulty heuristics used by a model. Finally, we include experiments demonstrating the robustness of NPEFF to choices of hyperparameters.
## 2 Non-Negative Per-Example Fisher Factorization (NPEFF)
### Fisher Information
Consider a classification model \(p_{\theta}(y|\mathbf{x})\) with parameters \(\theta\in\mathbb{R}^{m}\) that maps inputs \(\mathbf{x}\in\mathcal{X}\) to a softmax distribution over \(C\) labels. Given any example \(\mathbf{x}\in\mathcal{X}\), we define the per-example Fisher (PEF) matrix as
\[F(\mathbf{x})=\mathbb{E}_{y\sim p_{\theta}(y|\mathbf{x})}\nabla_{\theta}\log p _{\theta}(y|\mathbf{x})\nabla_{\theta}\log p_{\theta}(y|\mathbf{x})^{T}. \tag{1}\]
The Fisher information matrix has an information geometric interpretation as a metric relating local perturbations in parameters to changes in the model's predictive distribution (Amari, 2016)
\[D_{\mathrm{KL}}(p_{\theta}(y|\mathbf{x})\|p_{\theta+\delta}(y|\mathbf{x})) \approx\frac{1}{2}\delta^{T}F(\mathbf{x})\delta \tag{2}\]
as \(\delta\rightarrow\mathbf{0}\), where \(D_{\mathrm{KL}}\) is the KL divergence.
If we let \(\mathbf{a}_{j}(\mathbf{x})=\sqrt{p_{\theta}(y_{j}|\mathbf{x})}\nabla_{\theta}\log p_{\theta}(y_{j}|\mathbf{x})\), then we can express the PEF as \(F(\mathbf{x})=\sum_{j=1}^{C}\mathbf{a}_{j}(\mathbf{x})\mathbf{a}_{j}(\mathbf{x})^{T}\). We can thus represent the full PEF matrix using only \(Cm\) values, which we call its low-rank representation or LRM-PEF. Alternatively, we can use the diagonal of the PEF matrix as its representation, which we call the diagonal PEF or D-PEF (Kirkpatrick et al., 2017). In this case we have
\[\mathbf{f}(\mathbf{x})=\mathbb{E}_{y\sim p_{\theta}(y|\mathbf{x})}\left(\nabla _{\theta}\log p_{\theta}(y|\mathbf{x})\right)^{2}. \tag{3}\]
Unlike the LRM-PEF which is an exact representation, the D-PEF corresponds to the approximation \(F(\mathbf{x})\approx\mathrm{Diag}(\mathbf{f}(\mathbf{x}))\). Generally, D-PEFs are more tractable to process than LRM-PEFs when the number of classes \(C\) is large.
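A minimal sketch of computing both representations for a single example with a toy classifier in PyTorch; the model and input sizes are illustrative.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
model = nn.Linear(4, 3)          # toy classifier with C = 3 classes and m = 15 parameters
params = list(model.parameters())
x = torch.randn(4)               # a single example

log_probs = F.log_softmax(model(x), dim=-1)
probs = log_probs.exp().detach()

a_vecs = []                      # LRM-PEF vectors a_j(x), one per class
for j in range(log_probs.shape[0]):
    grads = torch.autograd.grad(log_probs[j], params, retain_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    a_vecs.append(torch.sqrt(probs[j]) * flat)

A = torch.stack(a_vecs)          # (C, m); the full PEF is A.T @ A
diag_pef = (A ** 2).sum(dim=0)   # D-PEF: diagonal of the PEF matrix
print(A.shape, diag_pef.shape)   # torch.Size([3, 15]) torch.Size([15])
```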
**Sparsity.** Since the number of parameters \(m\) can be very large for real world models, storing D-PEFs or LRM-PEFs for a modestly sized data set can become intractable. Fortunately, we empirically observe that most of the entries of the PEFs for typical trained neural networks tend to be very
small in magnitude. This is to be expected from prior works on model pruning (Hoefler et al., 2021), which find that most parameters are not important for the model's behavior (Frankle and Carbin, 2018). In our work, we fix some value \(K\in\mathbb{N}\) and sparsify each PEF representation by using only the \(K\) values with the largest magnitudes. This significantly reduces the amount of required storage and compute with relatively little impact on the accuracy of the representations.
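The sparsification step amounts to a top-K selection over the flattened PEF values; the sketch below is illustrative, with synthetic data.

```
import torch

def sparsify_topk(pef_values, k):
    """Keep only the K largest-magnitude entries of a flattened PEF.

    Returns the kept values and their indices, which together form the sparse
    representation that gets stored.
    """
    _, indices = torch.topk(pef_values.abs(), k)
    return pef_values[indices], indices

pef = torch.randn(1000).abs() * torch.rand(1000).pow(4)   # mostly near-zero entries
values, indices = sparsify_topk(pef, k=50)
print(values.shape, indices.shape)   # torch.Size([50]) torch.Size([50])
```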
**Number of Classes.** For tasks with only a few classes, we can include the term for every class in equation 1 and equation 3 when computing the PEFs. However, this becomes prohibitively expensive as the number of classes increases, requiring roughly the same amount of computation as required for a backwards pass for each class. For such tasks, we thus discard terms corresponding to classes whose probabilities \(p_{\theta}(y|\mathbf{x})\) are below some threshold \(\epsilon\).
### Decomposition
Let \(\mathcal{D}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\) be a set of examples. Let \(F_{i}\) correspond to our representation of the Fisher matrix for the \(i\)-th example, i.e. \(F_{i}=\sum_{j=1}^{C}\mathbf{a}_{j}(\mathbf{x}_{i})\mathbf{a}_{j}(\mathbf{x}_ {i})^{T}\) if we are using LRM-PEFs or \(F_{i}=\operatorname{Diag}(\mathbf{f}(\mathbf{x}_{i}))\) for D-PEFs. The decomposition central to NPEFF can be expressed as the non-convex optimization problem
\[\begin{array}{ll}\text{minimize}&\sum_{i=1}^{n}\|F_{i}-\sum_{j=1}^{r}W_{ij}H _{j}\|_{F}^{2}\\ \text{subject to}&W_{ij}\geq 0,\\ &H_{j}\in\mathcal{H},\end{array} \tag{4}\]
where \(\mathcal{H}\) corresponds to a subset of \(m\times m\) matrices and \(r\) is a hyperparameter denoting the number of components to learn. We choose this decomposition since we can interpret the \(H_{j}\) as a "pseudo-Fisher" for the \(j\)-th component, and the non-negative coefficients \(W_{ij}\) allow us to see the contributions of each component to the KL-divergence of a model's predictions following a perturbation of its parameters.
For LRM-NPEFF, \(\mathcal{H}\) is the set of rank-1 \(m\times m\) positive semi-definite (PSD) matrices. Any element of this \(\mathcal{H}\) can be expressed as \(H_{j}=\mathbf{g}_{j}\mathbf{g}_{j}^{T}\) for some vector \(\mathbf{g}_{j}\in\mathbb{R}^{m}\). This decomposition can be described as approximating a set of low-rank PSD matrices as non-negative combinations of a set of shared rank-1 PSD matrices. We present a multi-GPU algorithm performing this decomposition in appendix A. Importantly, this algorithm does not explicitly construct any \(m\times m\) matrix and instead uses a number of inner products between \(m\)-dimensional vectors that is independent of \(m\) itself.
For D-NPEFF, \(\mathcal{H}\) is the set of diagonal \(m\times m\)-matrices with non-negative entries. Any element of this \(\mathcal{H}\) can be expressed as \(H_{j}=\operatorname{Diag}(\mathbf{h}_{j})\) for some non-negative vector \(\mathbf{h}_{j}\in\mathbb{R}^{m}_{\geq 0}\). Solving equation 4 then reduces to the well-studied problem of non-negative matrix factorization (NMF) (Wang and Zhang, 2012). We used a multi-GPU implementation based on Boureima et al. (2022).
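For the diagonal case, a small single-machine sketch with scikit-learn illustrates the decomposition; the paper's own implementation is a multi-GPU variant, and the data below is synthetic.

```
import numpy as np
from sklearn.decomposition import NMF

# Rows: normalized, sparsified D-PEFs for n examples over m' retained parameters.
rng = np.random.default_rng(0)
pefs = rng.random((200, 500)) * (rng.random((200, 500)) > 0.9)   # sparse, non-negative
pefs /= np.linalg.norm(pefs, axis=1, keepdims=True) + 1e-12      # unit Frobenius norm

nmf = NMF(n_components=16, init="nndsvda", max_iter=500)
W = nmf.fit_transform(pefs)   # (n, r) non-negative coefficient matrix
H = nmf.components_           # (r, m') non-negative pseudo-Fisher vectors h_j

# Top examples for component 0, ranked by coefficient.
print(np.argsort(-W[:, 0])[:10])
```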
Both variants of NPEFF produce a non-negative matrix \(W\in\mathbb{R}^{n\times r}_{\geq 0}\) that we call the coefficient matrix. It defines the relationship between individual examples and components. Each entry \(W_{ij}\) represents the contribution of the \(j\)-th component to the PEF of the \(i\)-th example. The coefficient matrix allows us to get a qualitative understanding of the sub-computation associated to each component. Given a component, we create a list of the examples sorted by the component's coefficient and look at the ones with the highest coefficients. Those top examples often display an interpretable pattern from which we can infer what processing the component represents.
Both NPEFF variants also produce a collection of \(m\)-dimensional vectors that we refer to as the component pseudo-Fishers. Deferring to section 2.3 for further exposition, each component's pseudo-Fisher vector generates what can be interpreted as an analog of a Fisher information matrix for that component. We express these matrices as \(H_{j}=\mathbf{g}_{j}\mathbf{g}_{j}^{T}\) for an LRM-NPEFF pseudo-Fisher vector \(\mathbf{g}_{j}\in\mathbb{R}^{m}\). D-NPEFF generates \(H_{j}=\operatorname{Diag}(\mathbf{h}_{j})\) for the non-negative pseudo-Fisher vectors \(\mathbf{h}_{j}\in\mathbb{R}^{m}_{\geq 0}\).
**Preprocessing** During initial experimentation, we found that using raw PEFs led to components tuned to outlier examples. We suspect that outlier examples tend to have PEFs with large magnitudes, and thus their contributions dominate the loss during optimization. To rectify this, we normalized PEFs to have unit Frobenius norm _before_ sparsification. This has the effect of de-emphasizing examples for which the sparse approximation is poor; however, we expect any difference in the outcome to be minimal. Efficient computation of the Frobenius norm is described in appendix B.
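A minimal sketch of this preprocessing for the diagonal (D-NPEFF) case, assuming each PEF is stored as a dense row vector; the top-\(k\) sparsification used here is one plausible reading of the procedure described above.

```python
# Normalize each diagonal PEF to unit Frobenius norm, then keep only its k
# largest entries and zero out the rest. V has shape (n_examples, m).
import numpy as np

def preprocess_diag_pefs(V, k):
    V = V / np.linalg.norm(V, axis=1, keepdims=True)        # unit norm before sparsification
    out = V.copy()
    drop = np.argpartition(-np.abs(V), k, axis=1)[:, k:]    # all but the top-k entries per row
    np.put_along_axis(out, drop, 0.0, axis=1)
    return out
```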
Our decomposition algorithms make use of a dense representation of the pseudo-Fisher vectors, and thus a naive implementation would require \(rm\) floats to represent them. However, the sparsification of the PEFs leads to many parameters having few corresponding non-zero entries across examples. We are able to greatly reduce the memory burden by pruning these entries across the PEFs and using a dimensionality of \(m^{\prime}<m\). We suspect these positions do not contain enough effective "data-points" to contribute meaningfully to the factorization, so the factorization's validity should be minimally impacted.
**Coefficient Fitting** Once we have computed the pseudo-Fisher vectors on a data set \(\mathcal{D}\), we can fit a coefficient matrix to PEFs created from another data set \(\mathcal{D}^{\prime}\). Importantly, this allows us to see if the tunings of NPEFF components generalize to examples not used to generate them. Since both the LRM-NPEFF and D-NPEFF decomposition algorithms operate by alternating between updating the coefficient matrix and updating the pseudo-Fisher vectors, we can fit coefficients to a fixed set of pseudo-Fishers by repeatedly performing only the coefficient matrix update step. By caching intermediates that do not depend on the coefficients, coefficient fitting can be performed far more efficiently than a full NPEFF decomposition.
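In the diagonal case, fitting coefficients against fixed pseudo-Fishers amounts to a non-negative least-squares problem per example; the sketch below uses SciPy's generic solver rather than the cached alternating updates described above, so it illustrates the idea rather than our actual procedure.

```python
# Fit non-negative coefficients for new examples while keeping the pseudo-Fisher
# vectors H fixed (one NNLS problem per example's diagonal PEF).
import numpy as np
from scipy.optimize import nnls

def fit_coefficients(H, V_new):
    # H: (r, m) fixed pseudo-Fishers; V_new: (n, m) new diagonal PEFs.
    return np.stack([nnls(H.T, v)[0] for v in V_new])   # coefficient matrix, shape (n, r)
```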
**Components Set Expansion** Having learned an NPEFF decomposition using a large, general set of examples, we can expand the set of components to create components specialized to another set of examples that are of particular interest. The process is similar to that of a regular NPEFF decomposition with the exception that a subset of component pseudo-Fishers are initialized using the existing decomposition. These are then frozen throughout the decomposition procedure. This leads to the frozen components capturing the general-purpose processing trends, leaving the non-frozen components to capture processing specific to that set of examples.
### Guided Perturbations
Recall from equation 2 how PEF matrices can be used to relate perturbations of model parameters to changes in its predictive distribution for an example. Using the NPEFF decomposition, we can approximate a PEF matrix for example \(\mathbf{x}_{i}\) as \(F_{i}\approx\alpha_{i}\sum_{j=1}^{r}W_{ij}H_{j}\), where \(\alpha_{i}\) is the Frobenius norm of \(F_{i}\). Plugging this into equation 2 gives us
\[D_{\mathrm{KL}}(p_{\theta}(y|\mathbf{x}_{i})\|p_{\theta+\delta}(y|\mathbf{x}_ {i}))\approx\frac{\alpha_{i}}{2}\sum_{j=1}^{r}W_{ij}\delta^{T}H_{j}\delta \tag{5}\]
for a parameter perturbation \(\delta\in\mathbb{R}^{m}\). From this, our choice of calling \(H_{j}\) the "pseudo-Fisher matrix" for the \(j\)-th component should be clear: it can be used to relate the disruption of the model's predictions on examples utilizing the component to perturbations of parameters. Furthermore, this justifies our choice of decomposition method since the uncovered factors can be readily interpreted in equation 5.
Equation 5 provides a theoretically principled way to verify the relationship between parameters and examples uncovered by NPEFF. The goal is to find a perturbation \(\delta\in\mathbb{R}^{m}\) such that \(\delta^{T}H_{k}\delta\) is large when \(k=j\) for a chosen component \(j\) and is small otherwise. Once one is found, we can compare the KL-divergences between the perturbed and original models' predictions on examples to their coefficients for the \(j\)-th component.
For LRM-NPEFF, recall that each pseudo-Fisher matrix \(H_{k}\) can be expressed as \(\mathbf{g}_{k}\mathbf{g}_{k}^{T}\) for some vector \(\mathbf{g}_{k}\in\mathbb{R}^{m}\). Thus \(\delta^{T}H_{k}\delta=(\mathbf{g}_{k}^{T}\delta)^{2}\). We wish to find a \(\delta\in\mathbb{R}^{m}\) such that \(\mathbf{g}_{k}^{T}\delta\) has a large magnitude for \(k=j\) and small magnitude otherwise. We found that simply using \(\delta\propto\pm\mathbf{g}_{j}\) performed well, but the selectivity of the perturbation could be improved by taking the orthogonal rejection of \(\pm\mathbf{g}_{j}\) onto a subset of the other components' pseudo-Fishers. Namely, we orthogonally reject \(\pm\mathbf{g}_{j}\) onto \(\mathbf{g}_{k}\) if the magnitude of their cosine-similarities is below some threshold. The threshold is used to prevent similar components from interfering in the construction of the perturbation. We found a threshold of 0.35 to work well, which is what we used throughout our experiments. The construction of the perturbation for D-NPEFF is more involved and is presented in appendix C.
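The construction just described might look as follows in a minimal NumPy sketch; the sequential rejection loop is a simplification of jointly rejecting against all selected pseudo-Fishers, and the threshold and perturbation norm are the values quoted in the text.

```python
# Guided perturbation for LRM-NPEFF: start from g_j and orthogonally reject it
# against every other pseudo-Fisher whose |cosine similarity| with g_j is below
# the threshold; components too similar to g_j are left alone.
import numpy as np

def guided_perturbation(G, j, threshold=0.35, scale=0.1):
    unit = lambda v: v / np.linalg.norm(v)
    delta = G[j].copy()
    for k in range(G.shape[0]):
        if k == j:
            continue
        if abs(unit(G[j]) @ unit(G[k])) < threshold:
            delta -= (delta @ G[k]) / (G[k] @ G[k]) * G[k]   # orthogonal rejection onto g_k
    return scale * delta / np.linalg.norm(delta)             # both +delta and -delta are tried
```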
## 3 Experiments
### Component Tunings and Perturbations
We ran LRM-NPEFF on two NLP models and ran D-NPEFF on a vision model. The NLP models were BERT-base fine-tuned on MNLI and QQP, respectively (Devlin et al., 2018). QQP is a binary classification task that aims to determine whether a pair of questions are duplicates of each other (Wang et al., 2018). MNLI is a natural language inference (NLI) task that involves determining whether a given hypothesis is implied by, contradicted by, or neutral with respect to a given premise (Williams et al., 2018). Thus, it has three classes. For the vision model, we used a ResNet-50 (He et al., 2016) trained on the ImageNet classification task (Russakovsky et al., 2015). ImageNet involves classifying an image as one of 1000 fine-grained classes.
When learning the NPEFF components for QQP, we used 50k examples heldout from the canonical train split to produce PEFs. For the NLI model, we used 50k examples from the SNLI data set (Bowman et al., 2015). Like the MNLI data set used to train the model, SNLI is also an NLI task with the main difference being that SNLI examples come from a single domain while MNLI has examples from multiple domains. We chose to do this to get a better idea of how NPEFF uncovers heuristics when a model generalizes to new data sets. We used 20k examples from the train split of ImageNet 2012 to produce PEFs for the vision model. We ignored classes with a probability of less than 3e-3 when computing the PEFs for the vision model.
We used 512 components when running NPEFF on the NLI and vision models, and we used 256 components for the QQP model. Once the NPEFF pseudo-Fishers were computed, we fit coefficients to PEFs from a heldout set of examples for each model. We used 50k SNLI examples for the NLI model, 40,430 examples from the QQP validation set for the QQP model, and 30k examples from the ImageNet validation set for the vision model. All of the component tunings and perturbations presented in this section are from these heldout sets. More details on the models and data sets used can be found in appendix D. There, we also go into detail about the hyperparameters used when computing the PEFs and running NPEFF for these experiments.
#### 3.1.1 Component Tunings
**Component 70**: The top QQP examples all follow the pattern "Which is the best [service] in [city]?" or "Where can I buy [item] in [city]?", e.g. "Which is the best dish tv connection in hyderabad?" and "Where can I buy second hand books in hyderabad?".
Tunings of the vision model's components ranged from the general category of the image subject (e.g. animal, plant, vehicle, architecture, technology, etc.) to more specific, fine-grained attributes such as the fur patterns of dogs. Many of the components had tunings that included both low-level and high-level features. In those cases, the low-level features were often predictive of the higher level ones.
#### 3.1.2 Perturbations
For the LRM-NPEFF experiments on the NLI and QQP models, we used the perturbation method discussed in section 2.3. Perturbation vectors were scaled to have an L2 norm of 0.1 before applying them to the model parameters. We then measured the per-example KL-divergence of the perturbed model's predictions from the original model's predictions. To score the selectivity of the perturbation's impact on the top examples of a component, we calculated the ratio of the average of this KL-divergence for the component's top 128 examples to the average KL-divergence for a random subset of 10k examples. Since the sign of the perturbation is not determined by the method in section 2.3, we try both adding and subtracting the perturbation to the model parameters. The reported scores correspond to whichever has a higher KL-ratio. Details of the perturbation experiments for the D-NPEFF decomposition of the vision model can be found in appendix E.
Looking at the expression in equation 5 relating the KL-divergence of a model's predictions to a perturbation of its parameters, we see that the KL-divergence is directly proportional to the norm of the example's PEF. In contrast, the coefficients used to rank the top examples for a component are computed using normalized PEFs and thus are independent of their norms. To explore the impact of this confounding factor, we computed the ratio of the average PEF Frobenius norm of the top 128 examples for each component to the average norm of a random subset of 10k examples.
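A small sketch of these two ratio metrics, assuming the per-example KL-divergences and PEF Frobenius norms have already been computed as arrays (the names below are illustrative):

```python
# Selectivity score: ratio of the mean value over a component's top examples to
# the mean value over a random reference subset. Used for both KL and PEF norm.
import numpy as np

def ratio_metric(values, top_idx, ref_idx):
    return values[top_idx].mean() / values[ref_idx].mean()

# e.g. kl_ratio   = ratio_metric(per_example_kl, top_128_idx, random_10k_idx)
#      norm_ratio = ratio_metric(pef_norms,      top_128_idx, random_10k_idx)
```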
Histograms of the KL-divergence and PEF norm ratios are presented in fig. 3. For all models, these histograms clearly show that the KL-divergence ratios tended to be significantly higher than the PEF norm ratios. This supports our claim that the directions in parameter space uncovered by NPEFF are specifically important for their respective component top examples. Furthermore, this helps verify that the concepts and heuristics uncovered by NPEFF correspond to actual processing methods used by the model.

Figure 2: Top examples for two vision components. The top two rows for each component are the examples with the 8 highest coefficients. The bottom row for each component contains selected examples from the set of top 32 examples. **(Left)** The examples with the highest coefficients are all school buses. In general, examples with high coefficients have a yellow vehicle in them. **(Right)** The examples with the highest coefficients are all zebras. In general, examples with high coefficients have a striped animal in them. The specific type of animal does not seem important, with fish, mollusks, lizards, and mammals being included.

Figure 3: Histograms of the \(\log_{2}\) ratios between top component examples and random examples for KL-divergence and PEF norm.
### Toy Model with Known Ground Truth
Since it is difficult to evaluate whether a method uncovers concepts actually used by a real-world model, we ran experiments using a synthetic task and a transformer model compiled by TRACR (Lindner et al., 2023) that implements a RASP program (Weiss et al., 2021). This task simulates an extremely simplified NLI task. An example looks like [BOS] Q1 S1 O1 Q2 S2 O2, which is a concatenation of a premise and hypothesis. The Q1, Q2 tokens correspond to All or Some, and a Q S O segment can be thought of as the sentence [All/Some] [subject] [verb] [object]. The verb is assumed fixed and shared between the premise and hypothesis, so it is not explicitly represented. Subjects and objects are chosen from the same set of options containing a hierarchical structure so that we can have an is_a_subtype_of relation between options. Each example is given a label of entails or neutral. Appendix H contains the RASP program solving this task. It is compiled by TRACR to an encoder-style Transformer (Vaswani et al., 2017) with 4 attention heads, a model dimension of 25, a feedforward hidden dimension of 1320, and 23 layers with 23.5M params. Its output at the final sequence position forms the prediction for the example.
We ran LRM-NPEFF on 1616 examples using either 32 or 128 components. Since we know how the model processes examples, we can programmatically search for components whose top examples all contain a concept used by the model. We found that 30/32 and 92/128 components had 128 top examples all containing at least one concept. See appendix H for details on this programmatic search along with a detailed breakdown of component tunings. Although caution should be used when extending these results to real-world models since organic processing might differ from TRACR's implementations, these results demonstrate that NPEFF is able to extract concepts actually used by a model in this setting.
### Hyperparameter Exploration
We explored the impact of various choices of hyperparameters on the LRM-NPEFF decomposition. The NLI model's LRM-NPEFF from section 3.1 was used as the baseline setting. To concisely compare component tunings, we used the cosine similarity of component coefficient vectors. Components with a high cosine similarity tended to have similar top examples.
**Sparsity** We ran LRM-NPEFF keeping the top 16384, 65536, and 262144 values per PEF. We used 20k examples and 128 components. Given a pair of runs, we looked at each component from one of the runs and recorded its max cosine similarity with any component from the other run. If this value is high for most components in both directions for a pair of runs, then most of the concepts from one run have an analogous concept in the other. This metric had an average value of 0.77 when comparing the components from the 16384-value run against either of the other runs. The 65536-value run had average values of 0.76 and 0.80 when compared against the 16384-value and 262144-value runs. The 262144-value run had average values of 0.77 and 0.80 when compared against the 16384-value and 65536-value runs. Hence the sets of learned components tended to be fairly similar, with no major qualitative difference being observed between their tunings. Information about how well the PEFs were approximated at different levels of sparsity can be found in appendix G.
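The comparison metric used above can be summarized in a short sketch; treating the coefficient vectors of components as the columns of the coefficient matrices is an assumption about the bookkeeping, and the function name is illustrative.

```python
# For each component of run A, record its maximum cosine similarity (between
# coefficient vectors) with any component of run B, then average over run A.
import numpy as np

def avg_max_cos_sim(W_a, W_b):
    A = W_a / np.linalg.norm(W_a, axis=0, keepdims=True)
    B = W_b / np.linalg.norm(W_b, axis=0, keepdims=True)
    return (A.T @ B).max(axis=1).mean()
```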
**Number of Components** We ran LRM-NPEFF using 32, 128, and 512 components. To compare a pair of runs A and B, suppose that run B has more components than run A. For each component \(i\) of run A, we found the subset of run B's components whose coefficient cosine similarity to that component \(i\) was greater than to any other of run A's components. For all pairs of runs, we found that every component from the smaller run had at least one corresponding component from the larger run. Qualitatively, we found that the groups of components from the larger run corresponding to each component from the smaller run had similar tunings. More precisely, these groups tended to be more fine-grained "splits" of the component from the smaller run. For example, a component tuned to hypotheses containing a word that contradicts a word in the premise might have a split where the relevant words are colors and another split where they are "cat" and "dog".
**Number of Steps** To explore the convergence of the coefficient matrix as the NPEFF learning algorithm progresses, we performed the decomposition for 1500 steps and saved the matrix every 50 steps. A graph of the average component coefficient cosine similarity with its final value across steps can be found in fig. 5. Notably, the average similarity exceeded 95% by 700 steps and 99% by 1100 steps.
**D-NPEFF vs. LRM-NPEFF** We repeated the experiments of section 3.1 for the NLI and vision models, but using D-NPEFF instead of LRM-NPEFF and vice-versa. For the NLI model, we found D-NPEFF to generate fewer components with readily interpretable tunings than LRM-NPEFF. While most of the LRM-NPEFF components appeared tuned, only about half of the D-NPEFF components appeared tuned. Furthermore, the perturbation of D-NPEFF components led to less selective changes in model predictions than for the LRM-NPEFF components, with a median KL-ratio of 5.26 compared to 7.64. However, this might just be an artifact of the difference between the perturbation methods of the two flavors of NPEFF.
The LRM-NPEFF results on the vision model were far worse than the D-NPEFF results. Far fewer components had interpretable tunings, and top examples of tuned components were significantly noisier. The selectivity of perturbations of the LRM-NPEFF components was also lower than for the D-NPEFF components with a median KL ratio of 2.45 compared to 4.81. We suspect that using a fixed number of non-zero entries led to examples with high rank PEFs having poor sparse approximations. Left to future work, possible methods to adapt LRM-NPEFF to tasks with many classes (and thus potentially high rank PEFs) include varying the number of kept entries with PEF rank and using a different low-rank approximation to the PEF instead of just using a subset of classes.
### Comparison to Existing Interpretability Methods
While there are no existing unsupervised concept discovery algorithms directly comparable to NPEFF, we can adapt the method of Ghorbani et al. (2019) as a baseline. Ghorbani et al. (2019) learn concepts through k-means clustering of activation-space representations of image segments. We compared NPEFF to a more general version of this method that operates directly on examples rather than image segments, which allowed us to run experiments on text tasks. We used the distance of an example's activations from its cluster's centroid to rank examples within a cluster.
We ran this baseline for both the NLI and vision models. We used the encoder's final representation of the CLS token as the activations for the NLI model. For the vision model, we used the output of the global average pooling layer that immediately precedes the classification layer. Using the same sets of examples as in section 3.1, we learned 128 clusters for each model. Most of the clusters displayed some human-interpretable tuning, as ascertained through examination of their top examples. Top examples of some clusters are presented in appendix M. We used TCAV (Kim et al., 2018) to test whether a relationship existed between the top examples for a cluster/component and the prediction of at least one class for the NLI model. We found a statistically significant relationship existed for every k-means cluster and every LRM-NPEFF component. Details can be found in appendix I.
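A minimal sketch of this baseline with scikit-learn; the activation dimensionality and example count are placeholders, and ranking a cluster's examples by proximity to its centroid (closest first) is our reading of the ranking rule described above.

```python
# k-means baseline: cluster per-example activations and rank the examples in a
# cluster by their distance to its centroid.
import numpy as np
from sklearn.cluster import KMeans

acts = np.random.randn(5000, 768)                      # placeholder activations
km = KMeans(n_clusters=128, n_init=10, random_state=0).fit(acts)
dists = np.linalg.norm(acts - km.cluster_centers_[km.labels_], axis=1)

def top_for_cluster(c, k=16):
    return np.argsort(np.where(km.labels_ == c, dists, np.inf))[:k]
```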
Qualitatively, we found some similarities in the tunings of NPEFF and baseline components with NPEFF producing a more diverse set of tunings. To get a more quantitative comparison, we used the average max cosine-similarity metric from section 3.3 to compare tunings. Here, we used a vector of 0s and 1s indicating cluster membership as the "coefficients vector" for a k-means cluster. For the NLI model, we found that k-means clusters had an average max cosine similarity of 0.20 against the NPEFF components and a value of 0.19 in the opposite direction. Since NPEFF coefficients and cluster "coefficients" represent different types of quantities, we should note that this metric provides a cruder similarity measure than when comparing between NPEFF components.
Nevertheless, we would expect differences in the recovered concepts, and these differences highlight several advantages of NPEFF. The k-means baseline operates on activations from a single layer near the output of the model while NPEFF operates on PEFs containing information from the entire model. Hence NPEFF can pinpoint what part of the model performs certain computations. In contrast, a full-model decomposition over activations becomes tricky when the total activation space does not have fixed dimension (e.g. the variable sequence length of transformers). The ability to make informed modifications to parameters to directly test whether the model actually uses a recovered concept is a further advantage unique to NPEFF.
### Example Application: Fixing Flawed Heuristics
We experimented using NPEFF with the NLI model to see what heuristics it uses when it makes incorrect predictions. To do so, we created a filtered set of PEFs corresponding to examples on which the model made an incorrect prediction. We then performed the components set expansion procedure to learn 64 components specialized to these examples. Finally, we fit coefficients to a non-filtered set of examples using the expanded NPEFFs. See appendix F for details on this process.
The components in the expansion were more likely to have a high fraction of incorrect predictions in their top examples. More precisely, 41% of the expanded components had incorrect predictions for at least 50% of their top 16 examples compared to just 4.3% of the original components. Such components could generally be placed into one of three groups: correct heuristics with inconsistent labeling, completely wrong heuristics, or flawed heuristics. Since LRM-NPEFF connects components to directions in parameter space through their pseudo-Fishers, we can try to alter the model parameters to selectively fix components tuned to incorrect heuristics. We selected the subset of expanded components whose top 16 examples contained at least 8 incorrect predictions. The top examples of some of these components are provided in appendix L. We then performed perturbations on these components using a similar methodology to section 3.1.2 but with a larger perturbation magnitude of 3e-1. Out of the 48 components deemed to correspond to incorrect heuristics, we found four components that increased the model's accuracy by at least 0.5% after being perturbed. While these results are promising, we emphasize that this method of fixing faulty heuristics using NPEFF is mostly a proof of concept. We leave improvements to this method, such as incorporating loss gradient information, to future work.
## 4 Related Work
Methods reliant on ranking or grouping examples are common in the interpretability literature. For example, the activation value of a single neuron is a frequently used metric to derive such rankings (Sajjad et al., 2022; Bolukbasi et al., 2021; Bau et al., 2017; Szegedy et al., 2013). Alternate methods include using agglomerative clustering on token-level BERT representations to find groups of representations corresponding to concepts (Dalvi et al., 2022), using supervised probing tasks to learn latent interpretable representations (Michael et al., 2020), and finding interpretable directions in the latent space of a generative adversarial network in an unsupervised manner (Voynov and Babenko, 2020).
Kim et al. (2018) uses user-defined groups of examples to represent an abstract concept. Ghorbani et al. (2019) uses clustering of activation space representations of image segments to automatically learn concepts. Yeh et al. (2020) learns concepts by projecting activations at a given layer onto a set of learned vectors, setting the dot products below a threshold to zero, and then using a learned mapping back to activation space before being processed by the rest of the network. They add regularizers to encourage concepts to be coherent and unique.
Representing a neural network as a set of sub-computations underlies some vision interpretability work. Zhang et al. (2018) learns an explanatory graph given a convolutional neural network. They create nodes corresponding to disentangled convolutional filters and assign edges based on the relative spatial displacement of node activations between neighboring layers. For each example, Wang et al. (2018) learns a scalar control gate for each channel in each layer of a convolutional network. They are fit using a loss that encourages the gates to be as sparse as possible while preserving the model's predictions on the example. Clustering of these per-example routing maps produces clusters that reflect input layout patterns.
## 5 Conclusion
NPEFF presents a novel direction in model interpretability research. Applicable to any model with differentiable parameters, it automatically discovers concepts and heuristics used by a model.
NPEFF further grounds these in the model's parameter space and allows for their selective perturbations. In addition to being useful in verifying that these concepts and heuristics faithfully represent the model's processing, it opens up the possibility of modifying the model informed by this knowledge. In future work, we hope to further develop some of the applications of NPEFF explored in this paper as well as explore its viability in facilitating scientific discovery.
|
2301.05216
|
Conformal perturbation theory from open string field theory
|
Conformal boundary conditions in two-dimensional conformal field theories
manifest lots of mathematical beauty and complexity and in many aspects present
uncharted territory. Even less is known about the relevant boundary
deformations which connect them. A natural approach to the problem is via
conformal perturbation theory, which however becomes quickly intractable and
possibly ambiguous. Relying on the internal consistency of open string field
theory, which has been proved to be a consistent theory of conformal boundary
conditions, we show how to construct nearby fixed points of the two-dimensional
renormalization group flow triggered by weakly relevant operators. As a simple
illustration we calculate the boundary degeneracy $g$ to next-to-leading order
for a generic theory.
|
Jaroslav Scheinpflug, Martin Schnabl
|
2023-01-12T18:56:30Z
|
http://arxiv.org/abs/2301.05216v1
|
# Conformal perturbation theory from open string field theory
###### Abstract
Conformal boundary conditions in two-dimensional conformal field theories manifest lots of mathematical beauty and complexity and in many aspects present uncharted territory. Even less is known about the relevant boundary deformations that connect them. A natural approach to the problem is via conformal perturbation theory, which however becomes quickly intractable and possibly ambiguous. Relying on the internal consistency of open string field theory, which has been proven to be a consistent theory of conformal boundary conditions, we show how to construct nearby fixed points of the two-dimensional renormalization group flow triggered by weakly relevant operators. As a simple illustration we calculate the boundary degeneracy \(g\) to next-to-leading order for a generic theory.
## I Introduction and conclusion
The study of consistent boundary conditions in two-dimensional conformal field theory (CFT) is a long and fascinating subject with many applications. There are a number of stringent constraints on the spectrum of boundary operators and operator product expansion (OPE) coefficients [1; 2; 3]. A flurry of work towards the end of the last millennium culminated in a fairly complete characterization of boundary conditions in rational conformal field theories [4]. Little is known for the non-rational case. Except in a few examples [5; 6], where a rational structure may emerge at special points in the moduli space, one has to resort to either perturbative or numerical approaches. Numerically, one can use the truncated conformal space approach or level truncation in open string field theory (OSFT) [7; 8]. Perturbatively, one can start with a given consistent conformal boundary condition and perturb it by a weakly relevant operator with dimension \(h\) close to 1. Such flows were first studied by Affleck and Ludwig [9], who showed that for a nearby fixed point
\[\frac{\Delta g}{g}=-\frac{\pi^{2}}{3}\frac{y^{3}}{C_{VVV}^{2}}+O(y^{4}), \tag{1}\]
where \(g\) is the universal ground state degeneracy or boundary entropy. Given by the disk partition function, it is the single most elementary quantity characterizing a given boundary condition. They conjectured that the corresponding \(g\)-function decreases along the RG flow, which was later proved [10]. These flows have been applied to minimal models in [11].
Open string field theory is a consistent theory of open strings ending on D-branes. The string degrees of freedom describe collective degrees of freedom of these nonperturbative objects, such as their position and shape in spacetime, or fields living on them. To formulate open string field theory one needs a reference D-brane. The vector space of quantum open strings ending on the D-brane is given by the Hilbert space of boundary conformal field theory. The classical field \(\Psi\) of open string field theory is an element of this space. Different backgrounds correspond to different classical solutions of the equation of motion \(Q\Psi+\Psi\ast\Psi=0\). The value of Witten's action
\[S=-\frac{1}{2}\left\langle\,\Psi,Q\Psi\,\right\rangle-\frac{1}{3}\left\langle \,\Psi,\Psi\ast\Psi\,\right\rangle \tag{2}\]
for a given classical solution has been conjectured by Sen [12], and later proved by others [13; 14], to correspond to the change in boundary degeneracy between the new and old boundary conditions. For a recent review see [15]
\[S=-\frac{\Delta g}{2\pi^{2}}. \tag{3}\]
In this work we construct perturbatively the solution corresponding to a nearby fixed point and as an illustration we show that
\[\frac{\Delta g}{g_{0}}=-\frac{\pi^{2}}{3}\left(\frac{y^{3}}{C_{VVV}^{2}}+ \frac{3\tilde{\mathcal{A}}}{C_{VVV}^{4}}y^{4}+O(y^{5})\right), \tag{4}\]
where
\[\begin{split}\tilde{\mathcal{A}}&=\int_{0}^{1/2}d\xi\bigg{[}\frac{1}{g_{0}}\left\langle\,V|V(1)V(\xi)|V\,\right\rangle-\sum_{V^{\prime}}C_{VVV^{\prime}}^{2}\left(\frac{1}{\xi^{2-h^{\prime}}}+\frac{1}{(1-\xi)^{2-h^{\prime}}}\right)\bigg{]}\\ &\quad+\sum_{V^{\prime}\neq V}C_{VVV^{\prime}}^{2}\left(\frac{1}{h^{\prime}-1}+\frac{1}{2}\delta_{h^{\prime}=0}\right).\end{split} \tag{6}\]
and \(g_{0}\) is the \(g\)-function of the undeformed theory. The sum on the first line over the relevant fields in the OPE of two \(V\)'s renders the integral fully convergent. The subtraction of powers of \(1-\xi\) has been introduced for mere convenience. The second line arises as a simple correction due to the relevant modes, whose structure stems from OSFT calculations. We have applied (4) to RG flows of \(c=2\) free bosons and obtained a high-precision match with [22]. The KMS correspondence [18] also allows us to compute the leading order correction to the coefficient of a bulk primary \(\phi\) in the boundary state
\[\frac{\Delta B_{\phi}}{g_{0}}=-2\pi\frac{B_{\phi V}}{C_{VVV}}y+O(y^{2}) \tag{7}\]
where \(B_{\phi V}\) is a bulk-boundary structure constant. The next-to-leading correction should be obtainable with the methods of this paper.
## II General Setup
Let us write the general solution as \(\Psi=R+X\), where \(R=\sum_{i}T_{i}V^{i}c_{1}|0\rangle\) is a finite sum of states corresponding to the relevant operators \(V^{i}\) with conformal weights \(h_{i}<1\). The results we find are interesting even in the case when there is a single relevant operator. Formally, one need not insist on the operators being relevant; in fact, one can consider any finite combination of states. The \(X\) stands for all other fields.
We start by observing that the Siegel gauge equations of motion
\[L_{0}\Psi+b_{0}(\Psi*\Psi)=0 \tag{8}\]
can be rewritten as
\[R+\frac{b_{0}}{L_{0}}P(R+X)^{2}=0, \tag{9}\] \[X+\frac{b_{0}}{L_{0}}\bar{P}(R+X)^{2}=0, \tag{10}\]
where \(P\) is a projector onto the vector space spanned by \(V^{i}c_{1}|0\rangle\) and \(V^{i}c_{1}c_{0}|0\rangle\), and \(\bar{P}=1-P\). The second equation can be solved perturbatively:
\[X=hR^{2}+h\left\{R,hR^{2}\right\}+\cdots, \tag{11}\]
where
\[h=-\frac{b_{0}}{L_{0}}\bar{P}. \tag{12}\]
The right hand side of (11) corresponds to all possible binary diagrams, where each vertex denotes star multiplication, each internal line an action of \(h\), and each external line an \(R\). For a given number \(n\) of external lines (number of \(R\)'s), there are \(C_{n-1}\) such terms, where \(C_{n}=1,2,5,\ldots\) are the Catalan numbers. These numbers would appear as coefficients of the Taylor series of the solution to (10) if one treated the string fields \(R\) and \(X\) and the operator \(h\) as plain numbers.
The Catalan numbers grow asymptotically only as \(C_{n}\sim\frac{4^{n}}{n^{3/2}\sqrt{\pi}}\), so we see that the series (11) should be convergent for sufficiently small \(R\) (in some convenient norm), assuming that the action of \(h\) is bounded from above. This should be the case since the least relevant state \(Vc\partial c|0\rangle\) is projected out from the ghost number two Hilbert space on which \(h\) acts in (11). The key is that the projector \(\bar{P}\) projects out the nearly marginal states, for which \(\frac{1}{L_{0}}\) would have the largest eigenvalue. From this rough argument it is clear that the coefficients of \(R\), and hence also \(\Psi\), should be parametrically smaller than \(1-h\), where \(h\) is the dimension of the nearest marginal operator in the theory. For theories with exactly marginal operators, we would need either to guarantee that those operators are not excited in our solution, or alternatively, change the definition of the projector \(\bar{P}\) so that it would project out such states as well.
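As a quick numerical check of the quoted asymptotics (which assumes the standard form \(C_{n}\sim 4^{n}/(n^{3/2}\sqrt{\pi})\)), the ratio below tends to one as \(n\) grows:

```python
# Compare exact Catalan numbers with the asymptotic 4^n / (n^{3/2} sqrt(pi)).
from math import comb, pi, sqrt

for n in (10, 50, 200):
    c_n = comb(2 * n, n) // (n + 1)                 # exact Catalan number
    print(n, c_n / (4**n / (n**1.5 * sqrt(pi))))    # ratio approaches 1
```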
Plugging the solution (11) into (9), we find a finite system of equations (generally non-polynomial if we do not truncate to finite order in \(R\)). This has the interpretation of the fixed point equations for the relevant couplings \(T_{i}\).
While it is immediately obvious that the full equations for \(R\) are obeyed
\[QR+P(R+X)^{2}=0 \tag{13}\]
as a consequence of (9) it is not a priori obvious that
\[QX+\bar{P}(R+X)^{2}=0. \tag{14}\]
In fact, in general it is not true that any given solution of the gauge-fixed equations of motion obeys the full equation of motion [16]. In the case of perturbatively constructed solutions, however, one can prove it [17]. To show this, note that from (10) and (13)
\[Q\Psi+\Psi^{2} = \frac{b_{0}}{L_{0}}\bar{P}Q(\Psi^{2})=\frac{b_{0}}{L_{0}}\bar{P}[ Q\Psi,\Psi] \tag{15}\] \[= \frac{b_{0}}{L_{0}}\bar{P}[\frac{b_{0}}{L_{0}}\bar{P}Q(\Psi^{2}),\Psi]=\cdots,\]
where in the last step we used the already proven part of the equation of motion. Now assuming that \(\Psi\) is parametrically small enough in some convenient norm, and that repeated action of \(h=\frac{b_{0}}{L_{0}}\bar{P}\) does not give rise to divergent factors, one can argue that the right hand side vanishes. Unlike in the discussion below (11), here \(h\) always acts on \(Q\)-exact states at ghost number 3. The state of the form \(c\partial c\partial^{2}c\) with the lowest possible value of \(L_{0}\) is excluded since it is not \(Q\)-exact. States of the form \(c\partial c\partial^{2}cV\) for \(V\) of low conformal weight (most relevant) are \(Q\)-exact, so it is important that there is a gap between the identity operator with \(h=0\) and the next most relevant operator in the matter CFT vector space. In practice the bounds are even better, since states of this form are excluded by twist symmetry, which is often imposed for useful solutions.
## III Conformal Perturbation Theory
We take \(\Psi_{1}=\lambda cV|0\rangle\), where \(V\) is a relevant operator on the UHP of weight \(1>h>0\) with the following correlators
\[\langle V(z_{1})\rangle_{UHP} = 0 \tag{16}\] \[\langle V(z_{1})V(z_{2})\rangle_{UHP} = z_{12}^{-2h}\] (17) \[\langle V(z_{1})V(z_{2})V(z_{3})\rangle_{UHP} = C_{VVV}z_{12}^{-h}z_{13}^{-h}z_{23}^{-h} \tag{18}\]
where in the rest of the paper we write \(z_{ij}\equiv|z_{i}-z_{j}|\) for bosons and \(z_{ij}\equiv z_{i}-z_{j}\) for fermions.
To make contact with conformal perturbation theory we would like to compute observables in an exact perturbation series with an expansion parameter \(y\equiv 1-h\). To do this we first need to compute the SFT coupling \(\lambda\) as a function of \(y\).
### Calculation of the SFT coupling
The equation of motion in the direction of \(R\equiv\Psi_{1}\) is, after explicitly applying \(P\),
\[\langle cV,Q\Psi_{1}\rangle+\langle cV,\Psi_{1}*\Psi_{1}\rangle+\langle cV,\{ \Psi_{1},\Psi_{2}\}\rangle+\cdots=0 \tag{19}\]
using
\[Q\Psi_{1}=y\lambda c\partial cV|0\rangle \tag{20}\]
and plugging in the definition of \(\Psi_{1}\) and \(\Psi_{2}\) this is equal to
\[\begin{split}& y\lambda\langle cV,c\partial cV\rangle+\lambda^{2} \langle cV,cV,cV\rangle\\ &-2\lambda^{3}\langle cV*cV,\frac{b_{0}}{L_{0}}\bar{P}cV*cV\rangle +\cdots=0\end{split} \tag{21}\]
Defining the open string four-point amplitude
\[\mathcal{A}\equiv\langle cV*cV,\frac{b_{0}}{L_{0}}\bar{P}cV*cV\rangle \tag{22}\]
and using the correlators (17),(18) and
\[\langle c(z_{1})c(z_{2})c(z_{3})\rangle_{UHP}=z_{12}z_{13}z_{23} \tag{23}\]
we solve for the SFT coupling to obtain
\[\lambda=\frac{y}{C_{VVV}}+\frac{1}{C_{VVV}}\left(\frac{2\mathcal{A}|_{y=0}}{C _{VVV}^{2}}-\ln K^{3}\right)y^{3}+O(y^{4}) \tag{24}\]
where \(K\equiv\frac{3\sqrt{3}}{4}\), and \(\mathcal{A}\) is evaluated at \(y=0\) since it depends on \(h\). To leading order this is precisely the result obtained by Affleck and Ludwig [9], modulo a sign convention, so to leading order the SFT and CFT couplings are equal in magnitude. Next we move on to the calculation of gauge invariant observables.
### Calculation of gauge invariant observables
In this section we evaluate the perturbative shift in the \(g\)-function triggered by a deformation by \(V\) to next-to-leading order in \(y\). It turns out that the OSFT action is very efficient in accomplishing this. We also calculate the corresponding leading order shift in the boundary state coefficients by computing the Ellwood invariant and invoking the KMS correspondence [18].
#### Leading order calculation of \(g\)
The on-shell value of the action (2) of a solution \(\Psi\) can be written as
\[S=-\frac{1}{6}\langle\Psi,Q\Psi\rangle \tag{25}\]
so that the corresponding shift in the \(g\)-function is
\[\Delta g=\frac{\pi^{2}}{3}\langle\Psi,Q\Psi\rangle \tag{26}\]
To obtain the leading order correction to \(g\), we use the fact that \(\lambda=O(y)\). Since \(\bar{P}\) eliminates possible inverse powers of \(y\) coming from \(\frac{1}{L_{0}}\) in \(\Psi_{n}\), we have \(\Psi_{n}=O(y^{n})\), and we can therefore calculate the \(n\)-th order contribution by accounting for string fields up to \(\Psi_{n}\). For the leading contribution to \(\Delta g\) this implies that
\[\Delta g=\frac{\pi^{2}}{3}\langle\Psi_{1},Q\Psi_{1}\rangle+O(y^{4}) \tag{27}\]
and since
\[\langle\Psi_{1},Q\Psi_{1}\rangle=-g_{0}y\lambda^{2} \tag{28}\]
we reproduce (1) by simply plugging in (24) for the coupling.
#### Next-to-leading order calculation of \(g\)
In the next-to-leading order we have
\[\Delta g=\!\frac{\pi^{2}}{3}\!\left(\langle\Psi_{1},Q\Psi_{1}\rangle+\langle \Psi_{2},Q\Psi_{2}\rangle\right)+O(y^{5}) \tag{29}\]
by the BPZ-orthogonality of \(P\) the off-diagonal terms vanish and we can also write
\[\langle\Psi_{2},Q\Psi_{2}\rangle=-\langle\Psi_{2},\bar{P}\Psi_{1}*\Psi_{1} \rangle=-\langle\Psi_{2},\Psi_{1}*\Psi_{1}\rangle=-\lambda^{4}\mathcal{A} \tag{30}\]
Expanding the couplings in (28) to next-to-leading order and in (30) to leading order, we find
\[\begin{split}&\frac{\Delta g}{g_{0}}=-\frac{\pi^{2}}{3}\!\left( \frac{y^{3}}{(C_{VVV})^{2}}\right.\\ &\left.+\!\!\left(3\frac{\mathcal{A}}{g_{0}C_{VVV}^{4}}-6\frac{1 }{C_{VVV}^{2}}\ln K\right)\!y^{4}\right)+O(y^{5})\end{split} \tag{31}\]
so that we only need to calculate the open string four-point amplitude \(\mathcal{A}\) in the marginal limit.
We start by explicitly writing down \(\mathcal{A}\)
\[\begin{split}&\mathcal{A}=\langle cV\left(-\sqrt{3}\right)cV\left( \sqrt{3}\right)U_{3}\\ &\left(\frac{b_{0}}{L_{0}}\bar{P}\right)U_{3}^{*}cV\left(\frac{1} {\sqrt{3}}\right)cV\left(-\frac{1}{\sqrt{3}}\right)\rangle\end{split} \tag{32}\]
Then we use a variant of the trick of [20] by using the Hodge-Kodaira decomposition
\[1=\left\{Q,\frac{b_{0}}{L_{0}}\bar{P}\right\}+P \tag{33}\]
behind the \(U_{3}^{*}\); since \(cV\) is \(Q\)-closed up to \(O(y)\) terms and \(\bar{P}\) protects us from inverse powers of \(y\), we get
\[\begin{split}&\mathcal{A}=\langle cV\left(-\sqrt{3}\right)cV\left(\sqrt{3}\right)U_{3}\bar{P}U_{3}^{*}\\ &\left(\frac{b_{0}}{L_{0}}\bar{P}\right)cV\left(\frac{1}{\sqrt{3}}\right)cV\left(-\frac{1}{\sqrt{3}}\right)\rangle\\ &+\langle cV\left(-\sqrt{3}\right)cV\left(\sqrt{3}\right)U_{3}\\ &\left(\frac{b_{0}}{L_{0}}\bar{P}\right)U_{3}^{*}PcV\left(\frac{1}{\sqrt{3}}\right)cV\left(-\frac{1}{\sqrt{3}}\right)\rangle+O(y)\end{split} \tag{34}\]
In the second term we again use the Hodge-Kodaira decomposition behind the \(U_{3}\) following the same logic, that is to move \(\frac{b_{0}}{L_{0}}\) behind the \(U\)-factors, to get
\[\begin{split}&\mathcal{A}=\langle cV\left(-\sqrt{3}\right)cV\left(\sqrt{3}\right)(1+P)U_{3}U_{3}^{*}\\ &\left(\frac{b_{0}}{L_{0}}\bar{P}\right)cV\left(\frac{1}{\sqrt{3}}\right)cV\left(-\frac{1}{\sqrt{3}}\right)\rangle\\ &+\langle cV\left(-\sqrt{3}\right)cV\left(\sqrt{3}\right)PU_{3}\\ &\left(\frac{b_{0}}{L_{0}}\bar{P}\right)U_{3}^{*}PcV\left(\frac{1}{\sqrt{3}}\right)cV\left(-\frac{1}{\sqrt{3}}\right)\rangle+O(y)\end{split} \tag{35}\]
We now focus on the second term starting by noting that
\[PcV\left(\frac{1}{\sqrt{3}}\right)cV\left(-\frac{1}{\sqrt{3}}\right)=-C_{VVV} \left(\frac{2}{\sqrt{3}}\right)^{1-h}c\partial cV|0\rangle \tag{36}\]
as can be seen for example by explicitly BPZ-projecting or by the OPE
\[cV(x)cV(-x)\sim-\sum_{V^{\prime}}C_{VVV^{\prime}}\frac{c\partial cV^{\prime}( 0)}{(2x)^{2h-h^{\prime}-1}} \tag{37}\]
where the \(V^{\prime}\) are relevant operators. Noting that by (16), (17) the overlap with \(cV|0\rangle\) is nonvanishing only for the case \(V^{\prime}=V\), the second term becomes
\[C_{VVV}^{2}\left(\frac{4}{3}\right)^{y}\langle c\partial cV|U_{3}\frac{b_{0}} {L_{0}}\bar{P}U_{3}^{*}|c\partial cV\rangle \tag{38}\]
Writing out \(\bar{P}=1-P\) and noting that by the explicit Virasoro formulas [23] for \(U_{n}^{*}\) we have
\[PU_{3}^{*}|c\partial cV\rangle=\left(\frac{2}{3}\right)^{h-1}|c\partial cV\rangle \tag{39}\]
and by commuting the \(U\)s as \(U_{3}^{*}U_{3}=U_{\frac{8}{3}}U_{\frac{8}{3}}^{*}\)[23] we get
\[\begin{split}&-g_{0}C_{VVV}^{2}\left(\frac{4}{3}\right)^{y} \frac{1}{y}\left(\left(\frac{3}{4}\right)^{-2y}-\left(\frac{2}{3}\right)^{-2y }\right)\\ &=g_{0}C_{VVV}^{2}\ln\frac{81}{64}+O(y)\end{split} \tag{40}\]
It is expected on general grounds that there exists a geometric constant \(\gamma\) independent of the underlying matter CFT such that
\[\langle c\partial cV|U_{3}\left(\frac{b_{0}}{L_{0}}\bar{P}\right)U_{3}^{*}|c \partial cV\rangle=\gamma\langle c\partial cV|b_{0}|c\partial cV\rangle \tag{41}\]
and we have proved by (40) that \(\gamma=\ln\frac{81}{64}\), which agrees with level truncation [21] to seven decimal places. This computation was made possible by the fact that our field was not marginal, making intermediate expressions such as \(\frac{1}{h-1}\) well-defined. That is, we do not rely on \(\bar{P}\) to make \(\frac{1}{L_{0}}\) well-defined; it is present only for power counting purposes.
We continue with the first term in (35) and to do this we first calculate
\[\begin{split}& U_{\frac{8}{3}}\frac{b_{0}}{L_{0}}\bar{P}cV\left(\frac{1}{\sqrt{3}}\right)cV\left(-\frac{1}{\sqrt{3}}\right)|0\rangle=\\ & U_{\frac{8}{3}}\frac{b_{0}}{L_{0}}\bar{P}\bigg{(}cV\left(\frac{1}{\sqrt{3}}\right)cV\left(-\frac{1}{\sqrt{3}}\right)\\ &\pm\sum_{V^{\prime}}C_{VVV^{\prime}}\left(\frac{2}{\sqrt{3}}\right)^{1+h^{\prime}-2h}c\partial cV^{\prime}(0)\bigg{)}|0\rangle\end{split} \tag{42}\]
where the OPE (37) was used so that we can Schwinger parametrise the action of \(L_{0}\) as
\[\frac{1}{L_{0}}=\int_{0}^{1}\,\frac{ds}{s}s^{L_{0}} \tag{43}\]
meaning that we use the Schwinger parametrisation on the plus branch and we explicitly divide by weight on the minus branch. Doing so we obtain
\[\begin{split}&\int_{0}^{1}\,ds\bigg{[}\left(\frac{\mathrm{d}\mu}{ \mathrm{d}s}\right)^{2h-1}\frac{s^{2h-2}}{\sqrt{3}}\left(c\left(\mu\right)+c \left(-\mu\right)\right)V\left(\mu\right)V\left(-\mu\right)\\ &-\sum_{V^{\prime}\neq V}C_{VVV^{\prime}}\left(\frac{2}{\sqrt{3}} \right)^{1+h^{\prime}-2h}\left(\frac{4}{3}\right)^{1-h^{\prime}}\!\left(\frac{1} {s^{2-h^{\prime}}}+\frac{1}{1-h^{\prime}}\right)\\ & cV^{\prime}(0)-C_{VVV}\left(\frac{2}{\sqrt{3}}\right)^{1-h} \left(\frac{4}{3}\right)^{1-h}\frac{1}{s^{2-h}}cV(0)\bigg{]}|0\rangle\end{split} \tag{44}\]
where \(\mu(s)=\tan\frac{3}{4}\arctan\frac{s}{\sqrt{3}}\) is present since \(U_{\frac{8}{3}}\) implements the conformal transformation \(z\rightarrow\tan\frac{3}{4}\arctan z\).
The subterm in the first term of (35) containing \(P\) can then be calculated as
\[-C_{VVV}\left(\frac{8}{3\sqrt{3}}\right)^{1-h}\langle\partial cV|\int_{0}^{1}\cdots \tag{45}\]
so that the four-point amplitude vanishes
\[{\cal A}=\left(\int_{0}^{\frac{1}{2}}\,d\xi\right)-1+\frac{1}{2}=0 \tag{56}\]
as it should since (21) is then automatically satisfied for all \(\lambda\) to the next-to-leading order.
The final result for \(\Delta g\) then simplifies to
\[\frac{\Delta g}{g_{0}}=-\frac{\pi^{2}}{3}\bigg{(}\frac{y^{3}}{C_{VVV}^{2}}+3 \frac{\tilde{\cal A}}{C_{VVV}^{4}}y^{4}\bigg{)}+O(y^{5}) \tag{57}\]
where
\[\begin{split}&\tilde{\cal A}=\int_{0}^{\frac{1}{2}}\,d\xi\bigg{[} \frac{1}{g_{0}}\langle V|V(1)V(\xi)|V\rangle\\ &-\sum_{V^{\prime}}C_{VVV^{\prime}}^{2}\bigg{(}\frac{1}{\xi^{2-h^ {\prime}}}+\frac{1}{(1-\xi)^{2-h^{\prime}}}\bigg{)}\bigg{]}\\ &+\sum_{V^{\prime}\neq V}C_{VVV^{\prime}}^{2}\bigg{(}\frac{1}{h^{ \prime}-1}+\frac{1}{2}\delta_{h^{\prime}=0}\bigg{)}\end{split} \tag{58}\]
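For concreteness, a hedged numerical sketch of how (58) could be evaluated for a given boundary theory: the user supplies the normalized four-point correlator \(\langle V|V(1)V(\xi)|V\rangle/g_{0}\) and the OPE data \(\{(h^{\prime},C_{VVV^{\prime}})\}\) of the relevant operators appearing in the \(V\times V\) OPE (with the identity entering at \(h^{\prime}=0\)); identifying the excluded term \(V^{\prime}=V\) by its conformal weight is an assumption of the sketch, as are all the names below.

```python
# Evaluate A-tilde of eq. (58) by numerical quadrature. corr_over_g0(xi) should
# return <V|V(1)V(xi)|V> / g_0; ope is a list of (h_prime, C_VVV') pairs for the
# relevant operators in the V x V OPE, with the identity at h_prime = 0.
from scipy.integrate import quad

def a_tilde(corr_over_g0, ope, h_V):
    def integrand(xi):
        sub = sum(C**2 * (xi**(-(2 - hp)) + (1 - xi)**(-(2 - hp))) for hp, C in ope)
        return corr_over_g0(xi) - sub
    integral, _ = quad(integrand, 0.0, 0.5)
    const = sum(C**2 * (1.0 / (hp - 1.0) + (0.5 if hp == 0.0 else 0.0))
                for hp, C in ope if hp != h_V)           # the V' != V sum, identified by weight
    return integral + const
```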
We note that the SFT constant \(K\) does not decouple from the expression for the SFT coupling
\[\lambda=\frac{y}{C_{VVV}}+\frac{1}{C_{VVV}}\left(\frac{2\tilde{\cal A}}{C_{VVV }^{2}}+\ln K\right)y^{3}+O(y^{4}) \tag{59}\]
This is expected since this is just an auxiliary normalisation factor of the string field \(\Psi_{1}\) and not a CFT coupling constant.
#### Leading order calculation of boundary state coefficients
By the KMS correspondence [18] we have for the shift in the boundary state coefficient of the bulk primary \(\phi\)
\[\Delta B_{\phi}=2\pi i\langle I|\tilde{\phi}(i)|\Psi\rangle \tag{60}\]
where \(\tilde{\phi}\) is \(\phi\) dressed with ghosts and auxiliary CFT factors that make it weight 1, and \(\langle I|=\langle 0|U_{f}\) implements the conformal transformation from the unit disk to the UHP, \(f(z)=\frac{2z}{1-z^{2}}\). To leading order we can write
\[\begin{split}\Delta B_{\phi}&=2\pi i\lambda\langle 0|U_{f}\tilde{\phi}(i)|cV\rangle+O(y^{2})\\ &=2\pi i\lambda 2i2^{\Delta_{\phi}-2}2^{h-1}\langle\phi(i)V(0)\rangle+O(y^{2})\\ &=-2\pi g_{0}\frac{B_{\phi V}}{C_{VVV}}y+O(y^{2})\end{split} \tag{61}\]
where \(\Delta_{\phi}\) is the weight of \(\phi\) and in the derivation we used the generic form of the bulk-boundary correlator
\[\langle\phi(x+iy)V(r)\rangle_{UHP}=g_{0}\frac{B_{\phi V}}{\left(2y\right)^{ \Delta_{\phi}-h}\left((x-r)^{2}+y^{2}\right)^{h}} \tag{62}\]
For the case of \(\phi\) being the identity, we expect (61) to give \(\Delta g\) to \(O(y)\), and indeed it gives the expected result of zero by virtue of \(B_{1V}=0\). In the next-to-leading correction \({\cal A}\) enters through \(\lambda\), and we need to calculate the open-open-closed amplitude \(\langle\tilde{\phi}|\frac{b_{0}}{L_{0}}\bar{P}|cV*cV\rangle\), which can be calculated by again using the trick of [20].
## V Outlook
We have shown that string field theory naturally tames the divergences found in conformal perturbation theory [27]. This is quite analogous to an observation made by Sen in [26]. It would be interesting to carry our calculations to higher orders and see how far one can follow the RG flow and whether nice patterns emerge. For this, different gauges in string field theory might be useful, such as the \({\cal B}_{0}\)-gauge, possibly modified to account for the \({\cal L}_{0}=0\) terms found in [28], or the pseudo-\({\cal B}_{0}\) gauge introduced in [29]. Similar calculations have been reported in [30]. Interestingly, our calculations seem to allow for the possibility of following the RG flow in the opposite direction, contrary to general expectations, but in line with numerical calculations in OSFT [7].
Finally, let us mention that our calculations are entirely feasible in _closed string field theory_, where one should be able to learn about the elusive vacuum of the closed string. Our preliminary results show that the value of the closed string classical action \(S\) evaluated on the solution is proportional in the leading order to the change in the central charge \(\Delta c\) of the CFT where the condensation happens. Dilaton couplings seem to affect only the higher order contributions, so the result appears to be in contradiction with recent results in [31]. One possible resolution is that one has to incorporate the Liouville sector even for the critical string.
###### Acknowledgements.
We thank Matej Kudrna, Jakub Vosmera, Tomas Prochazka and Xi Yin for useful discussions. This work has been supported by Grant Agency of the Czech Republic, under the grant EXPRO 20-25775X.
|
2308.09127
|
Deep Exclusive Meson Production as a probe to the puzzle of $Λ$
hyperon polarization
|
In the 1970s, an unexpected transverse $\Lambda$ polarization in unpolarized
proton-Beryllium collisions was discovered, which initiated extensive studies
on spin phenomena in high-energy physics. Over the past five decades, similar
transverse $\Lambda$ polarization has been observed across various collision
systems, including lepton-hadron deep inelastic scattering, hadron-hadron
collisions, and electron-positron collisions. Despite numerous promising
theoretical models, the fundamental mechanism underlying this polarization
phenomenon remains inconclusive to this day. However, in both longitudinally
and transversely polarized lepton-hadron and hadron-hadron collisions, it is
found that the $\Lambda$ hyperon is not polarized with respect to the initial
parton spin direction. How the $\Lambda$ hyperon acquires its spin has become
one of the most crucial questions to address in order to resolve this puzzle.
In this paper, I propose to use an exclusive process that can be measured at
the Electron-Ion Collider, the Deep Exclusive Meson Production, to explicitly
test the mechanism of $\Lambda$ polarization. The outcomes of this experimental
measurement are anticipated to unveil the dominant mechanism by which $\Lambda$
obtains its spin, eliminating many of the ambiguities that have been
encountered in previous studies. Finally, experimental challenges and
requirements will be discussed.
|
Zhoudunming Tu
|
2023-08-17T18:00:00Z
|
http://arxiv.org/abs/2308.09127v2
|
# Deep Exclusive Meson Production as a probe to the puzzle of \(\Lambda\) hyperon polarization
###### Abstract
In the 1970s, an unexpected transverse \(\Lambda\) polarization in unpolarized proton-Beryllium collisions was discovered, which initiated extensive studies on spin phenomena in high-energy physics. Over the past five decades, similar transverse \(\Lambda\) polarization has been observed across various collision systems, including lepton-hadron deep inelastic scattering, hadron-hadron collisions, and electron-positron collisions. Despite numerous promising theoretical models, the fundamental mechanism underlying this polarization phenomenon remains inconclusive to this day. However, in both longitudinally and transversely polarized lepton-hadron and hadron-hadron collisions, it is found that the \(\Lambda\) hyperon is _not_ polarized with respect to the initial parton spin direction. How the \(\Lambda\) hyperon acquires its spin has become one of the most crucial questions to address in order to resolve this puzzle. In this Letter, I propose to use an exclusive process that can be measured at the Electron-Ion Collider, the Deep Exclusive Meson Production, to explicitly test the mechanism of \(\Lambda\) polarization. The outcomes of this experimental measurement are anticipated to unveil the dominant mechanism by which \(\Lambda\) obtains its spin, eliminating many of the ambiguities that have been encountered in previous studies. Finally, experimental challenges and requirements will be discussed.
Deep Exclusive Meson Production, \(\Lambda\) polarization, Electron-Ion Collider
## I Introduction
Almost 50 years ago, a Fermilab experiment discovered a large transverse polarization of the \(\Lambda\) hyperon in inclusive unpolarized proton-Beryllium collisions [1]. This was highly unexpected, because perturbative Quantum Chromodynamics (QCD) forbids a large polarization signal [2] and, naively, the total polarization in inclusive processes should average to zero [3]. This observation indicates that the \(\Lambda\) hyperon polarization has to originate from nonperturbative processes in a nontrivial way. Since then, transverse \(\Lambda\) polarization has been found in many different collision systems, e.g., electron-proton and electron-nucleus deep inelastic scattering (DIS) [4; 5; 6; 7], hadron-hadron and hadron-nucleus collisions [8; 1], heavy-ion collisions [9], and recently even in electron-positron collisions [10].
Apart from the \(\Lambda\) polarization in heavy-ion collisions, which seems to be understood in the context of a strongly rotating Quark-Gluon Plasma [9; 11], the origin of all other observed polarizations remains inconclusive to date. Many theoretical models [12; 13; 14; 15] have attempted to explain this phenomenon, and some were quite successful [12; 13], but no model can simultaneously explain the global data. Although all models point to a common direction, namely the hadronization process or final-state effects in general, the quantitative description of how the \(\Lambda\) obtains its spin in unpolarized collisions is still unknown.
What makes this problem even more interesting is what happens when experiments attempt to measure \(\Lambda\) polarization with a polarized target [16; 17; 18; 19; 20; 21]. Thanks to the self-analyzing weak decay of the \(\Lambda\) hyperon, \(\Lambda\) polarization measurements have been a common experimental tool to access initial-state spin effects, e.g., the strange quark helicity and transversity distributions. However, no \(\Lambda\) polarization has ever been found with respect to the initial spin direction\({}^{1}\). It is expected that the \(\Lambda\) polarization asymmetry measurement is a convolution of parton distributions (e.g., helicity), parton scattering cross sections, and polarized fragmentation functions [22]. Without understanding quantitatively the role of fragmentation, it is both experimentally and theoretically difficult to access the initial-state parton distributions. In other words, the hadronization process could be responsible for _not_ seeing the \(\Lambda\) polarization with a polarized target [22]. Recently, there have been new proposals to measure \(\Lambda\) spin-spin correlations in both polarized and unpolarized deep inelastic scattering (DIS) and hadron-hadron collisions [23; 24; 25], where final-state effects are expected to be suppressed. This is similar to the long-range two-particle momentum correlation in heavy-ion collisions [26].
Footnote 1: Note that a nonzero polarization signal was found for \(\bar{\Lambda}\)[16], which makes the whole picture even more complex.
In recent years, Deep Exclusive Meson Production (DEMP) has been proposed to be sensitive to Generalized Parton Distributions (GPDs) and meson form factors [27; 28]. The process is as follows, \(e+p\to e^{\prime}+K^{+}+\Lambda\), where the cross section of this process and its momentum transfer \(-t\) are closely related to GPDs [27]. Although this process was not initially intended for measuring the \(\Lambda\) polarization, it has a few advantages: i) an exclusive process with well-defined final states and kinematics; ii) a large \(\Lambda\) momentum that
close to the beam direction, where the \(\Lambda\) unambiguously carries valence quarks from the incoming proton; iii) no feed-down from higher mass resonances and no fragmentation process involved. Therefore, this process would significantly simplify the picture of \(\Lambda\) polarization, which could potentially pin down its nonperturbative origin.
In this Letter, a new \(\Lambda\) polarization measurement is proposed in the polarized \(ep\) DEMP process at the upcoming Electron-Ion Collider (EIC). See Fig. 1 for an illustration. The measurement is designed to determine in which direction the \(\Lambda\) hyperon is polarized: a) transverse polarization with respect to the production plane, or b) a large longitudinal spin transfer from the proton to the \(\Lambda\), which is close to the direction of the incoming beams. Note that the spin directions of a) and b) are by definition orthogonal to each other. In this process, both scenarios are expected to reach their maximum possible strength; however, only one scenario can be correct.
## II Experimental techniques
Experimental measurement of the \(\Lambda\) polarization proceeds via its self-analyzing weak decay, \(\Lambda\to p+\pi^{-}\). In the \(\Lambda\) rest frame, the momentum direction of the daughter proton (or \(\pi^{-}\)) exhibits a cosine distribution relative to the \(\Lambda\) polarization direction, as follows,
\[\frac{dN}{d\cos\theta}\propto 1+\alpha P_{\Lambda}\cos\theta. \tag{1}\]
Here \(\alpha\) is the weak decay constant [29], \(\theta\) is the opening angle of the daughter proton momentum in the \(\Lambda\) rest frame with respect to the \(\Lambda\) spin direction, and \(P_{\Lambda}\) is the polarization magnitude of the \(\Lambda\) particle. The spin direction, however, can vary in different studies depending on the underlying mechanism of \(\Lambda\) polarization. In this study, we focus on two possible directions of \(\Lambda\) polarization: a) with respect to the production plane and b) with respect to the beam polarization (or \(\Lambda\) momentum) direction.
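As a concrete illustration of Eq. (1), the short Python sketch below generates toy \(\Lambda\to p\pi^{-}\) decays with a chosen polarization and recovers \(P_{\Lambda}\) from the first moment of the \(\cos\theta\) distribution, \(\langle\cos\theta\rangle=\alpha P_{\Lambda}/3\). The decay constant \(\alpha\approx 0.73\), the injected polarization, and the sample size are illustrative assumptions only; the value of \(\alpha\) actually used in the analysis should be taken from Ref. [29].

```python
import numpy as np

# Toy Monte Carlo for Lambda -> p + pi^-: sample cos(theta) from
# dN/dcos(theta) ∝ 1 + alpha*P*cos(theta) and recover P from the first moment.
rng = np.random.default_rng(0)
alpha, P_true, n = 0.73, -0.58, 200_000   # illustrative values only

# Rejection sampling from the linear angular distribution.
c = rng.uniform(-1.0, 1.0, size=4 * n)
accept = rng.uniform(0.0, 1.0 + abs(alpha * P_true), size=c.size) < 1.0 + alpha * P_true * c
cos_theta = c[accept][:n]

# Moment estimator: <cos(theta)> = alpha*P/3, hence P = 3*<cos(theta)>/alpha.
P_est = 3.0 * cos_theta.mean() / alpha
P_err = 3.0 * cos_theta.std(ddof=1) / (alpha * np.sqrt(len(cos_theta)))  # approximate
print(f"true P = {P_true:+.3f}, estimated P = {P_est:+.3f} +/- {P_err:.3f}")
```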
**Production Plane:** The normal vector of the production plane is defined by \(\vec{p}_{beam}\times\vec{p}_{\Lambda}\). In this study, we use the proton beam momentum, rather than the electron beam momentum, as \(\vec{p}_{beam}\).
**Beam Polarization:** For the longitudinal spin transfer, the expected \(\Lambda\) polarization direction is the direction of the \(\Lambda\) momentum. Some studies instead measure along the beam polarization direction, which is the same as (or opposite to) \(\vec{p}_{beam}\) of the longitudinally polarized proton. In this study of DEMP, the directions of \(\vec{p}_{\Lambda}\) and \(\vec{p}_{beam}\) are very similar.
Therefore, the two spin polarization directions mentioned above are by definition orthogonal to each other. Note that in spin-transfer measurements with polarized targets, it is common to use both helicity configurations to perform an asymmetry measurement, in which most detector effects cancel. See Ref. [19] for an example.
Figure 1: Illustration of Deep Exclusive Meson Production (DEMP) in a longitudinally polarized electron-proton scattering at the Electron-Ion Collider. The proton beam momentum can be between 41 to 275 GeV/c, and the longitudinal polarization can be either positive or negative helicity. The \(\Lambda\) decay products do not need to be in the production plane, which is not shown.
## III EIC Experiments
The upcoming EIC is designed to be an accelerator facility that provides high-energy and high-luminosity electron-proton and electron-ion collisions, where the proton and light ions can also be polarized [30]. It will enable comprehensive DIS measurements across a wide range of processes, including rare reactions that were previously impossible to measure at \(ep\) collider experiments.
The current EIC project includes the accelerator facility and only one interaction region with one experiment - ePIC [31]. However, a second detector at the EIC is not excluded; in fact, the EIC physics community favors two experiments [32]. One potential difference of the second detector, among many other possibilities, is the tagging capability in the hadron-going far forward (FF) region [33].
Based on the current ePIC design, there are four detector subsystems in the FF region - the B0 spectrometer, Roman Pots (RP), Off-Momentum Detectors (OMD), and the Zero-Degree Calorimeter (ZDC). For details, see previous studies in Refs. [34; 35; 36]. Although the exact design has not been finalized yet, the general acceptance and performance are understood. Specifically, the B0 spectrometer can detect charged particles with scattering angles between 5 and 22 mrad. The RP detects the scattered proton in \(ep\) collisions at small scattering angles, while the OMDs can detect breakup protons from nuclei with small scattering angles due to the change of magnetic rigidity. Finally, the ZDC can detect neutral particles, e.g., neutrons and possibly high-energy photons.
For the process of interest in this Letter, the final-state particles are the kaon and the \(\Lambda\), where the \(\Lambda\) decays to a pion and a proton. The \(\Lambda\) will be close to the beam momentum with a small scattering angle. The challenge of detecting the \(\Lambda\) from its decay is its long lifetime. From a current estimate, the acceptance for the \(\Lambda\) decay products in the FF system is small for high-energy configurations, e.g., 18x275 GeV \(ep\) collisions, but may be feasible for 5x41 GeV. An interesting question for the EIC second detector is whether the acceptance of this measurement can be improved for high-energy configurations, which would allow access to the low-\(x\) region.
## IV Predictions
### Transverse \(\Lambda\) polarization with respect to the production plane
From the data measured over the past decades, the transverse \(\Lambda\) polarization with respect to the production plane has been found to have the following features:
* independent of the center-of-mass energy;
* strongly dependent on \(x_{F}\) and \(p_{T}\) of the \(\Lambda\) particle, and most data show a linear or quadratic dependence;
* the polarization is negative with respect to the production plane.
Note that the sign of the polarization signal depends on the definition of the production plane, which is usually \(\vec{p}_{beam}\times\vec{p}_{\Lambda}\), where \(\vec{p}_{beam}\) is the momentum of the incident beam that carries valence up and down quarks. The polarization signal is found to have the opposite sign in lepton-nucleon (or nucleus) scattering (e.g., Ref. [6]); the natural explanation is that the incident beam is a lepton, which does not carry valence quarks, and therefore the polarization signal is positive.
For hadron-hadron colliders, e.g., proton-proton collisions at RHIC and the LHC, both beams can serve as target and projectile, so the choice is not unique. In any case, no polarization signal has been observed there either [37]. Another feature to note is that a \(\bar{\Lambda}\) polarization signal has never been observed in unpolarized hadron-hadron or lepton-hadron scattering.
In this proposed measurement, we are looking at \(\Lambda\) particles in an extreme phase space. The \(x_{F}\) of the \(\Lambda\) is close to 1, as the \(\Lambda\) moves forward close to the incoming proton beam momentum in the DEMP process. The naive picture is as follows: i) the proton has two up quarks and one down quark, together with sea quarks and gluons; ii) a strange and anti-strange quark pair provides one strange quark to combine with one up and one down quark to form a \(\Lambda\) particle, while one valence up quark and the anti-strange quark form a positively charged kaon (\(K^{+}\)); both particles move forward, with the \(\Lambda\) closer to the beam momentum.
In Fig. 2, data taken from ATLAS [37], HERA-B [4], and M2 [8] are shown as a function of the \(x_{F}\) of the \(\Lambda\) particle. The data are fitted with a quadratic form, which describes them well with \(\chi^{2}/ndf=1.42\). Based on this extrapolation, the predicted \(P_{\Lambda}\) value at \(x_{F}=1\) is \(-0.58\), which would suggest that the dominant mechanism of \(\Lambda\) polarization arises in the final state.
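The extrapolation described above can be reproduced with a few lines of Python. The sketch below is purely illustrative: the functional form (a quadratic vanishing at \(x_F=0\)) and the \((x_F,\,P_{\Lambda},\,\sigma)\) points are placeholder assumptions, not the ATLAS/HERA-B/M2 values behind Fig. 2.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data points (x_F, P, sigma) -- NOT the measured values of Fig. 2.
x_F   = np.array([0.05, 0.15, 0.25, 0.35, 0.45])
P     = np.array([-0.01, -0.03, -0.06, -0.10, -0.15])
sigma = np.array([0.01, 0.01, 0.02, 0.02, 0.03])

def quadratic(x, a, b):
    return a * x + b * x**2        # vanishes at x_F = 0 by construction

popt, _ = curve_fit(quadratic, x_F, P, sigma=sigma, absolute_sigma=True)
chi2 = np.sum(((P - quadratic(x_F, *popt)) / sigma) ** 2)
ndf = len(x_F) - len(popt)
print(f"chi2/ndf = {chi2 / ndf:.2f}")
print(f"extrapolated P_Lambda(x_F = 1) = {quadratic(1.0, *popt):+.2f}")
```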
Although the quadratic parametrization is only meant to describe the data, the strong \(x_{F}\) dependence is well established. Many models claim to describe this phenomenon qualitatively, and some even claim a quantitative description. The most successful one is the semi-classical quark recombination approach with "Thomas Precession" [12; 13]. In this model, the polarization of the \(\Lambda\) particle is generated by accelerating the strange sea quark from low momentum (\(x_{s}\)) to roughly 1/3 of the \(\Lambda\) momentum. If the strange quark has a nonzero intrinsic transverse momentum \(k_{T}\), the acceleration will naturally generate spin, e.g.,
\[\omega_{\rm T}={\bf a}\times{\bf v}_{\rm T}. \tag{2}\]
Here \(\omega_{\rm T}\) is the Thomas Precession frequency, \({\bf a}\) is the acceleration of the strange quark, and \({\bf v}_{\rm T}\) is the transverse velocity of the strange quark. For high energy scatterings, the longitudinal momentum fraction of the
strange quark can be very small, \(x_{s}<10^{-2}\), which corresponds to less than 1 GeV. However, the \(\Lambda\) in DEMP can be produced with up to hundreds of GeV close to the beam momentum, where 1/3 of the beam momentum can be of the order of \(\sim\)100 GeV. Therefore, the expected acceleration of the strange quark is large at high energy, and so are \(\omega_{\rm T}\) and the polarization of the \(\Lambda\) particle.
Other models claim to describe the transverse polarization of the \(\Lambda\) in unpolarized lepton-lepton, lepton-hadron, and hadron-hadron collisions. However, those based on the production of high-spin baryon resonances [14], single-pion exchange [3], and the polarizing fragmentation function [38] are not expected to play a role in the DEMP process. Based on all this information, the prediction is as follows:
_Prediction (a): for the scenario (a), the \(\Lambda\) polarization would be as large as negative 60% with respect to the production plane._
### Longitudinal spin transfer via \(\Lambda\) polarization
From polarized lepton-hadron and hadron-hadron collisions, the parton helicity distributions of quarks are well understood in the valence region. There are two general spin sum rules, known as the "Ji sum rule" and the "Jaffe-Manohar sum rule", respectively, as follows:
\[1/2=\Delta\Sigma/2+L_{q}+J_{g}, \tag{3}\]
and
\[1/2=\Delta\Sigma/2+\Delta G+l_{q}+l_{g}. \tag{4}\]
Despite the difference in their approaches, there is no ambiguity in counting the spin contribution from quarks. The other terms are related to the gluon spin and to the orbital and total angular momenta of quarks and gluons.
In the DEMP process, the production of the \(\Lambda\) from the incoming proton beam can be viewed as the result of knocking out a valence up quark and adding a strange quark from the sea. In the valence region, the data have shown that the two up quarks account for about 60% of the total nucleon spin, the down quark accounts for about \(-30\%\), and the strange quarks and anti-quarks account for a very small fraction [39]. Therefore, removing one up quark and adding one strange quark would naively result in a 30% reduction of the total spin. In other words, for a longitudinally polarized proton target with 70% initial polarization,
\[\text{proton[uud]}(70\%\ \text{polarized})\rightarrow\Lambda[\text{uds}]( \sim 50\%\ \text{polarized}). \tag{5}\]
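The naive bookkeeping behind Eq. (5) can be written out explicitly. The short snippet below is only one way to read the schematic counting in the text; the per-quark spin fractions and the simple multiplicative retention factor are illustrative assumptions.

```python
# Schematic spin counting: two up quarks carry ~+60% of the nucleon spin
# (so ~30% per up quark), the down quark ~-30%, and (anti)strange quarks ~0 [39].
# Swapping one up quark for a sea strange quark removes ~30% of the spin,
# so the Lambda naively retains ~70% of the parent polarization.
per_up_quark = 0.60 / 2          # spin fraction carried by one valence up quark
added_strange = 0.0              # sea strange quark carries negligible spin
retention = 1.0 - (per_up_quark - added_strange)   # ~0.70

beam_polarization = 0.70         # longitudinally polarized proton beam
print(f"expected Lambda polarization ~ {retention * beam_polarization:.0%}")   # ~49%
```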
To measure this polarization, the spin axis is the momentum direction of the \(\Lambda\) particle, which is close to the beam momentum and initial polarization direction (e.g., positive helicity configuration).
Note that most longitudinal and transverse spin-transfer measurements using \(\Lambda\) particles were performed at small \(x_{F}\) or mid-rapidity. In DEMP, the \(\Lambda\) momentum is close to the initial polarization direction, which is expected to maximize the spin transfer. In addition, as in the case of the polarization with respect to the production plane, the fragmentation process does not play a role here, and thus a clearer picture of the spin transfer from the nucleon to the parton can be obtained. Based on these expectations, the prediction is as follows:
_Prediction (b): for the scenario (b), the \(\Lambda\) polarization would be as large as positive 50% with respect to the \(\Lambda\) momentum or beam polarization direction._
## V Discussion
Based on the requirements of measuring the \(\Lambda\) polarization in DEMP via its decay and of having a polarized beam, this measurement can be best, if not only, performed at the EIC. The challenge of this measurement is the particle identification (PID) in the far forward region; the DEMP process \(e+p\to e^{\prime}+K^{+}+\Lambda\) requires PID for the pion, kaon, and proton in the same event, where the pion and proton are the decay products of the \(\Lambda\) particle.
Another challenge is the long lifetime of the \(\Lambda\) at high energy, where most \(\Lambda\) particles would not decay before they pass the B0 detector, which sits \(\sim 5\) meters away from the Interaction Point (IP). From a very preliminary study based on the ePIC detector at IP 6, only 5x41 GeV electron-proton collisions would provide a reasonable acceptance for the \(\Lambda\) reconstruction. A quantitative
Figure 2: Transverse \(\Lambda\) polarization signal, \(P_{\Lambda}\), as a function of Feynman-\(x\), \(x_{F}\), from the ATLAS [37], the HERA-B [4], and the M2 experiment [8]. A quadratic fit and the polarization signal extrapolated to \(x_{\rm F}=1\) are shown.
feasibility study at ePIC and the EIC second detector has been planned.
Taking one step further, this measurement can be extended to i) transversely polarized \(ep\) DEMP and ii) incoherent electron-deuteron or electron-helium-3 DEMP on a bound nucleon target. For i), the transverse spin direction is not always perpendicular to the production plane; one could vary the relative angle between them by selecting different azimuthal angles of the \(\Lambda\) particle in the forward direction. For ii), with helium-3 polarization and the spectator-proton tagging technique [34; 36; 40], one could measure a similar process on a polarized neutron target to understand the role of valence quarks, e.g., \(e+He^{3}(d)\to e^{\prime}+2p^{\prime}(p^{\prime})+K^{0}+\Lambda\), where the neutral kaon decays weakly (e.g., \(K^{0}_{s}\) to two pions). Experimentally, this is much more challenging than the proton case. In addition, one of the possible upgrades at the EIC is a polarized deuteron beam, with which similar measurements can be performed.
Finally, the experiments in Ref. [27] and CLAS12 at Jefferson Lab with the 12 GeV electron beam program can measure the momentum-transfer dependence of the DEMP cross section, based on a technique utilizing the missing mass of the \(\Lambda\) particle. However, dedicated studies are needed to investigate the feasibility of measuring the \(\Lambda\) polarization in this process.
## VI Conclusions
In conclusion, a new experimental measurement of the \(\Lambda\) polarization in the process of Deep Exclusive Meson Production in polarized \(ep\) collisions at the upcoming Electron-Ion Collider is proposed. This measurement is expected to probe directly the underlying mechanism of how the \(\Lambda\) acquires its spin in high-energy particle scattering processes. Specifically, this proposal has a number of advantages: i) a clean exclusive reaction with only a kaon and a \(\Lambda\) in the final state; ii) the \(\Lambda\) particle carries the _maximal_ longitudinal momentum, where the maximum polarization with respect to both the production plane and the beam polarization direction is expected; however, only one can be correct; iii) no feed-down from higher mass resonances and no fragmentation involved in this process, which automatically rules out many theoretical models of transverse \(\Lambda\) polarization. The result of this measurement will significantly broaden the experimental program at the Electron-Ion Collider, e.g., using the \(\Lambda\) hyperon as a probe to study Transverse-Momentum-Dependent parton distribution functions and Generalized Parton Distributions. Most importantly, it may also provide a clear path forward towards solving the almost 50-year-old puzzle of \(\Lambda\) polarization.
## Acknowledgements
The author would like to thank Alex Jentsch for information on the Far-Forward detector system in the ePIC experiment. I would like to thank Garth Huber and Wenliang Li for discussion on the Deep Exclusive Meson Production (DEMP) and the usage of DEMP event generator. I want to thank Francesco Bossu, Xiaoxuan Chu, Abhay Despande, Christian Weiss, and the local BNL group for general discussion on this topic. The work is supported by the U.S. Department of Energy under Award DE-SC0012704 and the Laboratory Directed Research and Development (LDRD) 22-027 and LDRD-23-050 project.
|
2302.03337
|
Datacenter Ethernet and RDMA: Issues at Hyperscale
|
We observe that emerging artificial intelligence, high-performance computing,
and storage workloads pose new challenges for large-scale datacenter
networking. RDMA over Converged Ethernet (RoCE) was an attempt to adopt modern
Remote Direct Memory Access (RDMA) features into existing Ethernet
installations. Now, a decade later, we revisit RoCE's design points and
conclude that several of its shortcomings must be addressed to fulfill the
demands of hyperscale datacenters. We predict that both the datacenter and
high-performance computing markets will converge and adopt modernized
Ethernet-based high-performance networking solutions that will replace TCP and
RoCE within a decade.
|
Torsten Hoefler, Duncan Roweth, Keith Underwood, Bob Alverson, Mark Griswold, Vahid Tabatabaee, Mohan Kalkunte, Surendra Anubolu, Siyuan Shen, Abdul Kabbani, Moray McLaren, Steve Scott
|
2023-02-07T09:22:49Z
|
http://arxiv.org/abs/2302.03337v2
|
# Datacenter Ethernet and RDMA: Issues at Hyperscale
###### Abstract
We observe that emerging artificial intelligence, high-performance computing, and storage workloads pose new challenges for large-scale datacenter networking. RDMA over Converged Ethernet (RoCE) was an attempt to adopt modern Remote Direct Memory Access (RDMA) features into existing Ethernet installations. Now, a decade later, we revisit RoCE's design points and conclude that several of its shortcomings must be addressed to fulfill the demands of hyperscale datacenters. We predict that both the datacenter and high-performance computing markets will converge and adopt modernized Ethernet-based high-performance networking solutions that will replace TCP and RoCE within a decade.
## Datacenter Ethernet in a New Environment
Ethernet has dominated the wired local-area networking (LAN) space for decades ranging from deployments in private homes to the largest datacenters. Datacenters have experienced a massive growth during the last decade and the number of connected machines exceeds the size of the largest supercomputers today. While there remain some differences, the networking requirements of such hyperscale mega-datacenters and supercomputers are quite similar [1]. Yet, supercomputers are traditionally connected using special-purpose interconnects while datacenters build on Ethernet. Due to similar requirements and economies of scale, both continue to grow closer together with each new technology generation. We believe now is the right time to re-think the basic assumptions and architecture for a converged interconnect.
Multiple technological trends are accelerating this convergence of high-performance interconnects. Primarily, the increasing network performance requirements push towards more efficient host stacks that can support the terabit bandwidths, hundreds of millions of transactions per second, and single-digit microsecond latencies that are required by emerging data-intensive applications such as Artificial Intelligence (AI) [2]. These extreme requirements force all protocols and hardware to be as efficient as possible, ruling out many of the TCP/IP-like stacks that traditionally drove datacenter networking. Remote Direct Memory Access (RDMA) was developed nearly three decades ago for high-performance computing (HPC) workloads and was later expanded to target storage with InfiniBand (IB) Verbs RDMA. RDMA enables CPU-offloaded, hardware-accelerated direct memory access over the network. During the last 10 years, it became the de-facto standard for low-overhead and high-speed networking. Nearly all supercomputer architectures as well as leading datacenter providers utilize RDMA in production today.
The simple assumptions on load balancing, congestion control, and error handling made decades ago do not hold for today's networks that have more than 100x higher bandwidth and 10x higher message rates. Furthermore, simple RDMA network interface cards (NICs) are often enhanced with additional functionalities. The resulting "Smart NICs" often offload significant services and implement specialized network protocols. Modern network switches also have improved capabilities, ranging from advanced in-network telemetry to in-network computation and in-network load balancing or congestion control [3]. We argue that the currently existing standards and deployed infrastructure have fundamental gaps that must be addressed in the near future to support efficient high-performance networking.
### A brief history of RDMA for Ethernet
RDMA was originally developed for HPC in systems as early as the Paragon, Cray's T3D/T3E, and ASCI Red. Later, InfiniBand Verbs RDMA became wide-spread in the supercomputing field as a standardized solution. It was then adopted as "RDMA over Converged Ethernet" (RoCE) in the datacenter context to provide RDMA's benefits in a backwards-compatible Ethernet context. Another protocol, iWARP (cf. IETF 2007, RFCs 5040-5044, 6580, 6581, 7306), layers RDMA semantics over TCP or SCTP. Both iWARP and RoCE use InfiniBand's Verbs to interface with the user software stacks and are thus mostly transparent to the user. Even though iWARP allowed Internet-compatible routing from the beginning, it did not find widespread adoption. This may be due to the fact that a full TCP/IP stack is complex and expensive to offload to hardware, compared to the very simple protocol that underlies RoCE. Indeed, RoCEv1 simply adopted an InfiniBand-like transport layer (i.e., the Base Transport Header, BTH) on top of Ethernet's L2 headers. Later, RoCEv2 added IP/UDP L3 headers to support routing within and across datacenters. Today, there are more RoCEv2 NICs than InfiniBand NICs deployed.
#### RoCE - convergence or duct tape?
RoCE's core design is inherited from a technology developed for simple hardware two decades ago and is suboptimal in today's Ethernet environments. For example, RoCE uses InfiniBand's simple transport layer that heavily builds on in-order delivery as well as go-back-n retransmission semantics, which essentially require a highly reliable in-order fabric for efficient operation. Thus, RoCE runs best over a lossless in-order fabric, like InfiniBand. Traditionally, Ethernet drops packets when switch buffers are full and relies on end-to-end retransmission. To support RoCE, "converged Ethernet" (CE) introduces Priority Flow Control (PFC) to implement link-level lossless operation. PFC repurposes the PAUSE frames that already existed in Ethernet to support networks with different link transmission rates. PFC enhances PAUSE frames to stop (or throttle) traffic on a specific priority class to avoid packet drops. Unfortunately, this complex set of protocols interferes across the different layers in the network and reduces efficiency for some of today's most important workloads.
RoCE's semantics, load balancing, and congestion control mechanisms are inherited from InfiniBand. This implies that all messages should appear at the destination in order, as if they were transmitted over a static route, essentially disallowing many packet-level load balancing mechanisms. For AI training workloads, which consist of long-lived flows, multi-pathing mechanisms can greatly improve the job completion time. Furthermore, RoCEv2 uses a simplistic congestion control mechanism based on IP's Explicit Congestion Notification (ECN). ECN-compatible switches mark packets when congestion is detected and receivers relay that information back to the senders, which in turn reduce their injection rate guided by a single parameter. After a congestion-free period, the rate is automatically increased again
using a second configuration parameter. ECN uses a binary flag for congestion experienced, and the lack of a fine-grained indication means that many Round-Trip Times (RTTs) are needed to determine the correct rate. This simple mechanism is very similar to InfiniBand's original Forward and Backward Explicit Congestion Notification (FECN/BECN). It promises to coexist with other traffic but is hard to configure in practice [4, 5, 6].
We now briefly discuss some important traffic motifs in HPC and datacenter traffic and then discuss RoCE's shortcomings in detail.
### Guiding Traffic Motifs
For the sake of the discussion, we shall identify three traffic motifs representing a large fraction of RDMA workloads today. Unfortunately, those motifs also highlight RoCE's shortcomings. Here, we focus on East-West (intra-) datacenter traffic as used in HPC, AI training and distributed inference, storage, as well as general microservice or Function as a Service (FaaS) traffic.
#### Incast (IN)
An incast traffic pattern happens when multiple sources target the same destination process in a potentially uncoordinated but simultaneous traffic pattern. It is characterized by a number of source processes and a transaction size. It often appears stochastically in practice when a service is, by chance, requested by many uncoordinated clients at the same time. For example, imagine that 100 clients want to commit a 10 kiB write transaction to the same storage server. All clients may send at full bandwidth because they do not know about the upcoming congestion. The packets will quickly fill network buffers that can hinder other flows and eventually violate service level agreements (SLAs). The most challenging incast patterns are caused by transactions that are smaller than the bandwidth-delay product such that the congestion control mechanism cannot get a reliable signal before the transaction should be completed. We remark that growing bandwidths push more and more workloads into this critical region.
#### Oblivious bulk synchronous (OBS)
Many HPC and AI training workloads can be expressed in the oblivious bulk synchronous model (OBS) where computation steps are interleaved with global communication steps that often synchronize processes. Oblivious means that the communication pattern for an application depends on a small number of parameters (such as size or process count) and does not depend on the data that is processed. It can often be determined statically before the application is started. For example, all collective operations in the Message Passing Interface (MPI) standard [7] are oblivious. Thus, OBS workloads can algorithmically avoid incast! The three-dimensional parallelism in deep learning training [2] is a typical example. OBS can be modeled by the number of processes, the duration of the computation, and the size of the communication (per endpoint). If both computation and communication are small, the overall workload is latency sensitive, a pattern that often appears in HPC and AI inference. Large communications that can often be found in AI training workloads are typically bandwidth-sensitive.
#### Latency-sensitive (LS)
For some workloads, message latency (and sometimes message rate) plays a central role. Some of those fall into the OBS category but others have complex, data-dependent, message chains that form critical performance paths in the application. Those are typically strong scaling workloads where the time to solution matters and inefficient execution must be tolerated. Large-scale simulations with strict deadlines such as weather forecasting and oil exploration fall into this category, but also some transaction processing or search/inference workloads. Here, one has typically stringent (single-digit microsecond) latency requirements.
#### Deployment characteristics
In addition to the traffic types, the deployment environment is also shifting. Newly emerging confidential compute ideas require all traffic to be encrypted on the wire. Ideally, traffic is encrypted and decrypted end-to-end in secure enclaves and no network equipment (NIC or switch) is to be trusted. Furthermore, and related, emerging multi-tenancy scenarios require managing tens of thousands of connections from a single host. Those are often supported by Smart NICs managing the
resources such as bandwidth and security through rate limiting and filtering. Also, new, cost effective low-diameter and specialized topologies that require more advanced load balancing and routing become a necessity for extreme-bandwidth deployments [8, 2]. Many combinations of those requirements pose significant challenges on next-generation high-performance networks.
### Where RoCE needs improvement
Many of RoCE's issues have been discussed in the past [9] and many research works exist to propose various solutions [10]. Here, we outline potential improvements that we see and we relate them to the key workloads and deployment use-cases outlined above. We now provide an itemized list of issues that could be improved for more efficient operation in Ethernet-based high-performance RDMA or Smart NIC systems.
#### 1) PFC requires excessive buffering for lossless transport
Priority Flow Control (PFC) lies at the very heart of converged Ethernet to enable lossless transport on each link. With PFC, the receiver monitors the available input buffer space. Once this buffer space falls below some threshold related to the bandwidth-delay product BW*RTT, it sends a PAUSE frame to the sender. At this time, BW*RTT/2 Bytes are already on the incoming wire, and before the sender receives the PAUSE frame, it will send another BW*RTT/2 Bytes. The minimal buffer requirement for fully lossless transfers would thus be BW*RTT + MTU1, where MTU is the maximum size of a packet. Yet, this would only support the case where packets are immediately drained at the receiver. Even the slightest delay in the forwarding may significantly reduce link utilization.
Footnote 1: Maximum Transfer Unit
The BW*RTT buffer space that covers the travel latency of the PAUSE message is often called "headroom buffer" and it is similar to the buffer required for credit-based flow control schemes such as those used in InfiniBand or Fibre Channel. In those, the receiver proactively sends credits (buffer allocations) to the sender keeping the input buffer space at an equilibrium, instead of reacting once it runs too full with PFC. Both schemes have their merits--a credit can travel proactively towards the source while a PFC scheme can be more reactive (late binding) when allocating shared buffer space to different source links. Both schemes need to essentially reserve BW*RTT space per link to just cover the round-trip control delay of the link, space that is lost for efficient forwarding.
In practice, buffer space is extremely valuable to ingest varying traffic peaks for temporal and spatial load balancing. Furthermore, just the required headroom buffer, which cannot be used for anything else without risking packet drops, poses a significant challenge to the scaling of next-generation switches. Figure 0(a) shows the required headroom space (excluding other buffering!) for various switch generations assuming a 600 ns average latency (including arbitration, forward error correction (FEC), and wire delay) for 9 kB packets and 8 traffic priority classes with separate buffers on a three-tier fat tree. Covering longer distances (and thus latencies) is also challenging as high-performance geo-replicated datacenters become common. Figure 0(b) shows the needed per-port headroom buffer for the same configuration assuming 800G ports, a 5ns/m wire delay, and various deployment types.
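The per-port, per-priority headroom estimate BW*RTT + MTU can be reproduced with a short script. The parameters below mirror the Figure 1 description (600 ns per-hop latency, 9 kB MTU, 8 priority classes, 5 ns/m of cable), but the exact assumptions behind the figure may differ, so the numbers should be read as order-of-magnitude estimates only.

```python
# Headroom buffer per port: classes * (BW * RTT + MTU), with RTT the round-trip
# time of the PAUSE control loop on a single link (latency plus cable delay).
def headroom_bytes(bw_gbps, one_way_latency_ns=600.0, cable_m=0.0, mtu=9_000, classes=8):
    rtt_s = 2.0 * (one_way_latency_ns + 5.0 * cable_m) * 1e-9
    per_class = bw_gbps * 1e9 / 8.0 * rtt_s + mtu          # bytes per priority class
    return classes * per_class

for bw in (100, 200, 400, 800):
    print(f"{bw:4d}G, short link : {headroom_bytes(bw) / 1e6:5.2f} MB/port")
print(f" 800G, 500 m cable: {headroom_bytes(800, cable_m=500) / 1e6:5.2f} MB/port")
```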
Figure 1: Headroom Buffer Requirements.

One may consider a lossy link-level protocol to repurpose these buffers for forwarding functions. Yet, this interacts with the end-to-end transport protocols, as we shall see soon. In any case, wasted buffer space is a general issue affecting all workloads that could benefit from the additional buffer if it was available for packet forwarding.
#### 2) Victim flows, congestion trees, PFC storms, and deadlocks
Another issue stems from the fact that PFC stops a whole traffic class (encoded as only three bits) and all flows in it. This can lead to blocked victim flows: assume that we have two flows A and B sharing a link L. Flow A is not congested and could send at full bandwidth. However, flow B is blocked at some downstream port and fills up the input buffer of L. Eventually, L's allocated buffer will be full with B's packets and L sends a PAUSE frame. This frame also stops flow A, which could proceed independently--now, flow A is victimized by the PAUSE of flow B. Thus, flows that are not congested may be affected by other flows that are congested. This phenomenon is also known as Head of Line blocking.
Since any congestion of a downstream port will fill buffers upstream unless the endpoint congestion control protocol reacts, PFC events can quickly grow a "congestion tree" inversely following victimized flows in the network. Congestion trees are a general problem in lossless networks and are sometimes called PFC storms. It could be addressed by an even more fine-grained tracking of congestion, e.g., at the basis of individual flows instead of priorities. Yet, this requires the network switches to maintain flow state to identify individual flows [11, 3]. One could also attempt to move congested flows into congested priorities dynamically, to avoid victims (cf. congestion isolation, P802.1Qcz). Another problem is that lossless lanes now consume already scarce traffic classes (separate buffer space). This takes an important resource from datacenter providers that already use such traffic classes for differentiated services such as elephant-flow backups, low-latency video conferencing, and others. Any traffic class used for RoCE (or other lossless) traffic is lost network-wide.
Such congestion trees are particularly problematic for incast workloads where they can jam the whole network, especially in the context of packet-level adaptive or oblivious routing. Yet, the very low bandwidth per flow at the incast link means that, in theory, these flows would need very little network buffering to saturate the link. The purely rate-based nature of RoCE's congestion control allows sources to inject (too) many packets that quickly fill network buffers. For example, a window-based scheme would allow the administrators to directly control the network-wide buffer occupancy of each flow.
Any lossless scheme with limited buffering suffers from deadlocks if the routing allows for cycles to form. This can be avoided with cycle-free routing schemes or special buffering strategies--both come at a (small) cost. Even if routes are generally deadlock free, transient states occurring after link failures can lead to deadlocks. Avoiding those is harder, however, one can configure packet timeouts in switches to resolve this problem dynamically.
#### 3) Go-back-N retransmission
RoCE was designed for very simple hardware following InfiniBand's in-order and credit-based lossless transport. This implies that packets can only be dropped if they are corrupted by bit errors, a very rare event. Thus, retransmission logic can be simple: if the receiver detects a gap in the packet stream (i.e., a skipped sequence number), it sends a negative acknowledgement (NACK) to the sender and drops all later packets. The sender then retransmits all packets beginning with the lost one. This scheme essentially discards and retransmits a full end-to-end BW*RTT (bandwidth-delay product) worth of data.
Let us assume a three-tier fat tree network with 800 Gb/s link speed and a worst-case per-hop latency of 600 ns. The total RTT as observed by an endpoint would be 3.6 us2. The effective bit error rate on each link can be as high as 1e-12 (as proposed by the Ethernet specification [12]); assuming 9 kiB frames, the probability of losing a single frame is 3.3e-8 (see Appendix A for the derivation). Thus, the total expected bandwidth loss due to go-back-n would be a negligible 0.00013%.
Footnote 2: we roughly approximate end-to-end latency as six hops
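The quoted bandwidth loss follows from simple arithmetic: each lost frame discards roughly one bandwidth-delay product of data. In the sketch below, the per-frame loss probability is taken from the text (Appendix A); everything else follows from the stated link parameters.

```python
# Go-back-N bandwidth loss: every lost frame forces retransmission of roughly
# one bandwidth-delay product worth of frames.
bw_bps = 800e9              # 800 Gb/s links
rtt_s = 3.6e-6              # three-tier fat tree, 600 ns worst-case per hop
frame_bits = 9 * 1024 * 8   # 9 kiB frames
p_frame_loss = 3.3e-8       # per-frame loss probability quoted in the text

frames_per_rtt = bw_bps * rtt_s / frame_bits
bandwidth_loss = p_frame_loss * frames_per_rtt
print(f"frames in flight per RTT : {frames_per_rtt:.1f}")
print(f"expected bandwidth loss  : {bandwidth_loss * 100:.5f}%")   # ~0.00013%
```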
A bigger issue with the simple go-back-n scheme is that it does not support multi-pathing or out-of-order delivery. Any two packets passing
each other would trigger an expensive retransmission event losing a full BW*RTT transmission. Latest generations of RoCE NICs introduce selective retransmission to mitigate this problem. Yet, those are often limited. For example, NVIDIA's ConnectX-6 adapter does not support adaptive routing or tag matching with selective retransmission enabled.3 Go-back-n has one interesting advantage though: if a bit error happens and the packet is dropped (silently) by the lower layers, the error is detected immediately once the next packet arrives. Other schemes that support out-of-order delivery would need to wait for a timeout to expire at the sender, potentially leading to much higher recovery times and jitter. Thus, when designing new transport protocols, one needs to consider all these trade-offs carefully!
Footnote 3: ConnectX-6 DX firmware release notes v22.27.1016
#### 4) Congestion control and colocation with other traffic
RoCE's default congestion control relies on a very simple rate control that is intimately linked to the lossless transport assumption. Many researchers have recognized that this simple mechanism does not integrate well with other traffic such as TCP/IP and generally can be improved in the datacenter environment. Mechanisms such as DCQCN [5], TIMELY [6], and HPCC [4] build on RoCE to improve the transport of flows. Most RoCE deployments today use non-standard congestion control mechanisms which makes interoperability between vendors, or even different hardware generations of the same vendor, hard. This is due to the fact that congestion control remains a tough problem and it is likely that different workloads require different tuned versions of the protocol.
For example, the typically repetitive endpoint-congestion-free bulk data transfers in oblivious synchronous workloads could quickly be learned or even be statically configured based on the expected traffic pattern [2, 13]. Highly-dynamic incast scenarios require coordinating multiple senders either through the receiver or through network signals. Latency-sensitive workloads with small messages that are smaller than the bandwidth-delay product can be most problematic, especially if they appear in an unpredictable data-driven communication pattern. Those may need to rely on switch buffering to ingest temporary load-imbalance at the network level. In general, congestion control schemes are and will remain a research focus with constant tuning even after deployment. Coexisting with different traffic types such as TCP or QUIC will also require constant adaptation. Thus, such schemes should not only be fast and cheap in hardware but also be flexible and support a wide range of parametrizations.
Another line of argument considers switch queue size and occupancy. Datacenter switches traditionally have large (deep) buffers to accommodate traffic bursts without dropping packets, compensating for the slow end-to-end rate adjustment. On the other hand, switches used in HPC usually operate lossless with very shallow buffers and stiff back-pressure due to their reliable link-level flow control mechanisms [3]. Also, HPC network topologies usually have lower diameter than datacenter deployments [14]. Thus, HPC deployments support lower-latency operations because small packets are less likely to wait in buffers behind longer flows. Datacenter networks with RoCE often combine both inefficiently: they use a lossless transport with all its issues together with relatively large-buffered switches. Many modern congestion control mechanisms thus aim at keeping the buffer occupancy generally low, leaving this very expensive resource unused!
#### 5) Header sizes, packet rates, scalability
RoCEv2 uses full Ethernet L2 and UDP/IP headers in addition to InfiniBand's Base Transport Header (BTH). Thus, the header overhead per packet is substantial: 22 Bytes L2, 20 Bytes IP, 8 Bytes UDP, 12 Bytes BTH, and 4 Bytes ICRC make a total of 66 Bytes per packet. Locally-routed InfiniBand, for example, has only a total header size of 20 Bytes: 8 Bytes for the Local Routing Header, and 12 Bytes for the BTH. Other HPC protocols have headers with less than 40 Bytes.
This impacts both the raw packet rate as well as processing overhead and cost as more complex headers require more header processing. Just the packet rate for small payloads could be problematic. Let us assume 8 Bytes messages as an example for a single-element reduction operation for conjugate gradient solvers or fine-grained global graph updates. The maximum rate (without headers) on an 800 Gb/s link would be 12.5 Giga-packets per second (Gpps). With IB headers, that rate would decrease to 3.5 Gpps and with RoCEv2 headers to 1.4 Gpps. The packet would be nearly 90% header overhead! And we are ignoring additional protocol headers for MPI or RDMA endpoints. Yet, given that NIC packet processing is currently slower (\(<\)1 Gpps per NIC), the header size may not be the biggest issue. Furthermore, NICs need to process acknowledgment packets, which could be especially challenging for selective acknowledgment and retransmission protocols. The high user-level and protocol message rates require parallel processing in the NIC given the mostly stagnant clock rates.
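These packet-rate numbers follow directly from the header sizes; a minimal sketch (ignoring Ethernet preamble, inter-packet gap, and FCS, as the quoted figures appear to do) is shown below.

```python
# Packet rates for 8-byte payloads on an 800 Gb/s link with different headers:
# no headers, InfiniBand (LRH 8 B + BTH 12 B = 20 B), and
# RoCEv2 (L2 22 B + IP 20 B + UDP 8 B + BTH 12 B + ICRC 4 B = 66 B).
link_bps = 800e9
payload = 8
for name, header in [("no headers", 0), ("InfiniBand", 20), ("RoCEv2", 66)]:
    pps = link_bps / ((payload + header) * 8)
    overhead = header / (payload + header)
    print(f"{name:10s}: {pps / 1e9:5.2f} Gpps, header overhead {overhead:.0%}")
```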
RoCE's packet format is closely linked to InfiniBand's verbs which has connections between queue pairs (QPs) as its basic concept. The size of the context state for a single connection depends on the implementation details but large-cluster all-to-all connectivity may be problematic. Each queue pair at least needs to keep connection information and state such as sequence number and destination address and queue pair number. Connection state can be relatively large, up to 1 kB per connection in some implementations.
Small packets are often important in latency-sensitive workloads, some of which are bound by the rate at which the NIC can issue new messages. Slimmer headers would potentially decrease latencies and increase message rates while allowing for a more efficient bandwidth utilization.
#### 6) No support for smart stacks
As network overheads become more important in datacenter workloads, more intelligent stacks are designed. For example, the QUIC protocol allows pushing transport processing into the application, which can define application-specific protocols. This enables running different protocols for different service requirements, such as latency-insensitive video streaming, latency-sensitive audio-conferencing, or generally resilient but large backup traffic. RoCE's philosophy of hardware acceleration does not support different transport protocols, even if the user-level stack would be able to specify additional properties of the traffic (e.g., mark messages as resilient to out-of-order delivery).
Emerging Smart NICs lead to new opportunities in this area where user-configurable kernels could perform packet and protocol processing on the NIC [15]. Additionally, in-network telemetry (INT) can provide additional signals for these protocols to react accordingly. Thus, even if the stack has additional knowledge about the traffic types, today's RoCE forces it into a relatively simple and inflexible protocol that cannot take full advantage of this knowledge.
#### 7) Security
RoCE is known to have several security issues [16, 17], especially in multi-tenant contexts. Many of those issues stem from the fact that protocol security, authentication, and encryption have played a minor role at the design time. Yet, today, such properties are much more important.
IPSEC can be used to protect L3 headers and payload but would need to be enabled on a per-queue-pair basis such that no two tenants share a set of keys. This can be quite costly in terms of connection context overhead and performance. Furthermore, RoCE does not support sub-delegation of memory regions to other nodes. Both issues can be addressed with modern key-derivation protocols [16].
#### 8) Link-level reliability
The move towards higher transceiver speeds leads to more complex encoding and modulation schemes running at growing frequencies. With 50G lanes, Ethernet moved from the simple two-voltage level NRZ to four-voltage level PAM4 encoding. Today's 100G lanes run at 25 GHz, requiring the receiver to distinguish four levels within a fraction of a nanosecond. The signal degradation in cables and connectors as well as the increasingly complex analog circuitry lead to higher bit-error rates going to a bit-error rate (BER) as high as 1e-4 soon.
Forward-error correction (FEC) has been introduced to avoid excessive end-to-end retransmissions due to dropping of corrupted packets in the network. Ethernet aims at a 1e-12 BER at the link level and currently employs a Reed-Solomon code on 10-bit symbols using a block of 514 such symbols with 30 additional encoding symbols (RS544). This enables the receiver to correct 15 random bit errors and up to 150 consecutive
(burst) bit errors. Other FEC codes such as LLFEC (RS272, half size as RS544) and Firecode provide lower latency but also lower protection against bit errors.
Generally, FEC comes at a latency and energy cost that falls into two categories: (1) accumulating the 5,140 bits of data and (2) encoding and decoding the code symbols. The former decreases with the link bandwidth and the latter depends on the implementation, varying from 20 to 100 ns in practice. Figure 2 shows the projected RS544 FEC for different link bandwidths.
For a constant RS544 FEC, the latency reduces for faster link bandwidths but will not go below the FEC computation overhead. However, faster lanes may lead to significantly higher bit error rates. In fact, RS544 may not be able to correct the projected 1e-4 BER to the desired 1e-12. Thus, future Ethernet standards may move to more complex FEC mechanisms that may increase the latency significantly.
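A rough version of the accumulation latency shown in Figure 2 can be computed directly from the codeword size; the sketch below uses the 5,140 data bits quoted in the text and treats the 20-100 ns decode time as an additive constant, which is a simplification.

```python
# RS544 FEC: time to accumulate the data bits of one codeword at line rate,
# plus an implementation-dependent decode latency (roughly 20-100 ns per the text).
data_bits = 514 * 10   # 5,140 bits of data per RS544 codeword
for bw_gbps in (100, 200, 400, 800):
    accumulate_ns = data_bits / (bw_gbps * 1e9) * 1e9
    print(f"{bw_gbps:4d}G: accumulation {accumulate_ns:5.1f} ns + ~20-100 ns decode")
```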
An alternative approach is used in PCIe, which also deals with relatively high BER due to complex connectors but is designed as a low-latency local interconnect targeting around 5 ns. For example, the upcoming PCIe 6.0 specification protects a block of 242 Bytes with 6 Bytes of FEC together with an additional 8 Byte CRC. The receiver first uses the FEC to correct some bit errors and then checks the CRC. If this check fails, it initiates a simple link-layer retransmission protocol to request the data again. The FEC reduces the bit error rate from 1e-4 to 1e-6, and the CRC then triggers retransmission with a probability of less than 1e-5. The latency addition due to FEC is less than 2 ns and the bandwidth reduction due to retransmission less than 2%. The challenge for Ethernet is its longer links, leading to higher link latencies.
### System issues
Growing link-level and thus end-to-end latencies can lead to more issues at the system level. Higher latencies lead to higher buffer occupation and energy consumption. Less obviously, higher latencies lead to less efficient congestion control: messages that are transmitted faster than a single RTT cannot benefit from congestion control mechanisms that rely on receiver-based notifications. The bad case of incast with small messages thus gets worse or at least more common because the size of a "small message" increases. Figure 3 shows the size of the bandwidth delay product for some realistic latencies in datacenters today showing that even 1 MiB messages can be considered "too small" for effective incast handling by throttling the sender. Thus, problematic incast patterns may become more common with higher latencies!
In other words, if a system can throttle the sender fast enough, it can reduce the message size below which incast is a problem. This can be achieved by lowering latencies or having switches report incast congestion directly to the source (without bouncing through the receiver). Furthermore, if only very small messages create bad-case incasts, switch buffers may simply ingest them in the common case without even running out of resources. This may be amplified along incast trees where multiple sets of switch buffers can ingest transient incast messages, of course, potentially leading to congestion trees in the network. Such whole-systems issues remain an open discussion but it seems that lower latency generally simplifies them.
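The bandwidth-delay products referenced above follow from BW*RTT; a minimal sketch for a few representative RTTs (assumed values, in the spirit of Figure 3) is shown below.

```python
# Bandwidth-delay product: messages smaller than BW*RTT complete before
# receiver-based congestion control can throttle the sender.
bw_bps = 800e9
for rtt_us in (1, 5, 10, 50):
    bdp_bytes = bw_bps * rtt_us * 1e-6 / 8
    print(f"RTT = {rtt_us:3d} us -> BDP = {bdp_bytes / 2**20:7.2f} MiB")
```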
Figure 3: Bandwidth-delay product vs. Round-Trip Time (numbers from De Sensi et al. [18]).

Figure 2: RS544 FEC latency breakdown.

One also needs to pay attention to other aspects of the overall stack that can be quite complex. For example, simple and clear (remote) memory semantics are tricky to define, reason about, and implement correctly [19]. Furthermore, the fact that process-local virtual addresses are exposed to remote hosts can be problematic for security and performance. One could think of a scheme with addressing relative to a memory region [20]. From a security perspective both schemes have their weaknesses: exposing addresses allows learning about the remote process, yet fixed offsets are much simpler to guess for an attacker [17]. We note that these are general problems for all RDMA systems and not specific to RoCE.
Routing and load balancing remain an open challenge--most HPC networks use packet-level adaptive routing with relatively advanced in-network mechanisms [3], while most datacenter networks use simple oblivious ECMP driven by the endpoints that change header fields to guide path selection in very simple ways. The granularity of such ECMP load balancing in datacenters ranges from traditionally full flows to recently considered flowlets. Flowlets are consecutive sequences of packets with a sufficient gap between them such that flowlets cannot pass each other even when sent along different routes. Such gaps can be introduced by delaying packets or appear naturally. More recently, datacenter networks are looking towards more fine-grained mechanisms for load balancing. Another challenge is the requirement of some applications that messages be delivered in order. In general, out-of-order granularity and capabilities depend heavily on application requirements and the capabilities of the endpoint NICs. Finer-grained and more out-of-order-tolerant capabilities simplify network load balancing.
## Predictions
Based on all these points, we predict that academia and industry will revisit datacenter Ethernet. This next-generation Ethernet will likely support lossy and lossless transport modes for RDMA connections to allow intelligent switch-buffer management. This will make the provisioning of headroom buffer optional and avoid the other problems such as victim flows and congestion trees of lossless networking. Next-generation Ethernet is also unlikely to adopt go-back-n retransmission semantics but opt for more fine-grained mechanisms such as selective acknowledgments. Furthermore, it will likely make congestion management part of the specification. Special attention will be paid to colocation with other flows, especially in lossy traffic classes. The protocols will be designed in a flexible way to support smart networking stacks and security will finally become a first-class citizen. We may also see innovations in headers and reliability approaches as well.
Such modernizations will drive a new high-performance networking ecosystem for AI, HPC, and storage systems that are at the heart of hyper-scale datacenters. This development will conclude the convergence of HPC and datacenter networks!
|
2308.16007
|
Isospectrality and configurational entropy as testing tools for
bottom-up AdS/QCD
|
This work discusses the connection between isospectrality and configurational
entropy in holographic bottom-up models. We analyze the effect of
monoparametric isospectral transformation in holographic decay constants and
configurational entropy for a set of softwall-like models at zero temperature.
We conclude that the isospectral parameter $\lambda$ defines a window of
possible holographic models suitable to describe spectroscopy.
|
Miguel Angel Martin Contreras, Alfredo Vega, Saulo Diles
|
2023-08-30T12:52:57Z
|
http://arxiv.org/abs/2308.16007v2
|
# Isospectrality and configurational entropy as testing tools for bottom-up AdS/QCD
###### Abstract
This work discusses the connection between isospectrality and configurational entropy in holographic bottom-up models. We analyze the effect of monoparametric isospectral transformation in holographic decay constants and configurational entropy for a set of softwall-like models at zero temperature. We conclude that the isospectral parameter \(\lambda\) defines a window of possible holographic models suitable to describe spectroscopy.
## I Introduction
One direct evidence of confinement in hadronic physics comes from the early stages of string theory: Regge trajectories [1]. These structures can be understood as a taxonomic way to organize hadrons in terms of their quantum numbers, i.e., excitation number, spin, angular momentum, and hadronic masses. Calculating these objects is difficult since they belong to the low-energy region of QCD, where perturbative tools do not work. At this point, effective models become handy in approaching these problems; for example, relativistic or non-relativistic potential models, where Regge spectra come from the binding energy spectrum computed from a Cornell-like potential [2; 3], approximations made in the Bethe-Salpeter equation [4; 5], or lattice QCD methods [6; 7].
Given the non-perturbative nature of the hadronic spectra, gauge/gravity tools can be applied [8]. In this formalism, there are two possible ways to approach nonperturbative QCD. One is to look for common features between non-perturbative CFTs and a given gravity background, i.e., the top-down approach. The other tries to find a gravity background that captures QCD at the conformal boundary, i.e., the bottom-up approach. In this manuscript, we will focus on the latter. See Ref. [9] for a complete review.
In the bottom-up landscape, confinement is achieved by transforming unbounded pure AdS states into bounded ones. This can be done in several ways, but they can all be summarized by the existence of a dilaton background field inducing an energy scale that softly breaks the conformal invariance. This dilaton field can be static or dynamically generated. For simplicity, we will consider static dilatons only. However, these analyses can be extended to the dynamical case.
The holographic hadronic mass problem is generally reduced to calculating the eigenvalue spectrum of a given holographic potential \(V(z)\). The holographic potential carries information about the dilaton and the geometry used. So, it is natural to ask which model leads to better phenomenological results, or which dilaton should be preferred. The answer to this question is not so simple.
How can we approach this problem when we have many dilatons at hand? A possible tool to _measure_ the dilaton effect is _isospectrality_. In SUSY quantum mechanics, isospectrality transforms a potential into another one with the same eigenvalue spectrum. The transformation is labeled by a parameter \(\lambda\in(0,\,\infty)\). Alternatively, the \(\lambda\) parameter can be chosen in the equivalent range \(\lambda\in(-\infty,\,-1)\); the results in both ranges are completely equivalent. When \(\lambda\to\infty\), both potentials, \(V(z)\) and its isospectral counterpart, have the same eigenfunctions. However, this isospectral branch for \(\lambda\) is not unique. The interval \((-\infty,-1)\) will also provide
similar results, where the non-isospectral and isospectral models will be equivalent at \(\lambda\to-\infty\).
In AdS/QCD terms, if we have a well-known hadron spectrum, it is suitable to be modeled by a dilaton. Therefore, we can match this dilaton with an element in a bigger monoparametric isospectral class [10].
Since we now have many isospectral holographic potentials, which one should be chosen to model hadrons? One way to address this question is to consider the stability of the solutions. We will consider the _configurational entropy_ (CE) as a stability criterion for hadrons at zero temperature. Thermodynamics supports this claim: configurational entropy appears when a physical system undergoes an isothermal process. Under this assumption, a possible criterion for choosing the most suitable isospectral model reduces to selecting the one with the lowest CE. On isospectral grounds [10], this match with the CE is expected to occur when \(\lambda\to\infty\). We will check this hypothesis on four bottom-up approaches: the hardwall [11; 12], softwall [13], Braga deformed softwall [14], and non-quadratic softwall [15] models.
This work is organized as follows. Section II introduces isospectrality from SUSY quantum mechanics. In section III, we briefly discuss how hadrons are described in AdS/QCD models. In section IV, we briefly discuss the ideas behind configurational entropy. In section V, we summarize the bottom-up models we will test with CE and isospectrality. Section VI is devoted to isospectrality in the bottom-up conceptual frame. We present our results in section VII, and finally, in section VIII, we deliver our conclusions.
## II Isospectrality
### General idea
Plainly speaking, isospectrality is the study of objects that share the same spectrum. These objects include geometrical structures, isospectral manifolds, differential operators, and other mathematical structures.
In physics, isospectrality deals with two Hamiltonians having the same energy spectrum. This discussion was developed up to the beginning of the 80s in the context of the Darboux transform, a 19th-century result on second-order ordinary differential equations [16], [17; 18]. At the beginning of that decade, isospectrality found in supersymmetric quantum mechanics [19] a new formulation, which is now one of the most popular. The methods used in this manuscript follow the ideas developed in [20], which are standard in supersymmetric quantum mechanics. This Darboux procedure has been used to deform topological defects [21]. In Ref. [10], the supersymmetric quantum mechanical procedure for producing a one-parameter family of isospectral potentials was considered in the context of bottom-up holography. In the following, we provide the details of this procedure.
Our starting point is a potential \(V(z)\) with an associated set of eigenvalues and eigenfunctions \(\{\lambda_{n},\,\phi_{n}(z)\}\). The idea of isospectrality can be motivated by the following query: does any other potential, \(\hat{V}(z)\), share the same eigenvalues \(\lambda_{n}\) with \(V(z)\)? The answer emerges in terms of the so-called superpotential \(W\).
For a given potential \(V(z)\), there is a superpotential \(W(z)\) such that \(V(z)=W^{2}-W^{\prime}\). The superpotential is derived from the ground state \(\phi_{0}(z)\) of \(V\) by \(W=-\frac{d}{dz}\log\phi_{0}\) and generates a superpartner of \(V\): \(V_{2}(z)=W^{2}+W^{\prime}\). The family of strictly isospectral potentials of \(V\) emerges when we look for the superpotentials \(\hat{W}\) generating the same superpartner \(V_{2}\). At the end of the day, we forget about \(V_{2}\) and keep only the isospectral family of \(V\).
Suppose that \(V_{2}(z)\) and \(\hat{V}_{2}(z)\) are related by the transformation
\[V_{2}(z) = W^{2}(z)+W^{\prime}(z), \tag{1}\] \[\hat{V}_{2}(z) = \hat{W}^{2}(z)+\hat{W}^{\prime}(z). \tag{2}\]
Thus, we want to know whether, for the pair \(V_{2}(z)\) and \(\hat{V}_{2}(z)\), there is a general superpotential connecting them. Suppose the general superpotential has the form
\[\hat{W}(z)=W(z)+f(z). \tag{3}\]
Then, for the superpartner \(\hat{V}_{2}(z)\) we have

\[\hat{V}_{2}(z) = W^{2}+f^{2}+2\,W\,f+W^{\prime}+f^{\prime} \tag{4}\] \[= W^{2}+W^{\prime}+\left(f^{2}+2\,W\,f+f^{\prime}\right). \tag{5}\]
Therefore, to ensure that the transformation is isospectral, we need the quantity inside the parentheses to vanish; thus
\[f^{2}+2\,W\,f+f^{\prime} = 0 \tag{6}\] \[1+2\,\frac{W}{f} = -\frac{f^{\prime}}{f^{2}} \tag{7}\]
The solution to this equation defines the _one-parameter family of isospectral superpotentials_
\[\hat{W}(z)=W(z)+\frac{d}{dz}\log\left[I(z)+\lambda\right] \tag{8}\]
where \(\lambda\) is an integration constant and \(I(z)\) is defined in terms of the \(V(z)\) ground state, \(\phi_{0}(z)\), as
\[I(z)=\int_{0}^{z}dz^{\prime}\,\phi_{0}^{2}(z^{\prime}) \tag{9}\]
This expression is called the _isospectral transformation_ [20]. Then, the monoparametric isospectral potential is

\[\hat{V}_{\lambda}(z)\equiv\hat{V}(z)=\hat{W}^{2}(z)-\hat{W}^{\prime}(z)=V(z)-2\,\frac{d^{2}}{d\,z^{2}}\log\,\left[I(z)+\lambda\right]. \tag{10}\]
In the following sections, we will test this expression in the context of bottom-up models. Our strategy will be the following: Given a well-known holographic confining potential, we will construct the associated monoparametric family of isospectral potentials.
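To make the construction concrete, the sketch below implements eqs. (9) and (10) numerically for a ground state sampled on a grid. The vector softwall ground state and the values of \(\kappa\) and \(\lambda\) are purely illustrative choices of ours, and the function name is not part of any established package.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def isospectral_potential(z, V, phi0, lam):
    """Isospectral potential V_hat_lambda of eq. (10), built from the
    normalized ground state phi0 of the original potential V."""
    I = cumulative_trapezoid(phi0**2, z, initial=0.0)   # I(z), eq. (9)
    log_term = np.log(I + lam)
    d2 = np.gradient(np.gradient(log_term, z), z)       # finite-difference second derivative
    return V - 2.0 * d2

# Illustrative input: vector softwall ground state with kappa = 0.388 GeV
kappa = 0.388
z = np.linspace(1e-3, 15.0, 4000)
phi0 = np.sqrt(2.0 * kappa**4) * z**1.5 * np.exp(-0.5 * kappa**2 * z**2)
V_sw = 3.0 / (4.0 * z**2) + kappa**4 * z**2   # eq. (34) with beta = -1, M5^2 R^2 = 0
family = {lam: isospectral_potential(z, V_sw, phi0, lam) for lam in (0.1, 1.0, 10.0)}
```

For large \(\lambda\) the correction term is suppressed, so the members of the family collapse onto the original potential, in line with the \(\lambda\to\infty\) limit discussed above.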
### The new ground state
For a given \(\lambda\), the potential \(\hat{V}_{\lambda}(z)\) has the same spectrum as its superpartner \(V_{2}(z)\), apart from the ground state. To complete the spectrum of \(\hat{V}\), we reintroduce the missing ground state, which is given by [20; 22]:
\[\phi_{0,\lambda}(z)\equiv\frac{\phi_{0}(z)}{I(z)+\lambda}. \tag{11}\]
Once we compute the integral of the starting ground state, there are two ways of obtaining the new ground state for the isospectral potential. One is to solve numerically the Schrodinger equation for the \(\lambda\)-parametrized potential in eq. (10), reconstructing the new ground state from this numerical solution. The other is to insert the starting ground-state profile into eq. (11). We followed both strategies and found that they give the same new \(\lambda\)-parametrized ground state. In Figure 1, we show the matching results for the vector softwall model case.
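As a quick cross-check of eq. (11), one can verify numerically that \(\phi_{0}/(I+\lambda)\) satisfies the Schrodinger equation with \(\hat{V}_{\lambda}\) at the original ground-state eigenvalue. The snippet below is our own illustrative check, repeating the softwall setup of the previous sketch; it simply evaluates the residual of that equation by finite differences.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

kappa, lam = 0.388, 1.0
z = np.linspace(1e-3, 15.0, 4000)
phi0 = np.sqrt(2.0 * kappa**4) * z**1.5 * np.exp(-0.5 * kappa**2 * z**2)
V = 3.0 / (4.0 * z**2) + kappa**4 * z**2                           # vector softwall, eq. (34)
I = cumulative_trapezoid(phi0**2, z, initial=0.0)                  # eq. (9)
V_hat = V - 2.0 * np.gradient(np.gradient(np.log(I + lam), z), z)  # eq. (10)

u = phi0 / (I + lam)                 # candidate isospectral ground state, eq. (11)
M0_sq = 4.0 * kappa**2               # original ground-state eigenvalue, eq. (36) with n = 0
residual = -np.gradient(np.gradient(u, z), z) + V_hat * u - M0_sq * u
# should be small away from the grid edges, limited only by finite-difference accuracy
max_residual = np.max(np.abs(residual[200:-200]))
```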
## III Holographic description of hadrons in bottom-up models
### Bottom-up AdS/QCD in a nutshell
Let us focus on the holographic description of hadronic states. We will consider as our background space the standard Poincare patch defined by
\[dS^{2}=\frac{R^{2}}{z^{2}}\left[dz^{2}+\eta_{\mu\nu}\,dx^{\mu}\,dx^{\nu}\right], \tag{12}\]
where \(\eta_{\mu\nu}\) is the Minkowski metric tensor in four dimensions, and \(R\) is the AdS curvature radius.
According to the standard AdS/CFT prescription, hadronic identity is set by a single quantity, the conformal dimension \(\Delta\), i.e., the energy scaling dimension of the operator creating such hadrons, which enters the bulk action via the bulk mass \(M_{5}\). In the case of mesons, the scaling dimension is read from the \(q\,\bar{q}\) operator, giving \(\Delta=3\). Thus, from the bulk mass expression
\[M_{5}^{2}\,R^{2}=\left(\Delta-S\right)\left(\Delta+S-4\right) \tag{13}\]
where \(S\) is the hadron spin. We have \(M_{5}^{2}R^{2}=0\) for vector mesons, and we get \(M_{5}^{2}\,R^{2}=-3\) for scalar mesons.
These particles can be described by a single bulk action written as
\[I=\frac{1}{2\,\mathcal{K}}\int d^{5}x\,\sqrt{-g}\,e^{-\Phi(z)} \left[\nabla_{m}\,\phi^{m_{1}\,m_{2}}\,\nabla^{m}\,\phi_{m_{1}\,m_{2}}\right. \\ \left.+M_{5}^{2}\,\phi^{m_{1}\,m_{2}}\,\phi_{m_{1}\,m_{2}}\right], \tag{14}\]
where \(\phi_{m_{1}\,m_{2}}\) is a U(1) 2-form in the bulk, whose 1-form is dual to hadronic operators at the boundary, and \(\mathcal{K}\) is a constant that fixes units, relevant for decay constants calculation. We have written a generic dilaton field to address some AdS/QCD models based on static dilaton fields \(\Phi(z)\).
From this action, we can derive the equation of motion for the 2-form bulk field \(\phi\) as follows:
\[\frac{1}{\sqrt{-g}}\,\partial_{z}\left[\sqrt{-g}\,e^{-\Phi(z)}\, g^{z\,z}\,g^{m_{1}\,m_{2}}\nabla_{m}\,\phi_{m_{1}\,m_{2}}\right]\\ -M_{5}^{2}\,e^{-\Phi(z)}\,g^{m_{1}\,m_{2}}\,\phi_{m_{1}\,m_{2}}=0 \tag{15}\]
After Fourier transforming and redefining the bulk field as \(\Phi_{m_{1}\,m_{2}}(z,q)=\Phi_{m_{1}\,m_{2}}(z)\,\psi(z,q)\), we arrive at the Sturm-Liouville equation for the bulk field \(\psi(z,q)\):
\[\partial_{z}\left[e^{-B(z)}\,\partial_{z}\,\psi(z)\right]+\left( -q^{2}\right)\,e^{-B(z)}\,\psi(z,q)\\ -\frac{M_{5}^{2}\,R^{2}}{z^{2}}\,e^{-B(z)}\psi(z,q)=0 \tag{16}\]
where we have defined \(B(z)=\Phi(z)+\beta\,\log\left(\frac{R}{z}\right)\) and \(-q^{2}=M_{n}^{2}\) is the on-shell condition. Notice that \(\beta\) is a factor that carries spin information since \(\beta=-3+2\,S\).
The hadronic spectrum, i.e., the associated holographic Regge trajectory, comes from transforming the Sturm-Liouville equation into a Schrodinger-like one. To do so, we perform a Bogoliubov transformation \(\psi(z)=e^{\frac{1}{2}B(z)}\,u(z)\):
\[-u^{\prime\prime}+V(z)\,u=M_{n}^{2}\,u, \tag{17}\]
where the holographic potential is defined as
\[V(z)=\frac{M_{5}^{2}\,R^{2}}{z^{2}}+\frac{1}{4}\left(-\frac{ \beta}{z}+\Phi^{\prime}(z)\right)^{2}\\ +\frac{1}{2}\left(-\frac{\beta}{z^{2}}-\Phi^{\prime\prime}(z) \right). \tag{18}\]
The eigenvalues of this potential define the hadronic spectrum, and the eigenstates are dual to normalizable hadronic modes.
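When the potential (18) does not admit a closed-form solution, the spectrum is obtained numerically. A minimal finite-difference sketch is shown below: it diagonalizes the Schrodinger operator of eq. (17) on a uniform grid with Dirichlet boundary conditions. The grid size, domain, and the softwall example are our own illustrative choices, not a prescription taken from the models themselves.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def regge_spectrum(V, z_max=20.0, n_points=6000, n_levels=5):
    """Lowest eigenvalues M_n^2 of -u'' + V(z) u = M_n^2 u with u(0) = u(z_max) = 0."""
    z = np.linspace(z_max / n_points, z_max, n_points)
    h = z[1] - z[0]
    diag = 2.0 / h**2 + V(z)                 # main diagonal of the discretized operator
    off = -np.ones(n_points - 1) / h**2      # off-diagonal terms from -u''
    vals, vecs = eigh_tridiagonal(diag, off, select='i',
                                  select_range=(0, n_levels - 1))
    return z, vals, vecs

# Example: vector softwall potential, eq. (34) with beta = -1 and M5^2 R^2 = 0
kappa = 0.388  # GeV, fixed by rho(770)
z, M2, modes = regge_spectrum(lambda z: 3.0 / (4.0 * z**2) + kappa**4 * z**2)
# For this case the analytic trajectory is M_n^2 = 4 kappa^2 (n + 1), eq. (36)
```

The same routine applies to the hardwall, Braga-deformed, and non-quadratic potentials discussed below, as well as to their isospectral deformations.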
With the hadronic spectrum in hand, other significant quantities we can deduce from this holographic model are the hadronic decay constants \(f_{n}\), given in energy units. Decay constants measure the probability of a given hadronic state transitioning into the vacuum. In the language of OPE expansions, decay constants appear as the residues of the multipole expansion of the 2-point function \(\Pi(-q^{2})\). Holographically, they are defined as
\[f_{n}^{2}=\frac{\left(\Delta-S\right)^{2}}{M_{n}^{2}\,\mathcal{K} }\,\lim_{z\to\varepsilon}\,e^{-2\,\Phi(z)-\left(\beta-1\right)A(z)}\left| \frac{u_{n}(z,q)}{z}\right|^{2}, \tag{19}\]
where \(\varepsilon\to 0\) defines the AdS conformal boundary locus.
Up to this point, we have not discussed the role played by the dilaton in this scenario. The dilaton is responsible for inducing the confinement in the model. A direct consequence of confinement is the emergence of hadronic bounded states. In holographic terms, including a dilaton field makes free bulk fields in AdS become normalizable states. These normalizable states are dual to hadrons at the boundary.
The following sections will study the connection between isospectrality and configurational entropy for some of the most known AdS/QCD bottom-up models.
### Dilaton engineering
Isospectral tools define a new holographic potential parametrized by \(\lambda\). At this point, it is worth asking which static dilatons are associated with this family of isospectral potentials [10].
To explore this possibility, we will connect these monoparametric potentials \(\hat{V}_{\lambda}(z)\) with the dilaton as follows
\[\hat{V}_{\lambda}(z)=\frac{M_{5}^{2}\,R^{2}}{z^{2}}-\frac{\beta}{2 \,z^{2}}+\frac{\beta^{2}}{4\,z^{2}}-\beta\,\frac{\tilde{\Phi}^{\prime}}{2\,z} +\frac{1}{4}\,\tilde{\Phi}^{\prime 2}-\frac{1}{2}\,\tilde{\Phi}^{\prime\prime}. \tag{20}\]
Thus, given an isospectral potential \(\hat{V}_{\lambda}(z)\), we can construct the associated dilaton field \(\tilde{\Phi}(z,\lambda)\). This \(\tilde{\Phi}\) field is a deformation of the standard dilaton \(\Phi\), used in the action density (14), that generates the potential \(\hat{V}_{\lambda}(z)\) in a holographic sense.
Methodologically, the dilaton engineering flows directly from the bottom-up modeling: we always start from the given spectrum at the boundary (coming from experimental phenomenology). Thus, this spectrum fixes the choice of geometry or dilaton. In our case, we have a spectrum that defines a monoparametric family of potentials. Therefore, we want to compute the static dilaton associated with each element in the isospectral family.
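In practice, eq. (20) can be read as a first-order equation for \(w=\tilde{\Phi}^{\prime}\) and integrated numerically for each member of the isospectral family. The sketch below is one possible implementation of ours; the initial condition, grid, and solver tolerances are illustrative assumptions, and near \(z\to 0\) the boundary behavior of \(w\) has to be imposed with care.

```python
import numpy as np
from scipy.integrate import solve_ivp

def reconstruct_dilaton(V_hat, z_span, beta=-1.0, m5sq_R2=0.0, w0=1e-6):
    """Integrate eq. (20) for w = dPhi/dz, given a callable isospectral potential V_hat(z).
    Returns the z grid, w(z), and Phi(z) fixed so that Phi vanishes at the starting point."""
    c = m5sq_R2 - beta / 2.0 + beta**2 / 4.0

    def rhs(z, y):
        w, phi = y
        # Phi'' = 2 c / z^2 - beta w / z + w^2 / 2 - 2 V_hat(z), rearranged from eq. (20)
        return [2.0 * c / z**2 - beta * w / z + 0.5 * w**2 - 2.0 * V_hat(z), w]

    sol = solve_ivp(rhs, z_span, [w0, 0.0], rtol=1e-8, atol=1e-10, max_step=0.01)
    return sol.t, sol.y[0], sol.y[1]
```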
## IV Configurational entropy
Configurational entropy (CE) can be loosely defined as the number of different ways (in terms of microstates) in which a given macrostate can be organized. Thus, a larger CE means a higher number of possible microstate arrangements. In thermodynamical terms, this entropy is associated with the work done by a system that changes its spatial configuration without any exchange of energy.

Figure 1: Isospectral ground state wave functions obtained for \(\lambda=0.001\), calculated in the AdS/QCD models discussed here. In the figure, the dashed lines correspond to numerical solutions, compared with the analytical expression given in eq. (11).
In information theory, configurational entropy measures the informational content of physical solutions with respect to their equations of motion (e.o.m.). CE is a logarithmic measure of the spatial complexity of localized solutions with a given energy content; it thus measures the information content of the solutions to a given set of equations of motion. In other terms, CE can be interpreted as the amount of information necessary to describe localized functions, i.e., e.o.m. solutions, with respect to their parameter set. In general, dynamical solutions come from extremizing an action, and CE measures the information available in those solutions.
Configurational entropy for a discrete variable with probabilities \(p_{n}\) is defined from the Shannon entropy as follows [23; 24; 14]
\[S_{C}=-\sum_{n}\,p_{n}\log p_{n}. \tag{21}\]
In the case of continuous variables, we have the _differential configurational entropy_ (DCE) defined as
\[S_{C}\left[f\right]=-\int d^{d}\,k\,\tilde{f}\left(k\right)\,\log\tilde{f} \left(k\right), \tag{22}\]
where \(\tilde{f}\left(k\right)=f\left(k\right)/f\left(k\right)_{\text{Max}}\) defines the _modal fraction_, with \(f(k)_{\text{Max}}\) the maximum value assumed by \(f(k)\). We also require \(f(k)\in\,L^{2}\left(\mathbb{R}^{2}\right)\), i.e., the space of square-integrable functions on the plane, which ensures that \(f(k)\) has a well-defined Fourier transform. Usually, this \(f(k)\) function is associated with the energy density in momentum space, \(\rho(k)\).
In the AdS/CFT context, the holographic approach to configurational entropy in bottom-up and top-down AdS/QCD models was made in [25]. For hadronic states, it was introduced in [26; 27; 28; 29; 30; 24] and references therein. In the context of heavy quarkonium stability, DCE was used as a tool to explore thermal behavior in a colored medium [31], in the presence of magnetic fields [32] or at finite density [33]. In [34], DCE addressed the holographic deconfinement phase transition in bottom-up AdS/QCD. Recently, CE was used to discuss holographic stability in light nuclides in [35].
The holographic dictionary maps the information encoded in the spatial configuration of the boundary particle into the holographic configuration of the dual bulk field. In this sense, the information associated with the arrangement of the constituents inside the hadron is encoded in the energy density of the bulk field. The notion of energy density comes from the pure time component of the energy-momentum tensor: \(\rho(z)\equiv T_{00}(z)\).
As it was described in [35], the standard procedure to compute CE starts from the bulk action (14) by calculating the energy-momentum tensor \(T_{mn}\)
\[T_{mn}=\frac{2}{\sqrt{-g}}\,\frac{\partial\left[\sqrt{-g}\,\mathcal{L}_{ \text{Hadron}}\right]}{\partial\,g^{mn}}, \tag{23}\]
Once we obtain the Schrodinger modes from eq. (17), we transform back to the Sturm-Liouville form and then compute the on-shell energy density as
\[\rho(z)\equiv T_{00}=\frac{e^{-B(z)}}{2}\left(\frac{z}{R}\right)^{3}\times\\ \left\{\left[\frac{1}{\mathcal{K}^{2}}\left(M_{n}^{2}\,\psi_{n}^ {2}+\psi_{n}^{\prime 2}\right)-\frac{M_{5}^{2}\,R^{2}}{z^{2}}\psi_{n}^{2} \right]\right\}\,\Omega, \tag{24}\]
where \(\Omega\) is a factor carrying plane wave and polarization contraction factors, which is irrelevant in the following calculation steps.
Once we have the on-shell energy density, we Fourier-transform it
\[\bar{\rho}(k)=\int_{0}^{\infty}d\,z\,e^{ik\,z}\rho(z) \tag{25}\]
to construct the _modal fraction_ as follows
\[f(k)=\frac{|\bar{\rho}(k)|^{2}}{\int dk|\bar{\rho}(k)|^{2}}. \tag{26}\]
The differential configurational entropy for the holographic hadron is then written as
\[S_{DCE}=-\int dk\,\tilde{f}(k)\log\,\tilde{f}(k) \tag{27}\]
where \(\tilde{f}\left(k\right)=f\left(k\right)/f\left(k\right)_{\text{Max}}\). The next section will discuss the DCE for some AdS/QCD models.
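The recipe of eqs. (25)-(27) is straightforward to implement once \(\rho(z)\) is known on a grid. The following sketch is our own illustrative code: the Gaussian density stands in for the on-shell density of eq. (24), and the momentum grid and its symmetric range are numerical choices rather than part of the definition.

```python
import numpy as np
from scipy.integrate import trapezoid

def configurational_entropy(z, rho, k_max=50.0, n_k=1500):
    """Differential configurational entropy of a density rho(z), following eqs. (25)-(27)."""
    k = np.linspace(-k_max, k_max, n_k)
    rho_bar = trapezoid(np.exp(1j * np.outer(k, z)) * rho, z, axis=1)  # eq. (25)
    f = np.abs(rho_bar)**2
    f /= trapezoid(f, k)                 # modal fraction, eq. (26)
    f_tilde = f / f.max()
    integrand = np.zeros_like(f_tilde)
    mask = f_tilde > 0
    integrand[mask] = f_tilde[mask] * np.log(f_tilde[mask])
    return -trapezoid(integrand, k)      # eq. (27)

# Toy density standing in for the on-shell T_00 of eq. (24)
z = np.linspace(0.0, 20.0, 1500)
rho = np.exp(-(z - 2.0)**2)
S_dce = configurational_entropy(z, rho)
```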
## V Bottom-up AdS/QCD test models
Let us now discuss the ideas developed above in the context of bottom-up AdS/QCD models. We will consider the following static dilaton models: hard wall [11; 12], softwall [13], UV deformed softwall [36], non-quadratic softwall [15], and the UV deformed non-quadratic softwall [37].
### Hard Wall Model
The hardwall model, introduced initially in the context of gluon spectrum [11] and used later to discuss light vector mesons phenomenology in [12], plays with the idea of
inducing confinement by adding a D-brane probe, in the same sense as the square well produces bound states in quantum mechanics. The D-brane locus \(z_{hw}\) is associated with the energy scale \(\Lambda_{QCD}\), used to fix hadron masses as follows
\[z_{hw}=\frac{1}{\Lambda_{QCD}}. \tag{28}\]
Since confinement is imposed by _cutting_ the AdS space, the static dilaton is set to zero in the action density (14).
For hadrons with integer spin \(S\), after a short calculation, we obtain the holographic potential as
\[V_{hw}(z)=\frac{M_{5}^{2}\,R^{2}}{z^{2}}-\frac{\beta}{2\,z^{2}}+\frac{\beta^{ 2}}{4\,z^{2}},\ \ 0\leq z\leq z_{hw}. \tag{29}\]
By solving the Schrodinger-like equation, we obtain the normalizable modes, the mass spectrum, and the decay constants as
\[\phi_{n}(z) = \frac{\sqrt{2}\,\Lambda_{QCD}}{\left|J_{2}\left(\alpha_{1,n} \right)\right|}\,z^{\frac{1-\beta}{2}}\,J_{1}\left(M_{n}\,z\right) \tag{30}\] \[M_{n} = \Lambda_{QCD}\,\alpha_{1,n}\] (31) \[f_{n} = \frac{\sqrt{2}\,\Lambda_{QCD}}{\mathcal{K}\left|J_{2}\left( \alpha_{1,n}\right)\right|} \tag{32}\]
where \(\alpha_{n,m}\) is the \(m\)-th zero of the Bessel function of the first kind \(J_{n}(z)\).
Let us focus on vector mesons, implying \(\beta=-1\) and \(M_{5}^{2}\,R^{2}=0\). We have to adjust the cutoff \(z_{hw}\) in terms of the lightest unflavored vector meson, i.e., the \(\rho(770)\) meson: \(\Lambda_{QCD}=\frac{M_{\rho}}{\alpha_{1,1}}=(4.943)^{-1}\) GeV, with \(M_{\rho}=775.26\pm 0.23\) MeV [38]. For scalar mesons, i.e., \(\beta=-3\), the cutoff is fixed with the lightest unflavored scalar meson.
In the case of the decay constants, the proper value \(\mathcal{K}=2\,\pi\,N_{c}\), with \(N_{c}\) the number of colors, is fixed by the large-\(q^{2}\) expansion of the 2-point function. It is expected that, at \(q^{2}\to\infty\), large-\(N_{c}\) QCD and AdS/QCD should have the same behavior (see [39; 40; 41; 42]).
For large values of \(n\), it is expected that HW decays obey the conformal limit [43]
\[F_{n}\equiv M_{n}\,f_{n}\propto M_{n}^{2}\,\partial_{n}\,M_{n}^{2}\propto n^{ 3/2}\,N_{c} \tag{33}\]
This limit appears both in the large-\(N_{c}\) expansion of the vector 2-point function and in type IIB supergravity backgrounds, since it is a consequence of the inherent conformal nature of this sort of background. When the inclusion of dilaton fields modifies the conformal nature, this conformal limit is no longer expected to hold.
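Equations (30)-(32) make the hardwall trajectory trivial to evaluate; the short snippet below (an illustrative check of ours, using SciPy's Bessel-zero routine) reproduces the vector-meson masses and decay constants fixed by \(\rho(770)\).

```python
import numpy as np
from scipy.special import jn_zeros, jv

alpha_1 = jn_zeros(1, 6)                # first six zeros of J_1
Lambda_QCD = 0.77526 / alpha_1[0]       # GeV, fixed by the rho(770) mass
masses = Lambda_QCD * alpha_1           # M_n = Lambda_QCD * alpha_{1,n}, eq. (31)
K = 2.0 * np.pi * 3.0                   # K = 2 pi N_c with N_c = 3
decays = np.sqrt(2.0) * Lambda_QCD / (K * np.abs(jv(2, alpha_1)))   # eq. (32)
```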
### Softwall Model
The softwall model was introduced in the holographic context of light vector meson spectroscopy to induce confinement through the smooth emergence of bulk bound states [13]. The model produces holographic hadrons by using a confining potential whose high-\(z\) behavior defines the linearity of the squared mass spectrum. The confining part of the potential arises from the static dilaton profile used in the action density. In the original softwall model, the dilaton is quadratic in \(z\), consistent with the generation of linear Regge trajectories at the conformal boundary. Recall that Regge trajectories are a clear signal of confinement in hadron physics.
The lightest hadron in the trajectory fixes the SWM scale \(\kappa\). For the light unflavored mesons, the SWM works quite well in describing Regge trajectories. However, when one moves to the heavy quarkonium realm, linearity starts to fail [15; 37].
The holographic potential in this model has the following structure.
\[V_{sw}(z)=\frac{M_{5}^{2}\,R^{2}}{z^{2}}-\frac{\beta}{2\,z^{2}}+\frac{\beta^{ 2}}{4\,z^{2}}+\kappa^{4}\,z^{2}-\kappa^{2}-\beta\,\kappa^{2}. \tag{34}\]
In general, the Schrodinger-like modes are written in terms of associated Laguerre polynomials as follows:
\[\phi_{n}(z) = \sqrt{\frac{2\,\kappa^{4}\,n!}{\left(n+1\right)!}}\,e^{-\frac{1}{ 2}\kappa^{2}\,z^{2}}\,z^{\frac{3}{2}}\,L_{n}^{1}\left(\kappa^{2}\,z^{2}\right) \tag{35}\] \[M_{n}^{2} = 4\,\kappa^{2}\left(n+\frac{3-\beta}{4}\right)\] (36) \[f_{n}^{2} = \frac{\left(3-\beta\right)^{2}\left(1-\beta^{2}\right)^{2}\, \left(n+1\right)\,\kappa^{2}}{8\,\mathcal{K}^{2}\,\left(3+4\,n-\beta\right)} \tag{37}\]
The model outputs disagree with the phenomenology in the case of decay constants. In the case of the vector SWM, decays are degenerate:
\[f_{n}^{2}=\frac{2\,\kappa^{2}}{\mathcal{K}}. \tag{38}\]
For the scalar SWM, decays increase with the excitation number. Following the same road as with the HWM, we will focus only on vector solutions, i.e., \(\beta=-1\) and \(M_{5}^{2}\,R^{2}=0\). The energy scale is set with the lightest unflavored meson; it is customary to use the \(\rho(770)\) mass to fix the scale as \(\kappa=0.388\) GeV. As in the HWM, the constant \(\mathcal{K}=2\,\pi\,N_{c}\) is fixed by comparison with the large-\(N_{c}\) two-point function at large \(q^{2}\).
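With these closed forms, the vector softwall trajectory and decay constants of eqs. (36)-(38) can be tabulated directly; the snippet below uses the \(\kappa=0.388\) GeV fit quoted above (the number of levels shown is an arbitrary choice of ours).

```python
import numpy as np

kappa = 0.388                       # GeV, fixed by rho(770)
n = np.arange(6)
M2 = 4.0 * kappa**2 * (n + 1.0)     # eq. (36) with beta = -1: linear Regge trajectory
masses = np.sqrt(M2)
K = 2.0 * np.pi * 3.0               # K = 2 pi N_c with N_c = 3
f2 = 2.0 * kappa**2 / K             # eq. (38): degenerate vector decay constants
```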
### Braga deformed Softwall model
Despite the success exhibited by the SWM in describing hadronic spectroscopy, form factors, and other phenomenology, decay constants are not correctly reproduced. Since decay constants depend on the eigenmode low-\(z\) behavior, the dilaton field can be used to overcome this fitting problem. In this landscape emerges the following SWM deformation, where the dilaton is defined as
\[\Phi_{B}(z)=\kappa^{2}\,z^{2}+M\,z\,+\tanh\left(\frac{1}{M\,z}-\frac{\kappa}{ \sqrt{\Gamma}}\right). \tag{39}\]
where \(\kappa\) is an energy scale that controls the high-\(z\) behavior of the holographic potential and the mass spectrum, i.e., the Regge trajectories. The other two scales, \(M\) and \(\Gamma\), modify the UV behavior of the dilaton (and hence of the potential), which fixes the eigenmode low-\(z\) behavior and improves the holographic decay constants: they now decrease with the excitation number, as expected from hadronic physics. However, including the UV term in the dilaton makes it lose precision in the mass spectrum. Thus, modifying the UV region implies that the dilaton field should not be quadratic. This idea is explored in [37].
The holographic potential in this deformed SW, according to the expression (18), has the following structure
Figure 2: Isospectral results for the studied holographic models discussed. In the left upper panel, we plot the family of isospectral potentials \(\hat{V}_{\lambda}(z)\) along with the hardwall potential (dashed). In the right upper panel, we depict the Schrödinger-like groundstates associated with the family of isospectral potentials. We present the isospectral dilatons \(\tilde{\Phi}(z,\lambda)\) calculated from the isospectral family in the lower panel. The D-brane locus is \(z_{hw}=4.949\) GeV\({}^{-1}\). For \(\lambda=\infty\), we took \(\lambda=9999\) as our ”numerical infinite”.
\[V_{BSW}(z)=\frac{M^{2}}{4}+\frac{M_{5}^{2}\,R^{2}}{z^{2}}-\frac{\beta}{2\,z^{2}}-\frac{M\,\beta}{2\,z}+\frac{\beta^{2}}{4\,z^{2}}-\kappa^{2}\\ +M\,\kappa^{2}\,z-\beta^{2}\,\kappa^{2}+\kappa^{4}\,z^{2}-\frac{\operatorname{sech}^{2}\left(\frac{1}{M\,z}-\frac{\kappa}{\sqrt{\Gamma}}\right)}{M\,z^{3}}\\ -\frac{\operatorname{sech}^{2}\left(\frac{1}{M\,z}-\frac{\kappa}{\sqrt{\Gamma}}\right)}{2\,z^{2}}+\frac{\beta\operatorname{sech}^{2}\left(\frac{1}{M\,z}-\frac{\kappa}{\sqrt{\Gamma}}\right)}{2\,M\,z^{3}}\\ -\frac{\kappa^{2}\operatorname{sech}^{2}\left(\frac{1}{M\,z}-\frac{\kappa}{\sqrt{\Gamma}}\right)}{M\,z}+\frac{\operatorname{sech}^{4}\left(\frac{1}{M\,z}-\frac{\kappa}{\sqrt{\Gamma}}\right)}{4\,M^{2}\,z^{4}}\\ +\frac{\operatorname{sech}^{2}\left(\frac{1}{M\,z}-\frac{\kappa}{\sqrt{\Gamma}}\right)\,\tanh\left(\frac{1}{M\,z}-\frac{\kappa}{\sqrt{\Gamma}}\right)}{M\,z^{4}}. \tag{40}\]
Since this potential is not analytical, it has to be solved numerically. Figure 2 depicts this potential with its ground state.
This model was formulated originally for heavy quarkonia. Thus, we will fix the parameter set to fit vector charmonium. For these hadrons, we have \(\kappa=1.2\) GeV, \(M=2.2\) GeV, \(\sqrt{\Gamma}=0.55\) GeV, \(\mathcal{K}=2\,\pi\,N_{c}\), \(\beta=-1\) and \(M_{5}^{2}\,R^{2}=0\). Decay constants for vector charmonium are collected in Table 1, and holographic masses are summarized in Table 2.
### Non-quadratic Softwall model
The non-quadratic SW deformation arises from the non-linear Regge trajectories proposed in the context of Bethe-Salpeter equations and potential models for heavy-light mesons [44; 45]. This idea improves the holographic spectroscopy and opens the possibility of including other non-\(q\bar{q}\) hadrons in a very intuitive form [15].
The model consists of the inclusion of an extra parameter associated with the constituent mass. On WKB grounds, the high-\(z\) behavior of the holographic potential controls the mass spectrum. The fact that the quadratic dilaton works well only for light unflavored hadrons reflects that the SWM is chirally symmetric by construction, yielding poor results in heavy charmonium spectroscopy.
Then, to overcome this issue, a non-quadratic dilaton with the following form
\[\Phi_{NQ}(z)=\left(\kappa\,z\right)^{2-\alpha} \tag{41}\]
allows modeling heavier hadron masses. The energy scale \(\kappa\) (in GeV) controls the hadron mass and is associated with the nature of the strong force inside hadrons. The dimensionless parameter \(\alpha\) controls the effect of the constituent mass on the trajectory: for light unflavored mesons, \(\alpha=0\); for heavier mesons, \(\alpha\to 2/3\) (see [15] for further details). This running of \(\alpha\) with the constituent masses allows us to include other mesonic species, such as heavy-light systems or non-\(q\bar{q}\) states, by parametrizing \(\alpha\) in terms of the quark constituent masses.
The holographic potential has the following form
\[V_{NQSW}(z)=\frac{M_{5}^{2}\,R^{2}}{z^{2}}-\frac{\beta}{2\,z^{2 }}+\frac{\beta^{2}}{4\,z^{2}}+\kappa^{2}\left(\kappa\,z\right)^{2-2\alpha}\\ -\alpha\,\kappa^{2}\left(\kappa\,z\right)^{2-2\,\alpha}+\frac{ \alpha^{2}\,\kappa^{2}\left(\kappa\,z\right)^{2-2\,\alpha}}{4}-\frac{\beta\, \kappa\left(\kappa\,z\right)^{1-\alpha}}{2\,z}\\ +\frac{\alpha\,\beta\,\kappa\left(\kappa\,z\right)^{1-\alpha}}{ 2\,z}-\kappa^{2}\left(\kappa\,z\right)^{-\alpha}\\ +\frac{3\,\alpha\,\kappa^{2}\left(\kappa\,z\right)^{-\alpha}}{2}- \frac{\alpha^{2}\,\kappa^{2}\left(\kappa\,z\right)^{2}}{2}. \tag{42}\]
We will apply this potential to heavy vector charmonium. Thus, the parameters are fixed as \(M_{5}^{2}\,R^{2}=0\), \(\beta=-1\), \(\kappa=2.15\) GeV and \(\alpha=0.54\). This potential is depicted in Figure 2.
Decay constants are summarized in Table 1. Since this model shares the \(z\to 0\) properties with SWM, decay constants are not well fitted. An improvement to the quadratic deformed SWM is a hybrid dilaton that mixes the low-\(z\) behavior of the dilaton field (39) with the non-quadratic high-\(z\) behavior of the dilaton (41), preserving the heavy quarkonia mass spectrum and improving the decay constants, see [37] for details.
## VI Isospectrality a la bottom-up
The recipe presented above allows us to explore other features of bottom-up models. By their very nature, i.e., the dilaton choice, the results of these holographic models are expected to be phenomenologically similar, and most of the differences arise from the dilaton itself. However, the asymptotic behavior is quite different among the non-zero dilaton proposals (SW, BSW, NQSW). Recall that the dilaton asymptotics determines the decay constant calculation, since the value \(\Phi(z\to 0)\) acts as a normalization constant.
In the region \(z\to 0\), all the potentials are dominated by the contribution coming from the AdS geometry. However, in the region \(z\to\infty\), some differences arise: for the SW and BSW models, the potentials behave as \(z^{2}\), while for the NQSW the potential behaves as \(z^{2-\alpha}\) for \(z\to\infty\). These differences are connected to the linearity of the holographic Regge trajectories. In the intermediate region, the story is different, since the dilaton effects are notable in the eigenmodes. In some of the procedures we have analytic solutions, and for those we will illustrate the calculations. The other approaches, the BSW and NQSW models, must be treated numerically from the beginning; for them, we will explain and comment on the procedure and the results.
_For the hardwall model_, let us apply the isospectral technology. The isospectral function \(I(z)\) is defined in this case as
\[I_{hw}(z)=\frac{2\,\Lambda_{QCD}^{2}}{\left|J_{2}(\alpha_{1,n})\right|^{2}}\int _{0}^{z}d\,x\,x^{1-\beta}\,J_{1}\left(M_{n}\,x\right)^{2}. \tag{43}\]
Therefore, the family of isospectral potentials is defined as
\[\hat{V}_{\lambda}(z)=V_{hw}(z)-2\frac{d^{2}}{d\,z^{2}}\log\left[I_{hw}(z)+\lambda\right]. \tag{44}\]
We solve this potential numerically to compute the isospectral family. Figure 2 depicts the potentials and associated ground states. It is interesting to notice that when we take \(\lambda\to\infty\), the isospectral solutions tend to recover the original hardwall solutions. In Table 2, we summarize the numerical calculation of the isospectral mass spectra.
We can compute the associated dilaton field \(\tilde{\Phi}_{\lambda}(z)\) to this isospectral family. Even though the original hardwall dilaton is zero, we have non-zero dilatons associated with the hardwall isospectral family. Figure 2 depicts the collection of isospectral dilatons. These isospectral dilatons tend to recover the original behavior when \(\lambda\to\infty\), i.e., they tend to vanish.
Regarding isospectral decay constants, the ground state decay is the only one modified by the isospectral transformation; decay constants for excited states are not sensitive to the isospectral procedure. When we closely inspect the effect of the \(I(z)\) function on the potential, the \(\frac{d^{2}}{dz^{2}}\log\left[I(z)+\lambda\right]\) term affects only the low-\(z\) region of the holographic potential. For the regions \(z\to\infty\) and \(z\to 0\), \(V(z)\) remains unaffected. Decay constants depend strictly on the \(z\to 0\) behavior of the Schrodinger mode. Thus, it is expected that isospectral and non-isospectral excited states, which were not used to build the isospectral transform (9), share the same \(z\to 0\) scaling behavior.
Also, as was expected from the isospectral transformation (9), the large \(\lambda\) family matches the non-isospectral case. Table 1 summarizes the decay constants spectra for the different isospectral families considered.
This feature is also consistent with holography: the isospectral transformation is not expected to change the field/operator duality, since it is written purely in terms of bulk information. Thus, to be consistent, bulk information should not change boundary information. In other words, hadronic identity is preserved under isospectral transformations such as the expression (9). The same behavior observed in the HWM for decay constants holds in the other bottom-up holographic models revised in this manuscript.
It is worth mentioning that in light-front holography, these isospectral procedures coming from supersymmetry have also been implemented. Stanley Brodsky and collaborators discussed using the SUSY algebra to accommodate mesons, baryons, and tetraquarks in a supermultiplet; see [46; 47; 48; 49] and references therein. In these works, supersymmetry is manifest, and the different particle spectra are not strictly isospectral but differ by the exclusion of the ground state. We remark that in the present analysis, there is no supersymmetry! Here, the supersymmetry algebra is used to build the isospectral procedure in the same way that we use complex exponentials to build the real trajectories of the harmonic oscillator.
_Let us turn our attention to the softwall model._ This is the only model with an analytical solution for the isospectral transformation, written in terms of the incomplete Gamma function \(\Gamma(a,z)\). Let us show this, starting from the ground state
\[\phi_{0}(z)=\sqrt{2\,\kappa^{4}}\,z^{3/2}\,e^{-\frac{1}{2}\,\kappa^{2}\,z^{2}}. \tag{45}\]
Then, the isospectral transformation takes the following form
\[I_{SW}(z)=2\,\kappa^{4}\,\int_{0}^{z}dx\,x^{3}\,e^{-\kappa^{2}\,x^{2}}=1-\Gamma\left(2,\kappa^{2}\,z^{2}\right), \tag{46}\]
that is written using the incomplete Gamma function. We can perform this calculation to prove that the expression above is independent of the hadronic spin. The monoparametric isospectral potential is now written as
\[\hat{V}_{SW}(z)=V_{SW}(z)\\ -2\,\frac{d^{2}}{d\,z^{2}}\,\log\left[1-\Gamma\left(2,\kappa^{2} \,z^{2}\right)+\lambda\right]. \tag{47}\]
The family of isospectral potentials for the vector softwall model is depicted in Figure 2, along with the dilaton and isospectral ground states. The numerical masses are summarized in Table 3.
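The closed form (46) is easy to verify; in the short check below (our own code), the cumulative integral of the ground-state density is compared with SciPy's regularized lower incomplete Gamma function, which for \(a=2\) coincides with \(1-\Gamma(2,\kappa^{2}z^{2})\) because \(\Gamma(2)=1\).

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.special import gammainc

kappa = 0.388
z = np.linspace(1e-4, 15.0, 5000)
phi0_sq = 2.0 * kappa**4 * z**3 * np.exp(-kappa**2 * z**2)    # ground-state density from eq. (45)
I_numeric = cumulative_trapezoid(phi0_sq, z, initial=0.0)     # eq. (9)
I_closed = gammainc(2.0, kappa**2 * z**2)                     # 1 - Gamma(2, kappa^2 z^2), eq. (46)
max_error = np.max(np.abs(I_numeric - I_closed))              # limited only by the integration accuracy
```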
Compared with the HW, the SW model has the same isospectral behavior (though not the same holographic phenomenology). For \(z\to 0\) and \(z\to\infty\), the isospectral potential behaves as its SW counterpart. However, in the intermediate region, the isospectral potential behaves differently. As in the HW model case, \(I_{SW}(z)\) strongly modifies the ground state slope, leaving the excited ones unaltered. Proof of this statement is the behavior of the decay constants: for vector mesons, isospectral transformations break the degeneracy only for the ground state, while excited states remain degenerate. Isospectral decay constants are summarized in Table 1.
For the dilaton reconstruction, the isospectral dilaton tends to zero when \(z\to 0\). In the high-\(z\) limit, the isospectral dilaton behaves quadratically with the \(z\)-coordinate (see Figure 2). As in the HW model, the
isospectral dilaton tends to its non-isospectral counterpart at the bulk extrema. We expect the same behavior for all the deformations of the quadratic dilaton. Now we will explore these scenarios.
From this point on, the isospectral calculations are entirely numerical. Thus, we can summarize the isospectral approach for the Braga and non-quadratic dilaton deformations as follows: start from the vector meson ground state (for both models, we consider the \(J/\psi\) trajectory); from the Schrodinger bulk mode associated with the \(J/\psi\), compute the isospectral transformation (9) and the isospectral potential (10); next, solve the family of isospectral potentials and compute masses, decays, and isospectral Schrodinger modes.
_Let us consider the Braga deformed SWM_. In Figure 2, we plot the isospectral family related to this holographic potential; see equation (40). As in the previous cases, the isospectral transformation modifies the low-\(z\) potential behavior. For the excited states, the eigenfunctions in the family have the same slope in the limits \(z\to 0\) and \(z\to\infty\). The decay constants spectrum supports this observation (see Table 1), where only the ground state decay is modified.
A similar situation occurs with the dilaton field. Despite the original idea of using this deformation to improve the SWM decay constants, isospectrality modifies the ground state only, leaving the higher-\(n\) decay constants unperturbed. The most significant modifications are observed in the low-\(z\) region, where the dilaton has a pronounced deformation for \(\lambda\to 0\), as in the other AdS/QCD models.
_For the non-quadratic SW, the story is not so different_. In Figure 2, we plot the isospectral family of potentials, ground states, and isospectral dilatons for this deformed model. The isospectral transformation modifies only the low-\(z\) behavior of the dilaton fields. In the limits \(z\to 0\) and \(z\to\infty\), the isospectral dilatons behave as their non-isospectral counterpart: vanishing for \(z\to 0\) and growing as \(z^{2-\alpha}\) for \(z\to\infty\).
For the Schrodinger modes, decay constants are modified only for the isospectral ground states. As in the previous cases, the isospectral transformation does not affect the excited modes' decay constants, supporting the statement that the isospectral transformation does not affect the low-\(z\) behavior of excited states. Table 1 presents the summary of isospectral decay constants.
At this point in the discussion, it is worth mentioning that the observed behavior of the isospectral decay constants is common to the AdS/QCD models considered here and can be generalized to any other SW-like, i.e., dilaton-based, model. As a matter of fact, dilaton derivatives control the high-\(z\) asymptotics of the holographic potential, while the low-\(z\) region is dominated by the AdS Poincare patch warp factor, which is unaffected by the isospectral transformation.
As a general conclusion, all of the holographic quantities calculated with the isospectral states tend to their non-isospectral counterpart when \(\lambda\to\infty\).
## VII Differential configurational entropy and isospectrality
Now, let us focus on the configurational entropy in this isospectral and holographic landscape. Differential configurational entropy becomes handy to analyze the connection between isospectrality and stability for these models at zero temperature. In the last section, we saw how the isospectrality modifies the dilaton behavior. For effective model engineering, isospectrality can open new possibilities to encode (or decode) holographic information.
As we did in the last section, we will focus on the vector meson case for the DCE calculation, i.e., \(\beta=-1\). For scalar mesons, the calculation is straightforward. The idea is to compute the DCE for the ground state of each potential in the isospectral family, labeled by \(\lambda\).
The recipe for the DCE computation is standard. As explained in section IV, we start from the bulk normalizable mode. We compute the on-shell energy-momentum tensor and extract \(T_{00}\), the energy density \(\rho\); see eq. (24). Then, we Fourier transform \(\rho(z)\) to define the modal fraction and compute the configurational entropy using the Shannon expression. For all the models discussed here, the calculations are strictly numerical. We will focus on discussing the results rather than the computational procedure.
Our goal is to seek a connection between stability and isospectrality under the hypothesis that _the most stable models are those with \(\lambda\to\infty\)_; we will confront this assessment with the DCE analysis. A more general analysis at finite temperature and finite chemical potential requires a different thermodynamical criterion, since thermal effects will contribute to the entropy calculation.
To confront the hypothesis, we will analyze two scenarios. First, we will consider the DCE behavior of the ground states in each isospectral family. Second, we will compute the DCE for the excited states. We expect ground states to have the smallest DCE. These analyses will lead us to define a criterion for _which isospectral family is suitable to describe holographic properties_, at least from the isospectral point of view. A summary of our outcomes is given in Table 2 and Figure 3. We use the non-isospectral DCE as control data, since we expect the isospectral DCE to meet the non-isospectral one at \(\lambda\to\infty\).
For the first test, the behavior of all four models is consistently the same. The \(\lambda\to\infty\) ground mode does not reach the lowest DCE value. We observed that when \(\lambda\to 0\), the DCE of the ground states tends to increase. It then reaches a minimum around \(\lambda\approx 1\) and finally goes asymptotically to the non-isospectral DCE, the \(\lambda\to\infty\) case. This behavior is more dramatic in the HWM case. For the other models, the dilaton at hand softens this behavior.
On purely entropic grounds, it was expected that, among the isospectral family, the ground state with the lowest CE should be preferred. This is not strictly the case in all the studied models. Thus, we conclude for the first test that _on a DCE basis, any isospectral family with \(\lambda>1\) will lead to consistent results for hadronic spectroscopy_. It is worth mentioning that most of the isospectral effect relies on the ground state. Any hadronic property defined in terms of bulk field derivatives will remain essentially unaltered for excited states. However, properties that depend on the bulk mode itself, such as form factors or mesonic wave functions, can be modified by isospectrality.
In the case of the second test, when \(\lambda\to 0\), aside from the ground state, excited states increase their DCE with the excitation number \(n\). We softened the interpolation in Figure 3 to see the overall DCE behavior. For all the models, we see that \(0<\lambda<1\) defines an exclusion region for isospectral holographic models. It is worth noticing the difference between the HW and SW-like models. The DCE for the isospectral HW shows that the most favorable solution is \(\lambda=1\), while the DCE for \(\lambda\to 0\) shows the model is unstable. When the dilaton field comes into play, the DCE has a particular and fascinating behavior: isospectrality seems to affect the ground state more than the excited ones. Excited states have the same DCE regardless of the isospectral family, since the bulk mode and the mass are similar. Recall that bulk modes and the mass spectrum are the main ingredients of \(\rho(z)\). This observation is supported by the fact that decay constants, which depend on the mode derivative near the boundary, are equivalent. This invariance of the excited modes under the isospectral transformation is not expected on the isospectral side.
Our analysis leads us to conclude that isospectral holographic models that aim to have phenomenologically appropriate behavior, i.e., consistent with the experimental hadronic spectroscopy, should have \(\lambda>1\). Thus, we can define an isospectral holographic frame for hadronic physics.
## VIII Conclusions
This work discusses how isospectrality, differential configurational entropy, and AdS/QCD models are connected. Ref. [20] summarizes the machinery used to build supersymmetric partner potentials from a given quantum-mechanical potential. However, this formulation raises a natural question when one has two potentials belonging to the same isospectral family: which one should be preferred when modeling a given phenomenology? Suppose one of them is the Coulomb potential; in that particular example, the potential structure comes from well-known and deeply tested electromagnetic phenomenology.
In holographic QCD, effective models have more phenomenological "freedom," allowing the existence of many proposals. At this point, isospectrality adds more ingredients to the story. Any isospectral potential, with a non-specified \(\lambda\), can generate an AdS/QCD model, according to the master equation (20). In this context, the above question finds new soil to grow in: among the possible AdS/QCD models we can develop, which ones can adequately describe hadronic spectroscopy? The answer comes in terms of the configurational entropy. CE is associated with the entropy observed in isothermal systems on pure thermodynamic grounds, implying it is connected with their inner statistical configuration.

Figure 3: DCE for the mass spectra in each isospectral family. Isospectrality breaks stability in these models. For \(\lambda\to 0\), ground states are more unstable than excited states, implying that these isospectral families lead to inconsistent holographic results compared to boundary data supported by experiments. When \(\lambda\to 1\), ground states become more stable than excited modes. Thus, a good isospectral model, in terms of DCE, should have \(\lambda>1\).
On the other hand, at the holographic level, hadrons in these AdS/QCD models are described as _bags_ containing constituents. These bags are characterized by the bulk mass that defines the hadronic identity. The strong interaction between constituents in the bag is captured by the mechanism used to break the AdS scaling invariance softly. The discriminant is the nature of the spectrum, i.e., whether it is linear or not.
It is possible to define a region where a model is spectroscopically consistent using isospectrality and DCE. In the extrema, i.e., \(z\to 0\) and \(z\to\infty\), the potentials are controlled by the Poincare warp factor and the dilaton, respectively. The richness of the dilaton technology appears in the intermediate region, leading to improved decay constants or form factors [50; 51; 52; 53; 54; 42].
Despite this apparent freedom, isospectrality and DCE define a clear path: a proposed holographic model based on dilaton fields, intended to capture hadronic spectroscopy, should have a decreasing DCE with the excitation number, which is equivalent to saying that it has an isospectral parameter \(\lambda>1\).
|
2303.14562
|
Resolution Complete In-Place Object Retrieval given Known Object Models
|
This work proposes a robot task planning framework for retrieving a target
object in a confined workspace among multiple stacked objects that obstruct the
target. The robot can use prehensile picking and in-workspace placing actions.
The method assumes access to 3D models for the visible objects in the scene.
The key contribution is in achieving desirable properties, i.e., to provide (a)
safety, by avoiding collisions with sensed obstacles, objects, and occluded
regions, and (b) resolution completeness (RC) - or probabilistic completeness
(PC) depending on implementation - which indicates a solution will be
eventually found (if it exists) as the resolution of algorithmic parameters
increases. A heuristic variant of the basic RC algorithm is also proposed to
solve the task more efficiently while retaining the desirable properties.
Simulation results compare using random picking and placing operations against
the basic RC algorithm that reasons about object dependency as well as its
heuristic variant. The success rate is higher for the RC approaches given the
same amount of time. The heuristic variant is able to solve the problem even
more efficiently than the basic approach. The integration of the RC algorithm
with perception, where an RGB-D sensor detects the objects as they are being
moved, enables real robot demonstrations of safely retrieving target objects
from a cluttered shelf.
|
Daniel Nakhimovich, Yinglong Miao, Kostas E. Bekris
|
2023-03-25T21:08:09Z
|
http://arxiv.org/abs/2303.14562v1
|
# Resolution Complete In-Place Object Retrieval given Known Object Models
###### Abstract
This work proposes a robot task planning framework for retrieving a target object in a confined workspace among multiple stacked objects that obstruct the target. The robot can use prehensile picking and in-workspace placing actions. The method assumes access to 3D models for the visible objects in the scene. The key contribution is in achieving desirable properties, i.e., to provide (a) safety, by avoiding collisions with sensed obstacles, objects, and occluded regions, and (b) resolution completeness (RC) - or probabilistic completeness (PC) depending on implementation - which indicates a solution will be eventually found (if it exists) as the resolution of algorithmic parameters increases. A heuristic variant of the basic RC algorithm is also proposed to solve the task more efficiently while retaining the desirable properties. Simulation results compare using random picking and placing operations against the basic RC algorithm that reasons about object dependency as well as its heuristic variant. The success rate is higher for the RC approaches given the same amount of time. The heuristic variant is able to solve the problem even more efficiently than the basic approach. The integration of the RC algorithm with perception, where an RGB-D sensor detects the objects as they are being moved, enables real robot demonstrations of safely retrieving target objects from a cluttered shelf.
## I Introduction
Robotic manipulation has the potential of being integrated into the daily lives of people, such as in household service areas [1, 2]. A useful skill for such household settings involves the retrieval of a target object from a confined and cluttered workspace, such as a fridge or a shelf, which may also require the rearrangement of other objects in the process. In this context, it is important to consider how to safely retrieve objects while minimizing the time spent or the number of pick-and-place operations, so as to assist humans efficiently.
One of the challenging aspects of these problems that requires explicit reasoning relates to heavy occlusions in the scene, as the sensor is often mounted on the robot and has limited visibility. These visibility constraints complicate the task planning process, as rearranging one object can limit placements for others and can introduce new occlusions. Moreover, real-world scenes in household setups are often unstructured and involve objects with complex spatial relationships, such as objects stacked on each other.
Many previous efforts on object retrieval have focused on cases where blocking objects are extracted from the workspace [3, 4], which simplifies the challenge as it does not require identifying temporary placement locations for the objects within the confined space. In-place rearrangement has been considered in some prior efforts [5]. While this prior method is efficient, it is not complete as it limits the reasoning on the largest object in the scene to analyze object traversability [4]. Alternatives use machine learning to guide the decision making [6, 7, 8], which is an exciting direction but does not easily allow for performance guarantees, such as resolution completeness. Setups where object stacking arise have received less attention and most solutions that do consider stacking are dependent on machine learning for reasoning [9, 10, 11]. Some works have proposed testbeds [12] that can help evaluate solutions in this domain.
This work focuses on object retrieval in clutter where occlusions arise and objects may be initially stacked under the assumption of known object models (e.g. Figure 2). It aims at a theoretical understanding to show the algorithm has safety and RC guarantees. A heuristic variant improves the practical efficiency. Key features of the proposed RC framework are the following:
Fig. 1: (Top) Setup for the real demonstration using an RGB-D sensor, robotic gripper, and Yaskawa Motoman robot to retrieve the target bottle. (Bottom Left) The camera view in which objects are occluded. (Bottom Right) The corresponding voxel map.

* it employs an adaptive dependency graph data structure inspired by solutions in object rearrangement with performance guarantees [13] that expresses a larger variety of object relationships than previously considered (namely occlusion dependencies);
* it computes the occlusion volume of detected objects as a heuristic to inform the planning process;
* it reasons about the collision-free placement of objects in the confined workspace efficiently by utilizing a voxelized representation of the space;
* it achieves RC (or PC) depending on the implementation of the underlying sampling subroutines;
* it provides an early termination criterion when a solution cannot be found for the given resolution.
Simulation results, using a model of a Yaskawa Motoman manipulator for rearranging objects on a tabletop as shown in Fig. 1, evaluate the proposed RC framework against a baseline using random picking and placing operations. Both variants of the RC framework reason about object dependencies: one does not use heuristics, while the other is heuristically guided and retains RC. Both of the RC approaches outperform the baseline, and the heuristically guided solution is able to solve the problem more efficiently than the basic RC solution. The integration of the proposed approach with perception, where an RGB-D sensor detects the objects as they are being moved, provides real robot demonstrations of safe object retrieval from a cluttered shelf.
## II Related Work
Some works on object retrieval rely on geometric analysis of object occlusion [3, 4]. They provide theoretical insights but frequently do not limit actions to in-place rearrangement of blocking objects. Specifically, one method constructs a dependency graph taking into account objects that jointly occlude a region and objects that block others [3]. The occlusion volume is used to estimate belief regarding the target object position and helps to construct an optimal A* algorithm. An alternative constructs a Traversability graph (T-graph) [4], where the edges encode if the largest object in the scene can be moved between two poses. It then constructs an algorithm to extract the target object, but is limited as the traversability edges are too constraining. The POMDP formulation is popular for the task [14, 15], which allows the application of general POMDP solvers. The POMDP formulation was also adopted by the work that formalizes object retrieval in unstructured scenes as "mechanical search" [16].
Alternatives rely on learning-based methods to solve such challenges, such as reinforcement learning [6] or target belief prediction [7, 9, 17]. They report good performance but do not provide theoretical guarantees given the black-box nature of the solutions. In particular, a reinforcement learning solution [6] uses the rendered top-down projections of the scene to predict the target poses. A recent follow-up effort [7] on previous work [17] estimates the 1D position belief of the target object on the shelf via machine learning. It then constructs a policy based on the distribution change after applying pushing and suction actions. It incorporates stacking and unstacking actions, where object stacking is represented by a tree structure. Other works such as [18] utilize learning for planning grasps to greedily empty bins of complex and novel objects.
A related work [19] proposes a complete framework to safely reconstruct all objects in the scene amidst object occlusions. Nevertheless, object retrieval may not require reconstructing all objects and requires a search procedure that is more task-driven for efficiency. There are also previous works [20, 19] that construct a voxelization of the environment to model object occlusions, similar to the current work. This representation is used to compute an object's occlusion volume, which provides heuristic guidance. Object spatial relationships are often represented by scene graphs [10, 11], or implicitly in machine learning solutions [16, 21, 22].
What stands out in this work is that it proposes a general template for a RC or PC approach to task retrieval in occluded environments that only relies on basic motion and perception primitives. This modular nature allows for quick sim-to-real transfer and passive performance improvement as the primitives are improved over time. Furthermore, an efficient implementation is demonstrated utilizing a voxelized representation of the environment for quick collision filtering of object placements as well as providing an effective heuristic to rank object manipulations. Thus, the framework enables effective in-workspace manipulation.
## III Problem Statement
Consider an environment with a set \(\mathsf{S}=\{o_{1},...,o_{n}\}\subset\mathsf{O}\) of \(n\) objects for which there are available 3D models. The objects are stably resting on a support surface. Objects are allowed to be initially stacked and occlude each other from the camera view.
The robot has one fixed RGB-D sensor at its disposal. Discovered objects are those recognized given the observation history. An object is assumed to be recognized once an image segmentation process identifies it as an individual object in the observed image. Similarly, a perception method for detecting the target object once observed is assumed. The region of the workspace occluded by object \(o_{i}\) at pose \(s_{i}\) is denoted as \(O_{i}(s_{i})\). Similarly, the space uniquely occluded by object \(o_{i}\) at pose \(s_{i}\), called the direct occlusion space, is denoted as \(\tilde{O}_{i}(s_{i})\). The proposed algorithm gradually removes occlusions and recognizes objects. A motion planner is used to plan pick-and-place actions.
While objects can start out stacked, no reasoning about stability and the ability to re-stack objects is considered. Thus, once an action to pick up a stacked object is taken, that object will only be placed on the
ground surface. For further assumptions and required properties of the motion planner see section V.
The objective for the object retrieval task is to determine a sequence of pick and place actions in order to discover and subsequently retrieve the target object; the target need not be directly visible or pickable from the robot's sensors. The corresponding solution should provide desirable guarantees: (a) safety, by avoiding collisions with sensed obstacles and objects as well as occluded regions, and (b) resolution completeness (RC) - or alternatively probabilistic completeness, depending on the implementation of the underlying motion planner, grasping process and object placement sampling. The optimization objective is to minimize the number of performed actions until the target object is retrieved.
## IV Method
The proposed pipeline is detailed in Algorithm 1. First, a voxelized representation of the scene and a dependency graph are computed (lines 3, 4, 5); these are detailed in subsection IV-A. The dependency graph contains a belief state of the current scene based on visibility and reachability constraints. All the _sinks_ of this directed graph represent likely pickable objects. The _ranks_ are described later (see subsection IV-C). If the target object is pickable, then it is retrieved and the pipeline terminates (line 6). Otherwise, a placement is planned (see subsection IV-B) one object at a time, in a shuffled order biased by the _ranks_ (_TryMoveOne_, line 7). The first successful plan is executed.
If no placement is found for any of the pickable objects, then a fallback procedure is called to try and pick one object and move it temporarily out of view of the camera; the first successful plan is executed, the scene is re-sensed, and a new placement is sampled for the object or the object is simply put back at the same spot (_MoveOrPlaceback_ line 10). At this stage, the pipeline could restart if a new object was discovered (lines 12-15). Otherwise the same set of pickable objects are tested for new placements in the scene (line 16). If these two operations of "moving an object to look behind it"(_MoveOrPlaceback_) followed by retrying to "move one of the pickable objects to a new spot"(_TryMoveOne_) fail for all objects, then the pipeline can return and report failure for the current resolution (line 22). If the sequence of operations succeeds, then the pipeline can restart (line 19 \(\rightarrow\) 2).
### _Voxel Map and Dependency Graph_
A 3D occlusion voxel grid within the workspace is constructed from RGB-D images. First the point cloud (in world frame) is generated using the RGB-D image and the inverse of the camera projection. These points are down-sampled into a voxelization of the scene. Given segmentation image information, each object is associated with a portion of the voxel grid. Object geometry is used to label voxels as occupied and remaining associated voxels as occluded. The occluded regions of objects may intersect when jointly occluded.
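As a rough illustration of this step, the sketch below back-projects a depth image into a point cloud, bins it into a voxel grid, and marks the voxels behind each segmented object's observed surface as occluded by that object. Function and variable names (`backproject`, `occlusion_from_depth`, the fixed ray-marching resolution, the camera-frame grid) are illustrative assumptions, not the authors' implementation, which works in the world frame and uses the full object geometry.

```python
import numpy as np

def backproject(depth, K):
    """Back-project a depth image (H x W, metres) into camera-frame 3D points."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

def voxelize(points, origin, voxel_size, dims):
    """Bin 3D points into a boolean occupancy grid of shape `dims`."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    keep = np.all((idx >= 0) & (idx < np.asarray(dims)), axis=1)
    grid = np.zeros(dims, dtype=bool)
    grid[tuple(idx[keep].T)] = True
    return grid

def occlusion_from_depth(depth, seg, K, origin, voxel_size, dims, far=2.0, steps=50):
    """For each segmented object id, mark the voxels that lie behind its observed
    surface along the camera rays, i.e. the region the object hides from the camera."""
    occluded = {}
    t = np.linspace(0.0, 1.0, steps)[None, :, None]           # ray-march parameter
    for obj_id in np.unique(seg[seg > 0]):
        mask = ((seg == obj_id) & (depth > 0)).reshape(-1)
        pts = backproject(depth, K)[mask]                      # observed surface hits
        rays = pts / pts[:, 2:3]                               # unit-depth camera rays
        z = pts[:, 2:3][:, None, :] + t * (far - pts[:, 2:3])[:, None, :]
        shadow = (rays[:, None, :] * z).reshape(-1, 3)         # samples behind the surface
        occluded[obj_id] = voxelize(shadow, origin, voxel_size, dims)
    return occluded
```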
The dependency graph is a directed graph where each node represents a visible (or target) object and a labeled edge \((o_{i},o_{j},r)\) represents a relation \(r\) from object \(o_{i}\) to \(o_{j}\) that necessitates \(o_{j}\) to be picked and placed before \(o_{i}\) could be picked. Valid relations in this work include "below", "grasp blocked by", and - for the prediction of the target object - "hidden by". See Figure 2 for a sequence of such dependency graphs generated during an example experiment.
```
1:  failure ← false
2:  while failure = false do
3:      space ← UpdateVoxelsFromImage()
4:      dg ← DepGraph(space)
5:      sinks, ranks ← RankSinks(target, dg)
6:      if target ∈ sinks then break
7:      if TryMoveOne(sinks, ranks) = false then
8:          failure ← true
9:          for sink ∈ sinks do
10:             if MoveOrPlaceback(sink) = false then
11:                 continue
12:             space ← UpdateVoxelsFromImage()
13:             if DidDiscoverObject(space) then
14:                 failure ← false
15:                 break
16:             if TryMoveOne(sinks, ∅) = false then
17:                 continue
18:             failure ← false
19:             break
20: if not failure then
21:     Retrieve(target)
22: return failure
```
**Algorithm 1** RC_Pipeline(target)
Object x is defined to be "below" object y (\(x\xrightarrow{below}y\)) if object x touches object y and the z-coordinate of the center of mass of x is less than that of y. Note that this isn't guaranteed to capture all intuitive cases of one object being below another for non-convex objects. This relation is computed using object models and poses given by the perception system.
Object x has its "grasp blocked by" y (\(x\xrightarrow{blocked}y\)) if there is no collision-free grasp pose for object x and the arm is in collision with y for one or more of the sampled grasp poses. Grasp poses for discovered objects are sampled and tested via inverse kinematics (IK). If there exists a collision-free grasp for an object, no blocking edges are added; otherwise, an edge is added to each object that the arm collides with for some sampled grasp. Note that although each such edge corresponds to a genuine grasp-blocking relation, it does not capture all such
reachability dependencies. This however is not an issue for completeness as Algorithm 1 will eventually try to grasp all objects in the case of motion planning failure.
The target object t is possibly "hidden by" x (\(t\xrightarrow{hidden}x\)) if the target isn't sensed in the scene and object x is touching the table. This relation is used to keep track of the belief state of where the target is. Each edge is assigned a probability based on the volume of the occluded space behind object x (see subsection IV-C).
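A compact sketch of how these three relations could be assembled into the labeled dependency graph is given below. The helper predicates (`touches`, `com_z`, `blocked_frac`, `occlusion_volume`, `on_table`) stand in for the geometric and IK primitives described above; their names, and the exact form of the "hidden by" weight (here simply proportional to each stack's directly occluded volume), are illustrative assumptions rather than the paper's code.

```python
import networkx as nx

def build_dependency_graph(objects, target_id, touches, com_z,
                           blocked_frac, occlusion_volume, on_table):
    """Assemble the directed dependency graph with labeled, weighted edges.
    An edge (x, y) means y must be picked and placed before x can be picked."""
    dg = nx.DiGraph()
    dg.add_nodes_from(objects + [target_id])

    # "below": x touches y and x's center of mass is lower than y's.
    for x in objects:
        for y in objects:
            if x != y and touches(x, y) and com_z[x] < com_z[y]:
                dg.add_edge(x, y, relation="below", p=1.0)

    # "grasp blocked by": blocked_frac(x) maps each blocking object to the
    # fraction of sampled grasps of x whose arm pose collides with it; it is
    # assumed empty whenever x already has a collision-free grasp.
    for x in objects:
        for y, frac in blocked_frac(x).items():
            dg.add_edge(x, y, relation="blocked", p=frac)

    # "hidden by": the undiscovered target may be behind any stack touching
    # the table, weighted here by that stack's directly occluded volume.
    stacks = [x for x in objects if on_table(x)]
    total = sum(occlusion_volume(x) for x in stacks) or 1.0
    for x in stacks:
        dg.add_edge(target_id, x, relation="hidden",
                    p=occlusion_volume(x) / total)
    return dg
```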
### _Placement Sampling_
A valid placement is one that doesn't collide with another object or any undiscovered area of the workspace. Instead of randomly sampling x, y coordinates for an object and checking for collision at that point, we create a grid matching the horizontal extents of the workspace and add a collision mask, which is the shadow of the object occupancy and occlusion voxel grid seen from a bird's-eye view. This mask is then convolved with the shadow of the object that is to be placed. Pixel indices where the convolution is zero indicate collision-free placements and are converted to world coordinates. Object orientations can be enumerated by rotating the object shadow.
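The convolution-based placement test can be sketched in a few lines. The function name `sample_placements`, the workspace-boundary handling (omitted), and the sampling strategy are illustrative assumptions; the paper additionally enumerates object orientations by rotating the footprint.

```python
import numpy as np
from scipy.signal import fftconvolve

def sample_placements(collision_mask, footprint, rng, max_samples=10):
    """collision_mask: top-down grid, 1 = occupied or occluded, 0 = free.
    footprint: binary top-down shadow of the object to be placed.
    Returns up to `max_samples` grid cells where the footprint overlaps
    no obstacle (convolution result ~ 0)."""
    # flip the kernel so the FFT convolution acts as a correlation
    overlap = fftconvolve(collision_mask.astype(float),
                          footprint[::-1, ::-1].astype(float), mode="same")
    free = np.argwhere(overlap < 0.5)          # zero overlap, up to FFT noise
    if len(free) == 0:
        return []
    pick = rng.choice(len(free), size=min(max_samples, len(free)), replace=False)
    return [tuple(free[i]) for i in pick]      # grid indices; convert to world coords downstream

# Example: place a 3x3 object in a 20x20 workspace with one obstacle block.
rng = np.random.default_rng(0)
mask = np.zeros((20, 20)); mask[5:12, 5:12] = 1
print(sample_placements(mask, np.ones((3, 3)), rng))
```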
### _Target Object Prediction_
Intelligent object location prediction is achieved by applying a heuristic which ranks the pickable objects determined by the dependency graph (line 4 in Algorithm 1). This is done by augmenting the dependency graph edges with a weight \(p\in(0,1]\) estimating the probability for the relation \(o_{i}\xrightarrow{r}o_{j}\) to be true. To rank a pickable object, the sum of products of edge weights of all simple paths between the target and the object is computed. When sampling from the list of pickable objects, this rank is used as a probability weight.
For the "below" relation \(p=1\) since object segmentation is assumed to be reliable. For the "grasp blocked by" relation p is equal to the fraction of total sampled grasps for which that object is in collision with the arm. Note, the weights of grasp blocking edges coming out of any object need not add to one (hence don't truly represent probabilities) since the arm could be colliding with multiple objects for any single grasp. For the "hidden by" relation, the goal is to encourage knowledge gain of the environment. This is done by normalizing the volume of the direct occlusion region of each stack of objects and assigning the inverse as the probability estimate that the target is hidden behind each stack. This heuristic biases the pipeline towards discovering large volumes of occluded workspace.
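A sketch of the ranking step follows: it sums, over all simple target-to-sink paths in the dependency graph, the product of the edge weights, and uses the results as sampling weights. Names (`rank_sinks`, the `p` edge attribute) follow the hypothetical graph-construction sketch above; the small epsilon keeping zero-rank sinks sampleable is an added assumption so that every action retains positive probability, as the completeness argument in Section V requires.

```python
import networkx as nx

def rank_sinks(dg, target_id, sinks, eps=1e-6):
    """Rank each pickable object (graph sink) by the sum over all simple
    target-to-sink paths of the product of edge probabilities."""
    ranks = {}
    for s in sinks:
        total = 0.0
        if s != target_id and nx.has_path(dg, target_id, s):
            for path in nx.all_simple_paths(dg, source=target_id, target=s):
                p = 1.0
                for u, v in zip(path[:-1], path[1:]):
                    p *= dg[u][v]["p"]
                total += p
        ranks[s] = total + eps          # keep zero-rank sinks sampleable
    z = sum(ranks.values())
    weights = {s: r / z for s, r in ranks.items()}   # biased sampling weights
    return ranks, weights
```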
From an algorithmic point of view, there is technically no reason to normalize the output of the heuristic. However, representing the heuristics as probabilities is insightful since - without prior knowledge - the probability of the hidden object being in a larger volume is larger than the probability of it being in a smaller volume. Furthermore, modeling the dependency graph edges with probabilities, as opposed to non-normalized weights, is conducive to future work which might seek to combine the probabilities based on the proposed volumetric heuristics with priors based on the semantics of the objects involved or on additional human instruction (see Section VII).

Fig. 2: Images (1)-(6) show a simulated experiment from initial configuration to final, one action at a time, with corresponding camera views in the top left. The dependency graphs generated when transitioning between images (1)-(2), (2)-(3), (3)-(4), and (4)-(5) are shown in images (a)-(d). The colors of the nodes correspond to the objects in the scene, and the red object (labeled T) is the target object for the trial. The last graph, between images (5)-(6), is not shown since it is trivial, having no dependencies between any objects.
## V Resolution Completeness
Given a (formally) complete motion planner (one that finds a solution in finite time if one exists), a continuous-space grasp sampler, and a continuous placement sampler, the proposed pipeline would be PC. Given an RC motion planner, a discrete-space grasp sampler, and a discrete placement sampler, the proposed pipeline would be RC.
To show the PC or RC of the algorithm proposed in this work, a simpler version of the algorithm is analysed first; the actual proposed algorithm is then shown to be complete by the same argument.
Consider a much simpler algorithm that, at every iteration, tries to pick an object at random and, if it is not the target object, subsequently place it randomly in the explored region of the workspace; call this RAND-ACT.
**Lemma 5.1**: RAND-ACT is PC or RC (depending on the planning and sampling subroutines).
At every iteration, RAND-ACT attempts to perform a random action. Consequently, RAND-ACT executes a random walk on the space of all actions. Since pick and place actions are reversible, if a sequence of such valid actions exists, this algorithm will eventually perform it (or an augmented version of the sequence - i.e. placing an object back to where it was picked from) in the limit (or finite time for RC subroutines).
**Corollary 5.2**: Algorithm 1 is PC or RC (depending on the planning and sampling subroutines).
Indeed, Algorithm 1 is really an elaborate implementation of RAND-ACT. At every iteration, the dependency graph is used to identify (and heuristically rank) the currently pickable objects. One of these is chosen randomly for a pick-and-place action via the TryMoveOne subroutine. The MoveOrPlaceback subroutine acts as a fallback in case no new placements are found; it delays placement sampling until after the environment is re-sensed with the picked object moved out of the way. Notice further that even if the pickable objects are sampled weighted according to their ranking rather than uniformly, the action space is still explored entirely because each action has a positive probability of being sampled. Thus, Algorithm 1 is PC or RC as well.
**Failure Detection:** The implementation in this work uses the RC approach. In addition to RC, Algorithm 1 takes a step closer towards achieving general completeness by actually detecting certain unsolvable cases within the completeness constraints of the motion and sampling subroutines. The detectable unsolvable instances for which the algorithm will return failure in finite time are as follows.
* No object can be grasped. This could happen if two objects each block the grasp of the other.
* No objects can be placed anywhere (except its current spot). This could happen in a highly cluttered scene where the only valid placement for each object is to just put it back where it was.
Thus, Algorithm 1 has stronger guarantees than RC but is not formally complete since it may run forever by juggling two objects between two placements.
**A Caveat:** A fundamental assumption in the argument presented is that actions are reversible. This is not always true in practice, depending on the implementation of the sampling subroutines. And indeed, the implementation of the placement sampling process proposed in subsection IV-B implies irreversible actions for scenes with stacked objects because it does not consider the possibility of re-stacking objects. Thus, the proposed pipeline (as implemented) is only resolution complete on any sub-task where the objects are no longer stacked. Implementation of stacking actions is planned for future work.
## VI Experiments
Simulated experiments and the real demonstrations are performed with the Yaskawa Motoman SDA10F, with a Robotiq 85 gripper attached to the right arm.
The simulated trials are randomly generated by picking random objects, dimensions, and collision-free placements within the specified workspace. 20 scenes with each of 6, 8, 10, 12, and 14 objects were used, all of which contain objects occluded from the camera. Each of the 100 trials is a unique scene. The target object in each scene was selected to be the hidden object with the most objects above it, if any.
All tested algorithms were given 20 minutes to run before being terminated. A trial run is considered successful only if the target object was retrieved within the time limit. Discovering but failing to pick up the object was still considered a failure.
Comparisons of the success rate, number of actions for solved trials, total run-time for solved trials, and number of timed-out trials are shown in Figure 4 for three algorithms. The algorithms compared are the baseline random-action approach (blue in the figure), and the proposed resolution complete pipeline without the object ranking heuristic (orange) and with it (green). For the resolution complete approaches, timing out is not the only failure mode, since they can detect certain infeasible problems (Section V: Failure Detection).
The success rate of the resolution complete approaches is higher than that of the random baseline. Although it's not directly apparent from the plotted results in Figure 4, the resolution complete approaches (as expected) always found a solution whenever the random baseline found a solution; however, in one such trial the resolution complete approach without the heuristic exceeded the 20-minute time limit. It is also clear that the heuristic approach has better success than the non-heuristic approach even though they are both complete. Looking at the data for timed-out experiments, it becomes clear that the increased success of the heuristic approach is due to timing out less frequently. This also coincides with the data showing that the heuristic approach overwhelmingly finds solutions faster and with fewer object manipulations. In fact, while the non-heuristic RC approach started timing out linearly with the increase in the number of objects, the heuristic approach had virtually no issue until the scenes became very cluttered with 14 objects.
It is clear that for all methods, success rate starts dropping off significantly at around 14 objects. This marks the difficulty level for the given industrial Motoman robot and the workspace. A more compact robot with a streamlined end-effector (such as the "bluction" tool [7]) could scale to more cluttered scenes.
### _Integration with Perception & Real Robot Demonstration_
The pipeline is directly transferable to scenarios on the real robot, where it retrieves a target red bottle from a cluttered shelf. Due to time constraints, a simple implementation of a perception system is used which only segments and detects colored cylinders without stacking. Despite the simplifications, a scene with significant object occlusion is still demonstrated with a successful retrieval. The proposed pipeline (with heuristic) is run online and communicates with the robot controller and the RGB-D camera for execution and sensing. The camera extrinsic matrix is estimated by a classical robot-camera calibration procedure using ArUco markers [23]. For object recognition, the perception component is implemented via plane fitting, DBSCAN segmentation [24], and cylinder fitting using Open3D [25]. The plane fitting algorithm extracts the boundaries of the workspace, which are used to construct the collision geometries in MoveIt [26]. The inliers of each segmented cylinder are used to produce the segmentation mask for the RGB-D image, which is used to label the occlusion correspondence for each object. To ensure safety, additional cubic collision geometries are added to the planning scene to avoid collisions between the robot and the camera.1 Extensive experiments of the proposed pipeline were not performed on the real robot, but the demonstration presented was performed a few times and the pipeline was observed to have qualitatively similar performance as in the simulated experiments; however, calibration and perception issues were observed to lead to pipeline failure.
Footnote 1: Videos can be found at [https://sites.google.com/scarletmail.rutgers.edu/occluded-obj-retrieval](https://sites.google.com/scarletmail.rutgers.edu/occluded-obj-retrieval)
## VII Discussion
It's worth mentioning that the physical execution accounts for over 60% of time used for the trials. This shows that there could be room for performance improvement by performing scene perception asynchronously, since a lot can still be sensed while the robot is moving. Further performance improvement can be found by parallelizing the planning of picks and placements for multiple objects as well.
While this work applies heuristics for selecting objects based on the occlusion volume, additional information regarding effective placements can also improve practical performance. In order to solve a larger variety of problems it would be useful to adapt the placement primitive to allow placing objects on top of others when there is limited space on the workspace surface.
Another direction is to integrate the task planner with human instructions. For instance, it would be helpful to use human language to identify the target as well as influence the search at some regions over others. Additional heuristics can also be obtained from semantic reasoning of the scene when objects of the same category tend to be placed closer [27]. Since current experiments only include simple geometries, such as cylinders and rectangular prisms, future work can investigate more complex objects where state-of-the-art perception algorithms are necessary. This would also be necessary for realistic human-robot integration.
Fig. 4: On top are the graphs of the success rate (left) and the number of timed-out trials (right). On the bottom are the number of actions and the total runtime for the subset of trials in which all algorithms were successful.
Fig. 3: Execution on the real robot: (a) Initial scene where the red bottle is hidden. (b) The robot moves the yellow bottle, which occludes the most space. (c) The robot moves the second yellow bottle, revealing the red bottle. (d and e) The robot moves the green and the blue bottles to reach the red bottle. (f) Target is now reachable.
|
2304.01876
|
Synthetic non-Abelian gauge fields for non-Hermitian systems
|
Non-Abelian gauge fields are versatile tools for synthesizing topological
phenomena but have so far been mostly studied in Hermitian systems, where gauge
flux has to be defined from a closed loop in order for gauge fields, whether
Abelian or non-Abelian, to become physically meaningful. We show that this
condition can be relaxed in non-Hermitian systems by proposing and studying a
generalized Hatano--Nelson model with imbalanced non-Abelian hopping. Despite
lacking gauge flux in one dimension, non-Abelian gauge fields create rich
non-Hermitian topological consequences. Under only nearest-neighbor coupling,
non-Abelian gauge fields enable Hopf-link bulk braiding topology, whose phase
transition accompanies the emergence of exceptional points (EPs). At both ends
of an open chain, non-Abelian gauge fields lead to the simultaneous presence of
non-Hermitian skin modes, whose population can be effectively tuned. Asymptotic
analysis shows that this tuning mechanism stems from the interplay between the
Abelian Hatano--Nelson coupling and effective high-order hopping, which becomes
substantial near the EP phase transition condition. The predicted non-Hermitian
phenomena, enabled by non-Abelian gauge fields, could be realized in synthetic
dimensional optical platforms such as time-multiplexed photonic mesh lattices
and driven ring resonators.
|
Zehai Pang, Jinbing Hu, Yi Yang
|
2023-04-04T15:31:11Z
|
http://arxiv.org/abs/2304.01876v1
|
# Synthetic non-Abelian gauge fields for non-Hermitian systems
###### Abstract
Non-Abelian gauge fields are versatile tools for synthesizing topological phenomena but have so far been mostly studied in Hermitian systems, where gauge flux has to be defined from a closed loop in order for gauge fields, whether Abelian or non-Abelian, to become physically meaningful. We show that this condition can be relaxed in non-Hermitian systems by proposing and studying a generalized Hatano-Nelson model with imbalanced non-Abelian hopping. Despite lacking gauge flux in one dimension, non-Abelian gauge fields create rich non-Hermitian topological consequences. Under only nearest-neighbor coupling, non-Abelian gauge fields enable Hopf-link bulk braiding topology, whose phase transition accompanies the emergence of exceptional points (EPs). At both ends of an open chain, non-Abelian gauge fields lead to the simultaneous presence of non-Hermitian skin modes, whose population can be effectively tuned. Asymptotic analysis shows that this tuning mechanism stems from the interplay between the Abelian Hatano-Nelson coupling and effective high-order hopping, which becomes substantial near the EP phase transition condition. The predicted non-Hermitian phenomena, enabled by non-Abelian gauge fields, could be realized in synthetic dimensional optical platforms such as time-multiplexed photonic mesh lattices and driven ring resonators.
Open physical systems interacting with external environments are described by non-Hermitian Hamiltonians that support complex eigenvalues. Compared to closed systems, non-Hermitian systems exhibit rich unique phenomena, such as power oscillations [1; 2; 3], unidirectional invisibility [3; 4], and exceptional-point (EP) encirclement [5; 6], which have no counterparts in Hermitian systems. Besides their bulk invariants defined from eigenvectors [7; 8; 9; 10] as in Hermitian systems, non-Hermitian systems also exhibit eigenvalue topology [11; 12; 13; 14; 15; 16; 17; 18; 19; 20] due to the expansion of eigenenergies from the real to the complex regime. Importantly, non-Hermitian eigenstates of a non-vanishing eigenvalue winding number are all localized at the end of open systems known as the non-Hermitian skin effect (NHSE) [7; 21; 10]. NHSE has been implemented widely in photonics [22; 23; 24; 25; 18], acoustics [26; 27; 28], mechanics [29; 30; 31], and electric circuits [32; 33; 34; 35; 36; 37]. Moreover, synthetic gauge fields have been introduced for better controlling non-Hermitian systems [38; 39; 40; 41; 42; 43], but most efforts have been dedicated to Abelian gauge fields.
Non-Abelian physics has recently attracted lots of attention in acoustics and photonics [44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62]. In particular, non-Abelian gauge fields, leveraging the internal degrees of freedom of particles, are a synthetic control knob for realizing non-Abelian physics in engineered physical systems [63]. These gauge fields enable synthetic spin-orbit interaction and can be used for creating non-Abelian Aharonov-Bohm interference and lattice models featuring complex gauge structures. Moreover, recent experiments have demonstrated the possibility of creating and tuning building blocks of non-Abelian gauge fields in fibers [45] and circuits [64], indicating their applicability for large lattice systems. The effectiveness of synthetic gauge fields substantially relies on their dimensionality. In particular, pure one-dimensional (1D) systems forbid the definition of closed loops and the associated magnetic flux. Thus, synthetic gauge fields, whether Abelian or non-Abelian, carry little physical consequences in 1D Hermitian systems. Although the 1D spin-orbit interaction realized with cold atoms [65] seems to be a counterexample, an extra Zeeman term has to be added for the Rashba-Dresselhaus gauge fields to become nontrivial. So far, non-Abelian gauge fields have seldom been explored in non-Hermitian systems, where the dimensionality constraint above could be violated.
In this work, we introduce non-Abelian gauge fields to generate non-Hermiticity and study their manipulation of NHSE. Under bare nearest-neighbor coupling, the bulk spectrum of a non-Abelian Hatano-Nelson model forms Hopf links in complex energy braiding, which was realized only with long-range hopping previously. The Hopf links are characterized by the braiding degree \(\pm 2\), whose phase transition accompanies the appearance of EPs. Non-Abelian gauge fields enable the simultaneous presence of left- and right-localized skin modes whose population is tunable, confirmed by both winding-number and the non-Bloch calculations. Asymptotic analyses further reveal that the tunability is most effective near the EP phase transition, resulting from the interplay between the Abelian Hatano-Nelson coupling and effective high-order hopping on the order of \(4N-1\) (\(N\) is an integer).
The Hatano-Nelson model [66] is a prototypical 1D system that demonstrates NHSE because of its nonreciprocal hoppings. We first extend the model with U(1) Abelian gauge fields as (Fig. 1a)
\[\hat{H}_{0}=\sum_{m}J_{\mathrm{L}}c_{m}^{\dagger}\mathrm{e}^{\mathrm{i}\theta_{\mathrm{L}}}c_{m+1}+J_{\mathrm{R}}c_{m+1}^{\dagger}\mathrm{e}^{\mathrm{i}\theta_{\mathrm{R}}}c_{m}. \tag{1}\]
Here \(c_{m}^{\dagger}(c_{m})\) is the creation (annihilation) operator at site \(m\), \(J_{\mathrm{L(R)}}\) is the real hopping amplitude leftward (rightward), and \(\theta_{\mathrm{L(R)}}\) the corresponding hopping phase. The conventional Hatano-Nelson model is restored if \(\theta_{\mathrm{L}}=\theta_{\mathrm{R}}=0\). One can reformulate Eq. (1) as \(H_{0}(k)\mathrm{e}^{-\mathrm{i}\theta_{+}}=J_{\mathrm{L}}\mathrm{e}^{\mathrm{i}(k+\theta_{-})}+J_{\mathrm{R}}\mathrm{e}^{-\mathrm{i}(k+\theta_{-})}\), where \(\theta_{+}=\left(\theta_{\mathrm{L}}+\theta_{\mathrm{R}}\right)/2\) and \(\theta_{-}=\left(\theta_{\mathrm{L}}-\theta_{\mathrm{R}}\right)/2\). As in Hermitian systems, a Peierls substitution of \(\theta_{-}\) acts on the momentum \(k\). Meanwhile, on the left-hand side, a Peierls substitution of \(\theta_{+}\) acts on the complex energy, i.e. a rotation on the complex energy plane. Thus, the U(1) fields only lead to trivial modifications of the Hatano-Nelson model. This is confirmed by the energy band shown in Fig. 1b, which exhibits a winding number \(w=+1\) on the complex energy plane, where \(w\equiv\frac{1}{2\pi}\int_{0}^{2\pi}\partial_{k}\mathrm{arg}(E(k)-E_{\mathrm{b}})\ \mathrm{d}k=\mathrm{sgn}(J_{\mathrm{L}}-J_{\mathrm{R}})\) and \(\pm 1\) indicates counter-clockwise (CCW) and clockwise (CW) rotation, respectively.
In contrast, the model gets substantially modified with SU(2) non-Abelian gauge fields (Fig. 1c):
\[\hat{H}=\sum_{m}J_{\mathrm{L}}c_{m}^{\dagger}\mathrm{e}^{\mathrm{i}\theta_{\mathrm{L}}\sigma_{y}}c_{m+1}+J_{\mathrm{R}}c_{m+1}^{\dagger}\mathrm{e}^{\mathrm{i}\theta_{\mathrm{R}}\sigma_{x}}c_{m}, \tag{2}\]
where \(\sigma_{x}\) and \(\sigma_{y}\) are Pauli matrices. Notably, in Eq. (2), both the hopping amplitudes (\(J_{\mathrm{L}}\), \(J_{\mathrm{R}}\)) and the non-Abelian hopping phases (\(\theta_{\mathrm{L}}\), \(\theta_{\mathrm{R}}\)) contribute to non-Hermiticity. This feature distinguishes our system from a recent study on non-Hermitian Aubry-Andre-Harper models [67], where the non-Abelian on-site potentials alone do not cause non-Hermiticity. The Bloch Hamiltonian of Eq. (2) is
\[H(k)=A(k)\sigma_{0}+\mathrm{i}J_{\mathrm{L}}\sin\theta_{\mathrm{L}}\mathrm{e}^{\mathrm{i}k}\sigma_{y}+\mathrm{i}J_{\mathrm{R}}\sin\theta_{\mathrm{R}}\mathrm{e}^{-\mathrm{i}k}\sigma_{x}, \tag{3}\]
where
\[A(k)=J_{\mathrm{L}}\cos\theta_{\mathrm{L}}\mathrm{e}^{\mathrm{i}k}+J_{ \mathrm{R}}\cos\theta_{\mathrm{R}}\mathrm{e}^{-\mathrm{i}k} \tag{4}\]
and \(\sigma_{0}\) the identity matrix. The eigen-energy of \(\hat{H}\) is given by
\[E_{\pm}(k)=A(k)\pm\mathrm{i}\sqrt{J_{\mathrm{L}}^{2}\sin^{2}\theta_{\mathrm{L }}\mathrm{e}^{2\mathrm{i}k}+J_{\mathrm{R}}^{2}\sin^{2}\theta_{\mathrm{R}} \mathrm{e}^{-\mathrm{i}2k}}. \tag{5}\]
Eq. (5) permits EPs at \(k_{\mathrm{EP}}=\{\pm\pi/4,\pm 3\pi/4\}\) when the EP condition
\[J_{\mathrm{L}}^{2}\sin^{2}\theta_{\mathrm{L}}=J_{\mathrm{R}}^{2}\sin^{2} \theta_{\mathrm{R}} \tag{6}\]
is satisfied, as shown by an example spectrum in Fig. 1d.
The two energy bands in Eq. (5) form a Hopf link in (\(\mathrm{Re}\,E\), \(\mathrm{Im}\,E\), \(k\)) space (different from the exceptional-line links in three-dimensional momentum space [68, 69]). In fact, the EP condition Eq. (6) marks the phase transition of the energy braiding between two types of Hopf links, distinguished by a braiding degree [18] \(\nu=\pm 2\) (Fig. 1e), where
\[\nu\equiv\int_{0}^{2\pi}\frac{\mathrm{d}k}{2\pi\mathrm{i}}\frac{\mathrm{d}}{ \mathrm{d}k}\mathrm{ln}\ \mathrm{det}\left(\hat{H}_{k}-\frac{1}{2}\mathrm{Tr}\hat{H}_{k}\right). \tag{7}\]
Fig. 1f-h confirm this transition, where Hopf links of opposite braiding degrees (Fig. 1g and h) appear on opposite sides of the EP phase transition (Fig. 1f). Non-Hermitian energy braiding of the Hopf-link type has been identified previously but requires longer-range hopping, such as next-nearest-neighbor coupling [18, 19]. In a recent paper [19], the Hopf link is achieved by using a building-block Hamiltonian \(\begin{pmatrix}0&\mathrm{e}^{\mathrm{i}nk}\\ 1&0\end{pmatrix}\), where \(n=2\) gives rise to the Hopf link. Nevertheless, here non-Abelian gauge fields enable the realization of the Hopf link using nearest-neighbor coupling only. Therefore, even though no gauge flux can be defined, introducing non-Abelian gauge fields can sufficiently drive non-Hermitian topological phase transitions in a 1D bulk, which is impossible for Hermitian systems.
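The bulk statements above are easy to check numerically. The sketch below builds the Bloch Hamiltonian of Eq. (3) and evaluates the braiding degree of Eq. (7) by tracking the phase of \(\det(H_{k}-\tfrac{1}{2}\mathrm{Tr}H_{k})\) around the Brillouin zone; away from the EP condition of Eq. (6) it should return \(\nu\approx\pm 2\). Function names, the discretization, and the parameter values in the example lines are illustrative assumptions.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
s0 = np.eye(2, dtype=complex)

def bloch_h(k, JL, JR, thL, thR):
    """Bloch Hamiltonian of Eq. (3)."""
    A = JL * np.cos(thL) * np.exp(1j * k) + JR * np.cos(thR) * np.exp(-1j * k)
    return (A * s0
            + 1j * JL * np.sin(thL) * np.exp(1j * k) * sy
            + 1j * JR * np.sin(thR) * np.exp(-1j * k) * sx)

def braiding_degree(JL, JR, thL, thR, nk=2001):
    """Braiding degree of Eq. (7): winding of det(H_k - Tr(H_k)/2) over the BZ."""
    ks = np.linspace(0.0, 2 * np.pi, nk)
    f = []
    for k in ks:
        H = bloch_h(k, JL, JR, thL, thR)
        f.append(np.linalg.det(H - 0.5 * np.trace(H) * s0))
    phase = np.unwrap(np.angle(np.asarray(f)))
    return (phase[-1] - phase[0]) / (2 * np.pi)

# Illustrative parameters on either side of the EP condition, Eq. (6):
print(braiding_degree(0.7, 0.6, -0.5, -1.4))   # expect nu close to -2
print(braiding_degree(0.7, 0.6, -1.5, -0.4))   # expect nu close to +2
```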
Next, we show how non-Abelian gauge fields enrich NHSE in Fig. 2. We calculate the eigen-spectra of \(H_{0}\) and \(H\) under the periodic boundary condition (PBC) and the open boundary condition (OBC). Both the PBC spectra of \(H_{0}\) and \(H\) form closed loops surrounding the zero energy in the complex plane, indicating point-gapped bulk topology, while their
OBC spectra become open arcs [15, 70, 71]. Thus, the PBC and OBC spectra are topologically distinct, and NHSE inevitably occurs in both models as a consequence of topological phase transition [16]. In Fig. 2a, a PBC spectrum of the Abelian \(H_{0}\) is an ellipse showing uniform CCW winding, and the corresponding OBC arc occupies the major axis. As determined by \(|J_{\rm L}/J_{\rm R}|>1\), all OBC states are localized on the left of the chain (Fig. 2b), identical to the conventional Hatano-Nelson model [12]; it confirms our previous analysis that the U(1) Abelian gauge field only leads to a trivial Peierls substitution without modifying the NHSE.
In contrast, the NHSE in the non-Abelian Hatano-Nelson model \(H\) is far richer. The PBC spectrum of \(H\) (Fig. 2c), under the same set of parameters, simultaneously exhibits CW and CCW winding at the four corners and center of the Hopf link, respectively. Consequently, the OBC arc enclosed by these sectors should demonstrate leftward (blue in Fig. 2c) and rightward (red in Fig. 2c) localization, respectively, whose simultaneous presence is confirmed by the visualization of the eigenstates in Fig. 2d. Meanwhile, extended states (green in Fig. 2d) also appear at the boundary of the CW and CCW winding (green circles in Fig. 2c). This simultaneous leftward and rightward localization cannot be explained solely by the imbalanced hopping amplitudes (\(J_{\rm L},J_{\rm R}\)).
We adopt both the non-Bloch [7, 14, 17] and the winding-number approaches [15] to study the effect of the non-Abelian gauge fields. Using the non-Bloch approach, for an OBC energy \(E\), we calculate the characteristic polynomial \(\det[H(z)-E]\), a quartic function of \(z\) (see Sec. S2 of Ref. [72]). The generalized Brillouin zone (GBZ) [14] \(C_{z}\) is thus determined by the trajectory of \(z_{2}\) and \(z_{3}\) under the condition \(|z_{2}|=|z_{3}|\), where \(z_{2}\) and \(z_{3}\) are the second and third solutions to the polynomial sorted by absolute value in ascending order. In parallel, we calculate the multi-band winding number (details in Sec. S3) \(w\equiv\sum_{n=1}^{N}\int_{-\pi}^{\pi}\frac{\mathrm{d}k}{2\pi}\partial_{k}\mathrm{arg}(E_{n}(k)-E_{\rm b})\), where \(E_{\rm b}\) is a complex OBC energy base point, \(n\) labels the band index, and \(N\) is the total number of bands. The sign of \(w\) indicates the localization, i.e. \(w>0\) and \(w<0\) for left- and right-localization, respectively, and \(w=0\) for extended states. As shown in Fig. 2f, consistency is achieved between our non-Bloch and winding-number analyses.
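A numerical cross-check of the winding-number part of this analysis is sketched below (it assumes `bloch_h` and `s0` from the previous sketch are in scope). It exploits the fact that summing the phase winding of all bands around a base energy equals the phase winding of \(\det[H(k)-E_{\rm b}]\), which avoids band sorting; the function name and the discretization are assumptions.

```python
import numpy as np

def winding_number(Eb, JL, JR, thL, thR, nk=2001):
    """Multi-band winding number w around the base energy Eb: the summed
    band winding equals the winding of det[H(k) - Eb]."""
    # assumes bloch_h and s0 from the previous sketch are in scope
    ks = np.linspace(-np.pi, np.pi, nk)
    f = np.array([np.linalg.det(bloch_h(k, JL, JR, thL, thR) - Eb * s0)
                  for k in ks])
    phase = np.unwrap(np.angle(f))
    return (phase[-1] - phase[0]) / (2 * np.pi)

# The sign of w at an OBC energy classifies its localization
# (w > 0 left, w < 0 right, w ~ 0 extended); parameters of Fig. 2.
print(winding_number(0.1 + 0.05j, 0.7, 0.6, -2.5, -1.4))
```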
We specifically discuss the properties of the zero modes \(E=0\) (details in Sec. S2) under OBC. All four associated non-Bloch solutions are \(z=\pm\mathrm{i}\sqrt{J_{\rm R}/J_{\rm L}\mathrm{e}^{\pm\mathrm{i}\alpha}}\), where \(\alpha=\arctan\Bigl{(}\sqrt{1-F^{2}}/F\Bigr{)}\) and \(F=\cos(\theta_{\rm R})\cos(\theta_{\rm L})\). All roots of the characteristic polynomial have equal absolute values, guaranteeing the existence of OBC zero modes, as can be seen by the pinned crossing at \(E=0\) in the complex plane (Fig. 2c). We also prove that the OBC zero modes must be doubly degenerate (Sec. S2). Furthermore, the absolute values of the zero-mode non-Bloch solutions depend only on the ratio of the hopping amplitudes, which indicates that even SU(2) gauge fields cannot modify the localization direction of the zero modes (proof in Sec. S2).
Nevertheless, the localization of non-zero modes can be effectively manipulated by non-Abelian gauge fields. The GBZ of the non-Abelian model, shown in Fig. 2e, exhibits states both inside and outside the unit circle; these states are thus associated with left- and right-localization, respectively. We prove that this tunability of the skin modes is not possible using the U(1) gauge fields in the Abelian Hatano-Nelson model [see Eq. (1) and proof in Sec. S2.A].
To further elucidate the interplay between the imbalanced hopping amplitudes and non-Abelian gauge fields, we define a population contrast \(\eta\) in the OBC eigenstates as
\[\eta(J_{\rm L},J_{\rm R},\theta_{\rm L},\theta_{\rm R})\equiv\frac{n_{\rm L} -n_{\rm R}}{n_{\rm L}+n_{\rm R}+n_{\rm E}}, \tag{8}\]
where \(n_{\rm L}\), \(n_{\rm R}\), and \(n_{\rm E}\) are the number of left-localized, right-localized, and extended states, respectively. Fig. 3 shows the population contrast as a function of the gauge fields \((\theta_{\rm L},\theta_{\rm R})\) under different choices of \((J_{\rm L},J_{\rm R})\).

Figure 2: **Analysis of non-Hermitian skin effect.** **a-d.** Periodic- (black lines) and open-boundary (dots) spectra (a and c) and eigenstates (b and d) of the Abelian (a-b) and non-Abelian (c-d) Hatano–Nelson models. **e.** Localization analysis. The unit circle (red) intersects the generalized Brillouin zone (blue), indicating the simultaneous presence of left- (inside the unit circle) and right-localized (outside the unit circle) states. The black circles denote the zero-mode solutions. **f.** Simultaneous left- and right-localization confirmed by consistent non-Bloch (squares) and winding-number (circles) calculations. Here, \(J_{\rm L}=0.7\), \(J_{\rm R}=0.6\), \(\theta_{\rm L}=-2.5\), \(\theta_{\rm R}=-1.4\).
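To connect Eq. (8) to an actual diagonalization, the sketch below builds a finite open chain from Eq. (2), classifies each OBC eigenenergy by the sign of the winding number from the previous sketch, and evaluates \(\eta\). The chain length, thresholds, and the reuse of `winding_number`, `s0`, `sx`, `sy` are assumptions of this illustration; finite-size effects can blur the classification near band crossings.

```python
import numpy as np

def population_contrast(L, JL, JR, thL, thR):
    """Open-boundary spectrum of Eq. (2) and the contrast eta of Eq. (8),
    classifying each mode by the sign of the winding number at its energy."""
    # exp(i*theta*sigma) = cos(theta)*I + i*sin(theta)*sigma for Pauli matrices
    hopL = JL * (np.cos(thL) * s0 + 1j * np.sin(thL) * sy)   # m   <- m+1 block
    hopR = JR * (np.cos(thR) * s0 + 1j * np.sin(thR) * sx)   # m+1 <- m   block
    H = np.zeros((2 * L, 2 * L), dtype=complex)
    for m in range(L - 1):
        H[2 * m:2 * m + 2, 2 * m + 2:2 * m + 4] = hopL
        H[2 * m + 2:2 * m + 4, 2 * m:2 * m + 2] = hopR
    energies = np.linalg.eigvals(H)
    w = np.array([winding_number(e, JL, JR, thL, thR, nk=801) for e in energies])
    nL, nR = int(np.sum(w > 0.5)), int(np.sum(w < -0.5))
    nE = len(energies) - nL - nR
    return energies, (nL - nR) / (nL + nR + nE)

_, eta = population_contrast(40, 0.7, 0.6, -2.5, -1.4)
print(eta)   # both signs of localization coexist for these Fig. 2 parameters
```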
Fig. 3a exhibits an equal partition of the left- and right-localization under \(J_{\rm L}=J_{\rm R}\), where non-Abelian gauge fields are the only origin of non-Hermiticity. \(\eta\) changes sign across the 45- and 135-degree lines defined by \(\sin^{2}(\theta_{\rm L})=\sin^{2}(\theta_{\rm R})\), exactly the EP phase transition condition. Notably, when \(J_{\rm L}=J_{\rm R}\), the PBC spectrum collapses into an arc that overlaps with the OBC spectrum. Consequently, the winding number of all OBC energy points is zero, and all modes are extended. When \(\theta_{\rm L}=\{0,\pi\}\) or \(\theta_{\rm R}=\{0,\pi\}\), Eq. (5) reduces to the conventional Hatano-Nelson energy band under \(J_{\rm L}=J_{\rm R}\), whose PBC spectrum also collapses into an arc and all OBC modes are extended; however, there is no phase transition there.
In Fig. 3b and c, as the imbalance between \(J_{\rm L}\) and \(J_{\rm R}\) appears and increases, localization tunability of the non-Abelian gauge fields becomes suppressed, as shown by the reduced red-colored area. Crucially, localization tuning is most effective (indicated by color variations in Fig. 3) near the EP phase transition [Eq. (6) and solid black lines in Fig. 3].
Asymptotic analysis (Sec. S4) reveals that the appearance of such tunability stems from the competition between effective nearest-neighbor and high-order [\((4N-1)\)-neighbor where \(N\) is an integer] coupling. Without loss of generality, we assume \(J_{\rm L}>J_{\rm R}\) and obtain asymptotic expressions (details in Sec. S4) of the eigen-energy \(E_{\pm}(k)\simeq\)
\[\begin{cases}J_{\mathrm{L}}\mathrm{e}^{\pm\mathrm{i}\theta_{\mathrm{L}}}\mathrm{e}^{\mathrm{i}k}+J_{\mathrm{R}}\cos\theta_{\mathrm{R}}\mathrm{e}^{-\mathrm{i}k},&\text{if }|J_{\mathrm{L}}\sin\theta_{\mathrm{L}}|\gg|J_{\mathrm{R}}\sin\theta_{\mathrm{R}}|,\\ A(k)\pm\mathrm{i}J_{\mathrm{L}}\sin\theta_{\mathrm{L}}\mathrm{e}^{\mathrm{i}k}\sum_{n=0}^{\infty}\mathrm{C}_{n}^{1/2}\left[\left(\frac{J_{\mathrm{R}}\sin\theta_{\mathrm{R}}}{J_{\mathrm{L}}\sin\theta_{\mathrm{L}}}\right)^{2}\mathrm{e}^{-\mathrm{i}4k}\right]^{n},&\text{otherwise.}\end{cases}\]
|
2302.03451
|
The Solidarity Cover Problem
|
Various real-world problems consist of partitioning a set of locations into
disjoint subsets, each subset spread in a way that it covers the whole set with
a certain radius. Given a finite set S, a metric d, and a radius r, define a
subset (of S) S' to be an r-cover if and only if forall s in S there exists s'
in S' such that d(s,s') is less or equal to r. We examine the problem of
determining whether there exist m disjoint r-covers, naming it the Solidarity
Cover Problem (SCP). We consider as well the related optimization problems of
maximizing the number of r-covers, referred to as the partition size, and
minimizing the radius. We analyze the relation between the SCP and a graph
problem known as the Domatic Number Problem (DNP), both hard problems in the
general case. We show that the SCP is hard already in the Euclidean 2D setting,
implying hardness of the DNP already in the unit-disc-graph setting. As far as
we know, the latter is a result yet to be shown. We use the tight approximation
bound of (1-o(1))/ln(n) for the DNP's general case, shown by U.Feige,
M.Halld'orsson, G.Kortsarz, and A.Srinivasan (SIAM Journal on computing, 2002),
to deduce the same bound for partition-size approximation of the SCP in the
Euclidean space setting. We show an upper bound of 3 and lower bounds of 2 and
sqrt(2) for approximating the minimal radius in different settings of the SCP.
Lastly, in the Euclidean 2D setting we provide a general
bicriteria-approximation scheme which allows a range of possibilities for
trading the optimality of the radius in return for better approximation of the
partition size and vice versa. We demonstrate a usage of the scheme which
achieves an approximation of (1/16,2) for the partition size and radius
respectively.
|
Eran Rosenbluth
|
2023-02-07T13:17:56Z
|
http://arxiv.org/abs/2302.03451v1
|
# The Solidarity Cover Problem
###### Abstract
Various real-world problems consist of partitioning a set of locations into disjoint subsets, each subset spread in a way that it covers the whole set with a certain radius. Given a finite set \(S\), a metric \(d\), and a radius \(r\), define a subset \(S^{\prime}\subseteq S\) to be an \(r\)_-cover_ if and only if \(\forall s\in S\ \exists s^{\prime}\in S^{\prime}:d(s,s^{\prime})\leq r\). We examine the problem of determining whether there exist \(m\) disjoint \(r\)-covers, naming it the _Solidarity Cover Problem_ (SCP). We consider as well the related optimization problems of maximizing the number of \(r\)-covers, referred to as the _partition size_, and minimizing the radius. We analyze the relation between the SCP and a graph problem known as the Domatic Number Problem (DNP), both hard problems in the general case. We show that the SCP is hard already in the Euclidean 2D setting, implying hardness of the DNP already in the _unit disc graph_ setting. As far as we know, the latter is a result yet to be shown. We use the tight approximation bound of \((1-o(1))/\ln(n)\) for the DNP's general case, shown by U. Feige, M. Halldorsson, G. Kortsarz, and A. Srinivasan (SIAM Journal on computing, 2002), to deduce the same bound for partition-size approximation of the SCP in the Euclidean space setting. We show an upper bound of 3 and lower bounds of 2 and \(\sqrt{2}\) for approximating the minimal radius in different settings of the SCP. Lastly, in the Euclidean 2D setting we provide a general bicriteria-approximation scheme which allows a range of possibilities for trading the optimality of the radius in return for better approximation of the partition size and vice versa. We demonstrate a usage of the scheme which achieves an approximation of \((\frac{1}{16},2)\) for the partition size and radius respectively.
Domatic number, Domatic partition, Covering problems, Approximation algorithms, Hardness proofs
problem was first mentioned by this name in [2], was said to be NP-complete for \(m\geq 3\) in [4, pg 190] without an explicit proof there, and one proof can be found in [1]. The well-studied optimization version of the DNP is to maximize the partition size. For both the SCP and the DNP, for \(m=1\) the answer is trivially 'yes'. For \(m=2\) it is not difficult to verify that the answer is 'yes' if and only if each element has at least one neighbor, that is, another element within radius \(r\) in the SCP and an adjacent vertex in the DNP. The DNP is not difficult to reduce to the SCP's most general setting, and the SCP is straightforwardly reducible to the DNP. Yet, compared to the DNP, the SCP allows the metric space to be something other than a graph with the shortest edge-path distance function, and adds a radius parameter. These extensions give rise to questions about complexity in Euclidean space, most specifically in Euclidean 2D, and about approximability with regard to the radius. They match real-world motivations for the DNP, e.g., network resource allocation [10] and facilities allocation [3]. Another relevant scenario is sensing: Assume a set of locations for which certain data needs to be repeatedly measured by sensors in those locations. Assume each sensor's reading is a good estimation within a certain radius \(r\), and there is a limitation on the frequency at which each sensor can be queried, \(f_{s}\). Finally, assume it is required to estimate the data in each location at frequency \(f_{r}>f_{s}\). Let \(m=\frac{f_{r}}{f_{s}}\); then an \(m\)-solidarity-\(r\)-cover of the set of sensors allows estimating the data in all locations at the required frequency by alternating between querying each of the \(m\) covering subsets.
For the optimization version of the DNP, maximizing the partition size, a tight \((1-o(1))/\ln(n)\) approximation bound is known for general graphs [3]. For the DNP on interval graphs, which coincides with the Euclidean 1-dimensional SCP, linear-time algorithms exist [7]. In the specific and interesting case of unit disc graphs, a probabilistic constant-factor approximation algorithm was presented in [9], with a factor of over 500. However, no lower bound has been proven for that scenario. To the best of our knowledge, the DNP for unit disc graphs has not even been shown to be NP-hard.
Several well-studied covering problems are similar in some ways to the SCP, but we found them too different for their results to be of direct use in reasoning about the SCP. Such is the minimum set cover problem and such is the k-center problem. Some work on the latter [6], though, inspired the greedy technique used in Section 3.2.
In Section 2 we prove NP-completeness of the SCP in Euclidean 2D space. This problem is equivalent to the DNP for unit disc graphs and so for the first time, as far as we know, the DNP for unit disc graphs is proven to be NP-hard. In Section 3 we focus on approximability results. We examine the approximability of the SCP with regards to the partition size parameter. We show a relation between the SCP in Euclidean space and the DNP which implies the same tight approximation bound for the SCP as the one known for the DNP, \((1-o(1))/\ln(n)\)[3]. In Section 3.2 we examine the approximability of the SCP with regards to the radius parameter. We introduce a 3-approximation algorithm generalizing the ideas in [6], and prove a 2-approximability lower bound for the general metric space setting and a \(\sqrt{2}\)-approximability lower bound already for the Euclidean 2D setting. In Section 3.3 we introduce a bicriteria approximation scheme for the Euclidean 2D setting which allows trading the optimality of the radius in return for better approximation of the partition size and vice versa. We apply this scheme obtaining an exemplary \((\frac{1}{16},2)\)-approximation algorithm for the partition size and radius respectively.
The following table summarizes our results for the SCP as well as results that are straightforwardly implied by previous results for the DNP. The latter are marked in grey. 'L' and 'U' stand for lower and upper bound.
## 2 Decision Hardness
We start by showing that the SCP is NP-hard already in the Euclidean 2D setting; as containment in NP is relatively straightforward, completeness is proven. The hardness proof describes a reduction from the 3-coloring problem for a planar orthogonal graph \(G=(V,E)\) drawn in an area of size \(O(|V|^{2})\) to the SCP. A graph is _planar orthogonal_ if and only if it is a planar graph whose edges are a combination of horizontal and vertical lines connecting integer-grid points. Any planar graph \(G=(V,E)\) with degree at most 4 admits a planar orthogonal embedding in an area of size \(O(|V|^{2})\), and that embedding can be computed in polynomial time [11, Theorem 2]. Hence, as the 3-coloring problem for planar graphs with degree at most 4 is known to be NP-hard [5, Theorem 2.1], so is the 3-coloring problem for planar orthogonal graphs drawn in an area of size \(O(|V|^{2})\).
Let \(G=(V,E)\) be a graph such that:
1. \(G\) is planar orthogonal.
2. The area of \(G\) in the plane is \(O(|V|^{2})\).
Then, there is a set of points \(S\subseteq\mathbb{R}^{2}\) computable in polynomial time, such that: for every \(1\leq r<\sqrt{2}\) there is a 3-solidarity-\(r\)-cover of \(S\), if and only if \(G\) is 3-colorable.
Proof.: Please see Figure 1 for an example of the following formal description. Let \(G^{\prime}=(V^{\prime},E^{\prime}),V^{\prime}\subseteq\mathbb{Z}^{2}\) be a scaling of \(G\) such that each vertex at coordinates \((i,j)\in\mathbb{Z}^{2}\) is shifted to \((6i,6j)\) and the edge segments are lengthened accordingly. Also, assume '\(<\)' is an arbitrary total ordering on \(V^{\prime}\) and, in accordance with (i), each edge \(e_{uv}\in E^{\prime}\) connecting \(u<v\in V^{\prime}\) is defined as a sequence \(e_{uv}=(e_{1}^{uv}=u,\ldots,e_{k_{uv}}^{uv}=v)\in(\mathbb{Z}^{2})^{k_{uv}}\) of \(k_{uv}\in\mathbb{N}\) grid points, each one step apart from the next. Define the following subsets:
* For each \(v=(v_{x},v_{y})\in V^{\prime}\), if \(deg(v)\geq 2\) then define \(S^{*}_{v}=\{v\}\). Otherwise, if \(v\) is isolated or has a single edge connected to it from the right side then define \(S^{*}_{v}=\{v,\ p_{1}^{v}=(v_{x}-1,v_{y}),\ p_{2}^{v}=(v_{x}-0.5,v_{y}+0.5)\}\), and otherwise define \(S^{*}_{v}\) to contain \(v\) and the equivalent two points considering the single edge direction.
* Due to the scaling, for each edge \(e_{uv}=(e_{1}^{uv}=u,\ldots,e_{k_{uv}}^{uv}=v)\), \(e_{4}^{uv}\) must be either a part of a vertical segment or a horizontal segment but not both, that is, not a corner point. Assume w.l.o.g it is a part of a horizontal segment, then define \(S^{*}_{uv}=\{p_{u}^{uv}=(e_{4_{x}}^{uv}-0.5,e_{4_{y}}^{uv}+0.5),p_{v}^{uv}=(e_{ 4_{x}}^{uv}+0.5,e_{4_{y}}^{uv}+0.5),p_{uv}^{uv}=(e_{4_{x}}^{uv},e_{4_{y}}^{uv}+ 1)\}\). The idea is that the three points in \(S^{*}_{uv}\) are within radius 1 of each other, yet they are in distance \(\geq\sqrt{2}\) from points that we do not want them to be in radius 1 of, as described later in the proof.
* For each \(e_{uv}\in E^{\prime},e_{uv}=(e_{1}^{uv}=u,\ldots,e_{k_{uv}}^{uv}=v)\) define \(S^{\prime}_{uv}=\{e_{2}^{uv},e_{3}^{uv},e_{5}^{uv},\ldots,e_{k_{uv}-1}^{uv}\}\), and \(S_{uv}=S^{\prime}_{uv}\cup S^{*}_{uv}\).
Finally, define \(S=\{\bigcup_{v\in V^{\prime}}S^{*}_{v}\}\cup\{\bigcup_{e_{uv}\in E^{\prime}} S_{uv}\}\). Due to (ii) the described construction of \(S\) can be executed in time polynomial in \(|G|\). Due to (i), and the mentioned scaling, no points associated with an edge are in radius \(<\sqrt{2}\) of points associated with another edge. The second important property of the construction is that for each edge \(e_{uv}\in E^{\prime}\), for each point \(p\in S^{\prime}_{uv}\), \(p\) has exactly two neighboring points within radius 1 in \(S\). Moreover, for each two points in \(S\) either they are within distance 1 of each other or they are at least
distance \(\sqrt{2}\) apart. We proceed to show the required relation between a 3-solidarity-1-cover of \(S\) and a 3-coloring of \(G\); please see Figure 2 for a demonstration of that relation.
For the first direction, assume \(S_{1},S_{2},S_{3}\subseteq S\) form a 3-solidarity-1-cover of \(S\). Let \(e_{uv}\in E^{\prime}\) and assume \(u\in S_{i}\) and \(v\in S_{j}\); we want to show that \(i\neq j\). Note that necessarily \(p_{u}^{uv}\in S_{i}\), \(p_{v}^{uv}\in S_{j}\), and for each \(4<k<k_{uv}\) with \((k-4)\mod 3=0\) it holds that \(e_{k}^{uv}\in S_{j}\). These facts follow from the radius-1 neighboring properties and the number of points in \(e_{uv}\), both guaranteed by the scaling of \(G\) and the definition of \(S\). Finally, the only way for \(p_{uv}^{uv}\) to be 3-1-covered is if \(p_{u}^{uv}\) and \(p_{v}^{uv}\) are assigned to different subsets, that is, if \(S_{i}\neq S_{j}\). Hence, coloring the vertices of \(G^{\prime}\) according to the solidarity-cover assignment of the vertex points results in a valid 3-coloring, and since \(G\) is just a spatial down-scaling of \(G^{\prime}\) the coloring is valid for it as well.
For the second direction, assume \(\tau:V\rightarrow\{1,2,3\}\) is a valid 3-coloring of \(G\), then \(\tau\) is a valid 3-coloring of \(G^{\prime}\). We define an assignment \(\psi:S\rightarrow\{S_{1},S_{2},S_{3}\}\) of the points in \(S\) to three subsets of \(S\) as follows:
* For each \(v\in S\) such that \(v\in V^{\prime}\) set \(\psi(v)=\tau(v)\). If \(S_{v}^{*}\) contains two points in addition to \(v\), then assign them to the remaining two subsets and get that each of the three points is 3-1-covered. Otherwise, there are at least two edges connected to \(v\); assign the first two edge-points (which are not \(v\)) to the two remaining subsets and get that \(v\) is 3-1-covered.

Figure 1: An example of constructing \(S\) given \(G\). On the left is the depiction of a planar-orthogonal graph \(G\) consisting of two vertices \(u,v\) and an edge between them. On the right is the depiction of the set of points \(S\) constructed from \(G\) assuming the vertices ordering \(u<v\). In both diagrams the distance between subsequent grid-lines is 1.

Figure 2: A demonstration of the 3-coloring of a planar orthogonal graph \(G\) and the corresponding 3-solidarity-1-cover of the constructed set of points \(S\). On the left is a planar-orthogonal version of a 3-colored triangle graph \(G\). The coloring is represented by 3 colors and 3 matching shapes. On the right is the set of points \(S\) constructed from \(G\) and partitioned into three disjoint subsets - represented by colors and shapes. The partition forms a 3-solidarity-1-cover. In both diagrams the distance between subsequent grid-lines is 1.
* For each \(e_{uv}\in E^{\prime}\):
* By assumption, and the definition of \(\psi(w),w\in S\), we have \(\psi(u)\neq\psi(v)\). Set \(\psi(p_{u}^{uv})=\psi(u),\ \psi(p_{v}^{uv})=\psi(v)\), set \(\psi(p_{uv}^{uv})\) to be the remaining subset, and get that each of the three points is 3-1-covered.
* \(\forall 4<k<(k_{uv}-3)\quad\psi(e_{k}^{uv})=\begin{cases}\psi(v)&\text{(k-4) mod 3=0}\\ S_{q}\neq\psi(v)&\text{(k-4) mod 3=1}\\ S_{r}\neq\psi(v),\ r\neq q&\text{(k-4) mod 3=2}\end{cases}\) Each of the above points has exactly two neighbors in radius 1, and the three have different assignments, hence each of them is 3-1-covered.
* For each \(1<k<4\) and \((k_{uv}-3)<k<k_{uv}\) set \(\psi(e_{k}^{uv})\) to be a subset such that \(e_{k}^{uv},u,v\) are each 3-1-covered. It is quick to verify that such an assignment always exists given the definition of \(S\) and of \(\psi\) so far.
Overall, \(\psi\) induces a 3-solidarity-1-cover of \(S\).
The SCP is NP-Complete already for 3-solidarity-1-cover in Euclidean 2D space.
Proof.: Hardness follows from Lemma 1 together with hardness of the 3-coloring problem for planar orthogonal graphs drawn in an area of size \(O(|V|^{2})\). Containment in NP follows from being able to use a partition description as a witness, verifiable in polynomial time, for the existence of an \(m\)-solidarity-\(r\)-cover.
## 3 Approximation Bounds
In this section we examine approximation variants of the SCP. We begin with the objective of maximizing the partition size, proceed with the objective of minimizing the radius, and end with considering the two objectives simultaneously in the 2D setting.
### Partition Size Approximation
A tight bound of \((1-o(1))/\ln(n)\) approximation, assuming \(\text{NP}\not\subseteq\text{DTIME}(n^{O(\log\log(n))})\), was proven for the DNP in [3]. We show that the same bound applies to the SCP in the Euclidean space setting.
**Lemma 3**.: _Let \(G=(V,E)\), \(V=\{v_{1},\ldots,v_{n}\}\), be an undirected graph and let \((S,d)\), \(S=\{s_{1},\ldots,s_{n}\}\), \(r\in\mathbb{R}\), be a metric space and a radius such that \(\{v_{i},v_{j}\}\in E\Leftrightarrow d(s_{i},s_{j})\leq r\). Then \(G\) admits a domatic partition of size \(m\) if and only if \(S\) admits an \(m\)-solidarity-\(r\)-cover._
Proof.: The assumption implies that for each \(k\in[n],\{i_{j}\}_{j\in[k]}\) it holds that \(\{v_{i_{1}},\ldots,v_{i_{k}}\}\) is a dominating set of \(G\) if and only if \(\{s_{i_{1}},\ldots,s_{i_{k}}\}\) is an r-cover of \(S\). Also, trivially every two subsets of \(V\) are disjoint if and only if their index-wise corresponding subsets of \(S\) are disjoint. Hence, every domatic partition of \(G\) of size \(m\) corresponds to an \(m\)-solidarity-\(r\)-cover of \(S\) and vice versa.
The SCP in Euclidean space admits a tight bound of \((1-o(1))/\ln(n)\) for the partition size approximation, assuming \(\text{NP}\not\subseteq\text{DTIME}(n^{O(\log\log(n))})\).
Proof.: For the lower bound we show a reduction from the DNP to the SCP in Euclidean space. Let \(\rho\) denote the Euclidean distance function. Given \(G=(V,E),V=\{v_{1},\ldots,v_{n}\}\), we can compute in polynomial time a set \(S=\{s_{1},\ldots,s_{n}\}\subseteq\mathbb{R}^{n}\) and a radius \(r\in\mathbb{R}\) such that \(\rho(s_{i},s_{j})\leq r\Leftrightarrow\{v_{i},v_{j}\}\in E\)[8, Theorem 1]. By Lemma 3 the above is a valid reduction.
For the upper bound, we can straightforwardly reduce the SCP to the DNP. Given a metric space \((S,d),S=\{s_{1},\ldots,s_{n}\},r\in\mathbb{R}\), define \(V=\{v_{1},\ldots,v_{n}\},\;E=\{\{v_{i},v_{j}\}\mid d(s_{i},s_{j})\leq r\}\). By Lemma 3 the above is a valid reduction.
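To make the correspondence of Lemma 3 concrete, the short Python sketch below builds the graph \(G\) induced by a finite set of points and a radius \(r\); the function name and the Euclidean-distance choice are our own illustrative assumptions, not part of the original reduction.

```python
import numpy as np

def induced_graph(points, r):
    """Vertices are point indices; {i, j} is an edge iff d(s_i, s_j) <= r
    (Euclidean metric assumed), mirroring the reduction of Lemma 3."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if np.linalg.norm(pts[i] - pts[j]) <= r}
    return list(range(n)), edges

# Three collinear points with r = 1: only neighbouring points become adjacent,
# so a domatic-partition routine for this graph answers the SCP instance and vice versa.
V, E = induced_graph([(0, 0), (1, 0), (2, 0)], r=1.0)
print(E)  # {(0, 1), (1, 2)}
```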
### Radius Approximation
Let \((S,d),|S|=n\) be a finite metric space. For any given \(m\in[n]\) there exists a radius \(r\in\mathbb{R}\) for which there is an \(m\)-solidarity-\(r\)-cover since it is always possible to set \(r=\max(\{d(s_{1},s_{2})\mid s_{1},s_{2}\in S\})\). Let \(m\in[n]\) and let \(r^{*}=\min(\{r\mid\text{there exist an $m$-solidarity-$r$-cover}\})\). We describe a polynomial time algorithm that finds an \(m\)-solidarity-\(3r^{*}\)-cover. The main part of the algorithm is the subroutine GreedySC (see pseudocode below) which is inspired by ideas in [6] where they are used for \(k\)-center clustering approximation. It receives as input the metric space, the partition size, and a radius \(r\). If \(r\) is a feasible radius for an \(m\)-solidarity-\(r\)-cover then GreedySC outputs a partition, and if GreedySC outputs a partition then it is an \(m\)-solidarity-\(3r\)-cover. Hence, let
\[\hat{r}=\min(\{d(s_{i},s_{j})\mid s_{i},s_{j}\in S,\ \textsc{GreedySC}((S,d),m,d(s_{i},s_{j}))\neq\text{`false'}\})\]
then \(\hat{r}\leq 3r^{*}\). As the number of candidates for \(\hat{r}\) is \(|\{d(s_{i},s_{j})\mid s_{i},s_{j}\in S\}|\leq\frac{1}{2}(n^{2}-n)\), and GreedySC runs in polynomial time, we can find \(\hat{r}\) and a corresponding partition in polynomial time.
```
Input: Finite metric space \((S,d)\), partition size \(m\in[|S|]\), radius \(r\)
Output: An \(m\)-solidarity-\(3r\)-cover or 'false'
Initialize: Set \(S_{1},\ldots,S_{m},P:=\emptyset\), select \(p_{1}\in S\) arbitrarily, set \(i:=1\), and \(r_{1}:=\infty\)
while \(r_{i}>2r\) do
    Update \(P:=P\cup\{p_{i}\}\)
    if \(P=S\) then break
    Select \(p_{i+1}\) of maximal distance to all points in \(P\), i.e.
        \(p_{i+1}\in\operatorname*{argmax}_{p\in(S\setminus P)}\left\{\min_{v\in P}d(v,p)\right\}\)
    Set \(r_{i+1}:=\min_{v\in P}d(v,p_{i+1})\)
    Set \(i:=i+1\)
Set \(i:=i-1\)
for \(k=1\) to \(i\) do
    Let \(B_{p_{k}}(r)\) denote the points in the radius \(r\) ball centered around \(p_{k}\)
    if \(|B_{p_{k}}(r)|<m\) then return 'false'
    Assign the points in \(B_{p_{k}}(r)\) to all \(m\) different subsets \(S_{1},\ldots,S_{m}\)
Assign the points in \((S\setminus\bigcup_{k=1}^{i}B_{p_{k}}(r))\) arbitrarily to \(S_{1},\ldots,S_{m}\)
return \(\{S_{1},\ldots,S_{m}\}\)
```
**Algorithm 1** GreedySC
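The following Python sketch mirrors the two phases of GreedySC, the farthest-point traversal and the ball-by-ball assignment, together with the outer search over the \(O(n^{2})\) candidate radii described above. It is an illustrative reimplementation under our own naming and NumPy-based conventions, not the paper's code.

```python
import numpy as np

def greedy_sc(points, m, r):
    """A sketch of GreedySC for Euclidean points: returns a list `assign` with
    assign[i] in {0, ..., m-1}, or None in place of 'false'."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = lambda a, b: float(np.linalg.norm(pts[a] - pts[b]))

    # Phase 1: farthest-point traversal; keep adding centres while the farthest
    # remaining point lies more than 2r from every centre chosen so far.
    centres = [0]                                        # p_1 chosen arbitrarily
    d_min = np.array([dist(0, j) for j in range(n)])     # distance to nearest centre
    while len(centres) < n:
        cand = int(np.argmax(d_min))
        if d_min[cand] <= 2 * r:
            break
        centres.append(cand)
        d_min = np.minimum(d_min, [dist(cand, j) for j in range(n)])

    # Phase 2: every radius-r ball around a centre must hold at least m points;
    # spread each such ball over all m subsets, then place leftovers arbitrarily.
    assign = np.full(n, -1, dtype=int)
    for c in centres:
        ball = [j for j in range(n) if dist(c, j) <= r]
        if len(ball) < m:
            return None                                  # 'false'
        for slot, j in enumerate(ball):
            assign[j] = slot % m
    assign[assign == -1] = 0
    return assign.tolist()

def min_radius_cover(points, m):
    """Try the O(n^2) pairwise distances in increasing order; by Lemma 5 the first
    accepted radius r_hat satisfies r_hat <= r*, and the returned partition is an
    m-solidarity-(3*r_hat)-cover, i.e. within a factor 3 of the optimal radius."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    for r in sorted({float(np.linalg.norm(pts[i] - pts[j]))
                     for i in range(n) for j in range(i + 1, n)}):
        part = greedy_sc(pts, m, r)
        if part is not None:
            return r, part
    return None

# Example: four points at the corners of a unit square with m = 2 subsets.
print(min_radius_cover([(0, 0), (0, 1), (1, 0), (1, 1)], m=2))
```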
**Lemma 5**.: _Let \((S,d),|S|=n\) be a finite metric space, \(m\in[n],r\in\mathbb{R}\), then:_
1. _If_ \(S\) _admits an_ \(m\)_-solidarity-_\(r\)_-cover, then_ \(\textsc{GreedySC}((S,d),m,r)\) _will return a partition._
2. _If_ \(\textsc{GreedySC}((S,d),m,r)\) _returns a partition, then that partition is an_ \(m\)_-solidarity-_\(3r\)_-cover._
Proof.: Note that when we refer to variables of the subroutine, e.g. the set variables, we consider them in their final state, i.e. right before the subroutine terminates.
First, we show that if there is an \(m\)-solidarity-\(r\)-cover then the subroutine will not terminate with 'false'. Assume otherwise and let w.l.o.g. \(|B_{p_{1}}(r)|<m\), implying that there are no \(m\) points in \(S\) within radius \(r\) of \(p_{1}\). Hence, there cannot be \(m\) disjoint subsets that cover \(p_{1}\) with radius \(r\), in contradiction to the existence of an \(m\)-solidarity-\(r\)-cover.
Next, we show that \((S_{1},\ldots,S_{m})\) are disjoint. Assume otherwise and let \(p\in S\) such that \(p\in S_{j},p\in S_{k},j\neq k\). By the second part of the subroutine, which assigns points to subsets, necessarily \(p\in B_{p_{j}}(r),p\in B_{p_{k}}(r)\), implying \(d(p_{j},p_{k})\leq 2r\), in contradiction to the while-condition in the subroutine's first part.
Finally, we show that each \(S_{i}\) is a \(3r\)-cover. Assume otherwise, w.l.o.g. assume \(S_{1}\) is not a \(3r\)-cover and let \(w\in S\) such that \(\forall q\in S_{1}\ d(w,q)>3r\). By the second part of the subroutine, which assigns points to subsets, \(w\not\in P\) and also \(\forall u\in P\ \exists v\in S_{1}:d(u,v)\leq r\). Hence, necessarily \(\forall u\in P\ \ d(w,u)>2r\) in contradiction to the termination of the subroutine without adding \(w\) to \(P\).
Let \((S,d),|S|=n\) be a finite metric space, let \(m\in[n]\), and let \(r^{*}=\min(\{r\mid\) there exist an \(m\)-solidarity-\(r\)-cover\(\})\). Then, an \(m\)-solidarity-\(3r^{*}\)-cover can be found in polynomial time.
Proof.: There are \(O(|S|^{2})\) possible radii thus we can search in polynomial time for the minimal radius for which _GreedySC_ returns a partition. By Lemma 5 necessarily that partition is an \(m\)-solidarity-\(3r^{*}\)-cover.
There does not exist a polynomial-time \(c\)-approximation of the radius, with \(c<2\), for the SCP in the general metric space setting, unless P=NP.
Proof.: We show that otherwise the DNP can be solved in polynomial time, in contradiction to it being NP-hard. Assume by contradiction that there exists a \(c\)-approximation algorithm \(A\) for some \(c<2\). Given a DNP instance \(G=(V,E),\ V=\{v_{1},\ldots,v_{n}\},m\in[n]\), we can construct in polynomial time a metric space \((S,d),S=\{p_{1},\ldots,p_{n}\}\), \(d(p_{i},p_{j}):=\min(\{|P|\mid P\subseteq E\text{ is a }v_{i}-v_{j}\text{ path in }G\})\), that is, the distance between two points is defined to be the length (in edge-count) of the shortest path between their corresponding vertices in the graph. It is clear that there is a domatic partition of size \(m\) for \(G\) if and only if there is an \(m\)-solidarity-\(1\)-cover for \(S\) or, in other words, the minimal feasible radius for a solidarity cover with parameter \(m\) is \(1\). Hence, if there is a domatic partition then \(A\) must return a radius \(r<2\), and this outcome indicates that the minimal feasible radius is actually \(1\), since by our definition of \(d\) the feasible radii take discrete (integer) values. In the other direction, if the approximation algorithm returns a radius \(r<2\) then by the same reasoning it indicates the existence of a domatic partition of size \(m\) for \(G\).
There does not exist a polynomial-time \(c\)-approximation of the radius, \(c<\sqrt{2}\), for the SCP in the Euclidean space setting of dimension \(n\geq 2\), unless P=NP.
Proof.: We show that otherwise the \(3\)-coloring problem for planar orthogonal graph \(G=(V,E)\) drawn in an \(O(|V|^{2})\) area can be solved in polynomial time, in contradiction to it being
NP-hard. According to Lemma 1, given an instance of the mentioned coloring problem we can construct a set \(S\subseteq\mathbb{R}^{2}\) that admits a 3-solidarity-r-cover for every \(1\leq r<\sqrt{2}\), if and only if a 3-coloring exists. Assume there is a c-approximation algorithm, \(c<\sqrt{2}\), then given the set of points and partition size 3 it will return a 3-solidarity-r-cover with \(r<\sqrt{2}\) if and only if a 3-coloring exists for the graph.
### Bicriteria Approximation in the Euclidean 2D Setting
In the spirit of [9, Section 3.1] we show a deterministic bicriteria approximation scheme for the Euclidean 2D setting, which allows one to improve the approximability of the radius at the expense of the optimality of the partition size, and vice versa. The scheme enables a range of trade-offs between the two approximation factors, rather than a single fixed one. Throughout this section, we assume the distance function, \(d\), of our metric space to be the Euclidean distance. Similarly to Section 3.2 we rely on the polynomial size of the set of radii to be considered. Let \(S\subset\mathbb{R}^{2},|S|=n\) be a finite set of points in the plane. Let \(m\in[n]\) and let \(r^{*}=\min(\{r\mid\text{there exists an $m$-solidarity-$r$-cover}\})\). Assume a plane that is divided into squares of diameter \(r^{\prime}\) and let \(f(r,r^{\prime})\) be the maximal number of squares intersecting a circle of radius \(r\) placed anywhere in that plane. We describe a polynomial time algorithm which, given a desired radius approximation factor \(\beta\), finds a \(\frac{1}{f(r,(\beta-1)r)}m\)-solidarity-\(\beta r^{*}\)-cover. The main part of the algorithm is the subroutine SquaresSC (see pseudocode below). It receives as input the set of points, the partition size, the desired radius approximation factor \(\beta\), and a radius \(r\). If an \(m\)-solidarity-\(r\)-cover exists then SquaresSC outputs a partition, and if SquaresSC outputs a partition then that partition is a \(\frac{1}{f(r,(\beta-1)r)}m\)-solidarity-\(\beta r\)-cover. Hence, let
\[\hat{r}=\min(\{d(s_{i},s_{j})\mid s_{i},s_{j}\in S,\text{ SquaresSC}((S,d),m,\beta,d(s_{i},s_{j}))\neq\text{`false'}\})\]
then \(\hat{r}\leq\beta r^{*}\). As the number of candidates for \(\hat{r}\) is \(|\{d(s_{i},s_{j})\mid s_{i},s_{j}\in S\}|\leq\frac{1}{2}(n^{2}-n)\), and SquaresSC runs in polynomial time, we can find \(\hat{r}\) and a corresponding partition in polynomial time.
```
Input: Finite set of points in the plane \(S\subset\mathbb{R}^{2}\), partition size \(m\in[|S|]\), radius approximation factor \(1<\beta\in\mathbb{R}\), radius \(r\in\mathbb{R}\)
Output: A \(\frac{1}{f(r,(\beta-1)r)}m\)-solidarity-\(\beta r\)-cover or 'false'
Set \(m^{\prime}=\frac{1}{f(r,(\beta-1)r)}m\), \(S_{1},\ldots,S_{m^{\prime}}:=\emptyset\)
Let \((t,l,b,\rho)\) be the top, left, bottom, and right extreme coordinates of points in \(S\)
Split the rectangle with corners \((t,l),(t,\rho),(b,\rho),(b,l)\) into squares of diameter \((\beta-1)r\)
foreach square \(s\) do
    if \(s\) has at least \(m^{\prime}\) points then
        assign the points in \(s\) such that each of the \(m^{\prime}\) subsets is assigned at least one point
    else
        assign the points in \(s\) to the \(m^{\prime}\) subsets arbitrarily
if \(\{S_{1},\ldots,S_{m^{\prime}}\}\) is an \(m^{\prime}\)-solidarity-\(\beta r\)-cover then
    return \(S_{1},\ldots,S_{m^{\prime}}\)
else
    return 'false'
```
**Algorithm 2** SquaresSC
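A corresponding sketch of SquaresSC is given below. The grid bucketing and the final feasibility check follow the pseudocode; the `f_value` argument stands for \(f(r,(\beta-1)r)\) and defaults to 16, which by Lemma 10 is the correct value only for \(\beta=2\). Again, this is an illustration under our own assumptions rather than an official implementation, and the two assignment branches of the pseudocode are merged.

```python
import numpy as np

def squares_sc(points, m, beta, r, f_value=16):
    """A sketch of SquaresSC for the Euclidean plane. Returns an assignment list
    (subset index per point) or None in place of 'false'."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    m_prime = m // f_value                      # partition size (1/f(r,(beta-1)r)) * m
    if m_prime < 1:
        return None
    side = (beta - 1) * r / np.sqrt(2)          # squares of diameter (beta-1)r

    # Bucket the points into grid squares anchored at the bounding-box corner.
    origin = pts.min(axis=0)
    cells = {}
    for idx, p in enumerate(pts):
        cells.setdefault(tuple(np.floor((p - origin) / side).astype(int)), []).append(idx)

    # Spread each square over the m' subsets (squares with >= m' points hit them all).
    assign = np.zeros(n, dtype=int)
    for members in cells.values():
        for slot, idx in enumerate(members):
            assign[idx] = slot % m_prime

    # Accept only if every point is within beta*r of some point of every subset.
    for p in pts:
        d = np.linalg.norm(pts - p, axis=1)
        if any(not np.any(d[assign == s] <= beta * r) for s in range(m_prime)):
            return None
    return assign.tolist()

# With beta = 2 one trades a factor-16 loss in partition size for a 2r radius,
# in line with Theorem 11.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(400, 2))
print(squares_sc(pts, m=32, beta=2, r=4.0) is not None)
```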
**Lemma 9**.: _Let \(S\subseteq\mathbb{R}^{2},|S|=n\) be a finite set of points in the plane, \(m\in[n],r\in\mathbb{R},1<\beta\in\mathbb{R}\), then:_
1. _If_ \(S\) _admits an_ \(m\)_-solidarity-_\(r\)_-cover, then_ \(\textsc{SquaresSC}((S,d),m,\beta,r)\) _will return a partition._
2. _If_ \(\textsc{SquaresSC}((S,d),m,\beta,r)\) _returns a partition, then that partition is a_ \(\frac{1}{f(r,(\beta-1)r)}m\)_-solidarity-_\(\beta r\)_-cover._
Proof.: Assume \(S\) admits an \(m\)-solidarity-\(r\)-cover. Let \(S_{1},\ldots,S_{m^{\prime}}\) be the partition constructed before the last line of SquaresSC, and let \(p\in S\). Then, assuming any division of the plane into squares of diameter \((\beta-1)r\), the circle of radius \(r\) around \(p\) necessarily intersects with a square that contains at least \(\frac{1}{f(r,(\beta-1)r)}m\) points. Hence, by the first "For each" statement of the subroutine, each subset \(S_{i},i\in[m^{\prime}]\) has a point that belongs to that square and so it is at most \(r+(\beta-1)r=\beta r\) distant from \(p\).
By the last line of the subroutine, if the constructed partition is not an \(\frac{1}{f(r,(\beta-1)r)}m\)-solidarity-\(\beta r\)-cover then 'false' will be returned.
**Lemma 10**.: _Assume a plane that is divided into squares of diameter \(r\); then the maximal number of squares intersecting a circle of radius \(r\) placed on that plane is exactly 16._
Proof.: To show that 16 is an upper bound, instead of looking at a circle of radius \(r\) we look at a square (parallel to the axes) with side-length \(2r\), that is, big enough to contain the circle. The side length of a grid-square is \(\frac{r}{\sqrt{2}}\); hence, the maximum number of grid columns (respectively rows) the big square can intersect is 4, and so the maximum number of grid-squares it can intersect is 16. To show that the maximal number of intersecting squares is at least 16 we consider, for example, the circle of radius \(r\) around the origin. Assume the grid is aligned with the origin and the top-left coordinates of square \(Q_{ij}\) are \((i\frac{r}{\sqrt{2}},(j+1)\frac{r}{\sqrt{2}})\); it is easy to verify that the circle intersects the squares \(\{Q_{i,j}\}_{i,j\in\{-2,-1,0,1\}}\).
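The bound of Lemma 10 can also be checked numerically. The sketch below counts the grid squares of diameter \(r\) (side \(r/\sqrt{2}\)) whose closed cells intersect the closed disk of radius \(r\): the aligned configuration from the proof attains 16, and randomly placed centres never exceed it. The tolerance constant and the sampling scheme are our own choices.

```python
import numpy as np

def cells_hit(cx, cy, r):
    """Count grid squares of side r/sqrt(2) (diameter r) whose closed cell
    intersects the closed disk of radius r centred at (cx, cy)."""
    s = r / np.sqrt(2)
    i_lo, i_hi = int(np.floor((cx - r) / s)) - 1, int(np.floor((cx + r) / s)) + 1
    j_lo, j_hi = int(np.floor((cy - r) / s)) - 1, int(np.floor((cy + r) / s)) + 1
    count = 0
    for i in range(i_lo, i_hi + 1):
        for j in range(j_lo, j_hi + 1):
            # closest point of cell [i*s,(i+1)*s] x [j*s,(j+1)*s] to the centre
            qx = min(max(cx, i * s), (i + 1) * s)
            qy = min(max(cy, j * s), (j + 1) * s)
            if (qx - cx) ** 2 + (qy - cy) ** 2 <= r ** 2 + 1e-12:
                count += 1
    return count

rng = np.random.default_rng(0)
r = 1.0
print(cells_hit(0.0, 0.0, r))   # aligned configuration from the proof: 16
print(max(cells_hit(*rng.uniform(0, 1, 2), r) for _ in range(10_000)))  # stays <= 16
```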
**Theorem 11**.: _Given a partition size \(m\), let \(r^{*}=\min(\{r\ |\) there exist an \(m\)-solidarity-\(r\)-cover\(\})\). Then, a \(\frac{1}{16}\)m-solidarity-\(2r^{*}\)-cover can be found in polynomial time._
Proof.: There are \(O(|S|^{2})\) possible radii, thus we can search in polynomial time for the minimal radius for which _SquaresSC_ with \(\beta=2\) returns a partition. By Lemma 9 and Lemma 10 that partition is necessarily a \(\frac{1}{16}m\)-solidarity-\(2r^{*}\)-cover.
## 4 Conclusion and Outlook
We have seen that the SCP is a generalization of the DNP also in the Euclidean space setting, and bears the same decision and partition-size approximability hardness. Already for the very specific but important Euclidean 2D setting, in which it coincides with the DNP for unit disc graphs, the SCP is hard to decide. Still, the SCP brings something new with it. Having the radius parameter opens the possibility of an optimal partition size in return for a 3-approximation of the radius. It is possible that this approximability can be improved, but only down to a factor of 2 in the general case and \(\sqrt{2}\) in more specific settings. Alternatively, in the Euclidean 2D setting both optimalities can be compromised simultaneously, improving the approximation of the radius (potentially even beyond the single-criterion lower bound) while degrading the approximation of the partition size.
Several questions naturally follow our investigation of the SCP:
**Radius Approximation Bound.** The gap between the radius approximability bounds may be narrowed.
**Bicriteria Approximation.** The trade-off between radius-approximability and partition size approximability may be improved. Also, bicriteria hardness results may be proved.
**Redundancy Extension.** We have defined a point \(p\in S\) to be covered by a subset \(S_{i}\subseteq S\) if the subset contains at least one point close to it, \(p^{\prime}\in S_{i},\ d(p,p^{\prime})\leq r\). In various potential applications, e.g. sensing, it is beneficial to have a coverage redundancy i.e. to consider a point \(p\in S\) as covered by a subset \(S_{i}\subseteq S\) only if the subset contains \(\ell>1\) points close to it, that is, if there exist \(p_{1},\ldots,p_{\ell}\in S_{i},\ \forall j\in[\ell]\ d(p,p_{j})\leq r\). Obviously the redundancy version is at least as hard but it raises the question whether it is also at most as hard i.e. whether it has the same approximation possibilities.
**Outliers.** We have considered a partition to be a cover only if every point is covered by every subset. In the somewhat similar \(k\)-center problem it was observed that by allowing a small fraction of outliers the radius could be significantly reduced. One could then hope that in the SCP, requiring only a high fraction of the points (rather than all of them) to be covered by every subset would allow finding a solidarity cover with a smaller radius in polynomial time.
|
2303.14204
|
On the dearth of C-enhanced metal-poor stars in the Galactic bulge
|
The chemical fingerprints of the first stars are retained within the
photospheres of ancient unevolved metal-poor stars. A significant fraction of
these stellar fossils is represented by stars known as Carbon-Enhanced
Metal-Poor (CEMP), $\rm [C/Fe]>+0.7$ and $\rm [Fe/H]<-2$, which are likely
imprinted by low-energy primordial supernovae. These CEMP stars are largely
observed in the Galactic halo and ultra-faint dwarf galaxies, with values
reaching $\rm [C/Fe]=+4.5$. The Galactic bulge is predicted to host the oldest
stars, but it shows a striking dearth of CEMP stars with $\rm [C/Fe]\gtrsim
+2.0$. Here we explore the possible reasons for this anomaly by performing a
statistical analysis of the observations of metal-poor stars in combination
with the predictions of $\Lambda$CDM models. We suggest that the dearth of CEMP
stars with high $\rm [C/Fe]$ is not due to the low statistics of observed
metal-poor stars but is the result of the different formation process of the
bulge. $N$-body simulations show that the first star-forming halos which end up
in the bulge are characterized by the highest star-formation rates. These rates
enable the formation of rare massive first stars exploding as pair-instability
supernovae (PISNe), which wash out the signature of primordial faint
supernovae. We demonstrate that the mean $\rm [C/Fe]$ of first stars polluted
environments decreases with the increasing contribution of PISNe. We conclude
that the dearth of CEMP stars in the Galactic bulge indirectly probes the
existence of elusive PISNe, and propose a novel method which exploits this lack
to constrain the mass distribution of the first stars.
|
Giulia Pagnini, Stefania Salvadori, Martina Rossi, David Aguado, Ioanna Koutsouridou, Ása Skúladóttir
|
2023-03-24T18:00:01Z
|
http://arxiv.org/abs/2303.14204v1
|
# On the dearth of C-enhanced metal-poor stars in the Galactic bulge
###### Abstract
The chemical fingerprints of the first stars are retained within the photospheres of ancient unevolved metal-poor stars. A significant fraction of these stellar fossils is represented by stars known as Carbon-Enhanced Metal-Poor (CEMP), [C/Fe] \(>\) +0.7 and [Fe/H] \(<-2\), which are likely imprinted by low-energy primordial supernovae. These CEMP stars are largely observed in the Galactic halo and ultra-faint dwarf galaxies, with values reaching [C/Fe] = +4.5. The Galactic bulge is predicted to host the oldest stars, but it shows a striking dearth of CEMP stars with [C/Fe] \(\gtrsim\) +2.0. Here we explore the possible reasons for this anomaly by performing a statistical analysis of the observations of metal-poor stars in combination with the predictions of \(\Lambda\)CDM models. We suggest that the dearth of CEMP stars with high [C/Fe] is not due to the low statistics of observed metal-poor stars but is the result of the different formation process of the bulge. \(N\)-body simulations show that the first star-forming halos which end up in the bulge are characterized by the highest star-formation rates. These rates enable the formation of rare massive first stars exploding as pair-instability supernovae (PISNe), which wash out the signature of primordial faint supernovae. We demonstrate that the mean [C/Fe] of first stars polluted environments decreases with the increasing contribution of PISNe. We conclude that the dearth of CEMP stars in the Galactic bulge indirectly probes the existence of elusive PISNe, and propose a novel method which exploits this lack to constrain the mass distribution of the first stars.
keywords: Galaxy: bulge - Stars: carbon, Population III - galaxies: formation, high-redshift
## 1 Introduction
Today we are surrounded by stars - the Milky Way galaxy alone contains hundreds of billions of stars - but there was a time, billions of years ago, when stars were absent and the Universe was extremely simple. At that time the Universe was mostly neutral and mainly composed by hydrogen and helium produced during the Big Bang Nucleosynthesis. The first stars played a fundamental role in the evolution of the Universe, as they were responsible for the transition from this very simple and early stage to the more complex one visible today. In fact, the first stars were the sources of the first chemical elements heavier than lithium and of the first hydrogen-ionizing photons. Hence, they initiated the extended processes of _reionization_ and _metal-enrichment_.
Within the standard \(\Lambda\)CDM model of structure formation, the first stars (referred to as _Population III_ or Pop III stars) are predicted to have formed within the first few hundred million years after the Big Bang, corresponding to redshifts of \(z\sim 15-20\). The primordial birth environment of Pop III stars, characterized by lack of heavy elements and dust, may have resulted in a higher Pop III characteristic stellar mass with respect to present-day stars (e.g. Tan & McKee, 2004; Hososkawa et al., 2011) although sub-solar mass stars might also have been able to form (e.g. Greif et al., 2011; Stacy et al., 2016). Since the primordial star formation process is still very poorly understood, we can state that the Initial Mass Function (IMF) of Pop III stars is almost completely unknown. Despite long searches (Beers & Christlieb, 2005; Caffau et al., 2013), zero-metallicity stars have not yet been observed, confirming the hypothesis of a primordial IMF biased towards more massive stars than the present-day IMF (e.g. Salvadori et al., 2007; Magg et al., 2019; Rossi et al., 2021). Massive Pop III stars explode as supernovae (SNe) polluting the surrounding medium with their chemical products, whose yields depend upon the mass of the progenitor star along with the explosion energy (e.g. Heger & Woosley, 2010; Nomoto et al., 2004). Hence, even if we cannot directly observe short-lived zero-metallicity stars, we can still catch their long-lived descendants (Keller et al., 2007; Starkenburg et al., 2017). Low-mass (PopII) stars formed in environments enriched by the chemical products of the first stars to the critical metallicity value, \(Z>Z_{cr}\), at which normal star formation is expected to proceed (Bromm et al., 2001; Schneider et al., 2002). In this context Stellar Archaeology operates: searching for the chemical signatures of the first stellar generations in the photospheres of old (\(>\) 12 Gyr) and _metal-poor_ stars that dwell in our Galaxy and its ancient dwarf satellites.
Which components of the Milky Way should be examined to find the oldest living stars? The first surveys looking for first star descendants have attempted to select metal-poor stars in the Galactic stellar halo (e.g. Beers et al., 1992; Christlieb, 2003), which is expected to
be the most metal-poor component. Typically the iron-abundance1, [Fe/H], is measured in these surveys and used as a metallicity indicator. The most metal-poor stars are then selected for high-resolution follow-up, often revealing chemically peculiar stars with strong enhancements or deficiencies of particular elements (e.g. Christlieb, 2003; Caffau et al., 2011; Keller et al., 2014; Bonifacio et al., 2015; Caffau et al., 2016; Francois et al., 2018; Aguado et al., 2018; Gonzalez Hernandez et al., 2020).
Footnote 1: Throughout this paper we will be using the notation \([A/B]\equiv\log_{10}(N_{A}/N_{B})_{*}-\log_{10}(N_{A}/N_{B})_{\odot}\), where \(N_{A}\) and \(N_{B}\) refer to the numbers of atoms of elements A and B, respectively.
The best-known type of chemically interesting stellar object at [Fe/H]!\(\sim-2\) is the _carbon-enhanced metal-poor_ (CEMP) class, which has [C/Fe] \(>+0.7\)(e.g. Beers & Christlieb, 2005; Aoki et al., 2007). This class can be divided into two main populations: (i) carbon-rich stars that also exhibit an excess in heavy elements formed by the slow neutron-capture processes, having [Ba/Fe]\(>\)1 and named CEMP-s stars, and (ii) carbon-rich stars with no excess of the heavy elements, having [Ba/Fe]\(<\)0 and known as CEMP-no stars. The CEMP-s stars are commonly assumed to be chemically enriched by mass transfer from a binary companion star that has gone through the asymptotic giant branch (AGB) phase (Abate et al., 2015), and these objects are preferentially found in binary systems (e.g. Suda et al., 2004; Lucatello et al., 2005; Starkenburg et al., 2014; Hansen et al., 2016). On the other hand, CEMP-no stars are not primarily found in binary systems (Lucatello et al., 2005; Norris et al., 2012; Starkenburg et al., 2014; Hansen et al., 2016) and even those that have a binary companion (e.g. Arentsen et al., 2019) show high values of \({}^{12}\)C/\({}^{13}\)C, which implies that the surface composition has not been altered by mass transfer (see Aguado et al., 2022, 2023). Hence the C-excess in CEMP-no stars is expected to be representative of the interstellar medium (ISM) out of which they formed, which was likely primarily polluted by the first stellar generations (e.g. Salvadori et al., 2015; De Bennassuti et al., 2017).
The observed chemical abundance patterns of the most Fe-poor, \([{\rm Fe/H}]<-4\), CEMP-no stars are indeed consistent with the yields of Pop III stars exploding as a _faint supernovae_ and experiencing mixing and fallback (e.g. Iwamoto et al., 2005; Marassi et al., 2014). Because of their low explosion energy, only the outer layers of the Pop III progenitor star, which are rich in C and other light elements, can be expelled by faint SNe. On the other hand, the inner part, which is rich in Fe-peak elements, falls back into the center forming a neutron star or a black hole (e.g. Heger & Woosley, 2010). The increased fraction of CEMP-no stars with decreasing [Fe/H] further supports such a link with Pop III star pollution (De Bennassuti et al., 2017). Furthermore, the high values of \({}^{12}\)C/\({}^{13}\)C recently observed in CEMP-no stars are consistent with an imprint from low-energy Pop III SNe (Aguado et al., 2023) and rule out the so-called'spinstars' (e.g. Meynet et al., 2006) as main pollutants. CEMP-no stars have been found in a significant fraction in the Galactic halo (Yong et al., 2012; Placco et al., 2014; Carollo et al., 2014; Lee et al., 2017; Yoon et al., 2018; Lee et al., 2019) and in the faintest satellites of the Milky Way, the so-called ultra-faint dwarf galaxies (UFDs) (Spite et al., 2018; Norris et al., 2010; Lai et al., 2011; Gilmore et al., 2013), which are the oldest galaxies in the Local group (e.g. Simon et al., 2010; Gallart et al., 2021). On the contrary, CEMP-no stars seem to be quite rare in the more luminous classical dwarf spheroidal (dSph) galaxies (e.g. Skoladotti et al., 2015; Yoon et al., 2020), which show more complex and longer star formation histories with respect to UFDs (see Salvadori et al., 2015, for a global view). Ultimately, these observational results confirm that the descendants of the first stars are preferentially found in purely ancient environments.
Relying on the \(\Lambda\)CDM model and hierarchical clustering, White & Springel (2000) first predicted the oldest stars to be in the inner part of the Milky Way, i.e. the Galactic bulge. This idea was confirmed in the following years through different numerical simulations of the Milky Way (Diemand et al., 2005; Tumlinson, 2009; Salvadori et al., 2010; Starkenburg et al., 2016). More recent simulations have shown this to also hold for different galaxies, including the dwarf satellites of Lyman Break galaxies at redshift \(z\approx 6\)(Gelli et al., 2020). Unfortunately, the Galactic bulge is a dusty and overcrowded region, predominantly populated by metal-rich stars, so metal-poor objects are difficult to find (Zoccali et al., 2008; Ness et al., 2013; Howes et al., 2016). The EMBLA Survey (Extremely Metal-poor bulge stars with AAOmega), has been the first in attempting to discover candidate metal-poor stars in this inner region (Howes et al., 2014, 2016). Although it successfully identified \(\approx 30\) stars with [Fe/H] \(<-2\), from its observations it is clear that there is a _dearth_ of CEMP-no stars in this region; in fact, only one CEMP-no star was found, having [Fe/H] \(=-3.48\) and [C/Fe] \(=+0.98\)(Howes et al., 2015). A reason for a lack of CEMP stars in the EMBLA sample could be a selection bias introduced by the SkyMapper photometric selection since photometric colours may have been affected by the strong CH absorption of CEMP stars with [C/Fe] \(>+2\), placing them outside the selected region in the colour-colour diagram. Therefore, the EMBLA Survey could have missed stars with extremely large C enhancements but stars with mild C enhancements should still have been found.
More recently, the Pristine Inner Galaxy Survey (PIGS Arentsen et al., 2020) targeted the Galactic bulge with low/intermediate resolution spectroscopy (R \(\approx\) 1300 at 3700-5500 A, and \(R\approx\) 11000 at 8400-8800 A), collecting 1900 stars with [Fe/H] \(<-2.0\), which is currently the largest sample of confirmed very metal-poor stars in the inner Galaxy. Since s-process abundance measurements for this sample are unavailable, a different CEMP classification was applied based on the absolute carbon abundance2, A(C), and the [Fe/H] of the stars (Yoon et al., 2016). According to Bonifacio et al. (2015) (see also Spite et al., 2013) CEMP stars can be divided in two main groups: (i) the high-carbon band, A(C) \(>\) 7.4, largely containing CEMP-s stars that typically have higher iron-abundance; (ii) the low-carbon band, A(C) \(<\) 7.4, predominantly containing CEMP-no stars at lower [Fe/H]. Using this classification while correcting for evolutionary effects in Red Giant Branch (RGB) stars (e.g. Placco et al., 2014), the PIGS survey identified 24 new CEMP-no candidate stars in the Galactic bulge. Still, the overall fraction of CEMP-no stars obtained by PIGS is only \(\lesssim 6\)% at [Fe/H]\(<-2\), i.e. much lower than what is found in the Galactic halo (\(\approx 20\)%, see Arentsen et al., 2021). Furthermore, \(\approx 9\)% of their CEMP-no candidates have only a moderate C-enhancement, \(+0.7<\) [C/Fe] \(<+1.2\), with only one CEMP-no star at [Fe/H] \(\simeq-3.5\) that shows a high [C/Fe] \(\simeq+2.2\), while similar values are more frequently observed both in the Galactic halo and in UFDs (e.g. Fig. 1 from Salvadori et al., 2015). As explained in Arentsen et al. (2021) and Arentsen et al. (2022), the selection of metal-poor stars in PIGS could be significantly affected by biases in the photometry against CEMP stars with strong CH lines, i.e. those with very high carbon abundances and/or cooler temperatures. However, stars with [Fe/H] \(<-3.0\) are not expected to be substantially biased, except for the coolest (\(T_{\rm eff}<4750\) K) and most carbon-rich
([C/Fe] \(>\) +2.0) stars. For stars with [Fe/H] \(<\) \(-\)2.0 and \(<\) \(-\)2.5, slightly warmer and less carbon-rich stars are also affected, but still most of the CEMP-no stars are expected to stay within their selection. Therefore they expect that the CEMP-no fraction is less impacted by a photometric bias compared to that of CEMP-s stars.
These observational results raise several questions: Why is there an apparent dearth of CEMP-no stars in the Galactic bulge? And why are CEMP-no stars with high [C/Fe] so rare in this ancient environment? Is it entirely due to a bias in the selection of metal-poor stars or is there a process that reduces the CEMP-no fraction in the Galactic bulge? The aim of this paper is to answer these questions, and it is structured as follows. In Section 2 we will carry out a preliminary analysis on the observational data of metal-poor stars in the different regions of the Local Group in order to understand whether the dearth of C-enhanced stars in the Galactic bulge is due to a statistical effect. In Section 3 we will illustrate the cosmological model used to follow the evolution of the Galaxy that combines a \(N\)-body simulation, following the hierarchical assembly of a Milky Way (MW)-like galaxy, with a semi-analytical model that follows the evolution of baryons. The results obtained from this model and the additional analytical calculations performed will be described in Sec. 4. Finally, in Sec. 5 we will draw the conclusions of our analysis with the associated implications, and we will list the prospects for future works.
## 2 Observational Data Analysis
To understand why CEMP-no stars are less common in the Galactic bulge with respect to other ancient environments of the Local Group, we should first analyze if this dearth can be just a statistical effect. Indeed, since CEMP-no stars are more common at decreasing [Fe/H], the probability to discover them in environments dominated by metal-rich stars, such as the Galactic bulge, can be intrinsically very low (Salvadori et al., 2015).
### The fraction of CEMP stars
Henceforth we will simply speak of CEMP stars when referring to the subclass of CEMP-no stars, i.e. to stars with [Fe/H] \(<\) \(-\)2, [C/Fe] \(>\) +0.7 within the low-carbon band, A(C) \(<\) 7.4. The upper panel of Figure 1 shows the measured [C/Fe] vs [Fe/H] for a sample of 984 halo stars with carbon measurements from the JINAbase3 (Abohalima and Frebel, 2018). The abundance and stellar parameter data collected there are based on high-resolution spectroscopic studies (\(R=\lambda/\Delta\lambda\gtrsim 15\,000\), with the majority having \(R=30\,000-40\,000\)) found in the literature. In the same plot we also show the measurements for the sample of bulge stars (Arentsen private comm.) as selected in Arentsen et al. (2021) in their Fig. 9. In detail, they select measurements from a FERRE analysis with the following cuts on gravity, temperature and [C/Fe] uncertainty: \(\log g<3.5\), \(4600\,\mathrm{K}<\mathrm{T}_{\mathrm{eff}}<5500\,\mathrm{K}\) and \(\epsilon_{\mathrm{[C/Fe]}}<0.5\). As stated in Sec. 1, for bulge stars no measurements of the s-process element Ba are available. Thus, the A(C) classification is used to distinguish between CEMP-no and CEMP-s candidates. Although many halo stars have Ba measurements, here we adopt the same definition to separate CEMP-no and CEMP-s stars in order to have a self-consistent classification. As explained in detail in Yoon et al. (2016), despite the differences between the two criteria, the clear distinction in A(C) between CEMP-s and CEMP-no stars in their halo sample appears to be as successful, and likely more astrophysically fundamental, for the separation of these sub-classes as the previously recommended criterion based on [Ba/Fe] abundance ratios. Fig. 1 (upper panel) clearly shows how the [C/Fe] values in Galactic halo stars are higher at [Fe/H] \(<\)\(-\)3 compared to stars in the bulge, and how in general CEMP-no stars are more frequent in the halo, although we cannot exclude that this is due to a statistical effect given the rarity of extremely metal-poor stars in the Galactic bulge (see also Sec. 2.3). To quantify this, we compute the fraction of CEMP-no stars, defined as the ratio of CEMP-no stars over the total number of stars at a given [Fe/H]:
Footnote 3: [https://jinabase.pythonanywhere.com/](https://jinabase.pythonanywhere.com/)
\[F_{\mathrm{CEMP}}(\mathrm{[Fe/H]})=\frac{N_{\mathrm{CEMP}}(\mathrm{[Fe/H]})}{ N_{\star}(\mathrm{[Fe/H]})}. \tag{1}\]
The Galactic halo is currently the best-observed ancient environment of the Local Group and the one for which we have the largest statistics regarding the chemical properties of the metal-poor stellar population (e.g. Yong et al. 2012b; Bonifacio et al. 2021). Therefore we compute \(F_{\rm CEMP}^{halo}\) for the sample of halo stars in Fig. 1. The results are shown as a grey histogram in the bottom panel of Fig. 1, where we have also computed \(F_{\rm CEMP}^{halo}\) for CEMP stars with high carbon-excess, [C/Fe] \(>\)\(+\)2.0 (cyan histogram). As we can see, \(F_{\rm CEMP}\) increases as [Fe/H] decreases and reaches the value of \(\approx\) 1 for [Fe/H] \(\leq-\)5. Because of the large number of CEMP-no stars at [Fe/H] \(\leq-\)4 discovered during the last 10 years (see e.g., Yong et al. 2012a; Norris et al. 2012; Keller et al. 2014; Bonifacio et al. 2015; Aguado et al. 2017; Da Costa et al. 2019; Nordlander et al. 2019; Li et al. 2022, and references therein), we can now derive \(F_{\rm CEMP}\) at different [Fe/H] values, equally spaced every 0.5 dex. Still, we should note that these values suffer from various systematic uncertainties. On one hand, as suggested in Arentsen et al. (2022), high-resolution spectroscopic compilations of very metal-poor stars ([Fe/H] \(<-\)2.0) might be biased towards (very) carbon-rich objects at the 'high' metallicity end ([Fe/H] \(>-\)3.0) due to their follow-up strategies. This leads to an overestimate of the CEMP fraction in this metallicity range. Furthermore, \(F_{\rm CEMP}^{halo}\) might increase when correcting the carbon measurements to account for the internal depletion of carbon occurring in RGB stars (Placco et al. 2014). On the other hand, \(F_{\rm CEMP}^{halo}\) might decrease when accounting for non-local thermodynamic equilibrium (non-LTE) effects, which can drastically lower the measured [C/Fe] value (Amarsi et al. 2019). Finally, the selection function of the data might suffer some bias in either direction. Since the two main effects compensate and non-LTE corrections are only available for a handful of stars (Amarsi et al. 2019), we will consider the derived \(F_{\rm CEMP}^{halo}\) as the reference fraction of Galactic halo stars at different iron abundances. It is also worth mentioning some work suggesting that the frequency of CEMP stars varies with location in the Galactic halo (Yoon et al. 2018; Lee et al. 2019b). Despite the mentioned caveats, as an initial hypothesis, we will assume that the fraction of CEMP-no stars is _the same in all environments_ and equal to that of the Galactic halo, \(F_{\rm CEMP}\equiv F_{\rm CEMP}^{halo}\).
Figure 1: _Top panel_: Measured [C/Fe] vs [Fe/H] for stars in the Galactic halo (black and grey points; JINAbase) and the bulge PIGS sample (orange and red points; Arentsen et al., 2021). _Bottom panel_: Fraction of CEMP-no stars in the Galactic halo in different [Fe/H] ranges: gray includes all CEMP-no stars, while cyan is only CEMP-no stars with carbon-excess, [C/Fe] \(>\) +2.0. Error bars are Poissonian errors derived from Gehrels (1986). The number of stars in each bin, \(N_{\star}\) ([Fe/H]), is listed on top.
### Ancient environments in the Local Group
To understand whether CEMP-no stars might simply be hidden in the Galactic bulge, we need to derive the probability to catch CEMP-no stars while blindly observing the bulge and then compare to other ancient environments in the Local Group (Salvadori et al. 2015). To this end we analyzed a number of environments with _increasing_ stellar mass (luminosity) and correspondingly an _increasing_ average stellar metallicity. In particular we considered (see Table 1 for details): the least luminous UFDs, the most luminous UFD _Bootes I_, the dSph galaxy _Sculptor_, the dSph galaxy _Fornax_ and the Galactic _bulge_.
The carbon measurements in the least luminous UFDs are from high-resolution spectroscopic studies (_Segue 1_: Norris et al. 2010; Frebel et al. 2014. _Pisces II_: Spite et al. 2018. _Ursa Major II and Coma Berenice_: Frebel et al. 2009. _Leo IV_: Simon et al. 2010). Only a few stars have been observed in each UFD because these galaxies are faint and distant, so that only a few RGB stars are available for spectroscopic observations. For this reason we consider the stars belonging to the faintest UFDs all together. Stars in _Bootes I_, the most luminous UFD, are studied with low-resolution spectroscopy (Lai et al. 2011; Norris et al. 2010), with the only exception of seven stars (Gilmore et al. 2013). In the _Sculptor_ dSph galaxy, many carbon measurements are available from both low- (Kirby et al. 2015) and high-resolution spectroscopic studies (Frebel et al. 2010; Tafelmeyer et al. 2010; Starkenburg et al. 2014; Simon et al. 2015; Jablonka et al. 2015; Skúladóttir et al. 2015). Finally, for the _Fornax_ dSph we include the low-resolution measurements from Kirby et al. (2015).
For the Galactic bulge, we use data from the JINAbase and those provided by the high-resolution, large multi-object APOGEE spectroscopic survey (Data Release 16, Jonsson et al. 2020).
\begin{table}
\begin{tabular}{c c c c c c} \hline & least luminous UFDs & Boötes I & Sculptor & Fornax & bulge \\ \hline \(L_{*}\) & \(\leq 10^{4}L_{\odot}\) & \(10^{4.5}L_{\odot}\) & \(10^{6.34}L_{\odot}\) & \(10^{12.35}L_{\odot}\) & \(10^{10}L_{\odot}\) \\ \hline \(\langle\)[Fe/H]\(\rangle\) & \(\leq-2.2\) & \(-2.1\) & \(-1.8\) & \(-1.0\) & 0.0 \\ \hline \end{tabular}
\end{table}
Table 1: Luminosities and average stellar metallicities of the Local Group environments taken into account in this work.
Figure 2: The MDF (left panel) and the probability of observing a CEMP star (right panel) for the different environments displayed in order of increasing luminosity (mass): ultra-faint galaxies (orange), Boötes (red), Sculptor (pink), Fornax (green) and the Galactic bulge (blue).
### Are CEMP stars hidden in the Galactic bulge?
For a "blind" survey that does not pre-select the most metal-poor stars, the ability to find CEMP stars is naturally limited by the number of stars that exist at each [Fe/H], i.e. the Metallicity Distribution Function (MDF, Salvadori et al., 2015). In Fig. 2 (left), we show the normalized MDFs for the different environments studied. Note that the total number of stars observed in these environments strongly increases with galaxy luminosity: only 19 stars constitute the MDF in the faintest UFDs, while for the Galactic bulge we have \(>17,000\) stars.
We then ask ourselves: if we assume that the fraction of CEMP stars is "Universal" and equal to that of the Galactic halo, \(F_{\rm CEMP}^{halo}\), what is the joint probability, \(P_{\rm obs}\), to observe a star that has a given [Fe/H] value, which is also carbon-enhanced? This is done by combining two independent functions: the halo fraction of CEMP stars and the normalized MDF derived for each environment:
\[P_{\rm obs}({\rm[Fe/H]})={\rm F}_{\rm CEMP}^{halo}({\rm[Fe/H]})\times\frac{{ \rm N_{*}}({\rm[Fe/H]})}{{\rm N_{tot}}}. \tag{2}\]
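As an illustration of how Eqs. (1) and (2) can be evaluated in practice, the sketch below computes the CEMP fraction of a reference (halo) sample per [Fe/H] bin and weights it by the normalised MDF of another environment. The binning, the simplified CEMP cut on [C/Fe] alone, and the toy input catalogues are our assumptions; this is not the actual JINAbase/PIGS analysis pipeline.

```python
import numpy as np

def cemp_fraction(feh, cfe, bins, cfe_limit=0.7):
    """Eq. (1): N_CEMP/N_* per [Fe/H] bin for a reference (halo) sample.
    Simplified CEMP cut on [C/Fe] alone (no A(C) band, no Placco correction)."""
    frac = np.full(len(bins) - 1, np.nan)
    for k in range(len(bins) - 1):
        in_bin = (feh >= bins[k]) & (feh < bins[k + 1])
        if in_bin.any():
            frac[k] = np.mean(cfe[in_bin] > cfe_limit)
    return frac

def p_obs(feh_env, halo_fraction, bins):
    """Eq. (2): probability of blindly catching a CEMP star in an environment,
    i.e. the halo CEMP fraction weighted by the environment's normalised MDF."""
    mdf, _ = np.histogram(feh_env, bins=bins)
    return halo_fraction * mdf / len(feh_env)

# Toy example with invented numbers, only to show the call pattern.
rng = np.random.default_rng(0)
bins = np.arange(-5.0, -1.9, 0.5)            # edges from -5 to -2 in 0.5 dex steps
halo_feh = rng.uniform(-5.0, -2.0, 1000)
halo_cfe = rng.normal(0.6, 0.8, 1000)        # fake halo [C/Fe] values
f_halo = cemp_fraction(halo_feh, halo_cfe, bins)
bulge_feh = rng.normal(-0.3, 0.9, 20000)     # metal-rich "bulge"-like MDF
print(f_halo)
print(p_obs(bulge_feh, f_halo, bins))        # tiny values: CEMP stars are hidden
```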
In the right panel of Fig. 2, we show \(P_{\rm obs}\) for the different environments studied, and we note a clear trend: the brighter is the galaxy, and thus more shifted towards higher [Fe/H] values, the lower is the overall probability to observe a CEMP star. In other words, in the most massive galaxies, stars in the low-[Fe/H] all become almost "invisible" with respect to the metal-rich stars since they are much rarer. This is especially evident in the bulge since the peak of the resulting probability (\(P_{\rm obs}<0.01\%\)) is two orders of magnitude lower than the values obtained in UFDs (\(P_{\rm obs}\sim 3\%\)), where more CEMP stars have actually been discovered. This peak of \(P_{\rm obs}\) in the bulge, at [Fe/H] \(\simeq-3.5\), is almost flat for \(-3.5<\) [Fe/H] \(<-2.0\), and drops significantly at [Fe/H] \(<-3.5\).
In conclusion, the scarcity of stars at low [Fe/H] makes it very difficult to search for CEMP stars in luminous environments, such as the Galactic bulge, that are dominated by metal-rich stars. The dearth of CEMP stars in the Galactic bulge can therefore partially be ascribed to this reason. However, neither EMBLA nor PIGS is a "blind" survey, since they specifically search for the most metal-poor stars in the Galactic bulge. PIGS, in particular, obtained a sample of 1900 stars with [Fe/H] \(<-2\). Thus, if we hypothesize that the CEMP fraction within the bulge is equal to the one we obtained for the Galactic halo, namely \(F_{\rm CEMP}^{bulge}({\rm[Fe/H]}<-2)=F_{\rm CEMP}^{halo}({\rm[Fe/H]}<-2)\approx 24\%\), then we can estimate that \(\sim 450\) CEMP stars should have been observed by PIGS. Instead, \(<62\) CEMP(-no) candidates have been identified so far (Arentsen et al., 2021). In agreement with the PIGS papers, we thus conclude that the dearth of CEMP stars in the Galactic bulge cannot be only a consequence of a statistical effect in this metal-rich environment. It is also interesting that we derive a peak probability of observing CEMP stars in the Galactic bulge at [Fe/H] \(\simeq-3.5\), i.e. at the metallicity of the only CEMP star with high [C/Fe] observed by Arentsen et al. (2021).
## 3 Cosmological model description
In this Section we will briefly recap the cosmological model used to identify the first star-forming halos whose stars are currently located in the Galactic bulge and to investigate their properties. The model combines a \(N\)-body simulation that follows the hierarchical assembly of a Milky Way (MW) - like galaxy (used also in Salvadori et al., 2010; Pacucci et al., 2017) with a semi-analytical model (GAMETE, Salvadori et al., 2007; Salvadori et al., 2015) that follows the evolution of baryons, from the formation of the first stars to the present day. The semi-analytical model allows us to follow the star formation and metal enrichment history of the Milky Way from early times (\(z\sim 20\)) until now (\(z=0\)) and thus to link the chemical abundances of present-day stars with the properties of the first stellar generations.
* **N-body simulation** The \(N\)-body simulation used to study the hierarchical formation of the MW has a low-resolution region corresponding to a sphere of radius \(10h^{-1}\) Mpc (see Scannapieco et al., 2006). The region of the highest resolution is a sphere of radius \(1h^{-1}\) Mpc, i.e. four times the virial radius of the MW at z = 0 (\(r_{vir}=239\) kpc). A low-resolution simulation including gas physics and star formation has been used to confirm that the initial conditions will lead to a disk galaxy like the Milky Way. The system consists of about \(10^{6}\) dark matter (DM) particles within \(r_{vir}\) with masses of \(7.8\times 10^{5}M_{\odot}\); its virial mass and radius are respectively \(M_{vir}=7.7\times 10^{11}M_{\odot}\) and \(r_{vir}=239\) kpc, roughly consistent with the observational estimates of the MW, \(M_{vir}\approx 10^{12}M_{\odot}\) (e.g. McMillan, 2011). The softening length is \(540\) pc. The simulation data is output every \(22\) Myr between \(z=8-17\) and every \(110\) Myr for \(z<8\). At each output, the virialized DM halos have been identified using a _friend-of-friend_ group finder with a _linking parameter_ \(b=0.15\), and a threshold number of particles constituting virialized halos equal to \(50\). The \(N\)-body simulation enables us to reconstruct a hierarchical history of the MW that proceeds through the consecutive merging of DM halos, maintaining the information about the spatial distribution of DM particles belonging to them.
* **Star formation** At the initial redshift of the simulation, \(z=20\), gas in DM halos is assumed to have a primordial composition, and only objects with virial temperatures \(T_{vir}\geq 10^{4}\) K form stars. This choice, which is equivalent to assuming that the star formation activity is rapidly quenched in mini-halos, is dictated by the limited DM resolution of the \(N\)-body simulation (Graziani et al., 2015). At each time-step, stars are assumed to form in a single burst, proportional to the available cold gas mass, \(M_{gas}\). The constant of proportionality is a redshift-dependent star-formation efficiency \({\rm f_{*}}(z)=\epsilon_{*}\,\frac{\Delta{\rm t}(z)}{{\rm t_{ff}}(z)}\), where \({\rm t_{ff}}\) is the free-fall time, \(\Delta{\rm t}(z)\) the \(N\)-body time-step and \(\epsilon_{*}\) a free parameter of the model, physically corresponding to a _"local" star formation efficiency_.
* **Pop III stars** The model simply assumes that the Pop III IMF is a delta function centered either on the average mass value of the pair-instability SNe (PISNe; Salvadori et al., 2010) or faint SNe (Pacucci et al., 2017), i.e.: \[\Phi(m)=\frac{dN}{dm}\propto\delta(m-m_{\rm pop\ III}).\] (3)
To give an idea of the amount of chemical elements released by these different Pop III stars, the yields of metals, iron, and carbon of an "average" PISN (\({\rm m_{PopIII}}=200\) M\({}_{\odot}\)) are, respectively, \(Y=M_{Z}/M_{\rm pop\ III}=0.45\), \(Y_{\rm Fe}=0.022\) and \(Y_{\rm C}=0.02\) (Heger and Woosley, 2002). Conversely, an average faint SN (\({\rm m_{*}}=25\) M\({}_{\odot}\)) gives \(Y\sim 0.1\), \(Y_{\rm Fe}=4\times 10^{-7}\) and \(Y_{\rm C}=9.98\times 10^{-3}\) (Iwamoto et al., 2005; Marassi et al., 2014).
* **Pop III-to-Pop II transition** Following the critical metallicity scenario (e.g. Bromm et al., 2001; Schneider et al., 2002) we assume that the IMF of newly formed stars depends upon the initial metallicity of the star-forming clouds. Therefore, a star-forming halo with a gas metallicity \(Z\leq Z_{cr}\) will form Pop III stars; otherwise, if \(Z>Z_{cr}\), it will form Pop II/I stars. Exploiting the data-driven constraints of De Bennassuti et al. (2017), we set
\(Z_{cr}=10^{-4.5}Z_{\odot}\). Normal Pop II/I stars are assumed to have masses in the range \([0.1,100]\)\(M_{\odot}\) and to form according to a standard Larson IMF (Larson, 1998), which peaks at the characteristic mass \(\rm m_{ch}=0.35M_{\odot}\) and rapidly declines with a Salpeter-like shape towards larger masses (Salpeter, 1955).
* **Metal enrichment** The newly formed stars are assumed to instantaneously evolve at the following snapshot of the simulation (Instantaneous Recycling Approximation, IRA) since the time elapsed between two neighbouring steps is larger than the lifetime of the least massive stars evolving as SNe. For Pop III stars exploding as PISN we assume the yields of Heger & Woosley (2002), while for primordial faint SN we assume those of Iwamoto et al. (2005). For Pop II/I stars we adopt the yields from Woosley & Weaver (1995). To not overestimate the contribution of carbon due to AGB stars that also produce slow neutron-capture process elements and, even if not in binary systems, can lead to the formation of "moderate" CEMP-s stars (Rossi et al., 2023), we only considered the chemical products of Pop II/I that evolve as core-collapse SNe in short timescales (\(3-30\) Myr). After SN explosions the newly produced/injected metals are assumed to be instantaneously and homogeneously mixed within the Inter Stellar Medium (ISM) eventually reaching the Inter Galactic Medium (IGM) via supernova driven outflows.
* **Gas and metals dispersal** Supernovae may release a high amount of energy, which may overcome the binding energy of the hosting halo, leading to a partial removal of gas and metals from the galaxy. The mass of gas ejected into the IGM, \(\rm M_{gas}^{ej}\), depends on the balance between the escape velocity of the halo, \(v_{esc}\), and the kinetic energy released during the explosion, namely: \[\rm M_{gas}^{ej}=(2E_{SN})/v_{esc}^{2}\] (4) where \(\rm E_{SN}=\epsilon_{w}\,N_{SN}\,\langle E_{SN}\rangle\), with \(\rm N_{SN}\) the number of SN explosions and \(\langle\rm E_{SN}\rangle\) the average explosion energy. A typical PISN provides \(\langle\rm E_{200M_{\odot}}^{PISN}\rangle\sim 2.7\times 10^{52}\) erg (Heger & Woosley, 2002), while a typical faint SN provides \(\langle\rm E_{25M_{\odot}}^{faint}\rangle\sim 0.7\times 10^{51}\) erg (Iwamoto et al., 2005; Marassi et al., 2014), which is lower than the energy released by a normal 25 \(M_{\odot}\) core-collapse SN, \(\rm E_{\rm 25M_{\odot}}^{cc}\approx 10^{51}\) erg.
The quantity \(\epsilon_{w}\), representing the second free parameter of the model, is the _wind efficiency_, that is the fraction of the explosion energy converted into kinetic form.
* **Chemical evolution** Due to mechanical feedback, the mass of gas and metals in a halo can decrease substantially. At each simulation time-step the gas mass reservoir in each halo, \(\rm M_{gas}\), is updated with respect to the initial gas mass, \(\rm M_{gas}^{in}\), to account for the mass locked into newly formed stars, \(\rm M_{*}\), and the gas mass ejected out of the halo, \(\rm M_{gas}^{ej}\): \[\rm M_{gas}=M_{gas}^{in}-(1-R)M_{*}-M_{gas}^{ej},\] (5) where \(R\) is the returned fraction, which is equal to 1 only for PISNe and lower than unity otherwise. Similarly, the mass of metals, \(\rm M_{Z}\), in each hosting halo is updated as follows: \[\rm M_{Z}=M_{Z}^{in}+Y\,M_{*}-Z_{ISM}^{in}M_{*}-Z_{ISM}^{in}M_{gas}^{ej},\] (6) where \(M_{Z}^{in}\) is the initial mass of metals, \(Y\) the metal yield, and \(Z_{ISM}^{in}\) the initial metallicity of the ISM. A schematic sketch of this per-time-step update is given right after this list.
* **Model calibration** To set the best values of the two model free parameters, \(\epsilon_{*}\) and \(\epsilon_{w}\), the observed properties of the MW have been used as a benchmark. In particular, the results of the simulations at redshift \(z=0\) have been compared with the gas/stellar mass and metallicity of the MW, the baryon-to-dark matter ratio, and the metallicity of high-velocity clouds (see Salvadori et al. 2007, 2010, for more details).
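The sketch below assembles the ingredients of this list into a single schematic time-step for one halo: star formation with \(f_{*}(z)=\epsilon_{*}\,\Delta t/t_{\rm ff}\), the yields quoted above, gas ejection as in Eq. (4), and the reservoir updates of Eqs. (5)-(6), ending with the [C/Fe] of the enriched ISM. The parameter values, the solar abundance scale, the returned fraction assumed for faint SNe, and all function names are illustrative assumptions and do not reproduce the calibrated GAMETE code.

```python
import numpy as np

M_SUN_G = 1.989e33                      # grams per solar mass
A_C_SUN, A_FE_SUN = 8.43, 7.50          # assumed solar abundances (Asplund-like)

YIELDS = {   # per unit stellar mass formed; values quoted in Sec. 3
    "PISN":  {"Y_Z": 0.45, "Y_Fe": 0.022, "Y_C": 0.02,    "E_SN": 2.7e52, "R": 1.0},
    "faint": {"Y_Z": 0.10, "Y_Fe": 4e-7,  "Y_C": 9.98e-3, "E_SN": 0.7e51, "R": 0.5},
}            # R for faint SNe is an assumed (illustrative) returned fraction

def step(M_gas, M_Z, M_C, M_Fe, dt, t_ff, v_esc_cms, popIII="faint",
         eps_star=0.1, eps_wind=0.002, m_popIII=25.0):
    """One schematic time-step (masses in Msun, times in yr, v_esc in cm/s):
    star formation, Pop III yields, gas ejection (Eq. 4), reservoir updates (Eqs. 5-6)."""
    y = YIELDS[popIII]
    M_star = eps_star * (dt / t_ff) * M_gas                  # stars formed this step
    N_SN = M_star / m_popIII                                  # one SN per Pop III star
    E_kin = eps_wind * N_SN * y["E_SN"]                       # erg available for winds
    M_ej = min(2.0 * E_kin / v_esc_cms**2 / M_SUN_G, M_gas)   # Eq. (4), capped at M_gas
    Z_in = M_Z / M_gas
    M_gas_new = M_gas - (1.0 - y["R"]) * M_star - M_ej        # Eq. (5)
    M_Z_new = M_Z + y["Y_Z"] * M_star - Z_in * (M_star + M_ej)            # Eq. (6)
    M_C_new = M_C + y["Y_C"] * M_star - (M_C / M_gas) * (M_star + M_ej)
    M_Fe_new = M_Fe + y["Y_Fe"] * M_star - (M_Fe / M_gas) * (M_star + M_ej)
    # [C/Fe] of the enriched ISM (number ratio, referred to the assumed solar scale)
    cfe = np.log10((M_C_new / 12.0) / (M_Fe_new / 56.0)) - (A_C_SUN - A_FE_SUN)
    return M_gas_new, M_Z_new, M_C_new, M_Fe_new, cfe

# A pristine 1e6 Msun gas reservoir: faint-SN enrichment yields a strongly C-enhanced
# ISM, whereas pure PISN enrichment gives a low [C/Fe], as discussed in Sec. 4.1.
for pop, m_pop in (("faint", 25.0), ("PISN", 200.0)):
    *_, cfe = step(1e6, 0.0, 0.0, 0.0, dt=2.2e7, t_ff=5e7,
                   v_esc_cms=3e6, popIII=pop, m_popIII=m_pop)
    print(f"{pop}: [C/Fe] of the enriched ISM ~ {cfe:+.1f}")
```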
## 4 Simulation analysis and results
### Extreme cases of Pop III enrichment
To understand how much the properties of Pop III stars can affect the present-day distribution of CEMP stars, we first analyse the \(z=0\) outputs of the simulation for two extreme cases: (i) when all Pop III stars explode as an average PISN of \(\rm 200\,M_{\odot}\), or (ii) when they all explode as an average faint SN of \(\rm 25\,M_{\odot}\) (see Sec. 3). In the first case we find that CEMP stars are never produced by the model, the carbon-to-iron value of present-day stars being [C/Fe] \(\lesssim 0.2\). This is perfectly consistent with the low [C/Fe] value of an ISM imprinted by massive \(>160\,M_{\odot}\) Pop III stars evolving as PISNe (e.g. Salvadori et al. 2019, left panel of Fig. 2).
Conversely, CEMP stars are formed when all Pop III stars are assumed to explode as 25 \(M_{\odot}\) faint SNe, and they can have a huge C-excess, [C/Fe] \(\gtrsim+4\), at the lowest [Fe/H] \(<-4\). This result is in line with the idea that the most iron-poor CEMP(-no) stars are indeed the descendants of this kind of zero-metallicity SNe (Sec. 1). Under this assumption we can also compute the present-day distribution of CEMP stars, which predicts that at [Fe/H] \(<-2\) the innermost region hosts the largest mass fraction of CEMP stars. This remains true even if we only consider CEMP stars with [C/Fe] \(>+2.0\), see Fig. 3.
Although we cannot completely rule out a statistical effect (see Sec. 2.3), these results are at odds with observations, which actually suggest a _lower_ fraction of CEMP stars in the bulge than in the outer regions (Sec. 2.1). Therefore it is clear that to model the early chemical enrichment in the first star forming halos it is necessary to consider a more realistic Pop III IMF, which accounts for the contribution of both faint SNe and PISNe.
Figure 3: Present-day mass fraction of CEMP stars with [C/Fe] \(>+2\), relative to all stars at [Fe/H] \(<-2\), contained in annuli of 1 kpc radial width (see colormap), plotted in the \(\rho-\zeta\) cylindrical coordinate plane of our simulated MW-analogue galaxy. The results have been obtained from our semi-analytical model by assuming that _all_ Pop III stars have \(\rm m_{popIII}=25M_{\odot}\) and explode as faint SNe.
### The first star forming halos
To identify the first star-forming halos and see where the oldest stars are currently located, we reconstructed the hierarchical merger tree of the MW-analogue by combining the positions of the DM particles in which star formation occurs with their membership in a specific halo at different redshifts. A halo is identified as the main progenitor of another halo at the following timestep if the latter inherits at least 90% of its DM particles. If a halo has two or more progenitors, then we can assume that a merging process has occurred.
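A toy version of this particle-overlap criterion is sketched below; the halo catalogues and particle IDs are hypothetical and serve only to illustrate the 90% inheritance rule and the merger condition.

```python
def find_progenitors(descendant_particles, halos_previous_step, threshold=0.9):
    """Return the IDs of halos at the previous time-step whose DM particles are
    inherited (at the `threshold` level) by the descendant halo.

    descendant_particles : set of DM-particle IDs of the descendant halo
    halos_previous_step  : dict {halo_id: set of DM-particle IDs}
    """
    progenitors = []
    for halo_id, particles in halos_previous_step.items():
        if not particles:
            continue
        inherited = len(particles & descendant_particles) / len(particles)
        if inherited >= threshold:
            progenitors.append(halo_id)
    # Two or more progenitors passing the cut signal a merger event.
    return progenitors


# Toy example: halos A and B merge into the descendant, halo C does not.
descendant = {1, 2, 3, 4, 5, 6, 7, 8}
previous = {"A": {1, 2, 3, 4}, "B": {5, 6, 7}, "C": {8, 99, 100}}
print(find_progenitors(descendant, previous))  # ['A', 'B']
```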
Figure 4 shows the present-day, \(z=0\), positions of Pop II stars formed in the first star-forming halos, at \(z\geq 13\). As we can see from the top panel of Fig. 4, _the oldest Pop II stars are currently located in the innermost Galactic region, i.e. the bulge_. The position of DM particles belonging to halos that begin forming stars at lower redshifts (lower panels of Fig. 4) is gradually spread towards the outermost regions, filling first the inner and then the outer Galactic halo. These results confirm the idea that the most ancient stellar populations, some of which should have been formed from the ashes of the first stars, must dwell in the bulge (see Sec. 1).
Figure 5 shows the present-day, \(z=0\), spatial distribution of DM particles, color-coded with their star-formation rate (SFR) during the first \(\approx 800\) Myr, i.e. averaged over the cosmic time between \(z=15\) and \(z=8\), which corresponds to the lowest redshift for which we have Pop III star-formation. Thus we assigned to each DM particle the mean SFR of the halo it belongs to, so that:
\[\langle{\rm SFR}\rangle=\frac{\sum_{i=1}^{\rm N}{\rm M}_{i}^{\rm tot}}{{\rm t }(z=8)-{\rm t}(z=15)}, \tag{7}\]
where \({\rm M}_{i}^{\rm tot}\) is the total stellar mass formed in the halo at the \(i\)-th time-step, and \(N\) is the number of time-steps between \(z=15\) and \(z=8\). Then we computed the final \(\langle{\rm SFR}\rangle\) by averaging over all DM particles in the considered pixel. This results in a wide range of average star formation rates (Fig. 5), \(\langle{\rm SFR}\rangle\approx(10^{-4}-1.4)\,{\rm M}_{\odot}/\)yr.
The spatially resolved region with the highest mean SFR at \(z>8\) is the innermost one, i.e. the Galactic bulge. This result, which is in agreement with previous theoretical (e.g. Cescutti & Matteucci, 2011) and observational (e.g. Lucertijn et al., 2022) findings, suggests that the progenitors of the Galactic bulge experienced, on average, more intense star formation at early times than the progenitors of the Galactic halo. A higher star formation rate in a metal-poor environment might have strong consequences: in particular, it could allow many rare stellar populations to form, most likely including the massive progenitors of PISNe (Rossi et al., 2021).
### Pop III enrichment in the first star-forming haloes
We will now analytically model the Pop III star enrichment within the first star-forming halos of the cosmological simulation by using more realistic Pop III IMFs, and by accounting for their incomplete sampling. To make our analytical calculations, we selected those pristine and star-forming halos for which more than 80% of particles are predicted to dwell in the Galactic bulge, i.e. with \({\rm r}=\sqrt{\rho^{2}+\zeta^{2}}\leq 2.5\) kpc, at the present time (8 halos). Their halo mass and redshift of formation range between \({\rm m_{h}}=(5.5\times 10^{7}-1.7\times 10^{8})\)\({\rm M}_{\odot}\), and \(11.4<z<16.3\). The predicted star-formation rate for these first star-forming halos is \({\rm SFR}<1.8\times 10^{-2}\)\({\rm M}_{\odot}\)yr\({}^{-1}\), which in all cases is less than the threshold value for a fully populated Pop III IMF, \({\rm SFR}_{\rm min}\sim 10^{-1}\)\({\rm M}_{\odot}\)yr\({}^{-1}\)(Rossi et al., 2021). Thus, it is fundamental to include incomplete sampling of the adopted Pop III IMFs.
Figure 4: _Present-day_ positions, \(z=0\), in the \(\rho-\zeta\) cylindrical coordinate plane of Pop II stars formed in a specific halo of our simulation (ID number and colour in the label), at the specified \(z\), which decreases from the top to the bottom panel. Identified Galactic regions are: the inner halo, 7 kpc\(<r\leq 20\) kpc (grey circles); and the bulge, \(r\leq 2.5\) kpc (green circle).
#### 4.3.1 Realistic IMFs of Pop III stars
In addition to the delta functions considered so far (Sec.4.1), we will assume that Pop III stars form according to a _Larson IMF_:
\[\Phi(m)=\frac{dN}{dm}\propto m^{-2.35}\mathrm{exp}(-m_{ch}/m), \tag{8}\]
biased towards more massive stars as it is expected for the first stellar generations. In particular, following the latest data-driven results from stellar archaeology (Rossi et al., 2021), we assume a minimum mass of Pop III stars equal to \(\mathrm{m_{min}}=0.8\,\mathrm{M_{\odot}}\), a maximum mass \(\mathrm{m_{max}}=1000\,\mathrm{M_{\odot}}\), and we explore different characteristic masses, \(\mathrm{m_{ch}}\in\{1,10,100\,\mathrm{M_{\odot}}\}\). This allows us to account for the contribution to chemical enrichment of Pop III stars exploding as both faint SNe and PISNe and to vary their relative proportions. All the Pop III IMFs considered in this study are shown in Fig. 6.
For each of our selected first star-forming halos, we derived the _effective_ Pop III IMF associated with each burst of primordial star formation to account for the stochastic and incomplete IMF sampling. This was done using the Monte Carlo procedure developed by Rossi et al. (2021). Starting from the total mass of Pop III stars formed, we generate a random number of stars, which are distributed according to the assumed stellar IMF. Due to the stochastic nature of this sampling, every time stars are formed the effective stellar mass distribution is populated differently, especially at the higher masses (Fig. 7). Following Rossi et al. (2021) we assumed a time-scale for star formation equal to \(\Delta t=1\) Myr and then computed the total mass of Pop III stars formed in the burst accordingly, \(\mathrm{M_{popIII}^{burst}}=\mathrm{SFR}\times\Delta t\). Then we applied a statistical approach by averaging over the results of 50 incomplete random samplings of the IMF and quantifying the scatter among them.
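A minimal sketch of this stochastic sampling is given below. It draws individual stellar masses from the Larson IMF of Eq. 8 on a discretised mass grid until the burst mass is reached; this is a simplified stand-in for the Rossi et al. (2021) procedure, and the function names and numbers are only illustrative.

```python
import numpy as np

def sample_popIII_burst(M_burst, m_ch, m_min=0.8, m_max=1000.0, n_grid=2000, seed=0):
    """Draw individual Pop III stellar masses [Msun] from a Larson IMF,
    phi(m) ~ m**-2.35 * exp(-m_ch / m), until the burst mass M_burst is reached."""
    rng = np.random.default_rng(seed)
    m = np.logspace(np.log10(m_min), np.log10(m_max), n_grid)
    p = m**-2.35 * np.exp(-m_ch / m) * np.gradient(m)  # phi(m) dm on the grid
    p /= p.sum()

    stars, total = [], 0.0
    while total < M_burst:
        mass = rng.choice(m, p=p)
        stars.append(mass)
        total += mass
    return np.array(stars)


# Example: a 7.7e3 Msun burst (as in Fig. 7) with characteristic mass 10 Msun
stars = sample_popIII_burst(M_burst=7.7e3, m_ch=10.0)
n_faint = ((stars >= 8) & (stars <= 40)).sum()
n_pisn = ((stars >= 140) & (stars <= 260)).sum()
print(f"{stars.size} stars sampled: {n_faint} faint-SN and {n_pisn} PISN progenitors")
```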
Figure 7 shows, for different \(\mathrm{m_{ch}}\), the comparison between the theoretical Pop III IMFs and effective ones, which have been obtained in one run of the random sampling. The mass range covered
Figure 5: Present-day spatial distribution of DM particles in the cylindrical coordinate plane of the simulated galaxy, color-coded with their SFR averaged over the cosmic time between \(z=15\) and \(z=8\).
by faint SNe, \((8-40)\,\rm M_{\odot}\); PISNe, \((140-260)\,\rm M_{\odot}\); and other stars that do not end their lives as supernovae are identified using different colours. Among these, stars with m \(<8\,\rm M_{\odot}\) lose their external envelope during the AGB phase, and stars in the two mass ranges \((40-140)\,\rm M_{\odot}\) and \((260-1000)\,\rm M_{\odot}\) are predicted to end their lives by collapsing directly into a black hole (Heger and Woosley, 2002). Note that the average mass fraction of stars exploding as faint SNe (and PISNe) varies strongly, not only with m\({}_{\rm ch}\) but also as a consequence of the incomplete IMF sampling. In the mass range of faint SNe, the Pop III IMFs are almost completely sampled in the extreme case m\({}_{\rm ch}=100\,\rm M_{\odot}\). On the contrary, in the typical mass range of PISNe the Pop III IMF is only partially populated. As expected, however, the mass range of PISNe becomes more densely populated as the characteristic mass increases.
The mass of metals, iron, and carbon produced by the total number of Pop III stars formed was computed by summing up the contribution of Pop III stars with different masses, i.e. \(\rm M_{X}=\sum_{i}^{N}Y_{X}(m_{popIII,i})m_{popIII,i}\), where \(\rm Y_{X}(m_{popIII,i})\) is the yield of the element X produced by Pop III stars in the \(i\)-th mass bin. For faint SNe, we followed De Bennassuti et al. (2017) and assumed that the yield corresponding to \(25\,M_{\odot}\) is simply re-scaled to the mass of the Pop III star exploding as faint SN, i.e. \(\rm Y_{X}(m_{popIII})=Y_{X}(25\,\rm M_{\odot})\times(m_{popIII}/25\,\rm M_{\odot})\). This scaling is quite a good approximation as shown by Marassi et al. (2014) and it has been used in different works investigating stellar nucleosynthesis (e.g. De Bennassuti et al., 2017; Nomoto et al., 2013). For PISNe we exploited the yields and SN explosion energies provided by Heger and Woosley (2002). Similarly, we computed the total amount of energy released by SNe by accounting for the number of Pop III stars effectively formed in different mass bins, \(\rm E_{SN}=\epsilon_{w}\sum_{i}^{N}N_{SN}(m_{i}^{popIII})\times E_{SN}(m_{i}^{popIII})\), as done in Rossi et al. (2021). Finally, by exploiting Eq. 5 we computed the ISM metallicity along with [Fe/H] and [C/Fe] after the Pop III star enrichment.
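The yield book-keeping described here can be sketched as follows. The numerical yield values and the nearest-mass lookup over the PISN table are placeholders (the actual values come from the yield tables adopted by the authors), so this only illustrates the summation over the sampled Pop III masses.

```python
# Placeholder faint-SN yields (ejected mass of each element, in Msun) for a 25 Msun
# Pop III star; real values come from the tables used by De Bennassuti et al. (2017).
Y25 = {"Z": 0.1, "Fe": 1e-6, "C": 0.05}

def faint_sn_yield(element, m):
    """Linear rescaling of the 25 Msun faint-SN yield to a star of mass m (Msun)."""
    return Y25[element] * (m / 25.0)

def total_yield(sampled_masses, element, pisn_yield_table):
    """Sum the ejected mass of `element` over a sampled Pop III population.
    `pisn_yield_table` maps a tabulated PISN mass (140-260 Msun) to its yield."""
    total = 0.0
    for m in sampled_masses:
        if 8.0 <= m <= 40.0:                      # faint SNe
            total += faint_sn_yield(element, m)
        elif 140.0 <= m <= 260.0:                 # PISNe: nearest tabulated mass
            m_tab = min(pisn_yield_table, key=lambda mt: abs(mt - m))
            total += pisn_yield_table[m_tab]
        # 40-140 and 260-1000 Msun: direct collapse, no metals returned
    return total


# Toy PISN carbon table and a few sampled masses
print(total_yield([12.0, 25.0, 200.0], "C", {170.0: 5.0, 200.0: 10.0, 250.0: 20.0}))
```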
#### 4.3.2 Average and maximum [C/Fe] of bulge stars
In Fig. 8 we show the [Fe/H] and [C/Fe] of the ISM in the 8 first star-forming halos currently dwelling in the Galactic bulge as obtained with our incomplete IMF sampling procedure (Sec. 4.3.1) and as a function of the five different Pop III IMFs assumed (see Fig. 6). The abundance ratios for the different halos are colour-coded according to the halo mass and, for comparison, we also show the results obtained considering the fully sampled IMFs as solid grey lines. For each m\({}_{\rm ch}\) the average abundances over all halos are shown as empty black symbols together with their errors. The observed ranges of [C/Fe] within the Galactic halo and bulge (from Fig. 1) are highlighted as well in Fig. 8 (respectively as grey and orange shaded areas).
First, we note that for all IMFs, the total gas metallicity after the Pop III star enrichment is \(\rm Z_{ISM}>Z_{cr}=10^{-4.5}Z_{\odot}\), which implies that normal low-mass long-lived stars will be able to form in such environments. Furthermore, when we only account for the contribution of faint SNe, we get a very low value of [Fe/H] \(\simeq-5.6\) and an extremely high [C/Fe] \(\simeq+4\), which is close to the maximum value observed in the Milky Way halo (Keller et al., 2014). As soon as the chemical contribution of PISNe is also considered, namely for m\({}_{\rm ch}=1\,\rm M_{\odot}\), the [C/Fe] value drops dramatically by at least 2 orders of magnitude. As we can see at fixed m\({}_{\rm ch}\), as the halo mass increases, the abundances approach the fully sampled case, that is,
Figure 7: Theoretical (solid lines) and effective (coloured histogram) Larson IMFs of Pop III stars with increasing characteristic masses from top to bottom (m\({}_{\rm ch}=1,10,100\,\rm M_{\odot}\)) obtained from one run of the random sampling. In each panel, the mass range covered by faint SNe, \((8-40)\,\rm M_{\odot}\) (orange); PISNe, \((140-260)\,\rm M_{\odot}\) (red); and other stars that do not end their lives as supernovae (grey), are specified. The total mass fractions are also listed for each mass range. The total mass of the burst of Pop III is \(7.7\times 10^{3}\,M_{\odot}\).
[Fe/H] increases and [C/Fe] decreases. This result can easily be explained as a consequence of the Pop III IMF sampling: in the lower-mass halos, which have lower star formation rates, on average only one PISN explodes, thus only partially lowering the [C/Fe] value obtained in the case of faint SNe only. Still, even a single PISN is able to inject into the ISM 50% of its total stellar mass in the form of heavy elements, thus strongly affecting the final [C/Fe] value of the ISM. In the most massive halo more PISNe can actually form and explode, thus further lowering the expected [C/Fe] value. For increasing characteristic mass, the mean [C/Fe] values become even lower, since PISN production starts to dominate the chemical enrichment. When only PISNe explode, no CEMP stars are able to form, regardless of the halo mass.
In conclusion, the measured [C/Fe] values in the Galactic bulge suggest that massive Pop III stars exploding as PISNe have likely formed in this environment, washing out the high [C/Fe] signature left by low-energy primordial SNe. Furthermore, by comparing our predicted [C/Fe] with the values observed in the bulge (Fig. 8), we can exclude the two extreme IMF cases, where Pop III stars explode either only as faint SNe or only as PISNe, and suggest that the IMF that best reproduces the observations is a Larson type with a characteristic mass possibly \(1\,\mathrm{M}_{\odot}\lesssim m_{\mathrm{ch}}\lesssim 10\,\mathrm{M}_{\odot}\). This result is in full agreement with cosmological simulations for the formation of Pop III stars (e.g. Hirano et al., 2014; Susa et al., 2014; Hirano et al., 2015) or theoretical models interpreting different observables (e.g. De Bennassuti et al., 2017; Rossi et al., 2021; Ishigaki et al., 2018).
## 5 Summary and Conclusions
The aim of this paper is to investigate the apparent dearth of carbon-enhanced metal-poor (CEMP) stars with high [C/Fe] values in the Galactic bulge with respect to other environments, such as the Galactic halo and ultra-faint dwarf galaxies (e.g. Howes et al., 2015, 2016; Arentsen et al., 2021). This lack is particularly puzzling since the bulge is supposed to be the oldest Galactic stellar component (White & Springel, 2000; Diemand et al., 2005; Tumlinson, 2009; Salvadori et al., 2010; Starkenburg et al., 2016) and CEMP stars are predicted to be among the most ancient observable stars, most likely being the direct descendants of first stars with intermediate masses that exploded as low-energy faint supernovae (e.g. Iwamoto et al., 2005; Marassi et al., 2014; Bonifacio et al., 2015; De Bennassuti et al., 2017). A reason for the dearth of CEMP stars in the Galactic bulge could be linked to a bias introduced by the photometric selection performed by the surveys that have targeted metal-poor stars in this environment (see Sec. 1 and Howes et al., 2015, 2016; Arentsen et al., 2021, for details). Despite these caveats, this issue still persists even though progress has been achieved over the years in detecting these types of stars leading to an actual increase in the number of their observations. Therefore, we asked ourselves: could this scarcity of CEMP stars be a consequence of the low statistics of metal-poor stars in the more metal-rich and dusty bulge? Could it suggest an intrinsically different formation mechanism of this region compared to the other environments?
In order to answer these questions, we carried out a diversified investigation which is summarised in the following points:
Figure 8: Average [Fe/H] and [C/Fe] of the ISM in 8 first star-forming halos (colour-coded according to the halo mass), obtained through the random sampling of different Pop III IMFs. The characteristic masses (m\({}_{\mathrm{ch}}\)) are in solar masses (\(M_{\odot}\)) and error bars are \(1\sigma\) confidence intervals. The values are computed assuming a time step \(\Delta t=1\) Myr of star formation, consistent with Rossi et al. (2021). For comparison, the grey solid lines are related to the fully sampled IMFs. For each m\({}_{\mathrm{ch}}\) the average abundances over all halos are shown as empty black symbols together with their errors. Observed ranges of [C/Fe] within the Galactic halo (grey) and bulge (orange) are shown as shaded areas.
* We first performed a statistical analysis of the metal-poor stars observed in the Local Group to understand whether the dearth of CEMP stars is due to the limited sample size of metal-poor stars within the bulge, which is dominated by metal-rich stars (e.g. Zoccali et al., 2008; Ness et al., 2013; Howes et al., 2016). By exploiting available data (Sec. 2), we first derived the fraction of CEMP stars at a given [Fe/H] for the Galactic halo (\(F_{\rm CEMP}^{halo}\)), which is the environment with the highest statistics of very metal-poor stars, [Fe/H] \(<-2\). We then assumed this fraction to be _"Universal"_ and combined it with the MDFs obtained for different dwarf galaxies and for the bulge. As a final step we computed the probability of identifying a CEMP star in a given [Fe/H] range, \(P_{\rm obs}\), while blindly observing these environments. In agreement with previous studies (Salvadori et al., 2015), we show that \(P_{\rm obs}\) is strongly affected by the total stellar mass and average metallicity of the examined environment. In the bulge, which is the brightest and most metal-rich, we find that \(P_{\rm obs}<0.01\)%, i.e. more than two orders of magnitude lower than in ultra-faint dwarf galaxies.
We conclude that CEMP stars could be partially hidden in this region dominated by metal-rich stars, rather than being totally absent. However, this dearth cannot be totally ascribed to this statistical reason. If the CEMP fraction in the bulge were the same as in the halo, then biased surveys specifically searching for the most metal-poor stars in the Galactic bulge - such as the PIGS survey (Arentsen et al., 2021) - should have observed many more CEMP stars (\(\sim 450\)) than they actually found (\(<62\), Sec. 2). This discrepancy could be due to biases in the photometric selection, but could there be another mechanism that reduces the [C/Fe] values in the bulge?
* We then asked whether the lower [C/Fe] values of CEMP stars in the Galactic bulge reflect the different formation and evolution of this ancient environment. To address this question, we focused on the predictions derived from the \(\Lambda\)CDM cosmological model, through the use of an \(N\)-body simulation that follows the hierarchical formation of a MW-like galaxy combined with the semi-analytical model GAMETE (e.g. Salvadori et al., 2007, 2010; Salvadori et al., 2015; Graziani et al., 2015; Pacucci et al., 2017), which is required to follow the star formation and metal enrichment history. The model is data-calibrated, i.e. the best values of the free parameters are set to reproduce the observed properties of the MW at \(z=0\) (Sec. 3). If we assume that _all_ Pop III stars explode as faint SNe, we find that the mass fraction of CEMP stars with [C/Fe] \(>+2\) increases with decreasing Galactocentric radius and is maximum in the Galactic bulge, the region of the simulation containing the most ancient stars; this is at odds with observations. However, the \(N\)-body simulations reveal that the stars dwelling in the present-day bulge form in halos that experienced the _highest mean star-formation rate_ at high redshifts (\(z>8\)).
We inferred that the dearth of CEMP stars with [C/Fe] \(>+2\) in the Galactic bulge might be linked with the higher star-formation rate of its early progenitor halos, which hosted the first stars. Indeed, star-formation rates \(>10^{-2}M_{\odot}/yr\) in primordial environments might allow the formation of rare very massive Pop III stars (Rossi et al., 2021), which evolve as energetic Pair Instability SNe (PISNe).
* We thus investigated how the chemical enrichment of the bulge progenitors depends upon the properties of rare Pop III stars that can _effectively form_ in these highly star-forming systems. To this aim, we performed analytical calculations and computed the chemical properties of the ISM in the bulge progenitors after the contribution of Pop III stars. More specifically, we investigated how different Pop III IMFs affect the [C/Fe] and [Fe/H] values of the ISM by assuming various characteristic masses (\(\rm{m_{ch}}=1,10,100\,M_{\odot}\)), by including the contribution of Pop III stars exploding as PISNe (see Sec. 4), and by accounting for the incomplete sampling of the Pop III IMF (Rossi et al., 2021). Our results show that very massive Pop III stars can effectively form in the bulge progenitors, and that their contribution to the chemical enrichment as energetic PISNe can partially wash out the distinctive signature of faint SNe, lowering the carbon overabundance down to [C/Fe] \(<+2\). In particular, we show that the higher the probability to form very massive Pop III stars, i.e. the larger the \(\rm{m_{ch}}\), the lower is the [C/Fe] value of the ISM after the contribution of Pop III stars. By exploiting the available data we thus tentatively infer \(1\,M_{\odot}\lesssim m_{\rm ch}\lesssim 10\,M_{\odot}\), consistent with the constraints found by Rossi et al. (2021) based on ultra-faint dwarfs.
We conclude that the modest [C/Fe] values of CEMP stars identified in the bulge, [C/Fe] \(\approx+0.8\), along with the dearth of CEMP stars with [C/Fe] \(>+2\) could be an indirect probe of very massive first stars exploding as PISNe, which are extremely rare and hence can only form in highly star-forming progenitors of the MW bulge.
Through careful analysis of observational data and theoretical simulations, we thus suggest that the dearth of CEMP stars in the bulge might be intrinsic, and not only a consequence of systematic observational effects. Furthermore, we have shown that the first star-forming halos that end up in the bulge typically have significantly higher star-formation rates than those in the outskirts of the galaxy. This means that the Pop III IMF is better sampled in these systems, enabling the formation of rare populations such as very massive first stars that explode as PISNe. Ultimately, our analysis showed this to be a very plausible explanation for the dearth of CEMP stars in the bulge, as the added PISN contribution lowers the observed [C/Fe]. Finally, we suggested a promising new method that exploits the lack of CEMP stars to constrain the characteristic mass of the first stars.
## 6 Discussion and Future Outlook
Our findings provide a plausible explanation for the lack of CEMP stars with high [C/Fe] values in the Galactic bulge, which is not only linked to statistical issues or observational biases, but it is rather a consequence of the different formation path of this ancient but star-rich environment. The idea that Pop III stars exploding as PISNe can efficiently form in the bulge progenitors and lower the [C/Fe] value in their ISM is extremely appealing, since it can also explain the larger fractions of CEMP stars observed in the Galactic halo and in UFDs. The typical star-formation rates observationally inferred for UFDs (\(\lesssim 10^{-3}M_{\odot}/\)yr, e.g., Salvadori et al., 2014; Gallart et al., 2021) are indeed too low for enabling these galaxies to form very massive Pop III stars (Rossi et al., 2021). The early chemical enrichment stages of these small systems are thus likely dominated by faint SNe, whose explosion energy (\(E<10^{51}\) erg) is small enough to not exceed the halo binding energy (Rossi et al. in prep) and whose chemical products are characterized by a large amount of carbon and a tiny amount of iron. The progenitors of UFDs have been suggested to build-up the low-Fe tail of the Galactic halo (e.g. Salvadori et al., 2015; Bonifacio et al., 2021), where CEMP-no stars are found, making the overall scenario perfectly consistent.
Other sources that can partially wash-out the chemical signature of faint SN are primordial hypernovae, i.e. Pop III stars with masses
\(\rm{m}_{\rm{PopIII}}\sim(10-100)\rm{M}_{\odot}\) that experience energetic SN explosions, \(E\sim 10^{52}\) erg (Heger & Woosley, 2010). In recent years the descendants of primordial hypernovae have been identified for the first time thanks to their sub-solar [C/Fe] values in nearby dwarf spheroidal galaxies and halo stars (Skúladóttir et al., 2012; Placco et al., 2021). However, since Pop III stars exploding as faint SNe and as hypernovae have the same range of masses, we expect them to form with the same efficiency in the progenitors of the Galactic halo, UFDs, and the bulge. Furthermore, the same reasoning can be applied to the contribution of Pop II stars to the chemical enrichment. In conclusion, these alternative solutions to explain the lower [C/Fe] values of bulge stars are hard to reconcile with the global scenario. In a forthcoming study, we will further explore the contribution of different Pop III stars to the bulge enrichment by performing a self-consistent calculation, i.e. by following within the \(N\)-body simulation the chemical elements produced by Pop III stars with different masses and explosion energies (Koutsouridou et al. in prep.). In the same study we will relax the Instantaneous Recycling Approximation, which, although quite robust for \(z>6\), may have affected our results. In fact, taking into account the lifetimes of Pop III stars, the more massive PISNe would have exploded first - raising Fe and keeping C low - and then the faint SNe would have contributed - raising C at constant Fe. Consequently, among the older Pop II stars we could have stars with higher [Fe/H] and lower [C/Fe].
Ultimately, the dearth of CEMP stars with high [C/Fe] values in the Galactic bulge might provide an _indirect_ probe for the long-sought very massive Pop III stars evolving as PISNe. A way to prove that primordial PISNe exist is to search for an under-abundance of [Zn/Fe] and [Cu/Fe] in their descendants (Salvadori et al., 2019; Aguado et al., 2023). Interestingly, stars with sub-solar [Zn/Fe] have been predominantly identified in the bulge (e.g. Barbuy et al., 2015; Duffau et al., 2017) and classical dwarf spheroidal galaxies (e.g. Skúladóttir et al., 2017), which are indeed more massive and star-forming than UFDs. Furthermore, in the near future the WEAVE (Dalton, 2016), 4MOST (Christlieb et al., 2019; Chiappini et al., 2019; Bensby et al., 2019), and J-PLUS/S-PLUS (Cenarro et al., 2019; Mendes de Oliveira et al., 2019) Galactic Archaeology surveys will allow us to derive a large number of carbon measurements in both halo and bulge metal-poor stars, as well as in satellite dwarf galaxies (4DWARFS; Skúladóttir et al., 2023, in press). Thus, the CEMP-no frequency will be established with unprecedented accuracy, giving new insights into PISN events. Unfortunately, Zn will not be measured by 4MOST. However, if the green grating is used in the high-resolution WEAVE survey, the resulting large sample of Cu and Zn measurements will be extremely valuable to confirm this scenario and to search for PISN descendants.
This is the perfect time to deepen our studies on Stellar Archaeology. The future of the field is indeed promising, since great technological developments and new instruments will provide us with unprecedented datasets to confirm our theoretical findings. In the coming years, there will be a surge of observations of ancient, metal-poor stars in the inner Galaxy thanks to the 4MOST Milky Way Disc and BuLgE Low-Resolution (4MIDABLE-LR, Chiappini et al., 2019) and 4MOST MIlky Way Disc And BuLgE High-Resolution surveys (4MIDABLE-HR, Bensby et al., 2019). This will provide a completely new view of the central regions of the Milky Way and, hopefully, greatly advance our understanding of the first stars.
## Acknowledgements
SS, MR, DA, IK, and AS acknowledge funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, project NEFERTITI (grant agreement No 804240). SS also acknowledges funding from the PRIN-MIUR17, The quest for the first stars, prot. n. 2017T4ARJ5. The authors acknowledge the SDSS/APOGEE survey.
## Data Availability
The authors confirm that the data analysed in this study are available within JINAbase (Abohalima & Frebel, 2018), the APOGEE spectroscopic survey (DR16, Jonsson et al., 2020), and the following papers: Norris et al. (2010), Frebel et al. (2014), Spite, M. et al. (2018), Frebel et al. (2009), Simon et al. (2010), Lai et al. (2011), Norris et al. (2010), Gilmore et al. (2013), Kirby et al. (2015), Frebel et al. (2010), Tafelmeyer, M. et al. (2010), Starkenburg et al. (2014), Simon et al. (2015), Jablonka, P. et al. (2015), Skúladóttir et al. (2015). PIGS data have been provided via private communication with Anke Arentsen. The theoretical data that support the findings of this study are available upon request to the corresponding author GP.
|
2310.12995
|
Comprehensive Multimodal Segmentation in Medical Imaging: Combining
YOLOv8 with SAM and HQ-SAM Models
|
This paper introduces a comprehensive approach for segmenting regions of
interest (ROI) in diverse medical imaging datasets, encompassing ultrasound, CT
scans, and X-ray images. The proposed method harnesses the capabilities of the
YOLOv8 model for approximate boundary box detection across modalities,
alongside the Segment Anything Model (SAM) and High Quality (HQ) SAM for fully
automatic and precise segmentation. To generate boundary boxes, the YOLOv8
model was trained using a limited set of 100 images and masks from each
modality. The results obtained from our approach are extensively computed and
analyzed, demonstrating its effectiveness and potential in medical image
analysis. Various evaluation metrics, including precision, recall, F1 score,
and Dice Score, were employed to quantify the accuracy of the segmentation
results. A comparative analysis was conducted to assess the individual and
combined performance of the YOLOv8, YOLOv8+SAM, and YOLOv8+HQ-SAM models. The
results indicate that the SAM model performs better than the other two models,
exhibiting higher segmentation accuracy and overall performance. While HQ-SAM
offers potential advantages, its incremental gains over the standard SAM model
may not justify the additional computational cost. The YOLOv8+SAM model shows
promise for enhancing medical image segmentation and its clinical implications.
|
Sumit Pandey, Kuan-Fu Chen, Erik B. Dam
|
2023-10-04T20:30:49Z
|
http://arxiv.org/abs/2310.12995v1
|
Comprehensive Multimodal Segmentation in Medical Imaging: Combining YOLOv8 with SAM and HQ-SAM Models
###### Abstract
This paper introduces a comprehensive approach for segmenting regions of interest (ROI) in diverse medical imaging datasets, encompassing ultrasound, CT scans, and X-ray images. The proposed method harnesses the capabilities of the YOLOv8 model for approximate boundary box detection across modalities, alongside the Segment Anything Model (SAM) and High Quality (HQ) SAM for fully automatic and precise segmentation. To generate boundary boxes, the YOLOv8 model was trained using a limited set of 100 images and masks from each modality.
The results obtained from our approach are extensively computed and analyzed, demonstrating its effectiveness and potential in medical image analysis. Various evaluation metrics, including precision, recall, F1 score, and Dice Score, were employed to quantify the accuracy of the segmentation results. A comparative analysis was conducted to assess the individual and combined performance of the YOLOv8, YOLOv8+SAM, and YOLOv8+HQ-SAM models.
The results indicate that the SAM model performs better than the other two models, exhibiting higher segmentation accuracy and overall performance. While HQ-SAM offers potential advantages, its incremental gains over the standard SAM model may not justify the additional computational cost. The YOLOv8+SAM model shows promise for enhancing medical image segmentation and its clinical implications.
## 1 Introduction
Medical image segmentation plays a crucial role in a wide range of medical image analysis tasks and serves as a vital component in computer-aided diagnosis and pathology research. The field of medical image segmentation has witnessed significant advancements, particularly with the emergence of convolutional neural networks (CNNs) in computer vision. These CNNs, specifically Encoder-Decoder based architectures, have demonstrated remarkable success in various medical imaging applications such as brain Magnetic Resonance Imaging (MRI) [10], multi-organ segmentation, and cardiac ventricle analysis [13][5]. Their capacity for end-to-end image semantic segmentation has led to the development of notable variants like U-Net [16], 3D U-Net [4], tailored to segmenting images and volumes from different medical imaging modalities.
Despite the success of CNN-based methods, they encountered challenges in handling long-distance dependencies between image elements due to limited convolution kernel size. To overcome these challenges and effectively incorporate global contextual information, the Transformer [18] was introduced. Initially designed for sequence-to-sequence prediction tasks, the Transformer architecture revolutionized the field of natural language processing. Subsequently, it found application in medical image segmentation, leading to the creation of State of the art (SOTA) models like Attention U-Net [14], along with its variations such as Multi-res-attention UNet [17] and Attention Res-UNet [12]. Unlike traditional CNN-based methods, the Transformer architecture adopts a fully attention-based encoder-decoder structure, operating on one-dimensional sequences, which endows it with robust modeling capabilities for capturing global context information. Furthermore, the Transformer's potential is further harnessed through large-scale pre-training, making it adaptable to various downstream tasks in the medical image analysis domain.
The Transformer has emerged as a powerful tool for medical image segmentation, but its emphasis on global context information often comes at the cost of capturing fine local details. This limitation affects the accuracy in distinguishing between background and target regions. To address this issue and build upon the Transformer's strengths, researchers have integrated Convolutional Neural Networks (CNNs) with Transformer architectures. One promising approach is TransUNet [1], which cleverly combines CNNs to extract local features and Transformer for global context modeling. By incorporating a self-attention mechanism, TransUNet successfully retains the resolution of local features, resulting in significant improvements in image segmentation accuracy. Despite these advancements, TransUNet represents only a preliminary integration of CNNs and Transformer, leaving ample room for practical enhancements and refinements.
A crucial aspect of semantic segmentation lies in the extraction and fusion of low-dimensional image texture features, including structural and statistical features. These features significantly impact the segmentation performance. For example, [3] introduced DeepLabv3+, which enhances segmentation by incorporating an encoder into the DeepLabv3 model [2]. This modification facilitates the extraction and fusion of both shallow and deep image features. Similarly, [9] addressed the importance of edge features by introducing an edge preservation module, resulting in a notable overall boost in semantic segmentation performance. While these methods have highlighted the significance of low-dimensional features, few existing solutions have delved into the analysis of low-dimensional statistical features for grasping global image characteristics. By exploring the potential of these features, it may be possible to further improve the performance of medical image segmentation models and achieve even more accurate and reliable results.
In this paper, we propose a comprehensive approach for ROI segmentation in diverse medical imaging datasets by combining the YOLOv8 [6] model with the Segment Anything Model (SAM) [8] and the High Quality (HQ) SAM [7]. We aim to harness the strengths of both CNN-based models and the Transformer architecture, facilitating accurate and efficient ROI segmentation with improved global context modeling and detailed local feature preservation. The integration of YOLOv8 with SAM, and HQ-SAM offers a promising solution to address the challenges in medical image analysis and enhance the accuracy and efficiency of ROI segmentation in various medical imaging modalities. The experimental results demonstrate the effectiveness of our proposed approach, emphasizing its potential to advance medical image analysis tasks and improve patient care through automated and reliable ROI segmentation.
## 2 Methodology
Our study follows a three-part methodology. The first part involves data collection. In the second part, we train the YOLOv8 model using a limited dataset consisting of 100 images and masks to create approximate boundary boxes around the objects of interest. Finally, in the third part, we integrate YOLOv8 with SAM and HQ-SAM models to generate segmentation masks for ROI.
### Dataset Selection and Preprocessing
The first part of our study was data collection and pre-processing. This study made use of three distinct datasets representing different modalities. The first dataset, acquired from Chang Gung Memorial Hospital, is called the Ultrasound Short-Axis Aorta Segmentation Dataset. It comprises 200 images of patients, along with their corresponding masks for segmentation. The second dataset utilized in the study is the Lung CT Scan Segmentation Dataset, which is an open-source dataset obtained from Kaggle [11]. This dataset contains 267 2D images, each accompanied by its respective masks for segmentation. Lastly, the third dataset employed in the study is also sourced from Kaggle and is known as the Lung X-ray Segmentation Dataset. This dataset contains 705 2D images, and similar to the others, it includes masks for segmentation [15]. By combining these three datasets, the study aims to explore and analyze various modalities in the context of segmentation tasks.
After data collection we performed preprocessing; this step included cross-referencing the images and masks, resizing, normalization, and augmentation to enhance the dataset's variability and generalizability.
### Training YOLOv8 for ROI Detection and Segmentation
The second part of our study focuses on training the YOLOv8 model using a meticulously curated multimodal medical image dataset. This dataset is a combination of three different datasets, each representing a distinct modality. The main objective of the YOLOv8 model is to accurately detect approximate boundary boxes around the Regions of Interest (ROI) present in these medical images.
After conducting a few experiments with different numbers of image-mask pairs, we concluded that training the YOLOv8 model on 100+ image-mask pairs yields satisfactory results in generating boundary boxes for the ROIs. As a result, we decided to train the YOLOv8 model using only 100 randomly selected image-mask pairs from our datasets.
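For reference, a training run of this kind with the Ultralytics API could look roughly as follows; the dataset YAML file, weight names, and hyper-parameters are placeholders rather than the authors' exact configuration.

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 detection model.
model = YOLO("yolov8n.pt")

# 'roi_dataset.yaml' is a hypothetical Ultralytics-format dataset file listing the
# ~100 training images and the bounding boxes derived from their segmentation masks.
model.train(data="roi_dataset.yaml", epochs=100, imgsz=640)

# Predict approximate boundary boxes for a new image.
results = model("example_ct_slice.png")
boxes_xyxy = results[0].boxes.xyxy  # (N, 4) tensor of [x1, y1, x2, y2] boxes
```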
The reason for aiming at approximate boundary boxes is that the SAM model's dice score remains relatively consistent even when the boundary boxes vary by a small margin, specifically 5-10 pixels. To validate this claim, we conducted an experiment and plotted the box plot of dice scores
obtained from the SAM model. We varied the pixel distance of the boundary boxes from perfectly hand-curated boundary boxes (DS0) to DS5 (increased by 5 pixels from all four corners), DS10 (increased by 10 pixels from all four corners), and so on. The results in Figure 1 indicate that the Dice score indeed remains relatively consistent when the pixel distance is increased by 5-10 pixels. This finding suggests that having approximate boundary boxes generated by the YOLOv8 model is sufficient for the subsequent SAM model's performance.
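A simplified sketch of this perturbation experiment is shown below; `expand_box` is our helper name, and the commented-out line stands in for the SAM prompting and Dice computation described later.

```python
def expand_box(box, pixels, height, width):
    """Grow an [x1, y1, x2, y2] box by `pixels` on every side, clipped to the image."""
    x1, y1, x2, y2 = box
    return [max(0, x1 - pixels), max(0, y1 - pixels),
            min(width - 1, x2 + pixels), min(height - 1, y2 + pixels)]

# For each perturbation level DS0, DS5, DS10, ... prompt SAM with the expanded
# ground-truth box and collect the per-image Dice scores plotted in Figure 1.
for pixels in (0, 5, 10, 15, 20, 25, 30):
    expanded = expand_box([120, 80, 260, 210], pixels, height=512, width=512)
    # dice_scores[f"DS{pixels}"].append(dice(sam_segment(image, expanded), gt_mask))
```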
It is crucial to clarify that in this work, the YOLOv8 model is solely responsible for generating boundary boxes and not segmentation masks. This is because the limited dataset of images and masks would not be sufficient to generate accurate segmentation masks.
### Generating Boundary Boxes and Utilizing SAM and HQ-SAM for Segmentation
The third and final part of our study is to combine YOLOv8 with SAM models (SAM and HQ-SAM). SAM is a versatile and potent architecture designed for real-world segmentation tasks. Its strengths lie in supporting flexible prompts, real-time mask computation, and its ability to handle ambiguity. Utilizing an image encoder, a prompt encoder, and a lightweight mask decoder, SAM accurately predicts segmentation masks, even generating multiple masks for a single prompt to adapt to ambiguous scenarios. Its effectiveness is further enhanced by being trained on a diverse set of masks using the innovative "data engine" strategy, resulting in high-quality and real-time mask predictions [8].
Building upon SAM's foundation, HQ-SAM is an advanced segmentation model that incorporates a learnable High-Quality Output Token. This augmentation enables precise object segmentation without significantly increasing parameters or computational complexity, while still maintaining promptability and zero-shot generalizability [7].
Figure 2 depicts the process of generating predicted boundary boxes using the YOLOv8 model. These boundary boxes serve as essential inputs to both SAM and HQ-SAM models, acting as regions of interest (ROIs). By directing the attention of subsequent models to relevant areas within the image, these boundary boxes aid in achieving accurate segmentation results.
The decision to employ SAM was based on its robustness and effectiveness in handling ambiguity, particularly concerning boundary boxes in this case. Integrating YOLOv8 with SAM and HQ-SAM models aims to achieve superior segmentation results compared to using segmentation masks directly from YOLOv8. By combining YOLOv8's approximate boundary boxes with the spatial attention mechanisms of SAM and HQ-SAM, this approach ensures better localization and segmentation of regions of interest in medical images.
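A minimal sketch of the combined pipeline, using the public `ultralytics` and `segment_anything` packages, is shown below; the weight files, checkpoint name, and input image are placeholders, and HQ-SAM would be used analogously through its own predictor class.

```python
import cv2
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

image_path = "example_xray.png"                      # placeholder input image
image_rgb = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)

# 1) Approximate boundary box from the fine-tuned YOLOv8 detector.
detector = YOLO("yolov8_roi.pt")                     # hypothetical fine-tuned weights
box = detector(image_path)[0].boxes.xyxy[0].cpu().numpy()  # [x1, y1, x2, y2]

# 2) Box-prompted segmentation with SAM.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_rgb)                       # SAM expects an RGB uint8 array
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
roi_mask = masks[0]                                  # boolean ROI mask
```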
## 3 Results
Figure 3 displays visual results of random images, along with their ground truth labels and the predicted masks by SAM (HQ-SAM, SAM) and YOLOv8 models. Upon visual comparison, both SAM models (HQ-SAM and SAM) demonstrate superior performance in all three modalities, as their predicted masks are significantly better than YOLOv8's. However, it's important to note that YOLOv8 is solely used for generating prompts for the SAM models, and it was expected that YOLOv8 would perform poorly on segmentation compared to the SAM models. The inclusion of YOLOv8's segmentation results in the paper aims to demonstrate its inferior performance in predicting segmentation masks, while also showcasing how the prompts (boundary boxes) derived from YOLOv8 enable full automation in the SAM models. To delve deeper into the results, a computational analysis of YOLOv8+SAM and YOLOv8+HQ-SAM was conducted.
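The per-image metrics underlying this analysis can be computed directly from the predicted and ground-truth binary masks, for example as in the sketch below (the function name is ours). Note that for binary masks the pixel-wise F1 score and Dice coefficient coincide, consistent with the identical mean F1 and Dice values reported in the tables.

```python
import numpy as np

def mask_metrics(pred, gt, eps=1e-7):
    """Pixel-wise precision, recall, F1, and Dice for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    return precision, recall, f1, dice
```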
In the X-ray dataset, the SAM model exhibited strong segmentation capabilities with a mean Dice Score of 0.9012, Precision of 0.8747, Recall of 0.9419, and F1-score of 0.9012. The HQ-SAM model, designed to provide higher quality segmentation, achieved slightly lower results, with a mean Dice Score of 0.8902, Precision of 0.8434, Recall of 0.9560, and F1-score of 0.8902. However, both SAM and HQ-SAM models significantly outperformed the YOLOv8 model, which struggled with a mean Dice Score of only 0.1938, Precision of 0.173, Recall of 0.241, and F1-score of 0.1938 on the same dataset (as shown in table 1 and Figure 4).
Moving on to the Short-axis Aorta Ultrasound dataset, the SAM model once again demonstrated its efficacy with a mean Dice Score of 0.769, Precision of 0.836, Recall of
Figure 1: This figure shows the dice scores’ (SAM model on ultrasound images) box plots when bounding boxes (manually drawn) are perfect fit to ROI (DS0), when bounding boxes are 5 pixels are bigger (DS5), when bounding boxes are 10 pixels are bigger (DS10) and so on.
0.732, and F1-score of 0.769. The HQ-SAM model's performance was similar to that of SAM, achieving a mean
Figure 3: The image displays random medical images with their ground truth labels and predicted masks from SAM (HQ-SAM, SAM) and YOLOv8 models.
Figure 2: We use YOLOv8 to generate boundary boxes as prompt on Lung CT Scan images. These prompts are then fed into SAM and HQ-SAM models, which produce segmentation masks for the Regions of Interest (ROI).
Dice Score of 0.7722, Precision of 0.834, Recall of 0.7392, and F1-score of 0.772. While the YOLOv8 model improved compared to its X-ray results, it still fell short, yielding a mean Dice Score of 0.5064, Precision of 0.8775, Recall of 0.4254, and F1-score of 0.5064 (as shown in table 2 and Figure 5).
For the lung CT scan segmentation dataset, both SAM and HQ-SAM models showcased their effectiveness. The SAM model obtained a mean Dice Score of 0.8799, Precision of 0.836, Recall of 0.948, and F1-score of 0.879. The HQ-SAM model also performed well with a mean Dice Score of 0.8554, Precision of 0.8429, Recall of 0.8903, and F1-score of 0.855. Once again, the YOLOv8 model lagged behind, achieving a mean Dice Score of 0.520, Precision of 0.5291, Recall of 0.525, and F1-score of 0.520 (as shown in table 3 and Figure 6).
## 4 Discussion
The findings presented in this paper provide valuable insights into the performance of three different models, SAM, HQ-SAM, and YOLOv8, on various medical imaging datasets. Medical image segmentation is a crucial task in healthcare, enabling accurate identification and delineation of anatomical structures, tumors, and other abnormalities. The results of this study shed light on the strengths
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Stats & HQ-SAM Dice Score & HQ-SAM Precision & HQ-SAM Recall & HQ-SAM F1-score \\ \hline Mean & 0.8902 & 0.8434 & 0.9560 & 0.8902 \\ Median & 0.9052 & 0.868 & 0.966 & 0.9052 \\ Std. & 0.0669 & 0.1166 & 0.0407 & 0.066 \\ \hline Stats & SAM Dice Score & SAM Precision & SAM Recall & SAM F1-score \\ \hline Mean & 0.9012 & 0.8747 & 0.9419 & 0.9012 \\ Median & 0.9210 & 0.9112 & 0.953 & 0.9210 \\ Std. & 0.0633 & 0.1120 & 0.0474 & 0.0633 \\ \hline Stats & YOLO Dice Score & YOLO Precision & YOLO Recall & YOLO F1-score \\ \hline Mean & 0.1938 & 0.173 & 0.241 & 0.1938 \\ Median & 0.193 & 0.162 & 0.2363 & 0.1932 \\ Std. & 0.123 & 0.1227 & 0.149 & 0.123 \\ \hline \end{tabular}
\end{table}
Table 1: Results matrices for X-ray Dataset.
Figure 4: Box plot Results matrices for X-ray Dataset.
Figure 5: Box plot results matrices for Short-axis Aorta Ultrasound dataset.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline Stats & HQ-SAM Dice Score & HQ-SAM Precision & HQ-SAM Recall & HQ-SAM F1-score \\ \hline Mean & 0.8554 & 0.8429 & 0.8903 & 0.855 \\ Median & 0.8839 & 0.8659 & 0.970 & 0.883 \\ Std. & 0.139 & 0.1557 & 0.1630 & 0.139 \\ \hline Stats & SAM Dice Score & SAM Precision & SAM Recall & SAM F1-score \\ \hline Mean & 0.8799 & 0.836 & 0.948 & 0.879 \\ Median & 0.9160 & 0.86 & 0.983 & 0.9160 \\ Std. & 0.131 & 0.1582 & 0.1292 & 0.131 \\ \hline Stats & YOLO Dice Score & YOLO Precision & YOLO Recall & YOLO F1-score \\ \hline Mean & 0.520 & 0.5291 & 0.525 & 0.520 \\ Median & 0.5301 & 0.5446 & 0.52 & 0.530 \\ Std. & 0.0652 & 0.095 & 0.0674 & 0.066 \\ \hline \end{tabular}
\end{table}
Table 3: Results matrices for lung CT scan segmentation dataset.
Figure 6: Box plot results matrices for lung CT scan segmentation dataset.
and limitations of each model and offer important considerations for researchers and practitioners working in the field of medical image analysis.
The SAM model consistently outperformed the other two models, HQ-SAM and YOLOv8, across most datasets. Its mean Dice Score, Precision, Recall, and F1-score consistently exhibited higher values, indicating superior segmentation accuracy and overall performance. The robustness and effectiveness of the SAM model can be attributed to its underlying architecture, which likely includes advanced convolutional neural networks (CNNs) and attention mechanisms. These components enable the model to effectively learn and represent complex patterns in medical images, leading to more accurate segmentations.
The HQ-SAM model was developed with the intention of providing higher quality segmentation results. However, the results indicate that the improvements achieved with HQ-SAM over the standard SAM model were not substantial in most cases. While the HQ-SAM model still demonstrated competitive performance, the marginal gains in accuracy and precision may not justify the additional computational cost required for its implementation. This suggests that practitioners and researchers should carefully weigh the benefits of using HQ-SAM against its increased computational demands, especially in real-time or resource-constrained environments.
The YOLOv8 model consistently struggled to produce accurate segmentations across all datasets. Its mean Dice Score, Precision, Recall, and F1-score were significantly lower compared to both SAM and HQ-SAM models. YOLOv8's inferior performance may be attributed to several factors, including training on a limited dataset, its architecture's suitability for medical imaging tasks, and its limited ability to handle complex and diverse anatomical structures. However, in our study the main goal of the YOLOv8 model is to predict approximate boundary boxes around the ROI, and judging by the performance of SAM and HQ-SAM on all three medical modalities, we can conclude that this technique worked well.
The results underscore the importance of carefully selecting the appropriate model based on the specific requirements of the medical imaging task and the characteristics of the dataset. While the SAM model emerged as the top performer overall, its advantages should be weighed against the specific application and resource constraints. For instance, if real-time processing is a crucial requirement, the standard SAM model might be a more practical choice than HQ-SAM.
Interestingly, the performance of each model varied across datasets. While the SAM model achieved outstanding results in both X-ray and lung CT scan segmentation, its performance on the Short-axis Aorta Ultrasound dataset was slightly lower. This emphasizes the need to evaluate models on multiple datasets to gain a comprehensive understanding of their strengths and limitations. Additionally, dataset-specific characteristics, such as image quality, resolution, and class imbalances, can significantly influence model performance and should be carefully considered during the model selection process.
The results suggest that there is still room for improvement in medical image segmentation models. Further research could focus on enhancing the SAM model or exploring new architectures and attention mechanisms to push the boundaries of segmentation accuracy. Additionally, model ensembles or combining the strengths of multiple models could be investigated to potentially achieve even higher segmentation performance.
Accurate medical image segmentation is of paramount importance in clinical practice for disease diagnosis, treatment planning, and monitoring patient progress. The superior performance of the SAM model has promising clinical implications, as it can aid healthcare professionals in making more precise and informed decisions. Moreover, accurate segmentation can lead to improved automated analysis, reducing the burden on radiologists and enabling faster diagnosis and treatment.
It is essential to acknowledge the limitations of this study. The evaluation was limited to three specific models, and other state-of-the-art models might have been excluded. Additionally, the datasets used in the study may not fully represent the diversity of medical imaging challenges, and performance on other datasets might differ. Future studies should incorporate more diverse datasets and consider the transferability of the models to different medical imaging modalities.
## 5 Conclusion
In this paper, we conducted a comprehensive evaluation of three different models, namely, SAM, HQ-SAM, and YOLOv8, for medical image segmentation across multiple datasets. Medical image segmentation is a critical task in healthcare, enabling precise identification and delineation of anatomical structures and abnormalities. The results of our study provide valuable insights into the strengths and weaknesses of each model, offering important considerations for researchers and practitioners in the field of medical image analysis.
Our findings demonstrate that the SAM model consistently outperformed the other two models, HQ-SAM and YOLOv8, in most scenarios. With its higher mean Dice Score, Precision, Recall, and F1-score, the SAM model showcased superior segmentation accuracy and overall performance. This can be attributed to the model's advanced architecture, incorporating convolutional neural networks and attention mechanisms, which effectively learn and represent complex patterns in medical images.
Our study emphasizes the importance of carefully selecting the appropriate model based on the specific requirements of the medical imaging task and the characteristics of the dataset. While the SAM model excelled in both X-ray and lung CT scan segmentation datasets, its performance on the Short-axis Aorta Ultrasound dataset was slightly lower. This underlines the need to evaluate models on diverse datasets to gain a comprehensive understanding of their capabilities.
Our research also highlights the potential for further improvement in medical image segmentation models. Future studies could focus on enhancing the SAM model or exploring new architectures and attention mechanisms to advance segmentation accuracy. Additionally, investigating model ensembles or combining the strengths of multiple models could lead to even higher segmentation performance.
In conclusion, our study underscores the effectiveness of the SAM model for medical image segmentation tasks, particularly in X-ray and lung CT scan datasets. While HQ-SAM offers potential advantages, careful consideration of its computational cost is essential. YOLOv8, while lagging behind in performance, requires further refinement to become a viable option for medical image segmentation.
|
2305.16945
|
Levin Tree Search with Context Models
|
Levin Tree Search (LTS) is a search algorithm that makes use of a policy (a
probability distribution over actions) and comes with a theoretical guarantee
on the number of expansions before reaching a goal node, depending on the
quality of the policy. This guarantee can be used as a loss function, which we
call the LTS loss, to optimize neural networks representing the policy
(LTS+NN). In this work we show that the neural network can be substituted with
parameterized context models originating from the online compression literature
(LTS+CM). We show that the LTS loss is convex under this new model, which
allows for using standard convex optimization tools, and obtain convergence
guarantees to the optimal parameters in an online setting for a given set of
solution trajectories -- guarantees that cannot be provided for neural
networks. The new LTS+CM algorithm compares favorably against LTS+NN on several
benchmarks: Sokoban (Boxoban), The Witness, and the 24-Sliding Tile puzzle
(STP). The difference is particularly large on STP, where LTS+NN fails to solve
most of the test instances while LTS+CM solves each test instance in a fraction
of a second. Furthermore, we show that LTS+CM is able to learn a policy that
solves the Rubik's cube in only a few hundred expansions, which considerably
improves upon previous machine learning techniques.
|
Laurent Orseau, Marcus Hutter, Levi H. S. Lelis
|
2023-05-26T14:00:12Z
|
http://arxiv.org/abs/2305.16945v2
|
# Levin Tree Search with Context Models
###### Abstract
Levin Tree Search (LTS) is a search algorithm that makes use of a policy (a probability distribution over actions) and comes with a theoretical guarantee on the number of expansions before reaching a goal node, depending on the quality of the policy. This guarantee can be used as a loss function, which we call the LTS loss, to optimize neural networks representing the policy (LTS+NN). In this work we show that the neural network can be substituted with parameterized context models originating from the online compression literature (LTS+CM). We show that the LTS loss is convex under this new model, which allows for using standard convex optimization tools, and obtain convergence guarantees to the optimal parameters in an online setting for a given set of solution trajectories -- guarantees that cannot be provided for neural networks. The new LTS+CM algorithm compares favorably against LTS+NN on several benchmarks: Sokoban (Boxoban), The Witness, and the 24-Sliding Tile puzzle (STP). The difference is particularly large on STP, where LTS+NN fails to solve most of the test instances while LTS+CM solves each test instance in a fraction of a second. Furthermore, we show that LTS+CM is able to learn a policy that solves the Rubik's cube in only a few hundred expansions, which considerably improves upon previous machine learning techniques.
## 1 Introduction
We 1 consider the problem of solving a set of deterministic single-agent search problems of a given domain, starting with little prior domain-specific knowledge. We focus on algorithms that learn from previously solved instances to help solve the remaining ones. We consider the satisficing setting where solvers should (learn to) quickly find a solution, rather than minimize the cost of the returned solutions.
Footnote 1: Extended version of the IJCAI 2023 paper. Source code at: [https://github.com/deepmind/levintreesearch_cm](https://github.com/deepmind/levintreesearch_cm).
Levin Tree Search (LevinTS, LTS) is a tree search algorithm for this setup that uses a policy, _i.e._, a probability distribution over actions, to guide the search [10]. LTS has a guarantee on the number of search steps required before finding a solution, which depends on the probability of the corresponding sequence of actions as assigned by the policy. Orseau and Lelis [2021] showed that this guarantee can be used as a loss function. This LTS loss is used to optimize a neural-network (NN) policy in the context of the Bootstrap search-and-learn process [1]: The NN policy is used in LTS (LTS+NN) to iteratively solve an increasing number of problems from a given set, optimizing the parameters of the NN when new problems are solved to improve the policy by minimizing the LTS loss.
One constant outstanding issue with NNs is that the loss function (whether quadratic, log loss, LTS loss, etc.) is almost never convex in the NN's parameters. Still, most of the time NNs are trained using online convex optimization algorithms, such as stochastic gradient descent, Adagrad [1], and its descendants. Such algorithms often come with strong convergence or regret guarantees that only hold under convexity assumptions, and can help to understand the effect of various quantities (number of parameters, etc.) on the learning speed [23, 16, 2]. In this paper we present parameterized context models for policies that are convex with respect to the model's parameters for the LTS loss. Such models guarantee that we obtain an optimal policy in terms of LTS loss for a given set of training trajectories -- a guarantee NNs do not have.
The context models we introduce for learning policies are based on the models from the online data compression literature [14, 15]. Our context models are composed of a set of contexts, where each context is associated with a probability distribution over actions. These distributions are combined using product-of-experts [13] to produce the policy used during the LTS search. The expressive power of product-of-experts comes mainly from the ability of each expert to (possibly softly) veto a particular option by assigning it a low probability. A similar combination using geometric mixing [11, 12] (a geometrically-parameterized variant of product-of-experts) in a multi-layer architecture has already proved competitive with NNs in classification, regression and density modelling tasks [21, 22, 13]. In our work the context distributions are fully parameterized and we show that the LTS loss is convex for this parameterization.
In their experiments, Orseau and Lelis (2021) showed that LTS+NN performs well on two of the three evaluated domains (Sokoban and The Witness), but fails to learn a policy for the 24-Sliding Tile Puzzle (STP). We show that LTS with context models optimized with the LTS loss within the Bootstrap process is able to learn a strong policy for all three domains evaluated, including the STP. We also show that LTS using context models is able to learn a policy that allows it to find solutions to random instances of the Rubik's Cube with only a few hundred expansions. In the context of satisficing planning, this is a major improvement over previous machine-learning-based approaches, which require hundreds of thousands of expansions to solve instances of the Rubik's Cube.
We start by giving some notation and the problem definition (Section 2), before describing the LTS algorithm, for which we also provide a new lower bound on the number of node expansions (Section 3). Then, we describe parameterized context models and explain why we can expect them to work well when using product-of-experts (Section 4), before showing that the LTS loss function is convex for this parameterization (Section 5) and considering theoretical implications. Finally, we present the experimental results (Section 6) before concluding (Section 7).
## 2 Notation and Problem Definition
A table of notation can be found in Appendix I. We write \([t]=\{1,2,\ldots t\}\) for a natural number \(t\). The set of nodes is \(\mathcal{N}\) and is a forest, where each tree in the forest represents a search problem with the root being the initial configuration of the problem. The set of children of a node \(n\in\mathcal{N}\) is \(\mathcal{C}(n)\) and its parent is \(\text{par}(n)\); if a node has no parent it is a root node. The set of ancestors of a node is \(\text{anc}(n)\) and is the transitive closure of \(\text{par}(\cdot)\); we also define \(\text{anc}_{+}(n)=\text{anc}(n)\cup\{n\}\). Similarly, \(\text{desc}(n)\) is the set of the descendants of \(n\), and \(\text{desc}_{+}(n)=\text{desc}(n)\cup\{n\}\). The depth of a node is \(d(n)=|\text{anc}(n)|\), and so the depth of a root node is 0. The root \(\text{root}(n)\) of a node \(n\) is the single node \(n_{0}\in\text{anc}_{+}(n)\) such that \(n_{0}\) is a root. A set of nodes \(\mathcal{N}^{\prime}\) is a tree in the forest \(\mathcal{N}\) if and only if there is a node \(n^{0}\in\mathcal{N}^{\prime}\) such that \(\bigcup_{n\in\mathcal{N}^{\prime}}\text{root}(n)=\{n^{0}\}\). Let \(\mathcal{N}^{0}=\bigcup_{n\in\mathcal{N}}\text{root}(n)\) be the set of all root nodes. We write \(n_{[j]}\) for the node at depth \(j\in[d(n)]\) on the path from \(\text{root}(n)=n_{[0]}\) to \(n=n_{[d(n)]}\). Let \(\mathcal{N}^{*}\subseteq\mathcal{N}\) be the set of all _solution_ nodes, and we write \(\mathcal{N}^{*}(n)=\mathcal{N}^{*}\cap\text{desc}_{+}(n)\) for the set of solution nodes under \(n\). A _policy_ \(\pi\) is such that for all \(n\in\mathcal{N}\) and for all \(n^{\prime}\in\mathcal{C}(n):\pi(n^{\prime}\mid n)\geq 0\) and \(\sum_{n^{\prime}\in\mathcal{C}(n)}\pi(n^{\prime}\mid n)\leq 1\). The policy is called _proper_ if the latter holds as an equality. We define, for all \(n^{\prime}\in\mathcal{C}(n)\), \(\pi(n^{\prime})=\pi(n)\pi(n^{\prime}\mid n)\) recursively and \(\pi(n)=1\) if \(n\) is a root node.
Edges between nodes are labeled with _actions_ and the children of any node all have different labels, but different nodes can have overlapping sets of actions. The set of all edge labels is \(\mathcal{A}\). Let \(a(n)\) be the label of the edge from \(\text{par}(n)\) to \(n\), and let \(\mathcal{A}(n)\) be the set of edge labels for the edges from node \(n\) to its children. Then \(n\neq n^{\prime}\wedge\text{par}(n)=\text{par}(n^{\prime})\) implies \(a(n)\neq a(n^{\prime})\).
Starting at a given root node \(n^{0}\), a tree search algorithm expands a set \(\mathcal{N}^{\prime}\subseteq\text{desc}_{+}(n^{0})\) until it finds a solution node in \(\mathcal{N}^{*}(n^{0})\). In this paper, given a set of root nodes, we are interested in parameterized algorithms that attempt to minimize the cumulative number of nodes that are expanded before finding a solution node for each root node, by improving the parameters of the algorithm from found solutions, and with only little prior domain-specific knowledge.
## 3 Levin Tree Search
Levin Tree Search (LevinTS, which we abbreviate to LTS here) is a tree/graph search algorithm based on best-first search (Pearl, 1984) that uses the cost function2 \(n\mapsto d(n)/\pi(n)\) (Orseau _et al._, 2018), which, for convenience, we abbreviate as \(\frac{d}{\pi}(n)\). That is, since \(\frac{d}{\pi}(\cdot)\) is monotonically increasing from parent to child, LTS expands all nodes by increasing order of \(\frac{d}{\pi}(\cdot)\) (Theorem 2, Orseau _et al._ (2018)).
Footnote 2: Orseau _et al._ (2018) actually use the cost function \((d(n)+1)/\pi(n)\). Here we use \(d(n)/\pi(n)\) instead which is actually (very) slightly better and makes the notation simpler. All original results can be straightforwardly adapted.
**Theorem 1** (LTS upper bound, adapted from Orseau _et al._ (2018), Theorem 3).: _Let \(\pi\) be a policy. For any node \(n^{*}\in\mathcal{N}\), let \(\overline{\mathcal{N}}(n^{*})=\{n\in\mathcal{N}:\text{root}(n)=\text{root}(n^ {*})\wedge\frac{d}{\pi}(n)\leq\frac{d}{\pi}(n^{*})\}\) be the set of nodes within the same tree with cost at most that of \(n^{*}\). Then_
\[|\overline{\mathcal{N}}(n^{*})|\leq 1+\frac{d(n^{*})}{\pi(n^{*})}\,.\]
Proof.: Let \(\mathcal{L}\) be the set of leaves of \(\overline{\mathcal{N}}(n^{*})\), then
\[|\overline{\mathcal{N}}(n^{*})|\leq 1+\sum_{n\in\mathcal{L}}d(n)=1+\sum_{n\in\mathcal{L}}\pi(n)\tfrac{d}{\pi}(n)\leq 1+\sum_{n\in\mathcal{L}}\pi(n)\tfrac{d}{\pi}(n^{*})\leq 1+\tfrac{d}{\pi}(n^{*})\,,\]
where we used Lemma 10 (in Appendix) on the last inequality.
The consequence is that LTS started at \(\text{root}(n^{*})\) expands at most \(1+\frac{d}{\pi}(n^{*})\) nodes before reaching \(n^{*}\).
Orseau and Lelis (2021) also provide a related lower bound showing that, for any policy, there are sets of problems where any algorithm needs to expand \(\Omega(\frac{d}{\pi}(n^{*}))\) nodes before reaching some node \(n^{*}\) in the worst case. They also turn the guarantee of Theorem 1 into a loss function, used to optimize the parameters of a neural network. Let \(\mathcal{N}^{\prime}\) be a set of solution nodes whose roots are all different, and define the _LTS loss function_:
\[L(\mathcal{N}^{\prime})=\sum_{n\in\mathcal{N}^{\prime}}\tfrac{d}{\pi}(n) \tag{1}\]
which upper bounds the total search time of LTS to reach all nodes in \(\mathcal{N}^{\prime}\). Equation (1) is the loss function used in Algorithm 2 (Appendix A) to optimize the policy -- but a more precise definition for context models will be given later. To further justify the use of this loss function, we provide a lower bound on the number of expansions that LTS must perform before reaching an (unknown) target node.
**Theorem 2** (Informal lower bound).: _For a proper policy \(\pi\) and any node \(n^{*}\), the number of nodes whose \(\frac{d}{\pi}\) cost is at most that of \(n^{*}\) is at least \([\frac{1}{\bar{d}}\frac{d}{\pi}(n^{*})-1]/(|\mathcal{A}|-1)\), where \(\bar{d}-1\) is the average depth of the leaves of those nodes._
A more formal theorem is given in Appendix B.
**Example 3**.: _For a binary tree with a uniform policy, since \(\bar{d}=d(n^{*})+1\), the lower bound gives \(2^{d}d/(d+1)-1\) nodes for a node \(n^{*}\) at depth \(d\) and of probability \(2^{-d}\), which is quite tight since the tree has \(2^{d}-1\) nodes. The upper bound \(1+d2^{d}\) is slightly looser._
**Remark 4**.: _Even though pruning (such as state-equivalence pruning) can make the policy improper, in which case the lower bound does not hold and the upper bound can be loose, optimizing the parameters of the policy for the upper bound still makes sense, since pruning can be seen as a feature placed on top of the policy -- that is, the policy is optimized as if pruning is not used. It must be noted that for optimization Orseau and Lelis (2021) (Section 4) use the log gradient trick to replace the upper bound loss with the actual number of expansions in an attempt to account for pruning; as the results of this paper suggest, it is not clear whether one should account for the actual number of expansions while optimizing the model._
## 4 Context Models
Now we consider that the policy \(\pi\) has some parameters \(\beta\in\mathcal{B}\) (where \(\mathcal{B}\subseteq\mathbb{R}^{k}\) for some \(k\), which will be made more precise later) and we write \(\pi(\cdot;\beta)\) when the parameters are relevant to the discussion. As mentioned in the introduction, we want the LTS loss function of Eq. (1) to be convex in the policy's parameters, which means that we cannot use just any policy -- in particular this rules out deep neural networks. Instead, we use context models, which have been widely used in online prediction and compression (_e.g._, (Rissanen, 1983; Willems _et al._, 1995; Matthews, 2005; Veness _et al._, 2021)).
The set of contexts is \(\mathcal{Q}\). A context is either active or inactive at a given node in the tree. At each node \(n\), the set of active contexts is \(\mathcal{Q}(n)\), and the policy's prediction at \(n\) depends only on these active contexts.
Similarly to patterns in pattern databases (Culberson and Schaeffer, 1998), we organize contexts in sets of mutually exclusive contexts, called _mutex sets_, and each context belongs to exactly one mutex set. The set of mutex sets is \(\mathcal{M}\). For every mutex set \(M\in\mathcal{M}\), for every node \(n\), at most one context is active per mutex set. In this paper we are in the case where _exactly_ one context is active per mutex set, which is what happens when searching with multiple pattern databases, where each pattern database provides a single pattern for a given node in the tree. When designing contexts, it is often more natural to directly design mutex sets. See Figure 1 for an example, omitting the bottom parts of (b) and (d) for now.
To each context \(c\in\mathcal{Q}\) we associate a _predictor_\(p_{c}:\mathcal{A}\rightarrow[0,1]\) which is a (parameterized) categorical probability distribution over edge labels that will be optimized from training data -- the learning part will be explained in Section 5.1.
To combine the predictions of the active contexts at some node \(n\), we take their renormalized product, as an instance of product-of-experts (Hinton, 2002):
\[\forall a\in\mathcal{A}(n):p_{\times}(n,a)=\frac{\prod_{c\in\mathcal{Q}(n)}p_{ c}(a)}{\sum_{a^{\prime}\in\mathcal{A}(n)}\prod_{c\in\mathcal{Q}(n)}p_{c}(a^{ \prime})} \tag{2}\]
We refer to the operation of Eq. (2) as _product mixing_, by relation to geometric mixing (Mattern, 2013), a closely related operation. Then, one can use \(p_{\times}(n,a)\) to define the policy \(\pi(n^{\prime}|n)=p_{\times}(n,a(n^{\prime}))\) to be used with LTS.
The choice of this particular aggregation of the individual predictions is best explained by the following example.
**Example 5** (Wisdom of the product-of-experts crowd).: _Figure 1 (a) and (b) displays a simple maze environment where the agent is coming from the left. The only sensible action is to go Up (toward the exit), but no single context sees the whole picture. Instead, they see only individual cells around the agent, and one context also sees (only) the previous action (which is Right). The first two contexts only see empty cells to the left and top of the agent, and are uninformative (uniform probability distributions) about which action to take. But the next three contexts, while not knowing what to do, know what_ not _to do. When aggregating these predictions with product mixing, only one action remains with high probability: Up._
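To make the effect of Eq. (2) concrete, here is a minimal sketch of product mixing (the five distributions below are made-up numbers in the spirit of Figure 1(b), not values taken from the paper):

```python
import numpy as np

# Illustrative per-context predictions over the actions (Up, Down, Left, Right).
ACTIONS = ["Up", "Down", "Left", "Right"]
context_predictions = np.array([
    [0.25, 0.25, 0.25, 0.25],  # empty cell to the left: uninformative
    [0.25, 0.25, 0.25, 0.25],  # empty cell above: uninformative
    [0.32, 0.32, 0.32, 0.04],  # wall to the right: softly vetoes Right
    [0.32, 0.04, 0.32, 0.32],  # wall below: softly vetoes Down
    [0.32, 0.32, 0.04, 0.32],  # last action was Right: softly vetoes going back Left
])

def product_mix(preds: np.ndarray) -> np.ndarray:
    """Renormalized product of experts (Eq. 2), taken over the action axis."""
    prod = preds.prod(axis=0)
    return prod / prod.sum()

policy = product_mix(context_predictions)
for action, p in zip(ACTIONS, policy):
    print(f"{action:>5}: {p:.3f}")
# No single context gives Up more than about 1/3, yet the mixture puts
# roughly three quarters of its mass on Up: the soft vetoes compound.
```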
**Example 6** (Generalization and specialisation).: _Another advantage of product mixing is its ability to make use of both general predictors and specialized predictors. Consider a mutex set composed of \(m\) contexts, and assume we have a total of \(M\) data points (nodes on solution trajectories). Due to the mutual exclusion nature of mutex sets, these \(M\) data points must be partitioned among the \(m\) contexts. Assuming for simplicity a mostly uniform partitioning, then each context receives approximately \(M/m\) data points to learn from. Consider the mutex sets in Fig. 1 (b): The first 4 mutex sets have size 3 (each context can see a wall, an empty cell or the goal) and the last one has size 4. These are very small sizes and thus the parameters of the contexts predictors should quickly see enough data to learn an accurate distribution. However, while accurate, the distribution can hardly be specific, and each predictor alone is not sufficient to obtain a nearly-deterministic policy -- though fortunately product mixing helps with that. Now compare with the 2-cross mutex set in Fig. 1 (d), and assume that cells outside the grid are walls. A quick calculation, assuming only one goal cell, gives that it should contain a little less than 1280 different contexts. Each of these contexts thus receives less data to learn from on average than the contexts in (b), but also sees more information from the environment which may lead to more specific (less entropic) distributions, as is the case in situation (c)._
**Remark 7**.: _A predictor that has a uniform distribution has no effect within a product mixture. Hence, adding new predictors initialized with uniform predictions does not change the policy, and similarly, if a context does not happen to be useful to learn a good policy, optimization will push its weights toward the uniform distribution, implicitly discarding it._
Hence, product mixing is able to take advantage of both general contexts that occur in many situations and specialised contexts tailored to specific situations -- and anything in-between.
Our LTS with context models algorithm is given in Algorithm 1, building upon the one by Orseau and Lelis (2021) with a few differences. As mentioned earlier, it is a best-first search algorithm and uses a priority queue to maintain the nodes to be expanded next. It is also budgeted and returns "budget_reached" if too many nodes have been expanded. It returns "no_solution" if all nodes have been expanded without reaching a solution node -- assuming safe pruning or no pruning. Safe pruning (using visited_states) can be performed if the policy is Markovian (Orseau _et al._, 2018), which is the case in particular when the set of active contexts \(\mathcal{Q}(n)\) depends only on \(\texttt{state}(n)\). The algorithm assumes the existence of application-specific \(\texttt{state}\) and \(\texttt{state\_transition}\) functions, such that \(\texttt{state}(n^{\prime})=\texttt{state\_transition}(\texttt{state}(n),a(n^{\prime}))\) for all \(n^{\prime}\in\mathcal{C}(n)\). Note that with context models the prediction \(\pi(n^{\prime}\mid n)\) depends on the active contexts \(\mathcal{Q}(n)\) but _not_ on the state of a child node. This allows us to delay the state transition until the child is extracted from the queue, saving up to a branching factor of state transitions (see also (Agostinelli _et al._, 2021)).
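To make the search loop concrete, the following is a minimal, self-contained sketch of LTS as best-first search ordered by \(d(n)/\pi(n)\) (an illustration in Python rather than the paper's implementation; it checks states at extraction time, omits the delayed state transitions, and the toy domain at the bottom is invented for the example):

```python
import heapq
import itertools

def lts_search(root_state, policy, transition, actions, is_solution, budget):
    """Sketch of Levin Tree Search: best-first search on the cost d(n)/pi(n),
    with a node-expansion budget and pruning of already-visited states
    (safe when the policy is Markovian)."""
    counter = itertools.count()  # tie-breaker so the heap never compares states
    # Queue entries: (cost, tie, depth, probability, state, path of actions).
    queue = [(0.0, next(counter), 0, 1.0, root_state, [])]
    visited = set()
    expansions = 0
    while queue:
        cost, _, depth, prob, state, path = heapq.heappop(queue)
        if state in visited:
            continue
        visited.add(state)
        if is_solution(state):
            return path, expansions
        if expansions >= budget:
            return None, expansions              # "budget_reached"
        expansions += 1
        action_probs = policy(state)
        for action in actions:
            child_prob = prob * action_probs[action]
            if child_prob <= 0.0:
                continue
            child = transition(state, action)
            child_depth = depth + 1
            child_cost = child_depth / child_prob  # d(n)/pi(n)
            heapq.heappush(queue, (child_cost, next(counter), child_depth,
                                   child_prob, child, path + [action]))
    return None, expansions                      # "no_solution"

# Toy domain (purely illustrative): reach 17 from 0 with actions +1 and *2.
actions = ["+1", "*2"]
uniform_policy = lambda state: {a: 1.0 / len(actions) for a in actions}
step = lambda s, a: s + 1 if a == "+1" else s * 2
print(lts_search(0, uniform_policy, step, actions, lambda s: s == 17, budget=10_000))
```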
**Remark 8**.: _In practice, usually a mutex set can be implemented as a hashtable as for pattern databases: the active context is read from the current state of the environment, and the corresponding predictor is retrieved from the hashtable. This allows for a computational cost of \(O(\log|M|)\) per mutex set \(M\), or even \(O(1)\) with perfect hash functions, and thus \(O(\sum_{M\in\mathcal{M}}\log|M|)\) which is much smaller than \(|\mathcal{Q}|\). Using an imperfect hashtable, only the contexts that appear on the paths to the found solution nodes need to be stored._
## 5 Convexity
Because the LTS loss in Eq. (1) is different from the log loss (Cesa-Bianchi and Lugosi, 2006) (due to the sum in-between the products), optimization does _not_ reduce to maximum likelihood estimation. However, we show that convexity in the log loss implies convexity in the LTS loss. This means, in particular, that if a probability distribution is log-concave (such as all the members of the exponential family), that is, the log loss for such models is convex, then the LTS loss is convex in these parameters, too.
First we show that every sequence of functions with a convex log loss also has a convex _inverse_ loss and LTS loss.
**Theorem 9** (Log loss to inverse loss convexity).: _Let \(f_{1},f_{2},\ldots f_{s}\) be a sequence of positive functions with \(f_{i}:\mathbb{F}^{n}\rightarrow(0,\infty)\) for all \(i\in[s]\) and such that \(\beta\mapsto-\log f_{i}(\beta)\) is convex for each \(i\in[s]\), then \(L(\beta)=\sum_{k}\frac{1}{\prod_{t}f_{k,t}(\beta)}\) is convex, where each \((k,t)\) corresponds to a unique index in \([s]\)._
The proof is in Appendix E.1. For a policy \(\pi(\cdot;\beta)\) parameterized by \(\beta\), the LTS loss in Eq. (1) is \(L_{\mathcal{N}^{\prime}}(\beta)=\sum_{k\in\mathcal{N}^{\prime}}d(n^{k})/\pi(n^ {k};\beta)\), and its convexity follows from Theorem 9 by taking \(f_{k,0}(\cdot)=1/d(n^{k})\), and \(f_{k,t}(\beta)=\pi(n^{k}_{[t]}|n^{k}_{[t-1]};\beta)\) such that \(\prod_{t=1}^{d(n^{k})}f_{k,t}(\beta)=\pi(n^{k};\beta)\).
Theorem 9 means that many tools of compression and online prediction in the log loss can be transferred to the LTS loss case. In particular, when there is only one mutex set (\(|\mathcal{M}|=1\)), the \(f_{i}\) are simple categorical distributions, that is, \(f_{i}(\beta)=\beta_{j_{t}}\) for some index \(j_{t}\), and thus \(-\log f_{i}\) is a convex function, so the corresponding LTS loss is convex too. Unfortunately, the LTS loss function for such a model is convex in \(\beta\) only when there is only one mutex set, \(|\mathcal{M}|=1\). Fortunately, it becomes convex for \(|\mathcal{M}|\geq 1\) when we reparameterize the context predictors with \(\beta\rightsquigarrow\exp\beta\).
Let \(\beta_{c,a}\in[\ln\varepsilon_{\text{low}},0]\) be the value of the parameter of the predictor for context \(c\) for the edge label \(a\). Then the prediction of a context \(c\) is defined as
\[\forall a\in\mathcal{A}(n):p_{c}(a;\beta)=\frac{\exp(\beta_{c,a})}{\sum_{a^{ \prime}\in\mathcal{A}(n)}\exp(\beta_{c,a^{\prime}})}\,. \tag{3}\]
We can also now make precise the definition of \(\mathcal{B}\): \(\mathcal{B}=[\ln\varepsilon_{\text{low}},0]^{|\mathcal{Q}|\times A}\), and note that \(p_{c}(a;\beta)\geq\varepsilon_{\text{low}}/|\mathcal{A}(n)|\). Similarly to geometric mixing (Mattern, 2013; Mattern, 2016), it can be proven that context models have a convex log loss,
Figure 1: (a) A simple maze environment. The dark gray cells are walls, the green circle is a goal. The blue arrow symbolizes the fact that the agent (red triangle) is coming from the left. (b) A simple context model with five mutex sets: One mutex set for each of the four cells around the triangle, and one mutex set for the last chosen action. Each of the first four mutex sets contains three contexts (wall, empty cell, goal), and the last mutex set contains four contexts (one for each action). The 5 active contexts (one per mutex set) for the situation shown in (a) are depicted at the top, while their individual probability predictions are the horizontal blue bars for each of the four actions. The last column is the resulting product mixing prediction of the 5 predictions. No individual context prediction exceeds 1/3 for any action, yet the product mixing prediction is close to 1 for the action Up. (c) Another situation. (d) A different set of mutex sets for the situation in (c): A 1-cross around the agent, a 2-cross around the agent, and the last action. The specialized 2-cross context is certain that the correct action is Right, despite the two other contexts together giving more weight to action Down. The resulting product mixing gives high probability to Right, showing that, in product mixing, specialized contexts can take precedence over less-certain more-general contexts.
and thus their LTS loss is also convex by Theorem 9. In Appendix E.2 we provide a more direct proof, and a generalization to the exponential family for finite sets of actions.
Plugging (3) into Eq. (2) and pushing the probabilities away from 0 with \(\varepsilon_{\text{mix}}>0\)(Orseau _et al._, 2018) we obtain the policy's probability for a child \(n^{\prime}\) of \(n\) (_i.e._, for the action \(a(n^{\prime})\) at node \(n\)) with parameters \(\beta\):
\[p_{\times}(n,a;\beta) =\frac{\exp(\sum_{c\in\mathcal{Q}(n)}\beta_{c,a})}{\sum_{a^{ \prime}\in\mathcal{A}(n)}\exp\left(\sum_{c\in\mathcal{Q}(n)}\beta_{c,a^{ \prime}}\right)}\,, \tag{4}\] \[\pi(n^{\prime}\mid n;\beta) =(1-\varepsilon_{\text{mix}})p_{\times}(n,a(n^{\prime});\beta)+ \frac{\varepsilon_{\text{mix}}}{|\mathcal{A}(n)|}\,. \tag{5}\]
### Optimization
We can now give a more explicit form of the LTS loss function of Eq. (1) for context models with a dependency on the parameters \(\beta\), for a set of solution nodes \(\mathcal{N}^{\prime}\) assumed to all have different roots:
\[L(\mathcal{N}^{\prime},\beta)=\sum_{n\in\mathcal{N}^{\prime}}\ell(n,\beta)\,,\tag{6}\] \[\ell(n,\beta)=\frac{d(n)}{\pi(n;\beta)}=\frac{d(n)}{\prod_{j=0}^{d(n)-1}\pi(n_{[j+1]}\mid n_{[j]};\beta)}=d(n)\prod_{j=0}^{d(n)-1}\sum_{a^{\prime}\in\mathcal{A}(n_{[j]})}\exp\left(\sum_{c\in\mathcal{Q}(n_{[j]})}\beta_{c,a^{\prime}}-\beta_{c,a(n_{[j+1]})}\right)\tag{7}\]
where \(a(n_{[j+1]})\) should be read as the action chosen at step \(j\), and the last equality follows from Eqs. (4) and (5) where we take \(\varepsilon_{\text{mix}}=0\) during optimization. Recall that this loss function \(L\) gives an upper bound on the total search time (in node expansions) required for LTS to find all the solutions \(\mathcal{N}^{\prime}\) for their corresponding problems (root nodes), and thus optimizing the parameters corresponds to optimizing the search time.
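The following sketch evaluates Eqs. (4), (5) and (7) for a single solution trajectory (an illustration only: the parameter values, active-context sets and trajectory are made up, and all actions are assumed available at every node):

```python
import numpy as np

eps_low = 1e-4
num_contexts, num_actions = 6, 4
rng = np.random.default_rng(0)
# beta[c, a] in [ln(eps_low), 0]: one row per context, one column per action.
beta = rng.uniform(np.log(eps_low), 0.0, size=(num_contexts, num_actions))

def policy_probs(beta, active_contexts, eps_mix=0.0):
    """Product-mixing policy of Eqs. (4)-(5): softmax of the summed parameters
    of the active contexts, mixed with the uniform distribution."""
    logits = beta[list(active_contexts)].sum(axis=0)
    logits -= logits.max()                          # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return (1.0 - eps_mix) * p + eps_mix / len(logits)

def lts_loss_term(beta, trajectory):
    """One term of Eq. (7): d(n) / pi(n; beta), with eps_mix = 0 as during
    optimization.  trajectory = [(active contexts at step j, chosen action)]."""
    log_prob = 0.0
    for active_contexts, chosen_action in trajectory:
        log_prob += np.log(policy_probs(beta, active_contexts)[chosen_action])
    return len(trajectory) * np.exp(-log_prob)      # d(n) / pi(n; beta)

# A fake 3-step solution trajectory.
trajectory = [({0, 2}, 1), ({1, 3}, 0), ({4, 5}, 2)]
print(lts_loss_term(beta, trajectory))
```

Minimizing the sum of such terms over all found solution trajectories, subject to the box constraints on \(\beta\), is the convex problem whose online optimization is discussed next.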
### Online Search-and-Learn Guarantees
Suppose that at each time step \(t=1,2\dots\), the learner receives a problem \(n_{t}^{0}\) (a root node) and uses LTS with parameters \(\beta^{t}\in\mathcal{B}\) until it finds a solution node \(n_{t}\in\mathcal{N}^{*}(n_{t}^{0})\). The parameters are then updated using \(n_{t}\) (and previous nodes) and the next step \(t+1\) begins.
Let \(\mathcal{N}_{t}=(n_{1},\dots,n_{t})\) be the sequence of found solution nodes. For the loss function of Eq. (6), after \(t\) found solution nodes, the optimal parameters _in hindsight_ are \(\beta_{t}^{*}=\operatorname*{argmin}_{\beta\in\mathcal{B}}L(\mathcal{N}_{t},\beta)\). We want to know how the learner fares against \(\beta_{t}^{*}\) -- which is a moving target as \(t\) increases. The _regret_(Hazan, 2016) at step \(t\) is the cumulative difference between the loss incurred by the learner with its time varying parameters \(\beta^{i},i=1,2,\dots,t\), and the loss when using the optimum parameters in hindsight \(\beta_{t}^{*}\):
\[\mathcal{R}(\mathcal{N}_{t})=\sum_{i\in[t]}\ell(n_{i},\beta^{i})-L(\mathcal{N }_{t},\beta_{t}^{*})\,.\]
A straightforward implication of the convexity of Eq. (7) is that we can use Online Gradient Descent (OGD) (Zinkevich, 2003) or some of its many variants such as Adagrad (Duchi _et al._, 2011) and ensure that the algorithm incurs a regret of \(\mathcal{R}(\mathcal{N}_{t})=O(|\mathcal{A}|\,|\mathcal{Q}|G\sqrt{t}\ln\frac{1}{\varepsilon_{\text{low}}})\), where \(G\) is the largest observed gradient in infinity norm3 and when using quadratic regularization. Regret bounds are related to the learning speed (the smaller the bound, the faster the learning), that is, roughly speaking, how fast the parameters converge to their optimal values for the same sequence of solution nodes. Such a regret bound (assuming it is tight enough) also allows us to observe the impact of the different quantities on the regret, such as the number of contexts \(|\mathcal{Q}|\), or \(\varepsilon_{\text{low}}\).
Footnote 3: The dependency on the largest gradient can be softened significantly, _e.g._, with Adagrad and sporadic resets of the learning rates.
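As a sketch (not an equation taken from the paper), the projected OGD update behind such a bound is, with \(\Pi_{\mathcal{B}}\) denoting coordinate-wise clipping to \([\ln\varepsilon_{\text{low}},0]\) and \(\eta_{t}\) a step size:

\[\beta^{t+1}=\Pi_{\mathcal{B}}\!\left(\beta^{t}-\eta_{t}\,\nabla_{\beta}\,\ell(n_{t},\beta^{t})\right).\]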
OGD and its many variants are computationally efficient as they take \(O(d(n)|\mathcal{A}|\,|\mathcal{M}|)\) computation time per solution node \(n\), but they are not very data efficient, due to the _linearization_ of the loss function -- the so-called 'gradient trick' [12]. To make the most of the data, we avoid linearization by sequentially minimizing the full regularized loss function \(L(\mathcal{N}_{t},\cdot)+R(\cdot)\) where \(R(\beta)\) is a convex regularization function. That is, at each step, we set:
\[\beta^{t+1}=\operatorname*{argmin}_{\beta\in\mathcal{B}}L(\mathcal{N}_{t}, \beta)+R(\beta) \tag{8}\]
which can be solved using standard convex optimization techniques (see Appendix C) [1]. This update is known as (non-linearized) Follow the Leader (FTL), which automatically adapts to local strong convexity and has a fast \(O(\log T)\) regret without tuning a learning rate [13], except that we add regularization to avoid the overfitting that FTL suffers from. Unfortunately, solving Eq. (8) even approximately at each step is too computationally costly, so we amortize this cost by delaying updates (see below), which of course incurs a learning cost, _e.g._, [14].
## 6 Experiments
As with previous work, in the experiments we use the LTS algorithm with context models (Algorithm 1) within the search-and-learn loop of the Bootstrap process [1] to solve a dataset of problems, then test the learned model on a separate test set. See Appendix A for more details. Note that the Bootstrap process is a little different from the online learning setting, so the theoretical guarantees mentioned above may not carry over strictly -- this analysis is left for future work.
In particular, this allows us to compare LTS with context models (LTS+CM) against previous results using LTS with neural networks (LTS+NN) [1, 18] on three domains. We also train LTS+CM to solve the Rubik's cube and compare with other approaches.
**LTS+NN's domains.** We first compare LTS with context models (LTS+CM) against LTS with a convolutional neural network [18] (LTS+NN) on the three domains where the latter was tested: (a) Sokoban (Boxoban) [1] on the standard 1000 test problems, a PSPACE-hard puzzle [10] where the player must push boxes onto goal positions while avoiding deadlocks, (b) The Witness, a color partitioning problem that is NP-hard in general [1], and (c) the 24 (\(5\times 5\)) sliding-tile puzzle (STP), a sorting problem on a grid, for which finding short solutions is also NP-hard [15]. As in previous work, we train LTS+CM on the same datasets of \(50\,000\) problems each, with the same initial budget (\(2000\) node expansions for Sokoban and The Witness, \(7000\) for STP) and stop as soon as the training set is entirely solved. Training LTS+CM for these domains took less than 2 hours each.
**Harder Sokoban.** Additionally, we compare algorithms on the Boxoban 'hard' set of \(3332\) problems. [1] trained a convLSTM network on the medium-difficulty dataset (450k problems) with a standard actor-critic setup -- not the LTS loss -- and used LTS (hence LTS+NN) at test time. The more recent ExPoSe algorithm [13] updates the parameters of a policy neural network4 _during_ the search, and is trained on both the medium set (450k problems) and the 'unfiltered' Boxoban set (900k problems) with solution trajectories obtained from an A* search.
**Rubik's Cube.** We also use LTS+CM to learn a fast policy for the Rubik's cube, with an initial budget of \(B_{1}=21000\). We use a sequence of datasets containing 100k problems each, generated with a random walk of between \(m\) and \(m^{\prime}=m+5\) moves from the solution, where \(m\) increases by steps of 5 from 0 to 50, after which we set \(m^{\prime}=m=50\) for each new generated set. DeepCubeA [1] uses a fairly large neural network to learn in a supervised fashion from trajectories generated with a backward model of the environment, and Weighted A* is used to solve random test cubes. Their goal is to learn a policy that returns solutions of near-optimal length. By contrast, our goal is to learn a fast-solving policy. Allen _et al._ [1] take a completely different approach (no neural network) by learning a set of 'focused macro actions' which are meant to change the state as little as possible so as to mimic the so-called 'algorithms' that human experts use to solve the Rubik's cube. They use a rather small budget of 2 million actions to learn the macro actions, but also use the more informative goal-count scoring function (how many variables of the state have the correct value), while we only assume access to the more basic solved/unsolved function. As with previous work, we report solution lengths in the quarter-turn metric. Our test set contains 1000 cubes scrambled 100 times each -- this is likely more than enough to generate random cubes [13] -- and we expect the difficulty to match that of previous work.
Footnote 4: The architecture of the neural network was not specified.
**Machine description.** We used a single EPYC 7B12 (64 cores, 128 threads) server with 512GB of RAM without GPU. During training and testing, 64 problems are attempted concurrently -- one problem per CPU core. Optimization uses 128 threads to calculate the loss, Jacobian and updates.
**Hyperparameters.** For all experiments we use \(\varepsilon_{\text{low}}=10^{-4}\), \(\varepsilon_{\text{mix}}=10^{-3}\), a quadratic regularization \(R(\beta)=5\|\beta-\beta_{0}\|^{2}\) where \(\beta_{0}=(1-1/A)\ln\varepsilon_{\text{low}}\) (see Appendix F). The convex optimization algorithm we use to solve Eq. (8) is detailed in Appendix C.
Figure 2: Example of a relative tiling of row span 2, column span 3, at maximum row distance 1 and maximum column distance 3 around the agent (red triangle). Each orange rectangle is a mutex set of at most \(4^{6}\) different contexts. A padding value can be chosen arbitrarily (such as the wall value) for cells outside the grid.
**Mutex sets.** For Sokoban, STP, and The Witness we use several mutex sets of rectangular shapes at various distances around the agent (the player in Sokoban, the tip of the 'snake' in The Witness, the blank in STP), which we call _relative tilings_. An example of relative tiling is given in Fig. 2, and more information can be found in Appendix G. For the Rubik's cube, each mutex set \(\{i,j\}\) corresponds to the ordered colors of the two cubies (the small cubes that make up the Rubik's cube) at locations \(i\) and \(j\) (such as the up-front-right corner and the back-right edge). There are 20 locations, hence 190 different mutex sets, and each of them contains at most \(24^{2}\) contexts (there are 8 corner cubies, each with 3 possible orientations, and 12 side cubies, each with 2 possible orientations). For all domains, to these mutex sets we add one mutex set for the last action, indicating the action the agent performed to reach the node; for Sokoban this includes whether the last action was a push. The first 3 domains all have 4 actions (up, down, left, right), and the Rubik's cube has 12 actions (a rotation of each face, in either direction).
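A minimal sketch of how active contexts could be read off a grid state for such relative tilings (the grid encoding, function names, and padding choice are assumptions made for illustration, not the paper's code):

```python
WALL = "#"  # padding value for cells outside the grid

def active_contexts(grid, agent_pos, tilings, last_action):
    """grid: dict (row, col) -> cell value; tilings: list of rectangles given
    as (row_offset, col_offset, row_span, col_span) relative to the agent.
    Returns one (mutex-set id, active context) pair per mutex set."""
    r0, c0 = agent_pos
    contexts = []
    for tile_id, (dr, dc, rows, cols) in enumerate(tilings):
        cells = tuple(
            grid.get((r0 + dr + i, c0 + dc + j), WALL)
            for i in range(rows) for j in range(cols)
        )
        contexts.append((("tile", tile_id), cells))
    contexts.append((("last_action",), last_action))  # extra mutex set
    return contexts

# Tiny example: a 3x3 empty room with the agent in the middle cell.
grid = {(r, c): "." for r in range(3) for c in range(3)}
tilings = [(-1, -1, 2, 3), (0, -1, 1, 3)]  # two small relative rectangles
print(active_contexts(grid, (1, 1), tilings, last_action="Right"))
```

Each returned pair identifies a mutex set and the single context active in it; the corresponding predictors can then be looked up in a hashtable, as in Remark 8.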
**Results.** The algorithms are tested on test sets that are separate from the training sets, see Table 1. For the first three domains, LTS+CM performs better than LTS+NN, even solving all test instances of the STP while LTS+NN solves less than 1% of them. On The Witness, LTS+CM learns a policy that allows it to expand 5 times fewer nodes than LTS+NN. LTS+CM also solves all instances of the Boxoban hard set, by contrast to previous published work, and despite being trained only on 50k problems. On the Rubik's cube, LTS+CM learns a policy that is hundreds of times faster than previous work -- though recall that DeepCubeA's objective of finding short solutions differs from ours. This may be surprising given how simple the contexts are -- each context'sees' only two cubies -- and is a clear sign that product mixing is taking full advantage of the learned individual context predictions.
## 7 Conclusion
We have devised a parameterized policy for the Levin Tree Search (LTS) algorithm using product-of-experts of context models that ensures that the LTS loss function is convex. While neural networks -- where convexity is almost certainly lost -- have achieved impressive results recently, we show that our algorithm is competitive with published results, if not better.
Convexity allows us in particular to use convex optimization algorithms and to provide regret guarantees in the online learning setting. While this provides a good basis to work with, this notion of regret holds against any competitor that learns from the same set of _solution_ nodes. The next question is how we can obtain an online search-and-learn regret guarantee against a competitor for the same set of _problems_ (root nodes), for which the cumulative LTS loss is minimum across all sets of solution nodes for the same problems. And, if this happens to be unachievable, what intermediate regret setting could be considered? We believe these are important open research questions to tackle.
We have tried to design mutex sets that use only basic domain-specific knowledge (the input representation of agent-centered grid-worlds, or the cubic representation of the Rubik's cube), but in the future it would be interesting to also _learn to search_ the space of possible context models -- this would likely require more training data.
LTS with context models, as presented here, cannot directly make use of a value function or a heuristic function; however, such functions could either be binarized into multiple mutex sets, used as in PHS* [10] to estimate the LTS cost at the solution, or used as features, since the loss function would still be convex (see Appendix C).
\begin{table}
\begin{tabular}{l l r r r r} \hline \hline Domain & Algorithm & \%solved & Length & Expansions & Time (ms) \\ \hline \hline Boxoban & LTS+CM (this work) & 100.00 & 41.7 & **2 132.3** & 124 \\ & LTS+NN [10] & 100.00 & **40.1** & 2 640.4 & 19 500 \\ \hline The Witness & LTS+CM (this work) & 100.00 & 15.5 & **102.8** & 9 \\ & LTS+NN [10] & 100.00 & **14.8** & 520.2 & 3 200 \\ \hline STP (24-puzzle) & LTS+CM (this work) & **100.00** & 211.2 & **5 667.4** & 236 \\ & LTS+NN [10] & 0.90 & _145.1_ & _39 005.6_ & _31 100_ \\ \hline \hline Boxoban hard & LTS+CM (this work) & **100.00** & 67.8 & 48 058.6 & 3 275 \\ & LTS+NN [10] & 94.00 & n/a & n/a & _3 600_ \\ & ExPoSe [11] & 97.30 & n/a & n/a & n/a \\ \hline Rubik’s cube & LTS+CM (this work) & 100.00 & 81.7 & **498.0** & 16 \\ & DeepCubeA [1] & 100.00 & **21.5** & \(\sim\)600 000.0 & 24 220 \\ & GBFS(A+M) [1] & 100.00 & 378.0 & \(\dagger\)171 300.0 & n/a \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results on the test sets. The last 3 columns are the averages over the test instances. The first three domains allow for a fair comparison between LTS with context models and LTS with neural networks [10] using the same 50k training instances and initial budget. For the last two domains, comparison to prior work is more cursory and is provided for information only, in particular because the objective of DeepCubeA is to provide near-optimal-length solutions rather than fast solutions. The values for LTS+{CM,NN} all use a single CPU, no GPU (except for LTS+NN [10]). DeepCubeA uses four high-end GPU cards. More results can be found in Table 2 in Appendix H. \(\dagger\)Does not account for the cost of macro-actions.
## Acknowledgments
We would like to thank the following people for their useful help and feedback: Csaba Szepesvari, Pooria Joulani, Tor Lattimore, Joel Veness, Stephen McAleer.
The following people also helped with Racket-specific questions: Matthew Flatt, Sam Tobin-Hochstadt, Bogdan Popa, Jeffrey Massung, Jens Axel Sogaard, Sorawee Porncharoenwase, Jack Firth, Stephen De Gabrielle, Alex Harsanyi, Shu-Hung You, and the rest of the quite helpful and reactive Racket community.
This research was supported by Canada's NSERC and the CIFAR AI Chairs program.
|
2302.02997
|
Erasure of Unaligned Attributes from Neural Representations
|
We present the Assignment-Maximization Spectral Attribute removaL (AMSAL)
algorithm, which erases information from neural representations when the
information to be erased is implicit rather than directly being aligned to each
input example. Our algorithm works by alternating between two steps. In one, it
finds an assignment of the input representations to the information to be
erased, and in the other, it creates projections of both the input
representations and the information to be erased into a joint latent space. We
test our algorithm on an extensive array of datasets, including a Twitter
dataset with multiple guarded attributes, the BiasBios dataset and the
BiasBench benchmark. The last benchmark includes four datasets with various
types of protected attributes. Our results demonstrate that bias can often be
removed in our setup. We also discuss the limitations of our approach when
there is a strong entanglement between the main task and the information to be
erased.
|
Shun Shao, Yftah Ziser, Shay Cohen
|
2023-02-06T18:32:17Z
|
http://arxiv.org/abs/2302.02997v2
|
# Erasure of Unaligned Attributes from Neural Representations
###### Abstract
We present the Assignment-Maximization Spectral Attribute removal (AMSAL) algorithm, which erases information from neural representations when the information to be erased is implicit rather than directly being aligned to each input example. Our algorithm works by alternating between two steps. In one, it finds an assignment of the input representations to the information to be erased, and in the other, it creates projections of both the input representations and the information to be erased into a joint latent space. We test our algorithm on an extensive array of datasets, including a Twitter dataset with multiple guarded attributes, the Bias-Bios dataset and the BiasBench benchmark. The last benchmark includes four datasets with various types of protected attributes. Our results demonstrate that bias can often be removed in our setup. We also discuss the limitations of our approach when there is a strong entanglement between the main task and the information to be erased.1
Footnote 1: Our code is available at [https://github.com/jasonshaoshun/AMSAL](https://github.com/jasonshaoshun/AMSAL).
## 1 Introduction
Developing a methodology for adjusting neural representations to preserve user privacy and avoid encoding bias in them has been an active area of research in recent years. Previous work shows it is possible to erase undesired information from representations so that downstream classifiers cannot use that information in their decision-making process. This previous work assumes that this sensitive information (or **guarded** attributes, such as gender or race) is available for each input instance. These guarded attributes, however, are sensitive, and obtaining them on a large scale is often challenging and, in some cases, not feasible (Han et al., 2021b). For example, Blodgett et al. (2016) studied the characteristics of African-American English (AAE) on Twitter, and could not couple the ethnicity attribute directly with the tweets they collected due to the attribute's sensitivity.
This paper introduces a novel debiasing setting in which the guarded attributes are not paired up with each input instance and an algorithm to remove information from representations in that setting. In our setting, we assume that each neural input representation is coupled with a guarded attribute value, but this assignment is unavailable. In cases where the domain of the guarded attribute is small (for example, with binary attributes), this means that the guarded attribute information consists of **priors** with respect to the whole population and not instance-level information.
The intuition behind our algorithm is that if we were to find a strong correlation between the input variable and a set of guarded attribute values,
Figure 1: A depiction of the problem setting and solution. The inputs are aligned to each guarded sample, based on strength using two projections \(\mathbf{U}\) and \(\mathbf{V}\). We solve a bipartite matching problem to find the blue edges, and then recalculate \(\mathbf{U}\) and \(\mathbf{V}\).
either in the form of an unordered list of **records** or as priors, then it is unlikely to be coincidental if the sample size is sufficiently large (§3.5). We implement this intuition by jointly finding projections of the input samples and the guarded attributes into a joint embedding space _and_ an alignment between the two sets in that joint space.
Our resulting algorithm (§3), the Assignment-Maximization Spectral Attribute removaL algorithm (AMSAL), is a coordinate-ascent algorithm reminiscent of the hard expectation-maximization algorithm (hard EM; MacKay 2003). It first loops between two steps, Assignment and Maximization, during which it finds an alignment (A) based on existing projections and then projects the representations and guarded attributes into a joint space based on an existing alignment (M). After these two steps are iteratively repeated and an alignment is identified, the algorithm takes another step to erase information from the input representations based on the projections identified. This step closely follows the work of Shao et al. (2023), who use Singular Value Decomposition (SVD) to remove principal directions of the covariance matrix between the input examples and the guarded attributes. Figure 1 depicts a sketch of our setting and the corresponding algorithm, with \(\mathbf{x}_{i}\) being the input representations and \(\mathbf{z}_{j}\) being the guarded attributes. Our algorithm is modular: while our use of the algorithm of Shao et al. (2023) for the removal step is natural due to the nature of the AM steps, a user can use any such algorithm to erase the information from the input representations (§3.4).
Our contributions are as follows: (1) We propose a new setup for removing guarded information from neural representations where there are few or no labeled guarded attributes; (2) We present a novel two-stage coordinate-ascent algorithm that iteratively improves (a) an alignment between guarded attributes and neural representations; and (b) information removal projections.
Using an array of datasets, we perform extensive experiments to assess how challenging our setup is and whether our algorithm is able to remove information without having aligned guarded attributes (§4). We find in several cases that little information is needed to align between neural representations and their corresponding guarded attributes. The consequence is that it is possible to erase the information such guarded attributes provide from the neural representations while preserving the information needed for the main task decision-making. We also study the limitations of our algorithm by experimenting with a setup where it is hard to distinguish between the guarded attributes and the downstream task labels when aligning the neural representations with the guarded attributes (§4.5).
## 2 Problem Formulation and Notation
For an integer \(n\) we denote by \([n]\) the set \(\{1,\ldots,n\}\). For a vector \(\mathbf{v}\), we denote by \(||\mathbf{v}||_{2}\) its \(\ell_{2}\) norm. For two vectors \(\mathbf{v}\) and \(\mathbf{u}\), by default in column form, \(\langle\mathbf{v},\mathbf{u}\rangle=\mathbf{v}^{\top}\mathbf{u}\) (dot product). Matrices and vectors are in boldface font (with uppercase or lowercase letters, respectively). Random variable vectors are also denoted by boldface uppercase letters. For a matrix \(\mathbf{A}\), we denote by \(a_{ij}\) the value of cell \((i,j)\). The Frobenius norm of a matrix \(\mathbf{A}\) is \(||\mathbf{A}||_{F}=\sqrt{\sum_{i,j}a_{ij}^{2}}\). The spectral norm of a matrix is \(||\mathbf{A}||_{2}=\max_{||\mathbf{x}||_{2}=1}||\mathbf{A}\mathbf{x}||_{2}\). The expectation of a random variable \(\mathbf{T}\) is denoted by \(\mathbb{E}[\mathbf{T}]\).
In our problem formulation, we assume three random variables: \(\mathbf{X}\in\mathbb{R}^{d}\), \(\mathbf{Y}\in\mathbb{R}\) and \(\mathbf{Z}\in\mathbb{R}^{d^{\prime}}\) such that \(d^{\prime}\leq d\) and the expectation of all three variables is \(0\) (see Shao et al. 2023). Samples of \(\mathbf{X}\) are the inputs for a classifier to predict corresponding samples of \(\mathbf{Y}\). The random vector \(\mathbf{Z}\) represents the guarded attributes. We want to maintain the ability to predict \(\mathbf{Y}\) from \(\mathbf{X}\), while minimizing the ability to predict \(\mathbf{Z}\) from \(\mathbf{X}\).
We assume \(n\) samples of \((\mathbf{X},\mathbf{Y})\) and \(m\) samples of \(\mathbf{Z}\), denoted by \((\mathbf{x}^{(i)},\mathbf{y}^{(i)})\) for \(i\in[n]\), and \(\mathbf{z}^{(i)}\) for \(i\in[m]\) (\(m\leq n\)). While originally, these samples were generated jointly from the underlying distribution \(p(\mathbf{X},\mathbf{Y},\mathbf{Z})\), we assume a shuffling of the \(\mathbf{Z}\) samples in such a way that we are only left with \(m\) samples that are unique (no repetitions) and an underlying unknown many-to-one mapping \(\pi\colon[n]\to[m]\) that maps each \(\mathbf{x}^{(i)}\) to its original \(\mathbf{z}^{(j)}\).
The problem formulation is such that we need to remove the information from the \(x\)s in such a way that we consider the samples of \(z\)s as a set. In our case, we do so by iterating between trying to infer \(\pi\), and then using standard techniques, remove the information from \(x\)s based on their alignment to the corresponding \(z\)s.
**Singular Value Decomposition.** Let \(\mathbf{A}=\mathbb{E}[\mathbf{X}\mathbf{Z}^{\top}]\), the matrix of cross-covariance between \(\mathbf{X}\) and \(\mathbf{Z}\). This means that \(\mathbf{A}_{ij}=\mathrm{Cov}(\mathrm{X}_{i},\mathrm{Z}_{j})\) for \(i\in[d]\) and \(j\in[d^{\prime}]\).
For any two vectors, \(\mathbf{a}\in\mathbb{R}^{d},\mathbf{b}\in\mathbb{R}^{d^{\prime}}\), the following holds due to the linearity of expectation:
\[\mathbf{a}^{\top}\boldsymbol{A}\mathbf{b}=\mathrm{Cov}(\mathbf{a}^{\top}\mathbf{X},\mathbf{b}^{\top}\mathbf{Z}). \tag{1}\]
Singular value decomposition on \(\boldsymbol{A}\), in this case, finds the "principal directions": directions in which the projection of \(\mathbf{X}\) and \(\mathbf{Z}\) maximize their covariance. The projections are represented as two matrices \(\boldsymbol{U}\in\mathbb{R}^{d\times d}\) and \(\boldsymbol{V}\in\mathbb{R}^{d^{\prime}\times d^{\prime}}\). Each column in these matrices plays the role of the vectors \(\mathbf{a}\) and \(\mathbf{b}\) in Eq. 1. SVD finds \(\boldsymbol{U}\) and \(\boldsymbol{V}\) such that for any \(i\in[d^{\prime}]\) it holds that:
\[\mathrm{Cov}(\boldsymbol{U}_{i}^{\top}\mathbf{X},\boldsymbol{V}_{i}^{\top} \mathbf{Z})=\max_{(\mathbf{a},\mathbf{b})\in\mathcal{O}_{i}}\mathrm{Cov}( \mathbf{a}^{\top}\mathbf{X},\mathbf{b}^{\top}\mathbf{Z}),\]
where \(\mathcal{O}_{i}\) is the set of pairs of vectors \((\mathbf{a},\mathbf{b})\) such that \(||\mathbf{a}||_{2}=||\mathbf{b}||_{2}=1\), \(\mathbf{a}\) is orthogonal to \(\boldsymbol{U}_{1},\ldots,\boldsymbol{U}_{i-1}\) and similarly, \(\mathbf{b}\) is orthogonal to \(\boldsymbol{V}_{1},\ldots,\boldsymbol{V}_{i-1}\).
Shao et al. (2023) showed that SVD in this form can be used to debias representations. We calculate SVD between \(\mathbf{X}\) and \(\mathbf{Z}\) and then prune out the principal directions that denote the highest covariance. We will use their method, SAL (Spectral Attribute removaL), in the rest of the paper. See also §3.4.
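A minimal sketch of this removal step in the aligned case (an illustration of the idea only; the centering, scaling, and number of removed directions here are assumptions rather than the exact SAL recipe):

```python
import numpy as np

def sal_remove(X, Z, k):
    """X: n x d inputs, Z: n x d' guarded attributes, rows aligned.
    Removes the k cross-covariance directions with the largest singular values."""
    Xc = X - X.mean(axis=0)
    Zc = Z - Z.mean(axis=0)
    A = Xc.T @ Zc / len(X)               # empirical cross-covariance, d x d'
    U, S, Vt = np.linalg.svd(A, full_matrices=True)
    return X @ U[:, k:]                  # keep only the low-covariance directions

rng = np.random.default_rng(0)
n, d = 500, 16
Z = rng.integers(0, 2, size=(n, 1)).astype(float)  # a binary guarded attribute
X = rng.normal(size=(n, d))
X[:, 0] += 3.0 * Z[:, 0]                            # leak the attribute into X
print(sal_remove(X, Z, k=1).shape)                  # (500, 15)
```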
## 3 Methodology
We view the problem of information removal with unaligned samples as a joint optimization problem of: (a) finding the alignment; (b) finding the projection that maximizes the covariance between the alignments, and using its complement to project the inputs. Such an optimization, in principle, is intractable, so we break it down into two coordinate-ascent style steps: A-step (in which the alignment is identified as a bipartite graph matching problem) and M-step (in which based on the previously identified alignment, a maximal-covariance projection is calculated). Formally, the maximization problem we solve is:
\[(\boldsymbol{U},\boldsymbol{V},\pi)=\arg\max_{\boldsymbol{U},\boldsymbol{V},\pi}\sum_{i=1}^{n}(\mathbf{x}^{(i)})^{\top}\boldsymbol{U}\boldsymbol{V}^{\top}\mathbf{z}^{(\pi(i))},\]
where we constrain \(\boldsymbol{U}\in\mathbb{R}^{d\times k}\) and \(\boldsymbol{V}\in\mathbb{R}^{d^{\prime}\times k}\) to be matrices with orthonormal columns.
Note that the sum in the above equation has a term per pair of \((\mathbf{x}^{(i)},\mathbf{z}^{\pi(i)})\), which enables us to frame the A-step as an integer linear programming (ILP) problem (§3.1). The full algorithm is given in Figure 2, and we proceed in the next two subsections to further explain the A-step and the M-step.
### A-step (Guarded Sample Assignment)
In the Assignment Step, we are required to find a many-to-one alignment \(\pi\colon[n]\to[m]\) between \(\{\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)}\}\) and \(\{\mathbf{z}^{(1)},\ldots,\mathbf{z}^{(m)}\}\). Given \(\boldsymbol{U}\) and \(\boldsymbol{V}\) from the previous M-step, we can find such an assignment by solving the following optimization problem:
\[\arg\max_{\pi}\sum_{i=1}^{n}\langle\boldsymbol{U}^{\top}\mathbf{x}^{(i)}, \boldsymbol{V}^{\top}\mathbf{z}^{(\pi(i))}\rangle.\]
This maximization problem can be formulated as an integer linear program of the following form:
\[\max_{\boldsymbol{P}\in\{0,1\}^{n\times m}}\sum_{j=1}^{m}\sum_{i=1}^{n}p_{ij}\langle\boldsymbol{U}^{\top}\mathbf{x}^{(i)},\boldsymbol{V}^{\top}\mathbf{z}^{(j)}\rangle\] \[\text{\emph{s.t.}}\ \forall i.\ \sum_{j=1}^{m}p_{ij}=1,\] \[\forall j.\ b_{0j}\leq\sum_{i=1}^{n}p_{ij}\leq b_{1j}. \tag{2}\]
This is a solution to an assignment problem (Kuhn, 1955; Ramshaw and Tarjan, 2012), where
Figure 2: The main Assignment-Maximization Spectral Attribute removaL (AMSAL) algorithm for removal of information without alignment between samples of \(\mathbf{X}\) and \(\mathbf{Z}\).
\(p_{ij}\) denotes whether \(\mathbf{x}^{(i)}\) is associated with the (type of) guarded attribute \(\mathbf{z}^{(j)}\). The values \((b_{0j},b_{1j})\) determine lower and upper bounds on the number of \(x\)s a given \(\mathbf{z}^{(j)}\) can be assigned to. While a standard assignment problem can be solved efficiently using the Hungarian method of Kuhn (1955), we choose to use the ILP formulation, as it enables us to have more freedom in adding constraints to the problem, such as the lower and upper bounds.
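A simplified sketch of the A-step (for illustration only): instead of the ILP with lower and upper bounds, it fixes an exact count per guarded value and reduces the many-to-one problem to a square assignment problem by replicating each \(\mathbf{z}^{(j)}\) according to its count:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def a_step(X, Zu, U, V, counts):
    """X: n x d inputs, Zu: m x d' unique guarded records, counts: length-m
    integer array summing to n.  Returns pi, a length-n array with pi[i] in [0, m)."""
    scores = (X @ U) @ (Zu @ V).T                 # <U^T x_i, V^T z_j>, an n x m matrix
    cols = np.repeat(np.arange(len(Zu)), counts)  # column j replicated counts[j] times
    rows, assigned = linear_sum_assignment(scores[:, cols], maximize=True)
    pi = np.empty(len(X), dtype=int)
    pi[rows] = cols[assigned]
    return pi

rng = np.random.default_rng(0)
n, d, dprime, k = 8, 5, 2, 2
X = rng.normal(size=(n, d))
Zu = np.eye(dprime)                               # two unique guarded records
U, V = rng.normal(size=(d, k)), rng.normal(size=(dprime, k))
print(a_step(X, Zu, U, V, counts=np.array([4, 4])))
```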
### M-step (Covariance Maximization)
The result of an A-step is an assignment \(\pi\) such that \(\pi(i)=j\) implies \(\mathbf{x}^{(i)}\) was deemed as aligned to \(\mathbf{z}_{j}\). With that \(\pi\) in mind, we define the following empirical covariance matrix \(\mathbf{\Omega}_{\pi}\in\mathbb{R}^{d\times d^{\prime}}\):
\[\mathbf{\Omega}_{\pi}=\sum_{i=1}^{n}\mathbf{x}^{(i)}(\mathbf{z}^{(\pi(i))})^{ \top}. \tag{3}\]
We then apply SVD on \(\mathbf{\Omega}_{\pi}\) to get new \(\mathbf{U}\) and \(\mathbf{V}\) that are used in the next iteration of the algorithm with the A-step, if the algorithm continues to run. When the maximal number of iterations is reached, we follow the work of Shao et al. (2023) in using a truncated part of \(\mathbf{U}\) to remove the information from the \(x\)s. We do that by projecting \(\mathbf{x}^{(i)}\) using the singular vectors of \(\mathbf{U}\) with the smallest singular values. These projected vectors co-vary the least with the guarded attributes, assuming the assignment in the last A-step was precise. This method has been shown by Shao et al. (2023) to be highly effective and efficient in debiasing neural representations.
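A minimal sketch of the M-step and of the final erasure by projection (variable names and the number of removed directions are illustrative assumptions):

```python
import numpy as np

def m_step(X, Zu, pi, num_remove):
    """X: n x d inputs, Zu: m x d' unique guarded records, pi: length-n assignment."""
    omega = X.T @ Zu[pi]                     # Eq. (3): sum_i x^(i) (z^(pi(i)))^T
    U, S, Vt = np.linalg.svd(omega, full_matrices=True)
    X_clean = X @ U[:, num_remove:]          # projection applied after the last A-step
    return U, Vt.T, X_clean

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 6))
Zu = np.eye(2)                               # two unique guarded records
pi = rng.integers(0, 2, size=10)
U, V, X_clean = m_step(X, Zu, pi, num_remove=1)
print(U.shape, V.shape, X_clean.shape)       # (6, 6) (2, 2) (10, 5)
```

During the AM loop only \(\mathbf{U}\) and \(\mathbf{V}\) are kept for the next A-step; the projection of the inputs is applied once, after the final iteration.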
### A Matrix Formulation of the AM Steps
Let \(\mathbf{e}_{1},\ldots,\mathbf{e}_{m}\) be the standard basis vectors. This means \(\mathbf{e}_{i}\) is a vector of length \(m\) with \(0\) in all coordinates except for the \(i\)th coordinate, where it is \(1\).
Let \(\mathcal{E}\) be the set of all matrices \(\mathbf{E}\) where each \(\mathbf{E}\in\mathcal{E}\) is such that \(\mathbf{E}\in\mathbb{R}^{n\times m}\) and each row is one of \(\mathbf{e}_{i}\), \(i\in[m]\). In that case, \(\mathbf{E}\mathbf{Z}^{\top}\) is an \(n\times d^{\prime}\) matrix such that, if the \(j\)th row of \(\mathbf{E}\) is \(\mathbf{e}_{i}\), then the \(j\)th row of \(\mathbf{E}\mathbf{Z}^{\top}\) is a copy of the \(i\)th column of \(\mathbf{Z}\in\mathbb{R}^{d^{\prime}\times m}\). Therefore, the AM steps can be viewed as solving the following maximization problem using coordinate ascent:
\[\operatorname*{argmax}_{\mathbf{E}\in\mathcal{E},\mathbf{U},\mathbf{V},\mathbf{\Sigma}}||\bm {U}^{\top}\mathbf{\Sigma}\mathbf{V}-\mathbf{X}\mathbf{E}\mathbf{Z}^{\top}||_{F}^{2},\]
where \(\mathbf{U}\), \(\mathbf{V}\) are orthonormal matrices, and \(\mathbf{\Sigma}\) is a diagonal matrix with non-negative elements. This corresponds to the SVD of the matrix \(\mathbf{X}\mathbf{E}\mathbf{Z}^{\top}\).
In that case, the matrix \(\mathbf{E}\) can be directly mapped to an assignment in the form of \(\pi\), where \(\pi(i)\) would be the \(j\) such that the \(j\)th coordinate in the \(i\)th row of \(\mathbf{E}\) is non-zero.
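A small numeric check of this correspondence (illustrative code, not from the paper): building \(\mathbf{E}\) from an assignment \(\pi\) and verifying that \(\mathbf{X}\mathbf{E}\mathbf{Z}^{\top}\) equals \(\mathbf{\Omega}_{\pi}\) of Eq. 3, with the columns of \(\mathbf{X}\) and \(\mathbf{Z}\) holding the samples:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d, dprime = 7, 3, 4, 2
X = rng.normal(size=(d, n))        # columns are the x^(i)
Z = rng.normal(size=(dprime, m))   # columns are the z^(j)
pi = rng.integers(0, m, size=n)

E = np.zeros((n, m))
E[np.arange(n), pi] = 1.0          # row i is the standard basis vector e_{pi(i)}

omega_pi = sum(np.outer(X[:, i], Z[:, pi[i]]) for i in range(n))
print(np.allclose(X @ E @ Z.T, omega_pi))   # True
```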
### Removal Algorithm
The AM steps are best suited for the removal of information through SVD with an algorithm such as SAL. This is because the AM steps are optimizing an objective of the same type of SAL - relying on the projections \(\mathbf{U}\) and \(\mathbf{V}\) to project the inputs and guarded representations into a joint space. However, a by-product of the algorithm in Figure 2 is an assignment function \(\pi\) that aligns between the inputs and the guarded representations.
With that assignment, other removal algorithms can be used, for example, the algorithm of Ravfogel et al. (2020). We experiment with this idea in §4.
### Justification of the AM Steps
Next, we justify our algorithm (which may be skipped on the first reading). Our justification is based on the observation that if indeed \(\mathbf{X}\) and \(\mathbf{Z}\) are linked together (this connection is formalized as a latent variable in their joint distribution), then for a given sample that is permuted, the singular values of \(\mathbf{\Omega}\) will be larger the closer the permutation is to the identity permutation. This justifies finding such a permutation that maximizes the singular values in an SVD of \(\mathbf{\Omega}\).
More DetailsLet \(\iota\colon[n]\to[n]\) be the identity permutation, \(\iota(i)=i\). We will assume the case in which \(n=m\) (but the justification can be generalized to the case \(m<n\)), and that the underlying joint distribution \(p(\mathbf{X},\mathbf{Z})\) is mediated by a latent variable \(\mathbf{H}\), such that
\[p(\mathbf{X},\mathbf{Z},\mathbf{H})=p(\mathbf{H})p(\mathbf{X}\mid\mathbf{H})p( \mathbf{Z}\mid\mathbf{H}). \tag{4}\]
This implies there is a latent variable that connects \(\mathbf{X}\) and \(\mathbf{Z}\), and that the joint distribution \(p(\mathbf{X},\mathbf{Z})\) is a mixture through \(\mathbf{H}\).
**Proposition 1** (informal).: _Let \(\{(\mathbf{x}^{(i)},\mathbf{z}^{(i)})\}\) be a sample of size \(n\) from the distribution in Eq. 4. Let \(\pi\) be a permutation over \([n]\) uniformly sampled from the set of permutations. Then with high likelihood, the sum of the singular values of \(\mathbf{\Omega}_{\pi}\) is smaller than the sum of the singular values of \(\mathbf{\Omega}_{\iota}\)._
For full details of this claim, see Appendix A.
## 4 Experiments
In our experiments, we test several combinations of algorithms. We use \(k\)-means (KMeans) as a substitute for the AM steps, serving as a baseline for the assignment of \(x\)s to \(z\)s. In addition, for the removal step (once an assignment has been identified), we test two algorithms: SAL (Shao et al., 2023; resulting in AMSAL) and INLP (Ravfogel et al., 2020). We also compare these two algorithms in _oracle_ mode (in which the assignment of guarded attributes to inputs is known), to see the loss in performance that happens due to noisy assignments from the AM or \(k\)-means algorithm (OracleSAL and OracleINLP).
When running the AM algorithm or \(k\)-means, we execute it with three random seeds (see also §4.6) for a maximum of a hundred iterations and choose the projection matrix with the largest objective value over all seeds and iterations. For the slack variables (\(b_{0j}\) and \(b_{1j}\) variables in Eq. 2), we use 20%-30% above and below the baseline of the guarded attribute priors according to the training set. With the SAL methods, we remove the number of directions according to the rank of the \(\mathbf{\Omega}\) matrix (between 2 and 6 across all experiments).
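The A-step itself is the ILP described in §3.1; as a rough, hedged illustration only, the snippet below scores each (input, guarded record) pair with the current projections and greedily assigns every input to its best-scoring record, ignoring the prior-slack capacity constraints (\(b_{0j}\), \(b_{1j}\)) that the actual ILP enforces. The score definition here is our assumption, not the paper's exact objective.

```python
import numpy as np

def a_step_greedy(X, Z, U, S, Vt):
    """Greedy stand-in for the A-step: assign each input to the guarded record
    with the highest projected-correlation score under the current SVD factors."""
    # X: (n, d), Z: (m, d'); U: (d, k), S: (k,), Vt: (k, d')
    scores = ((X @ U) * S) @ (Vt @ Z.T)   # (n, m) pairwise alignment scores
    return scores.argmax(axis=1)          # pi[i] = index of the best guarded record

# The ILP variant additionally bounds how many inputs may be assigned to each
# record j, e.g. within 20%-30% of the training-set prior:
# lower_j = (1 - slack) * prior_j * n,  upper_j = (1 + slack) * prior_j * n.
```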
In addition, we experiment with a partially supervised assignment process, in which a small seed dataset of aligned \(x\)s and \(z\)s is provided to the AM steps. We use it for model selection: rather than choosing the assignment with the highest SVD objective value, we choose the assignment with the highest accuracy on this seed dataset. We refer to this setting as Partial (for "partially supervised assignment").
Finally, in the case of a gender-protected attribute, we compare our results against a baseline in which the input \(\mathbf{x}\) is compared against a list of words stereotypically associated with the genders of male or female.2 Based on the overlap with these two lists, we heuristically assign the gender label to \(\mathbf{x}\) and then run SAL or INLP (rather than using the AM algorithm). While this wordlist heuristic is plausible in the case of gender, it is not as easy to derive in the case of other protected attributes, such as age or race. We give the results for this baseline using the marker WL in the corresponding tables.
Footnote 2: [https://tinyurl.com/33bzddtw](https://tinyurl.com/33bzddtw)
Main FindingsOur main finding is that our novel setting, in which guarded information is erased from individually-unaligned representations, is viable. We discovered that AM methods perform particularly well when dealing with more complex bias removal scenarios, such as when multiple guarded attributes are present. We also found that having similar priors for the guarded attributes and downstream task labels may lead to poor performance on the task at hand. In these cases, using a small amount of supervision often effectively helps reduce bias while maintaining the utility of the representations for the main classification or regression problem. Finally, our analysis of alignment stability shows that our AM algorithm often converges to suitable solutions that align \(\mathbf{X}\) with \(\mathbf{Z}\).
Due to the unsupervised nature of our problem setting, we advise validating the utility of our method in the following way. Once we run the AM algorithm, we check whether there is a high-accuracy alignment between \(\mathbf{X}\) and \(\mathbf{Y}\) (rather than \(\mathbf{Z}\), which is unavailable). If this alignment is accurate, then we run the risk of significantly damaging task performance. An example is given in SS4.5.
### Word Embedding Debiasing
As a preliminary assessment of our setup and algorithms, we apply our methods to GloVe word embeddings to remove gender bias, following the experimental settings used in previous work on this problem (Bolukbasi et al., 2016; Ravfogel et al., 2020; Shao et al., 2023). We considered only the 150,000 most common words to ensure the embedding quality and omitted the rest. We sort the remaining embeddings by their projection on the \(\overrightarrow{\text{he-she}}\) direction. Then we consider the top 7,500 word embeddings as male-associated words (\(z=1\)) and the bottom 7,500 as female-associated words (\(z=-1\)).
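A minimal sketch of this labeling protocol, assuming a dictionary `glove` that maps each vocabulary word to its embedding vector (the variable names are ours):

```python
import numpy as np

def gender_pseudo_labels(glove, vocab, k=7500):
    """Sort words by their projection on the he-she direction; label the top k
    as male-associated (z = +1) and the bottom k as female-associated (z = -1)."""
    direction = glove["he"] - glove["she"]
    direction = direction / np.linalg.norm(direction)
    proj = {w: float(glove[w] @ direction) for w in vocab}
    ranked = sorted(vocab, key=proj.get, reverse=True)
    labels = {w: 1 for w in ranked[:k]}
    labels.update({w: -1 for w in ranked[-k:]})
    return labels
```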
Our findings are that both the \(k\)-means and the AM algorithms perfectly identify the alignment between the word embeddings and their associated gender label (100%). Indeed, the dataset construction itself follows a natural perfect clustering that these algorithms easily discover. Since the alignments are perfectly identified, the results of predicting the gender from the word embeddings after removal are identical to the oracle case. These results are quite close to the results of a random guess, and we refer the reader to Shao et al. (2023) for details on experiments with SAL and INLP for this dataset. Considering Figure 3, it is evident that our algorithm essentially follows a natural clustering of the word embeddings into two clusters,
female and male, as the embeddings are highly separable in this case. This is why the alignment score of \(\mathbf{X}\) (embedding) to \(\mathbf{Z}\) (gender) is perfect in this case. This finding indicates that this standard word embedding dataset used for debiasing is _trivial to debias_ - debiasing can be done even without knowing the identity of the stereotypical gender associated with each word.
### BiasBios Results
De-Arteaga et al. (2019) presented the BiasBios dataset, which consists of self-provided biographies paired with the profession and gender of their authors. A list of pronouns and names is used to obtain the authors' gender automatically. They aim to expose the caveats of automated hiring systems by showing that even the simple task of predicting a candidate's profession can be affected by the candidate's gender, which is encoded in the biography representation. For example, we want to avoid a situation in which a person being referred to as "he" or "she" in their biography affects the likelihood of them being classified as an engineer or a teacher.
We follow the setup of De-Arteaga et al. (2019), predicting a candidate's profession (\(\mathbf{y}\)) based on a self-provided short biography (\(\mathbf{x}\)), aiming to remove any information about the candidate's gender (\(\mathbf{z}\)). Due to computational constraints, we use only a random sample of 30K examples to learn the projections with both SAL and INLP (whether in the unaligned or aligned setting). For the classification problem, we use the full dataset. To get vector representations for the biographies, we use two different encoders: FastText word embeddings (Joulin et al., 2016) and BERT (Devlin et al., 2019). We stack a multi-class classifier on top of these representations, as there are 28 different professions. We use 20% of the training examples for the Partial setting. For BERT, we followed De-Arteaga et al. (2019) in using the last CLS token state as the representation of the whole biography. We used the BERT model bert-base-uncased.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Model & \(\Delta\) & Task Acc. & \(\Delta\) & TPR-GAP \\ \hline BertModel & & 0.79 & & 0.20 \\ + AMINLP & 0.12 & 0.67 & 0.12 & 0.09 \\ + Kmeans + INLP & 0.11 & 0.68 & 0.12 & 0.08 \\ + OracleINLP & 0.11 & 0.68 & 0.12 & 0.08 \\ + PartialINLP & 0.12 & 0.67 & 0.13 & 0.08 \\ + AMSAL & & 0.79 & 0.02 & 0.18 \\ + Kmeans + SAL & & 0.79 & 0.02 & 0.18 \\ + OracleSAL & & 0.79 & 0.02 & 0.18 \\ + PartialSAL & & 0.79 & 0.02 & 0.18 \\ + WL + SAL & & 0.79 & 0.02 & 0.18 \\ + WL + INLP & 0.12 & 0.68 & 0.12 & 0.08 \\ \hline FastText & & 0.77 & & 0.20 \\ + AMINLP & 0.05 & 0.73 & 0.01 & 0.21 \\ + Kmeans + INLP & 0.08 & 0.69 & & 0.19 \\ + OracleINLP & 0.03 & 0.74 & 0.10 & 0.09 \\ + PartialINLP & 0.04 & 0.74 & 0.04 & 0.16 \\ + AMSAL & 0.03 & 0.74 & 0.03 & 0.17 \\ + Kmeans + SAL & 0.04 & 0.73 & 0.02 & 0.17 \\ + OracleSAL & 0.01 & 0.76 & 0.08 & 0.12 \\ + PartialSAL & 0.01 & 0.76 & 0.02 & 0.18 \\ + WL + SAL & 0.01 & 0.76 & 0.08 & 0.12 \\ + WL + INLP & 0.03 & 0.74 & 0.10 & 0.10 \\ \hline \hline \end{tabular}
\end{table}
Table 1: BiasBios dataset results. The top part uses BERT embeddings to encode the biographies, while the bottom part uses FastText embeddings. The \(\Delta\) columns give the change in each metric relative to the corresponding unmodified encoder.
Figure 3: A t-SNE visualization of the word embeddings before and after gender information removal. In (a) we see the embeddings naturally cluster into the corresponding gender.
Evaluation MeasuresWe use an extension of the True Positive Rate (TPR) gap, the root mean square (RMS) TPR gap of all classes, for evaluating bias in a multiclass setting. This metric was suggested by De-Arteaga et al. (2019), who demonstrated it is significantly correlated with gender imbalances, which often lead to unfair classification. The higher the metric value is, the bigger the gap between the two categories (for example, between male and female) for the specific main task prediction. For the profession classification, we report accuracy.
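For concreteness, a small sketch of the RMS TPR-gap metric for a binary protected attribute; the function signature and the handling of empty groups are our assumptions.

```python
import numpy as np

def rms_tpr_gap(y_true, y_pred, z):
    """Root-mean-square over classes of the per-class TPR difference between
    the two protected groups (e.g. male vs. female)."""
    gaps = []
    for c in np.unique(y_true):
        tpr = []
        for g in (0, 1):
            mask = (y_true == c) & (z == g)
            tpr.append((y_pred[mask] == c).mean() if mask.any() else np.nan)
        gaps.append(tpr[0] - tpr[1])
    return float(np.sqrt(np.nanmean(np.square(gaps))))
```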
ResultsTable 1 provides the results for the biography dataset. We see that INLP significantly reduces the TPR-GAP in all settings, but this comes at a cost: the representations are significantly less useful for the main task of predicting the profession. When inspecting the alignments, we observe that their accuracy is quite high with BERT: 100% with \(k\)-means, 85% with the AM algorithm and 99% with Partial AM. FastText results are lower, at around 55% for all three methods. The high BERT assignment performance indicates that the BiasBios BERT representations are naturally separated by gender. We also observe that the results of WL+SAL and WL+INLP are identical to those of OracleSAL and OracleINLP, respectively. This comes as no surprise, as the gender label is derived from a similar word list, which enables the WL approach to get a nearly perfect alignment (over 96% agreement with the gender label).
### BiasBench Results
Meade et al. (2022) conducted an empirical study of an array of datasets in the context of debiasing. They analyzed different methods and tasks, and we follow their benchmark evaluation to assess our AMSAL algorithm and other methods in the context of our new setting. We include a short description of the datasets we use in this section. We include full results in Appendix B, with a description of other datasets. We also encourage the reader to refer to Meade et al. (2022) for details on this benchmark. We use 20% of the training examples for the Partial setting.
StereoSet (Nadeem et al., 2021)This dataset presents a word completion test for a language model, where the completion can be stereotypical or non-stereotypical. The bias is then measured by calculating how often a model prefers the stereotypical completion over the non-stereotypical one. Nadeem et al. (2021) introduced the language model score to measure the language model usability, which is the percentage of examples for which a model prefers the stereotypical or non-stereotypical word over some unrelated word.
CrowS-Pairs (Nangia et al., 2020)This dataset includes pairs of sentences that are minimally different at the token level, but these differences lead to the sentence being either stereotypical or anti-stereotypical. The assessment measures how many times a language model prefers the stereotypical element in a pair over the anti-stereotypical element.
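Both metrics reduce to a preference count over sentence pairs; a minimal sketch, assuming the per-sentence scores (e.g. pseudo-log-likelihoods) have already been computed by some language-model scorer:

```python
import numpy as np

def stereotype_score(stereo_scores, anti_scores):
    """Percentage of pairs for which the model scores the stereotypical
    sentence higher than the anti-stereotypical one; 50% is the unbiased ideal."""
    stereo = np.asarray(stereo_scores, dtype=float)
    anti = np.asarray(anti_scores, dtype=float)
    return 100.0 * float(np.mean(stereo > anti))
```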
ResultsWe start with an assessment of the BERT model for the CrowS-Pairs gender, race and religion bias evaluation (Table 2). We observe that all approaches for gender, except AM+INLP, reduce the stereotype score. Race and religion are more difficult to debias in the case of BERT. INLP with \(k\)-means works best when no seed alignment data is provided at all, but when we consider PartialSAL, in which we use the alignment algorithm with some seed aligned data, we see that the results are the strongest. When we consider the RoBERTa model, the results are similar, with PartialSAL significantly reducing the bias. Our findings from Table 2 overall indicate that the ability to debias a representation _highly depends on the model that generates the representation_. In Table 10 we observe that the representations, on average, are not damaged for most GLUE tasks.
As Meade et al. (2022) have noted, when changing the representations of a language model to remove bias, we might cause adjustments that damage the usability of the language model. To test which methods possibly cause such an issue, we also assess the language model score on the StereoSet dataset in Table 3. Overall, we see that SAL-based methods often give a lower stereotype score, while INLP methods damage the language model score more significantly. This implies that the _SAL-based methods remove bias effectively while less significantly harming the usability of the language model representations_.
We also report comprehensive results for other datasets (SEAT and GLUE) and categories of bias (based on race and religion). The results, especially for GLUE, demonstrate the effectiveness of our method of unaligned information removal. For GLUE, we consistently retain the baseline task performance almost in full. See Appendix B.
### Multiple-Guarded Attribute Sentiment
We hypothesize that AM-based methods are better suited for setups where multiple guarded attributes should be removed, as they allow us to target several guarded attributes with different priors. To examine our hypothesis, we experiment with a dataset curated from Twitter (tweets encoded using BERT, bert-base-uncased), in which users are surveyed for their age and gender (Cachola et al., 2018). We bucket the age into three groups (0-25, 26-50 and above 50). Tweets in this dataset are annotated with their sentiment, ranging from one (very negative) to five (very positive). The dataset consists of more than 6,400 tweets written by more than 1,700 users. We removed users that no longer have public Twitter accounts and users with locations that do not exist based on a filter,3
\begin{table}
\begin{tabular}{l r r} \hline \hline Model & & Stt. Score \\ \hline \multicolumn{3}{l}{Gender} \\ \hline BERT & & \\ + AM + INLP & \(\uparrow\)0.38 & 57.63 \\ + Kmeans + INLP & \(\downarrow\)3.81 & 53.44 \\ + OracleINLP & \(\downarrow\)4.58 & 52.67 \\ + PartialINLP & \(\downarrow\)4.58 & 52.67 \\ + AMSAL & \(\downarrow\)3.05 & 54.20 \\ + Kmeans + SAL & \(\downarrow\)2.29 & 54.96 \\ + OracleSAL & \(\downarrow\)5.72 & 51.53 \\ + PartialSAL & \(\downarrow\)5.72 & 51.53 \\ \hline \hline \end{tabular}
\end{table}
Table 2: CrowS-Pairs stereotype scores (Stt. Score) and their change relative to the unmodified model, shown here for the gender category with BERT; the full evaluation also covers the race and religion categories and the ALBERT, RoBERTa, and GPT-2 models. Scores closer to 50% indicate less measured bias.
resulting in a dataset with over 3,000 tweets, written by 817 unique users. As tweets are short by nature and their number is relatively small, the debiasing signal in this dataset (the amount of information it contains about the guarded attributes) might not be sufficient for the attribute removal. To amplify this signal, we concatenated each tweet in the dataset to at most ten other tweets from the same user.
We study the relationship between the main task of sentiment detection and the two protected attributes of age and gender. As a protected attribute \(\mathbf{z}\), we use the combination of both age and gender as a binary one-hot vector. This dataset presents a use-case for our algorithm of a composed protected attribute. Rather than using a classifier for predicting the sentiment, we use linear regression. Following Cachola et al. (2018), we use Mean Absolute Error (MAE) to report the error of the sentiment predictions. Given that the sentiment is predicted as a continuous value, we cannot use the TPR gap as in previous sections. Rather, we use the following formula:
\[\mathrm{MAEGap}=\mathrm{std}(\mathrm{MAD}_{z=j}\mid j\in[m]), \tag{5}\]
where \(\mathrm{MAD}_{z=j}=\frac{1}{\ell}\sum_{i}|\eta_{ij}-\mu_{j}|\), where \(i\) ranges over the set of size \(\ell\) of examples with protected attribute value \(j\), \(\eta_{ij}\) is the absolute prediction error of example \(i\),4 and \(\mu_{j}\) is the average of these absolute errors over that set. The function \(\mathrm{std}\) denotes the standard deviation of the \(m\) values \(\mathrm{MAD}_{z=j}\), \(j\in[m]\).
Footnote 4: The absolute error of prediction \(a\) with true value \(b\) is \(|a-b|\).
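A minimal sketch of Eq. 5 under the reading above (absolute errors per example, their group-wise mean absolute deviation, and the standard deviation of those values across groups); the function signature is our own.

```python
import numpy as np

def mae_gap(y_true, y_pred, z):
    """MAEGap of Eq. 5: std over protected groups of the mean absolute
    deviation of each group's absolute prediction errors."""
    abs_err = np.abs(np.asarray(y_pred) - np.asarray(y_true))
    z = np.asarray(z)
    mads = []
    for j in np.unique(z):
        errs = abs_err[z == j]
        mads.append(np.mean(np.abs(errs - errs.mean())))   # MAD_{z=j}
    return float(np.std(mads))
```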
ResultsTable 4 presents our results. Overall, AMSAL reduces the gender and age gaps in the predictions while increasing the MAE only slightly.
Figure 4: Accuracy of the AM steps with respect to age and gender separately (on unseen data), as a function of the fraction of the labeled dataset used by the AM algorithm.
\begin{table}
\begin{tabular}{l r r} \hline \hline Model & S. Score (\%) & LM Score (\%) \\ \hline BERT & 60.28 & 84.17 \\ + AM + INLP & 11.14 59.14 & 10.43 83.75 \\ + Kmeans + INLP & 10.16 60.12 & 10.47 83.70 \\ + OracleINLP & 29.93 57.35 & 11.07 83.11 \\ + PartialINLP & 29.93 57.35 & 11.07 83.10 \\ + AMSAL & 0.61 60.89 & 10.09 84.26 \\ + Kmeans + SAL & 0.19 60.47 & 10.13 84.30 \\ + OracleSAL & 0.83 59.44 & 10.53 84.70 \\ + PartialSAL & 0.83 59.44 & 10.53 84.70 \\ \hline ALBERT & 59.93 & 89.77 \\ + AM + INLP & 10.29 59.64 & 11.45 88.32 \\ + Kmeans + INLP & 0.59 59.34 & 10.08 89.69 \\ + OracleINLP & 2.73 57.20 & 11.59 88.17 \\ + PartialINLP & 2.72 57.21 & 11.62 88.15 \\ + AMSAL & 0.22 59.71 & 10.32 89.45 \\ + Kmeans + SAL & 0.56 60.49 & 10.10 89.67 \\ + OracleSAL & 2.18 57.75 & 10.16 89.61 \\ \hline RoBERTa & 66.32 & 88.95 \\ + AM + INLP & 4.95 61.37 & 10.04 88.99 \\ + Kmeans + INLP & 2.20 64.13 & 11.47 87.48 \\ + OracleINLP & 3.82 62.51 & 10.92 88.03 \\ + PartialINLP & 3.82 62.51 & 10.91 88.04 \\ + AMSAL & 0.63 65.70 & 10.60 89.54 \\ + Kmeans + SAL & 0.49 65.83 & 10.46 89.41 \\ + OracleSAL & 1.32 63.00 & 10.40 89.35 \\ + PartialSAL & 3.32 63.00 & 10.40 89.35 \\ \hline GPT-2 & 62.65 & 91.01 \\ + AM + INLP & 11.65 61.00 & 13.77 87.24 \\ + Kmeans + INLP & 11.57 61.08 & 13.09 87.93 \\ + OracleINLP & 11.26 61.39 & 91.01 \\ + PartialINLP & 11.26 61.39 & 91.01 \\ + AMSAL & 11.58 61.07 & 10.23 90.79 \\ + Kmeans + SAL & 4.00 58.64 & 10.60 90.41 \\ + OracleSAL & 4.55 58.09 & 11.75 89.26 \\ + PartialSAL & 4.55 58.09 & 11.75 89.26 \\ \hline \hline \end{tabular}
\end{table}
Table 3: StereoSet stereotype scores (Stt. Score) and language modeling scores (LM Score) for the gender category. Stereotype scores indicate the least bias at 50%, and the LM scores indicate high usability at 100%.
In addition, we can see that both AM-based methods outperform their \(k\)-means counterparts, which either increase unfairness (Kmeans + INLP) or significantly harm the downstream-task performance (Kmeans + SAL). We also consider Figure 4, which shows how the quality of the AM algorithm's assignments changes as a function of the amount of labeled data used. As expected, the more labeled data we have, the more accurate the assignments are, but the differences are not very large.
### An Example of Our Method Limitations
We now present the main limitation in our approach and setting. This limitation arises when the random variables \(\mathbf{Y}\) and \(\mathbf{Z}\) are not easily distinguishable through information about \(\mathbf{X}\).
We experiment with a binary sentiment analysis (\(\mathbf{y}\)) task, predicted on users' tweets (\(\mathbf{x}\)), aiming to remove information regarding the authors' ethnic affiliations. To do so, we use a dataset collected by Blodgett et al. (2016), which examined the differences between African-American English (AAE) speakers and Standard American English (SAE) speakers. As information about one's ethnicity is hard to obtain, the user's geolocation information was used to create a distantly supervised mapping between authors and their ethnic affiliations. We follow previous work (Shao et al., 2023; Ravfogel et al., 2020) and use the DeepMoji encoder (Felbo et al., 2017) to obtain representations for the tweets. The train and test sets are balanced regarding sentiment and authors' ethnicity. We use 20% of the examples for the Partial setting. Table 5 gives the results for this dataset. We observe that the removal with the assignment (\(k\)-means, AM or Partial) significantly harms the performance on the main task and reduces it to a random guess.
This presents a limitation of our algorithm. A priori, there is no distinction between \(\mathbf{Y}\) and \(\mathbf{Z}\), as our method is unsupervised. In addition, the positive labels of \(\mathbf{Y}\) and \(\mathbf{Z}\) have the same prior probability. Indeed, when we check the assignment accuracy in the sentiment dataset, we observe that the \(k\)-means, AM and Partial AM assignment accuracy for identifying \(\mathbf{Z}\) are between 0.55 and 0.59. If we check the assignment against \(\mathbf{Y}\), we get an accuracy between 0.74 and 0.76. This means that all assignment algorithms actually identify \(\mathbf{Y}\) rather than \(\mathbf{Z}\) (both \(\mathbf{Y}\) and \(\mathbf{Z}\) are binary variables in this case). The conclusion from this is that our algorithm works best when sufficient information on \(\mathbf{Z}\) is presented such that it can provide a basis for aligning samples of \(\mathbf{Z}\) with samples of \(\mathbf{X}\). Suppose such information is unavailable or unidentifiable with information regarding \(\mathbf{Y}\). In that case, we may simply identify the natural clustering of \(\mathbf{X}\) according to their main task classes, leading to low main-task performance.
In Table 5, we observe that this behavior is significantly mitigated when the priors over the sentiment and the race are different (0.8 for sentiment and 0.5 for race). In that case, the AM algorithm is able to distinguish between the race-protected attribute (\(\mathbf{z}\)) and the sentiment class (\(\mathbf{y}\)) quite
\begin{table}
\begin{tabular}{l r r r} \hline \hline Model & MAE & Age (gap) & Gender (gap) \\ \hline BertModel & 0.745 & 0.031 & 0.011 \\ + AM + INLP & 0.027 & 0.717 & 0.031 & 0.008 0.003 \\ + Kmeans + INLP & 0.052 & 0.693 & 0.001 & 0.030 & 0.010 0.021 \\ + OracleINLP & 0.022 & 0.723 & 0.008 & 0.022 & 0.008 0.017 \\ + PartialINLP & 0.025 & 0.719 & 0.007 & 0.038 & 0.011 \\ + AMSAL & 0.009 & 0.754 & 0.005 & 0.026 & 0.009 0.002 \\ + Kmeans + SAL & 0.039 & 0.783 & 0.001 & 0.030 & 0.007 0.004 \\ + OracleSAL & 0.012 & 0.757 & 0.002 & 0.029 & 0.009 0.003 \\ + PartialSAL & 0.025 & 0.769 & 0.001 & 0.030 & 0.005 0.006 \\ \hline \hline \end{tabular}
\end{table}
Table 4: MAE and debiasing gap values on the Twitter dataset, when using BERT to encode the tweets. For age and gender, we give the MAE gap as in Eq. 5.
Figure 5: Accuracy of the AM steps (in identifying the correct assignment of inputs to guarded information) as a function of the iteration number. Shaded gray gives upper and lower bound on the standard deviation over five runs with different seeds for the initial \(\pi\). FastText refers to the BiasBios dataset, the BERT models are for the CrowS-Pairs dataset and Emb. refers to the word embeddings dataset from §4.1.
consistently with INLP and SAL, and the gap is reduced.
We also observe that INLP changed neither the accuracy nor the TPR-GAP for the balanced scenario (Table 5) when using a \(k\)-means assignment or an AM assignment. Upon inspection, we found out that INLP returns an identity projection in these cases, unable to amplify the relatively weak signal in the assignment to change the representations.
### Stability Analysis of the Alignment
In Figure 5, we plot the accuracy of the alignment algorithm (knowing the true value of the guarded attribute per input) throughout the execution of the AM steps for the first ten iterations. The shaded area indicates one standard deviation. We observe that the first few iterations are the ones in which the accuracy improves the most. For most of the datasets, the accuracy does not decrease between iterations, though in the case of DeepMoji we do observe a "bump." This is indeed why the Partial setting of our algorithm, where a small amount of guarded information is available to determine at which iteration to stop the AM algorithm, is important. In the word embeddings case, the variance is larger because, in certain executions, the algorithm converged quickly, while in others, it took more iterations to converge to high accuracy.
Figure 6 plots the relative change of the objective value of the ILP from §3.1 against the iteration number. The relative change is defined as the ratio between the objective value before the algorithm begins and the same value at a given iteration. We see that the algorithm is relatively stable and that the AM steps converge quite quickly. We also observe that the DeepMoji dataset has a large increase in the objective value in the first iteration (around \(\times 5\) compared to the value the algorithm starts with), after which it remains stable.
\begin{table}
\begin{tabular}{l r r r} \hline \hline Model & Task Acc. & TPR-GAP \\ \hline deepmoji & 0.77 & 0.14 \\ + AM + INLP & 0.77 & 0.14 \\ + KMeans + INLP & 0.77 & 0.14 \\ + OracleINLP & 0.02 & 0.74 & 0.04 & 0.10 \\ + PartialINLP & 0.01 & 0.75 & 0.06 & 0.08 \\ + AMSAL & 0.24 & 0.52 & 0.03 & 0.17 \\ + KMeans + SAL & 0.23 & 0.54 & 0.12 & 0.26 \\ + OracleSAL & 0.76 & 0.03 & 0.11 \\ + PartialSAL & 0.19 & 0.57 & 0.15 & 0.29 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The performance of removing race information from the DeepMoji dataset is shown for two cases: with balanced ratios of race and sentiment (left) and with ratios of 0.8 for sentiment and 0.5 for race (right). In both cases, the total size of the dataset used is 30,000 examples. To evaluate the performance of the unbalanced sentiment dataset, we use the \(F_{1}\) macro measure, because in an unbalanced dataset such as this one, a simple classifier that always returns one label will achieve an accuracy of 80%. Such a classifier would have an \(F_{1}\) macro score of \(0.44\dot{4}\).
Figure 6: Ratio of the objective value in iteration \(t\) and iteration \(0\) of the ILP for the AM steps as a function of the iteration number \(t\). Shaded gray gives upper and lower bound on the standard deviation over five runs with different seeds for the initial \(\pi\). See legend explanation in Table 5.
## 5 Related Work
There has been an increasing amount of work about detecting and erasing undesired or protected information from neural representations, with standard software packages for this process having been developed (Han et al., 2022). For example, in their seminal work, Bolukbasi et al. (2016) showed that word embeddings exhibit gender stereotypes. To mitigate this issue, they projected the word embeddings to a neutral space with respect to a "he-she" direction. Influenced by this work, Zhao et al. (2018) proposed a customized training scheme to reduce the gender bias in word embeddings. Gonen and Goldberg (2019) examined the effectiveness of the methods mentioned above and concluded they remove bias in a shallow way. For example, they demonstrated that classifiers can accurately predict the gender associated with a word when fed with the embeddings of both debiasing methods.
Another related strand of work uses adversarial learning (Ganin et al., 2016), where an additional objective function is added for balancing undesired-information removal and the main task (Edwards and Storkey, 2016; Li et al., 2018; Coavoux et al., 2018; Wang et al., 2021). Elazar and Goldberg (2018) have also demonstrated that an ad-hoc classifier can easily recover the removed information from adversarially trained representations. Since then, methods for information erasure such as INLP and its generalization (Ravfogel et al., 2020, 2022), SAL (Shao et al., 2023) and methods based on similarity measures between neural representations (Colombo et al., 2022) have been developed. With a similar motivation to ours, Han et al. (2021) aimed to ease the burden of obtaining guarded attributes at a large scale by decoupling the adversarial information removal process from the main task training. They, however, did not experiment with debiasing representations where no guarded attribute alignments are available. Shao et al. (2023) experimented with the removal of features in a scenario in which a low number of protected attributes is available.
Additional previous work showed that methods based on causal inference (Feder et al., 2021), train-set balancing (Han et al., 2021), and contrastive learning (Shen et al., 2021; Chi et al., 2022) effectively reduce bias and increase fairness. In addition, there is a large body of work on detecting bias, evaluating it, and studying its implications in specific NLP applications. Savoldi et al. (2022) detected a gender bias in speech translation systems for gendered languages. Gender bias is also discussed in the context of knowledge base embeddings by Fisher et al. (2019) and Du et al. (2022), and in multilingual text classification by Huang (2022).
## 6 Conclusions and Future Work
We presented a new and challenging setup for removing information, with minimal or no available alignment between the inputs and the sensitive information. This setup is crucial for the wide applicability of debiasing methods, as for most applications, obtaining such sensitive labels on a large scale is challenging. To ease this problem, we present a method to erase information from neural representations, where the guarded attribute information does not accompany each input instance. Our main algorithm, AMSAL, alternates between two steps (Assignment and Maximization) to identify an assignment between the input instances and the guarded information records. It then completes its execution by removing the information through minimizing the covariance between the input instances and the aligned guarded attributes. Our approach is modular, and other erasure algorithms, such as INLP, can be used with it. Experiments show that we can reduce the unwanted bias in many cases while keeping the representations highly useful. Future work might include extending our technique to the kernelized case, analogously to the method of Shao et al. (2023).
### Ethical Considerations
The AM algorithm could potentially be misused: rather than using the AM steps to erase information, one could use them to link records of two different types, undermining the privacy of the record holders. Such a situation may merit additional concern because the links returned between the guarded attributes and the input instances will likely contain mistakes. The links are unreliable for decision-making at the _individual level_. Instead, they should be used in aggregate, as a statistical construct, to erase information from the input representations. Finally,5 we note that automating the debiasing process, without properly confirming its accuracy statistically on an appropriate sample, may promote a false sense of security that a given system is making fair decisions. We do not recommend using our method for debiasing without proper statistical control and empirical verification of correctness.
Footnote 5: We thank the anonymous reviewer for raising this issue.
## Acknowledgments
We thank the reviewers, the action editors and Marcio Fonseca for their thorough feedback. We also thank Daniel Preotiuc-Pietro for his help with the Twitter data. We thank Kousha Etessami for being a sounding board for certain parts of the paper. The experiments in this paper were supported by compute grants from the Edinburgh Parallel Computing Center and from the Baskerville Tier 2 HPC service (University of Birmingham).
|
2305.17483
|
The positively charged carbon vacancy defect as a near-infrared emitter
in 4H-SiC
|
Certain intrinsic point defects in silicon carbide are promising quantum
systems with an efficient spin-photon interface. Although the carbon vacancy in
silicon carbide is an elementary and relatively abundant intrinsic defect, no
optical signal has been reported in association with it. Here, we revisit the
positively charged carbon vacancy defects in the 4H polytype of silicon carbide
(4H-SiC) by means of \textit{ab initio} calculations. We find that the excited
state is optically active for the so-called h-site configuration of the carbon
vacancy in 4H-SiC, with a zero-phonon line at $0.65~\mathrm{eV}$. We propose
this defect as an exotic paramagnetic near-infrared emitter in the IR-B region.
|
Meysam Mohseni, Péter Udvarhelyi, Gergő Thiering, Adam Gali
|
2023-05-27T14:27:03Z
|
http://arxiv.org/abs/2305.17483v1
|
# The positively charged carbon vacancy defect as a near-infrared emitter in 4H-SiC
###### Abstract
Certain intrinsic point defects in silicon carbide are promising quantum systems with an efficient spin-photon interface. Although the carbon vacancy in silicon carbide is an elementary and relatively abundant intrinsic defect, no optical signal has been reported in association with it. Here, we revisit the positively charged carbon vacancy defects in the 4H polytype of silicon carbide (4H-SiC) by means of _ab initio_ calculations. We find that the excited state is optically active for the so-called h-site configuration of the carbon vacancy in 4H-SiC, with a zero-phonon line at 0.65 eV. We propose this defect as an exotic paramagnetic near-infrared emitter in the IR-B region.
## I Introduction
Silicon carbide (SiC) is one of the most promising material platforms for the integration of quantum defects. Wafer scale single crystals and isotope engineered samples are readily available. Scalable arrays of integrated quantum emitters were already demonstrated in this platform [1; 2]. Advanced micro-fabrication techniques and recent advances in integrated photonic devices [3; 4; 5; 6] enable improved magneto-optical properties for the hosted quantum emitters. Various point defects in the 4H polytype of SiC have already been utilized for quantum communication [7], quantum computing [8] and nanoscale sensing [9; 10; 11; 12] applications. Most of the promising defects are vacancy-related, created by irradiation techniques [13; 14] or laser writing [15]. The isolated silicon vacancy (V\({}_{\text{Si}}\)) is a well-known fundamental defect in silicon carbide associated with single photon emitters whose electron spin can be coherently controlled even at room temperature [16; 17; 18; 19; 20; 21; 22; 23]. The divacancy (V\({}_{\text{Si}}\)V\({}_{\text{C}}\)) is a defect complex of adjacent silicon and carbon vacancies [24] created by annealing. Its neutral charge state is a color center [25; 26; 27] with a triplet ground state, for which coherent manipulation was first demonstrated among the quantum defects in SiC [28; 29]. For the other constituent of the divacancy, the single carbon vacancy defect, no associated emission has been reported in experiments, even though such a signal would be critical for the initialization and readout of the spin state. As the electron irradiation creation of this fundamental defect is even more efficient than that of the silicon vacancy [30], we investigate its possible application as an abundant quantum defect in SiC.
The carbon vacancy (V\({}_{\text{C}}\)) defect in 4H-SiC was investigated by various experimental techniques before. Deep level transient spectroscopy (DLTS) [31] and electron paramagnetic resonance (EPR) [32] measurements assigned two paramagnetic centers, EI5 and EI6, to the V\({}_{\text{C}}^{+}\) charge state corresponding to the quasi-cubic (k) and hexagonal (h) defect sites, respectively. The four Si neighbours of the defect showed considerable hyperfine constants. The EI5 center was reported to be Jahn-Teller (JT) distorted to C\({}_{\text{1h}}\) symmetry [33]. Temperature-activated averaging to C\({}_{\text{3v}}\) symmetry was also reported above T \(\approx\) 50 K, with an activation energy of 0.014 eV [33; 34]. From photo-EPR measurements, Son et al. [35] revealed that the EI5 center is a stable deep donor, with the (+/0) charge transition level at 1.47 eV above the valence band maximum [31]. However, no optical signal has been associated with these basic defects in 4H-SiC, which is surprising, as photoluminescence is a very sensitive technique compared to DLTS or EPR methods.
The V\({}_{\text{C}}\) defect was investigated in several theoretical studies too. The formation energy was calculated for the k- and h-sites to be 4.07 eV and 4.21 eV, respectively [36; 37]. Hyperfine constants calculated with the LSDA method showed good agreement with experimental data [34; 38]. The Jahn-Teller distortion in the k-site defect was revealed to be a pseudo-Jahn-Teller (pJT) effect, originating from the electron-phonon interaction between the \(a\) and \(e\) defect orbitals in the non-degenerate ground state [34]. The lack of a pJT effect for the h-site was attributed to the larger crystal field around the core of the defect.
In this work, we determine the key optical and spin properties of the positively charged carbon vacancy (V\({}_{\text{C}}^{+}\)) defect at both the k- and h-sites in 4H-SiC by means of advanced density functional theory calculations. We confirm its identification with the EI5 and EI6 EPR centers. Our calculations reveal a zero-phonon line (ZPL) for the h-site defect in the near-infrared wavelength region, associated with a relatively large Debye-Waller factor. This provides crucial information for the optical initialization of the quantum state, a first step towards the coherent control of its spin. Based on these findings, we propose the h-site V\({}_{\text{C}}^{+}\) defect as a promising quantum emitter possessing a paramagnetic ground state.
## II Methods
The electronic properties of V\({}_{\text{C}}^{+}\) were calculated using density functional theory (DFT) [39; 40] as implemented in the Vienna Ab-initio Simulation Package (VASP) plane wave based code [41; 42]. The hybrid exchange
functional of Heyd, Scuseria, and Ernzerhof (HSE06) was used [43]. The defect structure is modeled in a \(6\times 6\times 2\) supercell (576 atoms), allowing for accurate \(\Gamma\)-point sampling in the k-space. The atomic positions were optimized until the Hellman-Feynman forces acting on them were less than 0.01 eV/Å. \(\Delta\)SCF or constrained-occupation DFT [44] was employed for the calculation of the electronic excitations. For the calculation of the phonon spectrum and normal modes, the functional of Perdew, Burke and Ernzerhof (PBE) [45] was applied using the density functional perturbation theory (DFPT) method. Choosing the exchange correlation functional of PBE over HSE06 in our phonon calculations provides reasonably accurate results while significantly reducing the computational overhead. Transition dipole matrix elements were calculated from the overlap of the pseudo wavefunctions between the Kohn-Sham (KS) states involved in the excitation, as implemented in the PyVaspwfc code [46]. Non-radiative transition rates were calculated with the NONRAD code of Turiansky _et al._[47, 48].
Partially self-consistent GW0 [49] and BSE [50, 51] calculations were performed using VASP in a 128-atom supercell model, keeping the \(\Gamma\)-point sampling. The orbital basis was calculated with the DFT HSE06 functional, with the number of unoccupied orbitals more than 15 times that of the occupied ones. The energy cutoff for the calculation of the response function was limited to 100 eV. BSE was calculated beyond the Tamm-Dancoff approximation [52], including 50 occupied and 50 unoccupied orbitals.
## III Results
The carbon vacancy model in 4H-SiC exhibits C\({}_{3\mathrm{v}}\) symmetry without considering the electron-phonon interaction (by restricting the symmetry). The vacancy dangling bonds introduce two defect levels, \(a\) and \(e\), in the band gap, labeled by the irreducible representations of the symmetry group. In the positively charged ground state of the defect, a single electron occupies the \(a\) level, while the doubly degenerate \(e\) level is empty, resulting in a spin doublet state. After lifting the symmetry constraints, we obtain a pJT distorted relaxed structure at the k-site with C\({}_{1\mathrm{h}}\) symmetry and a pJT relaxation energy of 82 meV. However, the same type of calculation at the h-site results in a negligible pJT energy, in line with previous findings [34]. The defect-level diagrams for both sites are shown in Fig. 1. In Fig. 2, we visualize the geometric structure and the ground state spin density for both h- (a) and k-sites (b). For the C\({}_{3\mathrm{v}}\) symmetric h-site defect, the distance between the three top-side silicon atoms (Si\({}_{2}\), Si\({}_{3}\) and Si\({}_{4}\)) and their distance to Si\({}_{1}\) are 3.08 Å and 3.22 Å, respectively. For C\({}_{1\mathrm{h}}\) symmetry at the k-site, the Si\({}_{1}\)-Si\({}_{2}\) and Si\({}_{3}\)-Si\({}_{4}\) distances are 3.15 Å and 3.02 Å, respectively. The Si\({}_{3,4}\)-Si\({}_{2}\) and Si\({}_{3,4}\)-Si\({}_{1}\) distances are 3.10 Å and 3.26 Å, respectively.
### Hyperfine parameters
For the EPR-active doublet ground state of the V\({}_{\mathrm{C}}^{+}\) defect, we calculate the hyperfine parameters using the
Figure 1: Electronic level diagrams of the V\({}_{\mathrm{C}}^{+}\) defect in C\({}_{3\mathrm{v}}\) symmetry at (a) h-site and (b) k-site and (c) in C\({}_{1\mathrm{h}}\) symmetry at k-site. Each Kohn-Sham level is labeled according to the irreducible representations of the corresponding point group symmetry. The conduction band (CB) and the valance band (VB) are shown in green and blue colors, respectively.
Figure 2: Spin density of the V\({}_{\mathrm{C}}^{+}\) defect at (a) h-site with C\({}_{3\mathrm{v}}\) symmetry and (b) k-site with C\({}_{1\mathrm{h}}\) symmetry. The hyperfine parameters for the atoms labeled here are given in Table 1.
HSE06 hybrid functional. The hyperfine tensor describing the interaction between the nuclear spin at \(R_{I}\) and the electron spin density \(\rho_{s}\) of the defect is given by
\[A_{ij}=\frac{4\pi}{3}\frac{g_{N}\gamma_{N}g\gamma_{e}}{\left<\hat{S}_{z}\right>} \int\mathrm{d}^{3}r\rho_{s}(r)m_{i,j}\left(r-R_{I}\right), \tag{1}\]
where \(g_{N}\), \(\gamma_{N}\), \(g\) and \(\gamma_{e}\) are the \(g\) factors and gyromagnetic ratios of the nucleus and the electron, respectively. The term \(m_{i,j}=[\delta_{ij}\delta(r)-\frac{1}{2}(3x_{i}x_{j}-r^{2}\delta_{i,j})r^{-5}]\) is the interaction potential between the electron and nuclear spins, which includes the Fermi-contact and dipole-dipole interaction terms. Our results for the hyperfine parameters of the first-neighbor silicon atoms are listed in Table 1, for both the k- and h-sites. We also compare the results to experimental data for EI5 (k-site and C\({}_{1h}\) symmetry) and EI6 (h-site and C\({}_{3v}\) symmetry) [53], showing reasonable agreement.
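As an illustration of how Eq. (1) can be evaluated numerically, the sketch below discretizes the integral on a real-space grid around the nucleus; the spin density array, the grid, and the prefactor collecting the \(g\) factors and gyromagnetic ratios are passed in as assumptions, and the core-electron contributions included in Table 1 are not treated here.

```python
import numpy as np

def hyperfine_tensor(rho_s, grid, R_I, dV, prefactor):
    """Literal discretization of Eq. (1): Fermi-contact term from the spin
    density at the nucleus plus the dipole-dipole integral over the grid.
    rho_s: (N,) spin density samples; grid: (N, 3) grid points; R_I: (3,)
    nucleus position; dV: volume element; prefactor: constants in front."""
    r = grid - R_I
    d = np.linalg.norm(r, axis=1)
    A = np.zeros((3, 3))
    # Fermi-contact part: the delta function picks out rho_s(R_I),
    # approximated here by the value at the nearest grid point.
    A += np.eye(3) * rho_s[int(np.argmin(d))]
    # Dipole-dipole part, excluding the (singular) on-site point.
    mask = d > 1e-8
    rr, dd, rho = r[mask], d[mask], rho_s[mask]
    for i in range(3):
        for j in range(3):
            dip = (3.0 * rr[:, i] * rr[:, j] - dd**2 * (i == j)) / dd**5
            A[i, j] -= 0.5 * np.sum(rho * dip) * dV
    return prefactor * A
```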
### Vibronic spectrum
Next, we describe the vibronic interaction in the k-site ground state and model the microscopic origin of thermal averaging to C\({}_{3v}\) symmetry, observed in the EI5 EPR center [33]. It was attributed to the dynamic pJT effect in the ground state in Ref. [34]; however, their calculated barrier energy was considerably larger (50 meV) than the thermal activation energy of 14 meV. In our HSE06 calculations, we obtain 76 meV for the barrier energy, which is comparable to the pJT energy itself (82 meV). Based on these results, we cannot consider the pJT barrier as a perturbation directly corresponding to the dynamics of the system. Instead, we go beyond the Born-Oppenheimer approximation and approach the problem as a dephasing process through strongly coupled phonon excitations of the JT active modes. In this model, we identify the thermal occupation of the first vibronic excited state of the system as the onset of the dynamics, where the activation energy corresponds to the polaronic gap, i.e., the energy difference of the first vibronic excited state and the vibronic ground state.
To this end, we apply a similar model as detailed in Ref. [54]. We separate the effect of the crystal field present in the 4H-SiC crystal and describe the defect orbitals in the high-symmetry T\({}_{\mathrm{d}}\) point group. In this picture, the vacancy dangling bonds introduce a single \(t_{2}\) orbital into the band gap, occupied by a single electron. This \({}^{2}T\) electronic configuration is Jahn-Teller unstable, coupling to phonon modes of \(t_{2}\) and \(e\) symmetries, which is called the \(T\otimes(e\oplus t_{2})\) problem. The C\({}_{3v}\) crystal field is added as a perturbation in this model. The potential-energy surfaces (PES) of the \(T\) orbitals are formulated with a three-dimensional pseudo-spin. Therefore, the vibronic interaction can be expressed on this basis as a \(3\times 3\) matrix:
\[W=\begin{bmatrix}F_{E}\left(\frac{Q_{\phi}}{2}-\frac{\sqrt{3}Q_{\epsilon}}{2} \right)&-F_{T}Q_{\zeta}&-F_{T}Q_{\eta}\\ -F_{T}Q_{\zeta}&F_{E}\left(\frac{Q_{\phi}}{2}+\frac{\sqrt{3}Q_{\epsilon}}{2} \right)&-F_{T}Q_{\xi}\\ -F_{T}Q_{\eta}&-F_{T}Q_{\xi}&-F_{E}Q_{\phi}\end{bmatrix}, \tag{2}\]
where the orbital degrees of freedom (\(t_{2}^{(x)}\), \(t_{2}^{(y)}\) and \(t_{2}^{(z)}\)) are depicted by the rows and columns of the \(3\times 3\) matrix and the vibrational degrees of freedom are expressed by the \(Q_{i}\) configuration coordinates. \(F_{\mathrm{T}}\) and \(F_{\mathrm{E}}\) are the linear vibronic coupling parameters of the corresponding phonon symmetries. The linear vibronic coupling can be expressed with the Jahn-Teller energy (\(E_{\mathrm{JT}}\)) as [see Eq. (3.46, 3.48) in Ref. [55] ]
\[F_{E}=\sqrt{2\hbar\omega_{E}E_{\mathrm{JT}}^{E}},\hskip 28.452756ptF_{T}= \sqrt{\frac{3}{2}\hbar\omega_{T}E_{\mathrm{JT}}^{T}}, \tag{3}\]
where \(\hbar\omega\) is the phonon energy of the corresponding JT active mode in the harmonic approximation. Our calculated results for the above parameters are shown in Fig. 3, where we fit a linear vibronic coupling model for each JT mode. Thus, the adiabatic potential-energy surface (APES)
\[\varepsilon(\mathbf{Q})= \frac{1}{2}\hbar\omega_{\mathrm{E}}\left(Q_{\varepsilon}^{2}+Q_{ \theta}^{2}\right)\mathbf{I}+\frac{1}{2}\hbar\omega_{\mathrm{T}}\left(Q_{ \xi}^{2}+Q_{\zeta}^{2}+Q_{\eta}^{2}\right)\mathbf{I}\] \[+\mathbf{W}(\mathbf{Q})-\frac{\delta}{3}\begin{pmatrix}0&1&1\\ 1&0&1\\ 1&1&0\end{pmatrix}. \tag{4}\]
\begin{table}
\begin{tabular}{c c c c c c c c c c} & \multicolumn{6}{c}{HSE06 calculation} & \multicolumn{6}{c}{Experimental} \\ & Atoms & \(A_{xx}\) (MHz) & \(A_{yy}\) (MHz) & \(A_{zz}\) (MHz) & \(\theta\) (\({}^{\circ}\)) & \(A_{xx}\) (MHz) & \(A_{yy}\) (MHz) & \(A_{zz}\) (MHz) & \(\theta\) (\({}^{\circ}\)) \\ \hline h-site & Si\({}_{1}\) & -323.3 & -323.3 & -492.4 & 0 & 297.29 & 297.29 & 433.75 & 0 \\ & Si\({}_{2}\)-Si\({}_{3}\)-Si\({}_{4}\) & -17.1 & -15.0 & -30.5 & 90.33 & 39.23 & 39.23 & 59.12 & 98 \\ k-site & Si\({}_{1}\) & -130.7 & -127.2 & -198.4 & 8.7 & 124.4 & 94.14 & 181.01 & 7.7 \\ & Si\({}_{2}\) & -67.1 & -64.7 & -105.5 & 125.79 & 91.06 & 89.38 & 132.81 & 121.5 \\ & Si\({}_{3}\)-Si\({}_{4}\) & -99.45 & -96.3 & -148.6 & 102.23 & 107.88 & 106.76 & 154.67 & 103.2 \\ \end{tabular}
\end{table}
Table 1: HSE06 calculated and experimental [53] hyperfine principal values (\(A_{xx}\), \(A_{yy}\) and \(A_{zz}\)) for the nearest neighbor silicon atoms (see Fig. 2) in the V\({}_{\mathrm{C}}^{+}\) defect. The core contributions are included in the results. \(\theta\) is the polar angle of the \(A_{zz}\) principal axis, measured from the c-axis of the 4H-SiC crystal.
consists of the phonon energy associated to the harmonic potential of the electronic APES (\(\mathbf{I}\) is the identity matrix), the vibronic interaction, and the crystal-field splitting (\(\delta\)). The latter can be obtained by artificially turning off the origin of the occupational instability, i.e., by smearing the electron occupation on all the three \(t_{2}\) defect orbitals. The residual splitting corresponds to the crystal field parameter \(\delta=85\) meV. The associated Hamiltonian describes three coupled five-dimensional harmonic oscillators as
\[\hat{H}= \hbar\omega_{E}\left(\hat{a}_{\varepsilon}^{\dagger}\hat{a}_{\varepsilon}+\hat{a}_{\vartheta}^{\dagger}\hat{a}_{\vartheta}+1\right) \tag{5}\] \[+\hbar\omega_{T}\left(\hat{a}_{\xi}^{\dagger}\hat{a}_{\xi}+\hat{a}_{\eta}^{\dagger}\hat{a}_{\eta}+\hat{a}_{\zeta}^{\dagger}\hat{a}_{\zeta}+\frac{3}{2}\right)\] \[-F_{E}\left(\hat{T}_{\varepsilon}\hat{Q}_{\varepsilon}+\hat{T}_{\vartheta}\hat{Q}_{\vartheta}\right)\] \[-F_{T}\left(\hat{T}_{\xi}\hat{Q}_{\xi}+\hat{T}_{\eta}\hat{Q}_{\eta}+\hat{T}_{\zeta}\hat{Q}_{\zeta}\right)\] \[-\frac{\delta}{3}\left(\hat{T}_{\xi}+\hat{T}_{\eta}+\hat{T}_{\zeta}\right),\]
where \(\hat{a}_{i}^{\dagger}\) is the oscillator \(i\)-mode creation operator, \(\hat{Q}_{i}=\frac{1}{\sqrt{2}}(\hat{a}_{i}^{\dagger}+\hat{a}_{i})\) are the coordinate operators, and the pseudo-spin of \(T\) orbitals is represented by the orbital operators
\[\hat{I}=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix}, \hat{T}_{\varepsilon}=\begin{pmatrix}\frac{\sqrt{3}}{2}&0&0\\ 0&-\frac{\sqrt{3}}{2}&0\\ 0&0&0\end{pmatrix}, \tag{6}\] \[\hat{T}_{\vartheta}=\begin{pmatrix}-\frac{1}{2}&0&0\\ 0&-\frac{1}{2}&0\\ 0&0&1\end{pmatrix}, \hat{T}_{\xi}=\begin{pmatrix}0&0&0\\ 0&0&1\\ 0&1&0\end{pmatrix},\] \[\hat{T}_{\eta}=\begin{pmatrix}0&0&1\\ 0&0&0\\ 1&0&0\end{pmatrix}, \hat{T}_{\zeta}=\begin{pmatrix}0&1&0\\ 1&0&0\\ 0&0&0\end{pmatrix}.\]
We solve the model Hamiltonian using the following ansatz as the basis of the vibronic states
\[\left|\widetilde{\Psi}\right\rangle=\sum_{j,k,l,n,m} \left(c_{jklnm}^{(\varepsilon)}\left|t_{2}^{(\varepsilon)}\right\rangle +c_{jklnm}^{(\vartheta)}\left|t_{2}^{(\vartheta)}\right\rangle\right. \tag{7}\] \[+c_{jklnm}^{(\xi)}\left|t_{2}^{(\xi)}\right\rangle+c_{jklnm}^{(\eta)}\left|t_{2}^{(\eta)}\right\rangle\] \[\left.+c_{jklnm}^{(\zeta)}\left|t_{2}^{(\zeta)}\right\rangle\right)\left|j,k,l,n,m\right\rangle,\]
where \(\mathcal{O}=(j+k+l+n+m)\) is the order of phonon excitations, acting as the cutoff for the basis size. We solve for the first polaronic excited-state energy as a function of the excitation order up to \(\mathcal{O}=10\) and extrapolate beyond the finite-order basis assuming exponential convergence of the energy. The final result for the polaronic gap is 14.4 meV, in excellent agreement with the observed activation energy of the thermal averaging in the EI5 center.
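To illustrate how Eqs. (2)-(4) produce the potential-energy surfaces of Fig. 3, the sketch below builds the \(3\times 3\) vibronic matrix and diagonalizes the APES along a single \(t_{2}\)-type coordinate; the coupling constants, phonon energies, and scan range are placeholder values rather than the fitted parameters of this work (only the crystal-field splitting is set to the quoted \(\delta=85\) meV).

```python
import numpy as np

# Placeholder parameters in eV; delta follows the value quoted in the text.
hw_E, hw_T, F_E, F_T, delta = 0.08, 0.07, 0.15, 0.20, 0.085

def W(Q_theta, Q_eps, Q_xi, Q_eta, Q_zeta):
    """Linear vibronic coupling matrix of Eq. (2) in the t2 orbital basis."""
    s3 = np.sqrt(3.0)
    return np.array([
        [F_E * (Q_theta / 2 - s3 * Q_eps / 2), -F_T * Q_zeta, -F_T * Q_eta],
        [-F_T * Q_zeta, F_E * (Q_theta / 2 + s3 * Q_eps / 2), -F_T * Q_xi],
        [-F_T * Q_eta, -F_T * Q_xi, -F_E * Q_theta],
    ])

def apes(Q_theta, Q_eps, Q_xi, Q_eta, Q_zeta):
    """Adiabatic PES of Eq. (4): harmonic + vibronic + crystal-field terms."""
    harm = 0.5 * hw_E * (Q_theta**2 + Q_eps**2) \
         + 0.5 * hw_T * (Q_xi**2 + Q_eta**2 + Q_zeta**2)
    cf = -(delta / 3.0) * (np.ones((3, 3)) - np.eye(3))
    H = harm * np.eye(3) + W(Q_theta, Q_eps, Q_xi, Q_eta, Q_zeta) + cf
    return np.linalg.eigvalsh(H)     # the three APES branches

# Scan along one t2-symmetric coordinate, as in Fig. 3(a).
for q in np.linspace(-2.0, 2.0, 9):
    print(f"Q = {q:+.2f}  E = {apes(0.0, 0.0, q, 0.0, 0.0)}")
```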
### Excited state calculations
In the following, we describe the electronic structure of the excited states. The lowest lying excited state of the defect can be described as promoting a single electron from the \(a\) orbital to the \(e\) degenerate orbital in the C\({}_{3\nu}\) symmetric ground state configurations. In this vertical excitation, the electron-hole interaction decreases the \(a-e\) KS level gap. We obtain vertical excitation energies of 0.794 eV and 0.967 eV for the k- and h-site defects, respectively. The former is in good agreement with the GW+BSE result of 0.89 eV reported in Ref. [56]. Next, we allow the relaxation of the atomic positions in the restricted high symmetry. During the relaxation, the KS level gap is further decreased. This results in ZPL energies of 0.656 eV and 0.839 eV for the k- and h-site defects, respectively. However, these excited states are JT unstable, further relaxing to C\({}_{1\text{h}}\) symmetry. For the k-site defect, the effect of this relaxation changes the qualitative picture of the electron promotion, as the order of the occupied \(e\) and empty \(a\) level is interchanged. The resulting electronic structure indicates that the k-site defect does
Figure 3: Adiabatic potential energy surfaces as a function of the configuration coordinates corresponding to (a) \(T\)- and (b) \(E\)-symmetric distortions in the V\({}_{C}^{+}\) defect at the k-site. The fitted curve corresponds to the linear Jahn-Teller solution of the \(T\otimes(e\oplus t_{2})\) problem.
not form a stable emitter. This effect is not present at the h-site despite the calculated JT energy of 0.187 eV. We attribute the stability of the h-site emitter to the larger crystal-field splitting. Its final ZPL energy connecting the JT distorted excited state to the high-symmetry ground state is 0.652 eV.
We apply GW+BSE calculations in order to confirm the stability of the optical emission from the JT distorted excited state of the h-site defect. In our partially self-consistent EVGW0 calculations in the ground state C\({}_{\rm 3v}\) geometry, we obtain quasi-particle levels of 1.93 eV, 2.76 eV, and 3.53 eV with respect to the VBM for the \(a\), \(e\), and CBM levels, respectively. We note that the CBM level is obtained in the \(\Gamma\)-point of the 128-atom supercell, which does not fold the \(M\)-point (the CBM k-point). The vertical absorption in the BSE calculation is 1.029 eV. We apply the same methods in the JT distorted excited state geometry, resulting in a vertical emission at 0.187 eV. We obtain a 72% contribution of the \(a\to e\) transition in the excitonic wavefunction, which implies that the exciton wavefunction constructed in the \(\Delta\)SCF procedure is relatively well described. The HSE06 relaxation energy between the optimized JT distorted excited state geometry and the ground state geometry is 0.339 eV in the ground state electronic configuration. The sum of this HSE06 relaxation energy and the BSE vertical emission energy results in an estimate for the ZPL energy of 0.526 eV, which is in good agreement with the ZPL energy obtained from the \(\Delta\)SCF method in a larger supercell. This result confirms the stability of the radiative transition in the h-site defect.
### Photoluminescence spectrum
The photoluminescence line-shape is calculated using the method of Alkauskas _et al._ [57]. The calculated PL spectrum of the h-site V\({}_{\rm C}^{+}\) defect is plotted in Fig. 4. The Huang-Rhys (HR) factor determines the intensity of the phonon sideband; it quantifies the coupling between the electronic transition and the vibrational states. In the one-dimensional approximation, the HR factor is given by
\[S=\frac{E_{\rm FC}}{\hbar\omega_{0}}, \tag{8}\]
where \(\omega_{0}\) is the vibrational frequency of the effective mode and \(E_{\rm FC}\) is the Franck-Condon relaxation energy [58]. The Debye-Waller (DW) factor is the ratio of the ZPL intensity to the total emission intensity. It is directly related to the HR factor by
\[W_{\rm ZPL}=e^{-S}, \tag{9}\]
where \(S\) is the total HR factor. For our calculated \(S=1.605\), the DW factor is 20%.
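As a quick numerical check of Eqs. (8)-(9), the total HR factor quoted above reproduces the stated DW factor; the small script below is illustrative only and not part of the original workflow.

```python
import math

S = 1.605                 # total Huang-Rhys factor from this work
W_zpl = math.exp(-S)      # Debye-Waller factor, Eq. (9)
print(f"Debye-Waller factor: {W_zpl:.1%}")   # ~20%
```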
### Radiative lifetime
The transition dipole moment (\(\mu\)) between the ground and excited state is calculated using the pseudo-wavefunctions of the defect Kohn-Sham levels \(a\) and \(e\) in the JT distorted excited state. The radiative transition rate of the h-site defect is calculated as
\[\frac{1}{\tau_{r}}=\frac{n\omega^{3}|\mu|^{2}}{3\pi\epsilon_{0}\hbar c^{3}}, \tag{10}\]
where \(c\) is the speed of light, \(\hbar\omega=0.652\) eV is the calculated ZPL energy, \(\mu=29.8\) D is the optical-transition dipole moment, \(n=2.647\) is the refractive index of 4H-SiC, and \(\epsilon_{0}\) is the vacuum permittivity. The resulting lifetime is \(\tau_{r}=9.3\) ns, which is comparable to the values reported for the negatively charged nitrogen-vacancy center (NV\({}^{-}\)) in diamond [59]. We also note that the JT distortion has a large effect on the optical transition dipole moment. It slightly distorts the wavefunction along with the geometry relaxation. However, the largest effect is the greatly decreased energy separation of the Kohn-Sham levels resulting in a large transition dipole moment. Performing the same calculation in the restricted C\({}_{\rm 3v}\) symmetric h- and k-site excited states, we obtain optical lifetimes of 198 ns and 321 ns, respectively. The latter results are in line with previous GW plus Bethe-Salpeter equation calculations reporting negligible absorption in the \(a\) to \(e\) defect level transition [56].
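For reproducibility, the following sketch evaluates Eq. (10) with the parameters quoted above (physical constants in SI units); the script itself is ours and not part of the published workflow.

```python
import math

# physical constants (SI units)
hbar  = 1.054571817e-34   # J s
eps0  = 8.8541878128e-12  # F/m
c     = 2.99792458e8      # m/s
eV    = 1.602176634e-19   # J
debye = 3.33564e-30       # C m

n_r   = 2.647             # refractive index of 4H-SiC
omega = 0.652 * eV / hbar # ZPL angular frequency
mu    = 29.8 * debye      # optical-transition dipole moment

rate = n_r * omega**3 * mu**2 / (3 * math.pi * eps0 * hbar * c**3)
print(f"radiative lifetime: {1 / rate * 1e9:.1f} ns")   # ~9.3 ns
```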
### Nonradiative transition
Finally, we describe the nonradiative relaxation rate from the JT distorted excited state to the high-symmetry ground state, coupled by the electron-phonon interaction, for the h-site carbon vacancy defect. In this work, we consider the interaction to first order in the electron-phonon coupling. Under this assumption, the capture rate is given by Fermi's golden rule [60]:
\[r=\frac{2\pi}{\hbar}g\sum_{m}\omega_{m}\sum_{n}\left|\Delta H^{e-ph}_{im;fn} \right|^{2}\delta(E_{im}-E_{fn}), \tag{11}\]
Figure 4: Simulated PL spectrum of the h-site V\({}_{\rm C}^{+}\) defect. The calculated HR and DW factors are 1.605 and 20%, respectively.
where \(\omega_{m}\) is the thermal occupation of the vibrational state \(m\) of the excited electronic state, \(E_{im}\) and \(E_{fn}\) are the total energies of the initial and final vibronic states, and \(g\) is the degeneracy factor of the final state. \(\Delta H_{im;fn}^{e-ph}\) is the electron-phonon coupling matrix element, which can be expressed as
\[\Delta H_{im;fn}^{e-ph}=\sum_{k}\left\langle\Psi_{i}\left|\partial H/\partial Q _{k}\right|\Psi_{f}\right\rangle\left\langle\chi_{im}\left|Q_{k}-Q_{0;k} \right|\chi_{fn}\right\rangle, \tag{12}\]
where \(H\) is the Hamiltonian of the combined system of electrons and ions. The sum runs over all phonon modes \(Q_{k}\), and \(Q_{0;k}\) is the projection of the initial atomic configuration \(Q_{0}\) along each of the phonon coordinates. \(W_{if}=\left\langle\Psi_{i}\left|\partial H/\partial Q_{k}\right|\Psi_{f}\right\rangle\) is the electron-phonon coupling matrix element pertaining to the phonon mode \(k\). In this work, the corresponding parameters are \(g=3\), \(W_{if}=0.15\) eV amu\({}^{-\frac{1}{2}}\) Å\({}^{-1}\), \(\omega_{a}=0.05\) eV, and \(\omega_{e}=0.072\) eV. The calculated nonradiative lifetime for the one-particle transition from the \(e\) to the \(a\) Kohn-Sham level is \(\tau_{nr}\sim 3.38\) ns. For this transition, the quantum efficiency (QE) of 27% was calculated as
\[\text{QE}=\frac{\tau_{nr}}{\tau_{r}+\tau_{nr}}, \tag{13}\]
where \(\tau_{nr}\) is the nonradiative lifetime and \(\tau_{r}\) is the radiative lifetime. The calculated nonradiative lifetime at elevated temperatures is shown in Fig. 5.
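Plugging the two calculated lifetimes into Eq. (13) reproduces the quoted quantum efficiency; the snippet below is illustrative arithmetic only.

```python
tau_r, tau_nr = 9.3, 3.38          # ns, radiative and nonradiative lifetimes
qe = tau_nr / (tau_r + tau_nr)     # Eq. (13)
print(f"quantum efficiency: {qe:.0%}")   # ~27%
```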
## IV Discussion
The positively charged carbon vacancy defect in 4H-SiC is a promising paramagnetic defect for quantum technology applications. Owing to its non-zero electron spin, its ground-state structure has already been thoroughly investigated by _ab initio_ calculations. Here we report the in-depth characterization of the excited state optical properties of the defect, which have not been detected in experiments to date. We find that the Jahn-Teller effect plays an important role in the stability of the excited state, enabling a stable C\({}_{1\text{h}}\) configuration at the h-site. We propose the V\({}_{\text{C}}^{+}\) defect as a near-infrared (IR-B) emitter with a calculated ZPL energy of 0.652 eV. We find the properties of this emission very promising for the IR-B region. It shows a very short optical lifetime of 9.3 ns, which compares favorably with the 11.6 ns, 14 ns, and 5.5 ns lifetimes measured for the NV\({}^{-}\) center in diamond [59], the divacancy [29], and the negatively charged silicon vacancy [61] in 4H-SiC, respectively. Moreover, the quantum efficiency of this transition is calculated to be 27%. The optical coherence is remarkable as well, with a calculated Debye-Waller factor of 20%. We conclude that the carbon vacancy defect at the h-site in 4H-SiC can be observed in photoluminescence and may act as a single-photon emitter when the defects are engineered to be isolated in the host material.
## Acknowledgement
AG acknowledges the Hungarian NKFIH grant No. KKP129866 of the National Excellence Program of Quantum-coherent materials project and the support for the Quantum Information National Laboratory from the Ministry of Innovation and Technology of Hungary, and the EU H2020 project QuanTELCO (Grant No. 862721). The calculations were performed on the Hungarian Supercomputer Centre at KIFU.
|
2303.05309
|
MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup
for Visual Speech Translation and Recognition
|
Multi-media communications facilitate global interaction among people.
However, despite researchers exploring cross-lingual translation techniques
such as machine translation and audio speech translation to overcome language
barriers, there is still a shortage of cross-lingual studies on visual speech.
This lack of research is mainly due to the absence of datasets containing
visual speech and translated text pairs. In this paper, we present
\textbf{AVMuST-TED}, the first dataset for \textbf{A}udio-\textbf{V}isual
\textbf{Mu}ltilingual \textbf{S}peech \textbf{T}ranslation, derived from
\textbf{TED} talks. Nonetheless, visual speech is not as distinguishable as
audio speech, making it difficult to develop a mapping from source speech
phonemes to the target language text. To address this issue, we propose
MixSpeech, a cross-modality self-learning framework that utilizes audio speech
to regularize the training of visual speech tasks. To further minimize the
cross-modality gap and its impact on knowledge transfer, we suggest adopting
mixed speech, which is created by interpolating audio and visual streams, along
with a curriculum learning strategy to adjust the mixing ratio as needed.
MixSpeech enhances speech translation in noisy environments, improving BLEU
scores for four languages on AVMuST-TED by +1.4 to +4.2. Moreover, it achieves
state-of-the-art performance in lip reading on CMLR (11.1\%), LRS2 (25.5\%),
and LRS3 (28.0\%).
|
Xize Cheng, Linjun Li, Tao Jin, Rongjie Huang, Wang Lin, Zehan Wang, Huangdai Liu, Ye Wang, Aoxiong Yin, Zhou Zhao
|
2023-03-09T14:58:29Z
|
http://arxiv.org/abs/2303.05309v1
|
# MixSpeech: Cross-Modality Self-Learning with Audio-Visual Stream Mixup
###### Abstract
Multi-media communications facilitate global interaction among people. However, despite researchers exploring cross-lingual translation techniques such as machine translation and audio speech translation to overcome language barriers, there is still a shortage of cross-lingual studies on visual speech. This lack of research is mainly due to the absence of datasets containing visual speech and translated text pairs. In this paper, we present **AVMuST-TED**, the first dataset for Audio-**V**isual **M**ultilingual **S**peech **T**ranslation, derived from **TED** talks. Nonetheless, visual speech is not as distinguishable as audio speech, making it difficult to develop a mapping from source speech phonemes to the target language text. To address this issue, we propose MixSpeech, a cross-modality self-learning framework that utilizes audio speech to regularize the training of visual speech tasks. To further minimize the cross-modality gap and its impact on knowledge transfer, we suggest adopting mixed speech, which is created by interpolating audio and visual streams, along with a curriculum learning strategy to adjust the mixing ratio as needed. MixSpeech enhances speech translation in noisy environments, improving BLEU scores for four languages on AVMuST-TED by +1.4 to +4.2. Moreover, it achieves state-of-the-art performance in lip reading on CMLR (11.1%), LRS2 (25.5%), and LRS3 (28.0%).
## 1 Introduction
Multi-media techniques, including Audio-Visual Speech Recognition (AVSR) [4, 1, 2, 50], Audio-Visual Speech Translation (AVST) [8, 35, 58], and Audio-Visual Speech Generation (AVSG) [45, 30, 23], are commonly employed in various online communication scenarios, such as conferences, education, and healthcare. Because these tools support ultra-remote communication, many online interactions involve multiple languages, prompting the need to address cross-lingual challenges. Several works have attempted to tackle these challenges, including Machine Translation (MT) [9, 34, 14] for text utterances, Speech Translation (ST) [55, 18] for audio utterances, and Speech-to-Speech Translation (S2ST) [55, 18, 16, 31, 27] for simultaneous interpretation. However, research on cross-lingual visual speech is still limited, as illustrated in Figure 1. As an essential component of multi-media speech, visual speech can be combined with audio to enhance the recognition and understanding of speech content as audio-visual speech [1, 2, 51], and it is the only resource for understanding speech content in audio-disabled scenarios [33].
Visual speech translation has never been studied, mainly because of the lack of visual speech datasets with translated texts in different languages. The few existing works [54, 57, 41] also cannot be quantitatively verified for this reason, which limits their persuasiveness. Available visual speech corpora are often very scarce compared to audio speech owing to the high demands of visual speech for model training, which requires mostly-frontal, high-resolution videos with a sufficiently high frame rate such that motions around the lip area are clearly captured [22]. In this paper, we propose the
Figure 1: Diagram of speech tasks. Audio speech and visual speech are paired parallel speech streams which can be employed for speech recognition and speech translation. However, only Lip-Translation remains unexplored.
first Audio-Visual Multilingual Speech Translation dataset, AVMuST-TED. During the acquisition process, we first screen out videos with professional translations in four different languages from TED talks, which follow strict translation and review processes, and then determine the real speaker's talking head by checking whether each pair of visual speech (i.e., talking head) and audio speech matches, in the manner of [1, 2]. Incidentally, this dataset can also be used for quantitative evaluation of other multi-modality translation tasks, such as cross-lingual audio-visual speech generation [48, 57].
A cascaded model comprising a speech recognition model and a machine translation model can handle speech translation tasks, but it suffers from error accumulation across the cascade and cannot process languages without a written form (e.g., Minnan). Our proposed end-to-end model, which translates directly from source speech to target text, addresses these issues. However, visual speech is less distinguishable than audio speech, making it difficult to develop a mapping from source speech phonemes to the target language text. To address this, we introduce MixSpeech, a method that first pretrains the decoder using high-discrimination audio speech to obtain a mapping from speech phonemes to text and then generalizes this mapping to the visual speech task through cross-modality self-learning. Furthermore, since audio speech and visual speech are two distinct modalities of speech, there is a significant modality gap between them that hinders knowledge transfer. To narrow this gap and improve knowledge transfer, we propose mixed speech, which is created by interpolating audio and visual streams, rather than relying solely on audio speech. We also propose a curriculum-learning [7] based strategy to adjust the mixing ratio as the training progresses and the cross-modality integration deepens.
The code and dataset are available.1 The main contributions of this paper are as follows:
Footnote 1: [https://github.com/Exqc/AVMuST-TED](https://github.com/Exqc/AVMuST-TED)
* We present the first lip-translation baseline and introduce the Audio-Visual Multilingual Speech Translation dataset, AVMuST-TED.
* We present a cross-modality self-learning framework that leverages distinguishable audio speech translation to regularize visual speech translation for effective cross-modality knowledge transfer.
* We propose to adopt mixed speech, interpolated from audio and visual speeches, together with a curriculum-learning based mixing-ratio adjustment strategy, to reduce the inter-modality gap during knowledge transfer.
* We achieve state-of-the-art performance in lip translation for four languages on AVMuST-TED, with a +1.4 to +4.2 boost in BLEU scores and in lip reading on CMLR (11.1%), LRS2 (25.5%) and LRS3 (28.0%).
## 2 Related work
### Audio-Visual Speech
Audio and visual speeches are two separate modalities that convey speech content. Numerous works [42, 12, 1, 2, 44, 26] have explored ways to extract information from speech using these modalities. Speech recognition [42, 6, 21] is widely used in online meetings and social applications to recognize speech content. Speech translation [55, 62, 18] is commonly used in simultaneous interpretation applications for cross-lingual communication in cross-border travel and meetings. Keyword spotting [5, 49, 28] is employed in short video applications to quickly retrieve relevant content. Additionally, in noisy scenarios, relevant speech tasks [13, 20, 44, 39] rely on visual speech to avoid interference from surrounding speech and background noise. Despite the growing interest in speech tasks that rely on visual speech, research [54, 57] on visual speech translation is limited and lacks validation due to the absence of multilingual audio-visual speech transcription datasets. This paper proposes a baseline for visual speech translation and introduces the first large-scale audio-visual multilingual translation dataset, AVMuST-TED, which includes 706 hours of audio-visual speech and translation pairs in Spanish, French, Italian, and Portuguese. AVMuST-TED lays a solid foundation for future cross-lingual audio-visual translation tasks, such as Cross-Lingual Talking Head Generation [41].
### Transfer learning from Audio to Visual
Many researchers [47, 50, 36] attempt to enhance the representation of visual speech by leveraging the corresponding audio speech, as the two are paired parallel speech streams. Some [47, 50, 36] use knowledge distillation to bootstrap the training of visual speech models with audio speech models, while others [67, 36] have proposed various distillation strategies to optimize the representation of visual speech by mining the intrinsic connection between audio and visual speeches. Some [50] also use self-supervised learning, with audio as auxiliary supervision for visual utterances, to obtain fine-grained visual representations. The success of these works demonstrates the critical role of audio speech, which has a higher discrimination compared to visual speech, in training visual speech models. However, previous works face the modality-shift problem during knowledge transfer because they start directly from speeches of two different modalities, audio and visual, which exhibit a significant modality gap. In this paper, we propose a cross-modality self-learning framework, MixSpeech, that uses synthetic mixed speech to regularize visual speech translation for effective cross-modality knowledge transfer, reducing the gap between the two modalities during knowledge transfer.
### Mixup for Cross-Modality Transfer
Many works [64, 19, 60, 18, 23] bridge the gap between modalities with mixup. [63] proposes mixup for data augmentation to improve model robustness. [10] suggests mixing at the representation level to mine implicit associations between labeled and non-labeled sentences. Other works [60, 56, 17, 25] also use mixing to build bridges between different modalities. Some [60, 17] use CLIP [46] to retrieve semantically consistent images with text tokens and synthesize mixed sentences for text-visual consistency representation training. Others [56, 18] construct manifold mixup interpolations based on semantic consistency between audio and text to enhance the understanding of audio with textual datasets. By implementing the mixup strategy, these studies have shown notable improvements across a range of tasks, highlighting its potential to facilitate knowledge transfer between different modalities. However, previous works use fixed hyperparameters [63] or mapping functions [18] for the mixing ratio, which are typically not optimal and cannot adapt to the training situation. In this paper, we propose an uncertainty-based [40] curriculum learning [7] strategy that gradually adjusts the mixing ratio, and we apply the mixup strategy to cross-modality knowledge transfer between audio and visual speeches for the first time.
## 3 Method
### Task Formulation
As the twin task of speech recognition, speech translation involves translating source-language speech into target-language text. The speech translation model takes an audio speech utterance \(\mathbf{A}\)=\(\left\{\mathbf{A}_{t}\right\}_{t=1}^{T}\in\mathbb{R}^{T\times D}\) or a visual speech utterance \(\mathbf{V}\)=\(\left\{\mathbf{V}_{t}\right\}_{t=1}^{T}\in\mathbb{R}^{T\times D}\) as input and generates the target-language text \(\mathbf{w}\)=\(\left\{\mathbf{w}_{i}\right\}_{i=1}^{S}\), where \(\mathbf{A}_{t}\) and \(\mathbf{V}_{t}\) represent the \(t\)-th features in the audio and visual speeches, and \(\mathbf{w}_{i}\) represents the \(i\)-th word in the target-language translation with a total length of \(S\). Note that we stack 4 adjacent acoustic frames together to synchronize with the visual speech, so both streams have \(T\) frames.
### Overview
We propose a cross-modal self-learning framework for visual speech translation with audio speech regularization, named MixSpeech, as illustrated in Figure 2. This model consists of three modules - a feature extractor for extracting speech embeddings, a speech encoder for attending to the contextual dependencies of speech, and a target language-oriented translation decoder. We utilize the pre-trained feature extractor (AV-Encoder) and speech encoder (Speech-Encoder) from the AV-Hubert [50]
Figure 2: Illustration of our proposed MixSpeech. We first pretrain the model with audio speech translation, as shown in the dashed box, and then train the visual speech translation with mixed speech regularization. The blank dashed boxes denote the modality-missing speech.
to extract speech representations from both audio and visual speech utterances. Additionally, a randomly initialized translation decoder (Trans-Decoder) is used to autoregressively decode the speech representation into the target-language text. MixSpeech is a two-stage training process: 1) Pretraining the translation decoder with high-discrimination audio speech utterances to learn the inter-lingual mapping between source-language phonemes and target-language text, as detailed in subsection 3.3. 2) Aligning visual speech with audio speech to transfer the inter-lingual mapping from audio to visual speech, as described in subsection 3.5. The mixed speech (subsection 3.4) is synthesized by interpolating audio speech with visual speech in MixupSpeech, bridging the modality gap and enhancing knowledge transfer.
### Pretraining with Audio Speech
For uni-modality audio speech \(\mathbf{A}\in\mathbb{R}^{T\times D}\) or visual speech \(\mathbf{V}\in\mathbb{R}^{T\times D}\), the uni-modality audio-visual feature \(\mathbf{e}^{u}\)=\(\left\{\mathbf{e}^{u}_{t}\right\}_{t=1}^{T}\in\mathbb{R}^{T\times 2D}\) fed into feature extractor can be defined as:
\[\mathbf{e}^{u}_{t}=\begin{cases}\mathtt{concat}(\mathbf{0}_{D},\mathbf{V}_{t} ),&\mathbf{V}_{t}\neq\text{None},\\ \mathtt{concat}(\mathbf{A}_{t},\mathbf{0}_{D}),&\mathbf{A}_{t}\neq\text{None},\end{cases} \tag{1}\]
where \(\mathbf{0}_{D}\) denotes the feature of the missing modality, following the practice of [50]. We then obtain the audio-visual fusion feature \(\mathbf{e}^{f}\in\mathbb{R}^{T\times D}\) with the AV-Encoder. The transformer-based Speech-Encoder allows us to obtain the phoneme embedding \(\mathbf{e}^{p}\in\mathbb{R}^{T\times D}\) with the contextual speech details. A target-language-oriented translation decoder, Trans-Decoder, is appended to autoregressively decode the phoneme embedding \(\mathbf{e}^{p}\) into the target probabilities \(P^{u}\), where \(P^{u}\)=\(\{P^{u}_{t}\}_{t=1}^{S}\)=\(\{p(\mathbf{w}_{t}|\{\mathbf{w}_{i}\}_{i=1}^{t-1},\mathbf{e}^{p})\}_{t=1}^{S}\) represents the probability of the \(t\)-th word being \(\mathbf{w}_{t}\) given the previous \(t-1\) predictions \(\{\mathbf{w}_{i}\}_{i=1}^{t-1}\), and \(S\) is the length of the target-language translation. During pretraining, the overall model is trained on audio speech with the cross-entropy loss:
\[\mathcal{L}_{CE}\text{=}-\sum_{t=1}^{S}\log p(w_{t}|\{w_{i}\}_{i=1}^{t-1}, \mathbf{e}^{p}). \tag{2}\]
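For concreteness, a minimal PyTorch sketch of the zero-filled concatenation in Eq. (1) is given below; the function name and tensor layout are illustrative assumptions, not the released implementation.

```python
import torch

def unimodal_features(audio: torch.Tensor = None, video: torch.Tensor = None):
    """Eq. (1): zero-fill the missing stream and concatenate along the feature axis.

    audio, video: (T, D) tensors; exactly one of them is provided."""
    present = video if video is not None else audio
    zeros = torch.zeros_like(present)
    if video is not None:
        return torch.cat([zeros, video], dim=-1)   # (T, 2D), audio slot zeroed
    return torch.cat([audio, zeros], dim=-1)       # (T, 2D), video slot zeroed
```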
### Audio-Visual Speech Mixing
Audio and visual speeches have a huge modality gap, which greatly impacts knowledge transfer across modalities. We attempt to employ mixed speech to bridge two different modalities of speech. Since the pair of audio and visual speeches is strictly temporally synchronous, we take advantage of this property to interpolate the mixed speech. For a pair of synchronized audio and video speech \((\mathbf{A},\mathbf{V})\in\mathbb{R}^{2\times T\times D}\), each visual feature \(\mathbf{V}_{t}\) at \(t\)-th frame has its corresponding audio feature \(\mathbf{A}_{t}\), representing the same phonetic content. We interpolate with probability \(\phi\) to obtain a mixed speech \(\mathbf{e}^{m}\)=\(\{\mathbf{e}^{m}_{t}\}_{t=1}^{T}\in\mathbb{R}^{T\times 2D}\) derived partly from audio speech and partly from visual speech:
\[\mathbf{e}^{m}_{t}=\begin{cases}\mathtt{concat}(\mathbf{0}_{D},\mathbf{V}_{t} ),&p<\phi,\\ \mathtt{concat}(\mathbf{A}_{t},\mathbf{0}_{D}),&p\geq\phi,\end{cases} \tag{3}\]
where \(p\) is sampled from the uniform distribution \(U(0,1)\) and \(\phi\) is the speech mixing ratio. In particular, we propose a curriculum learning [7] based mixing-ratio adjustment method that adapts \(\phi\) appropriately as the training progresses. The prediction uncertainty [40] indicates the confidence of the prediction (smaller is better), and we take it as the signal to adjust the mixing ratio:
\[\mathbf{u}=\frac{1}{S}\sum_{t=1}^{S}\mathtt{Entropy}(P_{t}). \tag{4}\]
If the discrimination of the mixed speech is insufficient to regularize visual speech translation for \(n\) consecutive steps (\(\Delta\mathbf{u}=\mathbf{u}^{v}-\mathbf{u}^{m}<k\,\mathbf{u}^{v}\), where \(\mathbf{u}^{v}\) and \(\mathbf{u}^{m}\) represent the uncertainty of the uni-modality (visual) and mixed speech, respectively; the threshold hyperparameter \(k\) is set to 0.05 and \(n\) to 20 in our work), we gradually increase the proportion of audio at a rate of \(\alpha\) (\(\phi^{\prime}\)=\(\alpha\)\(\phi\)). We initialize \(\phi\)=\(0.1\) to prevent an excessive initial modality gap and maintain \(\phi\in[0.1,0.9]\) throughout the training process.
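The following sketch illustrates one possible reading of Eqs. (3)-(4) and the curriculum-style update of \(\phi\); variable names and the rescaling factor \(\alpha\) (whose value is not specified here) are our own assumptions rather than the released code.

```python
import torch

def mix_speech(audio, video, phi):
    """Eq. (3): per-frame mix of synchronized (T, D) audio/video streams.
    With probability phi a frame is taken from the visual stream."""
    zeros = torch.zeros_like(audio)
    vis = torch.cat([zeros, video], dim=-1)            # (T, 2D) visual frame
    aud = torch.cat([audio, zeros], dim=-1)            # (T, 2D) audio frame
    take_video = (torch.rand(audio.shape[0]) < phi).unsqueeze(-1)   # p ~ U(0, 1)
    return torch.where(take_video, vis, aud)

def uncertainty(probs):
    """Eq. (4): mean per-step entropy of the (S, V) output distribution."""
    return -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean()

def update_phi(phi, u_vis, u_mix, streak, alpha, k=0.05, n=20):
    """Rescale phi (phi' = alpha * phi) once the mixed speech has not been
    sufficiently more confident than visual speech for n consecutive steps,
    keeping phi within [0.1, 0.9]."""
    streak = streak + 1 if (u_vis - u_mix) < k * u_vis else 0
    if streak >= n:
        phi, streak = min(max(alpha * phi, 0.1), 0.9), 0
    return phi, streak
```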
### Cross-Modality Self-Learning for Speech
Since audio speech is more distinguishable than visual speech, we intend to boost visual speech translation with knowledge from audio speech. The mixed speech bridges the gap between audio and visual speech, allowing us to strengthen cross-modality knowledge transfer. With the audio speech feature \(\mathbf{A}\in\mathbb{R}^{T\times D}\) and the visual speech feature \(\mathbf{V}\in\mathbb{R}^{T\times D}\) fed into the modules with shared parameters, the uni-modality visual speech feature \(\mathbf{e}^{u}\) and the mixed speech feature \(\mathbf{e}^{m}\) are decoded into the target probabilities \(P^{u}\) and \(P^{m}\), respectively.
After the pre-training with audio speech translation, the model already performs well on mixed speech containing partial audio speech, so we adopt the Jensen-Shannon Divergence (JSD) [38] to regularize the probabilities of these two different speeches:
\[\mathcal{L}_{JSD}=\sum_{t=1}^{S}JSD(P^{m}_{t}\|P^{u}_{t}). \tag{5}\]
As this probability is defined over the entire training vocabulary, we are able to perform fine-grained regularization to enhance the training of visual speech. Meanwhile, we also minimize the cross-entropy loss between both speech translations and the reference translation, \(\mathcal{L}\)=\(\mathcal{L}^{uni}_{CE}\)+\(\lambda_{1}\)\(\mathcal{L}^{mix}_{CE}\)+\(\lambda_{2}\)\(\mathcal{L}_{JSD}\), where \(\lambda_{1}\) and \(\lambda_{2}\) are hyperparameters of the loss weights, with \(\lambda_{1}\)=\(\lambda_{2}\)=\(1.0\) in this work.
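A minimal sketch of the combined objective (the two cross-entropy terms plus Eq. (5)) for unbatched \((S, V)\) decoder logits might look as follows; it is our paraphrase of the loss, not the official implementation.

```python
import torch
import torch.nn.functional as F

def jsd(p, q, eps=1e-12):
    """Eq. (5): Jensen-Shannon divergence between two (S, V) probability
    tensors, summed over the S decoding steps."""
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * (a.clamp_min(eps).log() - b.clamp_min(eps).log())).sum(-1)
    return (0.5 * kl(p, m) + 0.5 * kl(q, m)).sum()

def mixspeech_loss(logits_uni, logits_mix, targets, lambda1=1.0, lambda2=1.0):
    """L = L_CE^uni + lambda1 * L_CE^mix + lambda2 * L_JSD."""
    ce_uni = F.cross_entropy(logits_uni, targets)      # visual-only branch
    ce_mix = F.cross_entropy(logits_mix, targets)      # mixed-speech branch
    l_jsd = jsd(logits_mix.softmax(-1), logits_uni.softmax(-1))
    return ce_uni + lambda1 * ce_mix + lambda2 * l_jsd
```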
## 4 Experiments
### Datasets
**AVMuST-TED**. To obtain a corpus for AVST, we screened a set of TED and TEDx talks with multilingual subtitles as the data source. All transcriptions and translations are performed strictly following the TED Translation Guidelines and require collaboration between at least one translator (or transcriber) and one reviewer. The prior lip-reading dataset acquisition pipeline is followed to crop face-tracks, and an audio-visual alignment network, SynNet, is adopted for speaker proofreading. Table 1 compares AVMuST-TED with related datasets, and it is the first audio-visual speech translation dataset containing translations from English (En) to four target languages: Spanish (Es), French (Fr), Italian (It), and Portuguese (Pt). These four languages have the most translated subtitles in TED, and 1024/1536 pieces of data are randomly sampled for each language as the _test/validation_ set. The information about AVMuST-TED is detailed in Appendix A.
**LRS2&3**[1, 2], two commonly used, publicly available in-the-wild English audio-visual speech recognition datasets, are adopted to demonstrate lip-reading performance; they contain 224 hours of video from BBC television shows and 433 hours of video from TED and TEDx talks, respectively. The training data in both datasets is divided into two partitions, namely _Pretrain_ and _Train_, both of which are transcribed from videos to text at the sentence level. The only difference is that the video clips in the _Pretrain_ partition are not strictly trimmed and are sometimes longer than the corresponding text. In our experiments, we employ different amounts of training data from LRS2 and LRS3, including _Pretrain+Train_ (224/433h) for the high-resource setting and _Train_ (29/30h) for the low-resource setting.
**CMLR**[66], a widely used dataset for Mandarin audio-visual speech recognition, contains 61 hours of audio-visual speech utterances collected from Chinese TV stations. In our experiments, we adopt this dataset to demonstrate the performance of our proposed MixSpeech on low-resource languages such as Mandarin. Additionally, we sample a training set containing only 12 hours of utterances in the manner of [67] for the low-resource scenario.
### Evaluation and Implementation Details
In this paper, we measure the performance of MixSpeech on two speech tasks, speech recognition and speech translation. For speech recognition, the word error rate (WER) is adopted as the evaluation metric, defined as WER=\((S+D+I)/M\), where \(S\), \(D\), and \(I\) represent the numbers of substituted, deleted, and inserted words, and \(M\) is the number of words in the reference. As for speech translation, the case-sensitive detokenized BLEU score is computed using SacreBLEU [43], following the same evaluation methodology as in previous speech translation works [15, 59]. The implementation details are provided in Appendix B due to page limitations.
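For reference, a self-contained implementation of the WER metric as defined above (our own helper, unrelated to the SacreBLEU toolkit) could look as follows:

```python
def wer(reference: list, hypothesis: list) -> float:
    """WER = (S + D + I) / M via word-level Levenshtein alignment."""
    m, n = len(reference), len(hypothesis)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # all deletions
    for j in range(n + 1):
        d[0][j] = j                      # all insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + sub)    # substitution
    return d[m][n] / m

print(wer("the cat sat".split(), "the cat sat down".split()))  # one insertion -> 0.33
```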
### Performance of Speech Translation
**End-To-End Models VS. Cascaded Models.** Table 2 presents a comparison of the lip translation performance between two representative methods: 1) an end-to-end model,
| **Dataset** | **En** | **Es** | **Fr** | **It** | **Pt** | **#Lang** | **#ΣHrs** | **#ΣSents** | **#Tokens (src)** | **#Tokens (tgt)** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| _Audio-Only_ | | | | | | | | | | |
| LibriSpeech [42] | 960h | - | - | - | - | 1 | 960h | 180K | 5.9M | 5.9M |
| MuST-C [15] | - | 504h | 492h | 465h | 385h | 8 | 3,617h | 2,016K | 38.1M | 35.8M |
| VoxPopuli [59] | 543h | 441h | 427h | 461h | - | 16 | 5,967h | 2,045K | 65.0M | 60.1M |
| _Audio-Visual_ | | | | | | | | | | |
| LRS2 [1] | 224h | - | - | - | - | 1 | 224h | 143K | 2.3M | 2.3M |
| LRS3 [2] | 433h | - | - | - | - | 1 | 433h | 151K | 4.2M | 4.2M |
| AVMuST-TED (ours) | - | 198h | 185h | 165h | 158h | 4 | 706h | 925K | 7.3M | 7.0M |

Table 1: Comparison of audio-visual speech recognition/translation datasets. The En-Pt columns give the speech hours per target language. #Lang denotes the number of target languages; #ΣHrs denotes the overall duration of speech in the dataset; #ΣSents and #Tokens denote the overall numbers of sentences and tokens (source/target), respectively.
| **Method** | **M** | **En-Es** | **En-Fr** | **En-It** | **En-Pt** |
| --- | --- | --- | --- | --- | --- |
| Cascaded | V | 12.7 | 11.3 | 11.5 | 13.2 |
| AV-Hubert [50] | V | 14.2 | 12.6 | 12.9 | 14.8 |
| Cascaded | A (+Noise) | 16.0 | 12.9 | 12.6 | 15.5 |
| AV-Hubert [50] | A (+Noise) | 17.6 | 14.5 | 14.1 | 17.1 |
| MixSpeech (ours) | V | **18.5** | **15.1** | **14.3** | **17.2** |

Table 2: Comparison of visual speech translation performance (BLEU ↑) on AVMuST-TED with that of noisy audio speech translation. The results for noisy audio speech translation are the mean values over five SNRs {-20, -10, 0, 10, 20} dB.
implemented based on the state-of-the-art AV-Hubert [50] method for visual speech-related tasks, and 2) a cascaded model, combining a speech recognition model (i.e., Lip-Reading or ASR) with a machine translation model. In the cascaded model, we use the speech recognition model trained by AV-Hubert on LRS3, which achieves the best lip-reading performance to date, and a transformer-based machine translation model trained on the paired translated text corpus in AVMuST-TED. Comparing the lip translation performance of the end-to-end model and the cascaded model, we find that the BLEU score of the end-to-end model improves by +1.3 to +1.6. This result demonstrates that the end-to-end trained model can effectively prevent the accumulation of errors caused by the model cascade, and that lip translation cannot simply be decomposed into the superposition of lip reading and machine translation.
**MixSpeech VS. Prior Methods.** Due to the difference in discriminability between modalities, visual speech models are not able to translate speech content as accurately as audio speech models. To address the issue of low discrimination in visual speech, we propose MixSpeech, a cross-modality self-learning framework that employs mixed speech to transfer knowledge obtained from audio speech pre-training into the visual speech model. Our proposed MixSpeech significantly improves the BLEU score by another +1.4 to +4.3. Furthermore, the improvement from MixSpeech is related to the discrepancy in speech translation between the audio and visual modalities. For example, En-Es exhibits a larger discrepancy of 14.7 between audio and visual speech translation, ranging from 28.9 to 14.2, and MixSpeech significantly improves it by +4.3. Conversely, Italian shows a smaller discrepancy of 10.9, ranging from 23.8 to 12.9, and improves only by +1.4. This highlights that the improvement in lip translation stems from the knowledge acquired from audio speech translation.
**Visual Speech VS. Noisy Audio Speech.** We also evaluate the performance of audio speech translation in noisy environments, by adding noise sampled from MUSAN [52] to the audio speech and measuring the performance at five SNR levels \(\{\text{-}20,\text{-}10,\text{0},\text{10},\text{20}\}\text{db}\). We compare the average BLEU scores of different SNRs and present the detailed performance in Appendix C.1. Our experiments show that although noisy audio speech performs better than visual speech, the translation performance is still significantly lower compared to noiseless audio speech. In contrast, MixSpeech, which fully leverages the knowledge of audio speech, greatly improves the visual speech translation performance, making it more reliable in noisy scenes. We also provide a comparison of translation with audio speech and audio-visual speech, demonstrating that visual speech enhances the ceiling and robustness of speech translation, but the details are only available in the Appendix C.1 since audio-visual speech does not require the cross-modality knowledge transfer proposed in this paper.
### Performance of Speech Recognition
As shown in Table 3, we compare the performance of MixSpeech on another visual speech task, lip reading (i.e., Visual Speech Recognition), to evaluate MixSpeech from more perspectives. MixSpeech obtains state-of-the-art performance on three datasets, two for English (25.5% on LRS2 and 28.0% on LRS3) and one for Chinese (11.1% on CMLR), demonstrating that this cross-modality self-learning framework can be applied to different languages to capture the intrinsic association between audio and visual speeches and thus effectively improve the understanding of visual speech. Since visual speech is relatively low-resource, we verify whether MixSpeech can effectively improve the performance of visual speech tasks in low-resource settings with the help of audio speech. Compared with previous methods, MixSpeech improves the WER of lip reading by 3.9% to 7.3%, highlighting the critical role of high-resource audio speech in low-resource visual speech tasks. Specifically, on LRS2 and LRS3, the performance of MixSpeech in the low-resource scenario (26.9%/28.6% WER obtained with only 29h/30h of visual utterances) outperforms the performance of prior methods in the high-resource scenario (28.7%/28.6% obtained with 224h/433h or even more visual utterances). Even with only a limited labeled visual corpus, our proposed MixSpeech performs no worse than works with more. It is the bridge between the two modalities of speech that helps visual speech access the knowledge
| **#RES** | **Method** | **CMLR** | **LRS2** | **LRS3** |
| --- | --- | --- | --- | --- |
| High | WAS [53] | 38.9 (61) | 70.4 (224) | - |
| | TM-seq2seq [1] | - | 49.8 (698) | 59.9 (698) |
| | CSSMCM [66] | 32.5 (61) | - | - |
| | Conv-seq2seq [65] | - | 51.7 (698) | 60.1 (698) |
| | CTC+KD [3] | - | 51.3 (224) | 58.9 (433) |
| | LIBS [67] | 31.3 (61) | 65.3 (698) | - |
| | CTCH [37] | 22.0 (61) | - | - |
| | Master [47] | - | 49.2 (698) | 59.0 (698) |
| | Sub-Word [44] | - | 28.9 (698) | 40.6 (698) |
| | †AV-Hubert [50] | 12.7 (61) | 28.7 (224) | 28.6 (433) |
| | MixSpeech (ours) | **11.1 (61)** | **25.5 (224)** | **28.0 (433)** |
| Low | LIBS [67] | 50.5 (12) | - | - |
| | †AV-Hubert [50] | 25.8 (12) | 31.4 (29) | 32.5 (30) |
| | MixSpeech (ours) | **18.5 (12)** | **26.9 (29)** | **28.6 (30)** |

Table 3: Comparison of lip reading methods under different resource conditions. #RES represents the amount of resources; entries are WER (%) with the hours of labeled visual utterances in parentheses (the hours in the Low block are the low-resource subsets). † For better comparison, we reproduce AV-Hubert on CMLR and LRS2.
stored in high-resource and high-discrimination audio speech without barriers.
### Can MixSpeech bridge cross-modality speech?
Our proposed MixSpeech builds a bridge between speech modalities through cross-modality self-learning with properly mixed speech. The details are as follows:
**Cross-Modality Self-Learning for Knowledge Transfer.** The experiments in Figure 3 provide a positive answer to the question of whether MixSpeech can achieve knowledge transfer between audio and visual speeches. We evaluate the performance of visual speech translation with different regularization strategies: no audio speech regularization (_i.e._, \(\phi\) = 0), mixed speech regularization with different mixing ratios (_i.e._, \(\phi\in(0,1)\)), audio speech regularization (_i.e._, \(\phi\) = 1), and mixed speech regularization with an adjustable mixing ratio (_i.e._, dashed lines). It is evident that the cross-modality self-learning framework significantly enhances visual speech translation, as all performances with audio speech regularization are noticeably better than those without self-learning (\(\phi\) = 0), demonstrating the effectiveness of our proposed MixSpeech.
**Narrow the Cross-Modality Distance with Properly Mixed Speech.** Moreover, the introduction of mixed speech facilitates smoother cross-modality knowledge transfer by narrowing the modality gap between speeches. Some segments in the mixed speech come from visual speech, making it much closer to visual speech in terms of modality distance than audio speech is. When regularizing with mixed speech on En-Es, the translation performance of visual speech improves further by +0.3 to +0.8 compared to audio speech regularization alone. Among them, bootstrapping with mixed speech at a mixing ratio of \(\phi\) = 0.5 achieves the highest BLEU score of 18.3. This demonstrates that a reasonable mixing ratio ensures that the mixed speech is neither overly biased towards visual speech, which would lack the knowledge of audio speech, nor overly biased towards audio speech, which would create an excessive cross-modality distance that hampers knowledge transfer. The adjustable mixing-ratio strategy based on curriculum learning further increases the applicability of mixed speech to cross-modality self-learning training, thereby boosting visual speech translation performance again.
### What role does each part play in MixSpeech?
The effectiveness of MixSpeech, which is a cross-modality self-learning framework designed to improve visual translation performance, has been demonstrated. In this study, we investigate the role of each component in detail and present relevant experiments in Table 4:
**Bridging the cross-modality gaps.** We observe a significant improvement in lip translation performance when \(\mathcal{L}_{JSD}\) (ID: #3, #4) is included to regularize the probabilities of visual speech and mixed speech, compared to the settings without it (ID: #1, #2). Specifically, experiment #3 with \(\mathcal{L}_{JSD}\) outperforms experiment #2 with \(\mathcal{L}_{CE}^{mix}\) by +0.6 in lip translation performance on En-Es. This demonstrates that \(\mathcal{L}_{JSD}\) is the main contributor to cross-modality knowledge transfer, building a bridge between the two speeches and performing fine-grained regularization over the probability of each word.
**Maintaining knowledge of audio speech.** It is also important to note that during the regularization process, the representation of audio speech is also affected by visual speech, which can interfere with the knowledge of audio speech and ultimately harm the lip translation performance of MixSpeech. As evidenced by experiment #2, the lip translation performance on En-Es decreases by 0.4 compared to experiment #3 when \(\mathcal{L}_{CE}^{mix}\) is not applied. To address this issue, \(\mathcal{L}_{CE}^{mix}\) is introduced to raise the training ceiling of the cross-modality self-learning framework. By maintaining the translation performance of mixed speech and preventing excessive disturbance to the audio speech knowledge, \(\mathcal{L}_{CE}^{mix}\) helps to improve the overall performance of MixSpeech.
| **ID** | \(\mathcal{L}_{CE}^{uni}\) | \(\mathcal{L}_{CE}^{mix}\) | \(\mathcal{L}_{JSD}\) | **En-Es** | **En-Fr** | **En-It** | **En-Pt** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| #1 | ✔ | | | 14.2 | 12.6 | 12.9 | 14.8 |
| #2 | ✔ | ✔ | | 17.5 | 14.3 | 13.6 | 16.5 |
| #3 | ✔ | | ✔ | 18.1 | 14.8 | 14.1 | 16.9 |
| #4 | ✔ | ✔ | ✔ | **18.5** | **15.1** | **14.3** | **17.2** |

Table 4: BLEU (↑) of different module combinations in MixSpeech.
Figure 3: BLEU scores of MixSpeech with different speech regularization on En-Es and En-Fr. \(\phi\) = 0: no audio speech regularization, \(\phi\)\(\in\)\((0,1)\): mixed speech regularization, \(\phi\) = 1: only audio speech regularization. The dashed lines represent the adjustable mixing ratio strategy based on curriculum learning.
### Qualitative results
We present several examples of lip translation in Table 5 to qualitatively evaluate the translation quality of MixSpeech. The translation results are very close to the ground truth, and the semantics are consistent. We observe two types of words that differ in translation: synonyms and context-sensitive translations. Synonyms that have different spellings but the same meaning, such as salvar and rescat0 in Spanish, both meaning 'rescue', and diecimila and 10000 in Italian, both meaning 'ten thousand', are commonly found in translation tasks and can affect translation consistency. Additionally, there are translations that require context information; for example, when the speaker refers to themselves as a child, the Spanish translation needs to take the speaker's gender into account to choose between 'niña' ('girl') and 'niño' ('boy') for 'child'. The qualitative translation results of MixSpeech demonstrate its capability to achieve reliable cross-lingual lip translation. In Appendix C.2, we also provide translation results comparing noisy audio speech translation with visual speech translation and audio speech translation with audio-visual speech translation, to highlight the importance of visual speech in speech translation.
## 5 Conclusion
With the advancement of online technologies, such as online healthcare and sales, language barriers often prevent these tools from reaching and benefiting disadvantaged areas. In light of this, we focus on visual speech, a branch of the speech stream, and aim to translate visual speech from source languages to other target languages for cross-linguistic communication, specifically through lip translation. We meticulously curate the AVMuST-TED dataset, consisting of 706 hours of speech clips with professional translations from TED, to facilitate cross-linguistic research on visual speech. We also introduce MixSpeech, a cross-modality self-learning framework that utilizes mixed speech to regularize visual speech translation and achieves state-of-the-art performance in lip translation on AVMuST-TED and lip reading on LRS2, LRS3, and CMLR datasets.
Moreover, our work on visual speech and AVMuST-TED lay a solid foundation for further research on visual speech in cross-lingual fields. There are numerous related tasks with great potential for practical applications, such as Cross-Lingual Talking Head Generation [41]. These tasks hold immense promise for breaking down language barriers and promoting communication across diverse communities.
\begin{table}
\begin{tabular}{|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{**En-Es**} & \multicolumn{1}{c|}{\multirow{2}{*}{**En**}} & Transcription: and always as a child I had this fantasy that somebody would come and rescue me \\ \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{MixSpeech: and always as a child I had this fantasy that somebody would come and rescue me } \\ \cline{2-4} \multicolumn{1}{c|}{} & \multirow{2}{*}{**Es**} & Ground Truth: y de nina siempre tenía la fantasia de que alguien vendria a salvar me de \\ \cline{3-4} \multicolumn{1}{c|}{} & & MixSpeech: y de (desde) nina miño siempre tenía la esta fantasia de que alguien vendria a salvar me (rescat0) de \\ \hline \multirow{4}{*}{**En-Fr**} & \multicolumn{1}{c|}{\multirow{2}{*}{**En**}} & Transcription: and solutions create new problems which have to be solved in their turn \\ \cline{3-4} \multicolumn{1}{c|}{} & & MixSpeech: and solutions to create new problems which have to be solved in their turn \\ \cline{2-4} \multicolumn{1}{c|}{} & \multirow{2}{*}{**Fr**} & Ground Truth: et les solutions créent de nouveaux problemes devant ê récsolus à leur tour \\ \cline{3-4} \multicolumn{1}{c|}{} & & MixSpeech: et les (des) solutions peur créer de nouveaux problemes devant étuéévent ê résolus à leur tour \\ \hline \multirow{4}{*}{**En-It**} & \multicolumn{1}{c|}{\multirow{2}{*}{**En**}} & Transcription: one of the last 10,000 years, and the other certainly of the last 25 years \\ \cline{3-4} \multicolumn{1}{c|}{} & & MixSpeech: one of the last 10,000 years, and the other certainly assisténce of the last 25 years \\ \cline{2-4} \multicolumn{1}{c|}{} & \multirow{2}{*}{**It**} & Ground Truth: un presente negli ultimi diecimila anni e l’altro certamente negli ultimi 25 anni \\ \cline{3-4} \multicolumn{1}{c|}{} & & MixSpeech: un presente negli ultimi diecimila (10000) anni e l’altro certamente negli ultimi 25 anni \\ \hline \multirow{4}{*}{**En-Pt**} & \multicolumn{1}{c|}{\multirow{2}{*}{**En**}} & Transcription: has around 2,000 people descending on MIT’s campus \\ \cline{3-4} \multicolumn{1}{c|}{} & & MixSpeech: has around 2,000 people descending on MIT’s campus \\ \cline{3-4} \multicolumn{1}{c|}{} & \multirow{2}{*}{**Pt**} & Ground Truth: congregam cerca de 2000 pessoas no campus do mit \\ \cline{3-4} \multicolumn{1}{c|}{} & MixSpeech: congregam **de** cerca de 2000 pessoas **e-estadler** no campus do mit \\ \hline \end{tabular}
\end{table}
Table 5: Qualitative performance of Visual Speech Recognition and Translation on AVMuST-TED. Red strikeout words: mistranslated words with opposite meaning; (blue words in parentheses): mistranslated words with similar meaning; gray words: absent words.
|
2310.05231
|
MindfulDiary: Harnessing Large Language Model to Support Psychiatric
Patients' Journaling
|
In the mental health domain, Large Language Models (LLMs) offer promising new
opportunities, though their inherent complexity and low controllability have
raised questions about their suitability in clinical settings. We present
MindfulDiary, a mobile journaling app incorporating an LLM to help psychiatric
patients document daily experiences through conversation. Designed in
collaboration with mental health professionals (MHPs), MindfulDiary takes a
state-based approach to safely comply with the experts' guidelines while
carrying on free-form conversations. Through a four-week field study involving
28 patients with major depressive disorder and five psychiatrists, we found
that MindfulDiary supported patients in consistently enriching their daily
records and helped psychiatrists better empathize with their patients through
an understanding of their thoughts and daily contexts. Drawing on these
findings, we discuss the implications of leveraging LLMs in the mental health
domain, bridging the technical feasibility and their integration into clinical
settings.
|
Taewan Kim, Seolyeong Bae, Hyun Ah Kim, Su-woo Lee, Hwajung Hong, Chanmo Yang, Young-Ho Kim
|
2023-10-08T17:00:04Z
|
http://arxiv.org/abs/2310.05231v2
|
# MindfulDiary: Harnessing Large Language Model to Support Psychiatric Patients' Journaling
###### Abstract
In the mental health domain, Large Language Models (LLMs) offer promising new opportunities, though their inherent complexity and low controllability have raised questions about their suitability in clinical settings. We present MindfulDiary, a mobile journaling app incorporating an LLM to help psychiatric patients document daily experiences through conversation. Designed in collaboration with mental health professionals (MHPs), MindfulDiary takes a state-based approach to safely comply with the experts' guidelines while carrying on free-form conversations. Through a four-week field study involving 28 patients with major depressive disorder and five psychiatrists, we found that MindfulDiary supported patients in consistently enriching their daily records and helped psychiatrists better empathize with their patients through an understanding of their thoughts and daily contexts. Drawing on these findings, we discuss the implications of leveraging LLMs in the mental health domain, bridging the technical feasibility and their integration into clinical settings.
**Human-centered computing \(\rightarrow\) Empirical studies in HCI; Natural language interfaces.**
## 1. Introduction
Journals serve as a written record of an individual's past events, thoughts, and feelings, allowing genuine expression (Steintein et al., 2016; Steintein et al., 2017). Prior work has shown the advantages of journaling in clinical mental health contexts, as journals frequently capture patients' daily experiences, symptoms, and other contextual data that are challenging to gather during brief hospital visits (Han et al., 2017; Kittel et al., 2018). Furthermore, these patient journals can enhance mental health professionals' (MHPs) comprehension of their patients' conditions, leading to improved treatment quality (Steintein et al., 2017). However, writing about one's past feelings and thoughts can be a complex process because people differ in their ability to understand, identify, and verbalize their emotions (Steintein et al., 2019). In addition, patients under psychotherapy struggle with constructing a narrative and understanding their past (Han et al., 2017; Kittel et al., 2018).
Researchers in the field of Human-Computer Interaction (HCI) have shown that chatbots can help individuals articulate and share their daily experiences. For instance, chatbots designed to elicit people's self-disclosure can ease the process of emotional expression by providing a safe and supportive environment for individuals to share their experiences and emotions (Han et al., 2017; Kittel et al., 2018; Steintein et al., 2017; Steintein et al., 2017). Furthermore, a machine's inherent trait of not showing fatigue can make people more confident about sharing their stories truthfully and comfortably (Kittel et al., 2018; Steintein et al., 2017).
The recent advancements in large language models (LLMs) have added to the understanding and generation of natural conversations. Unlike traditional chatbots (Boward et al., 2016), which were unable to support conversation outside a pre-defined context (Kittel et al., 2018; Steintein et al., 2017), chatbots driven by LLMs can offer dynamic responses on diverse topics (Boward et al., 2016; Stein et al., 2017; Stein et al., 2017). However, care needs to be taken when leveraging LLMs, particularly in mental health contexts, as the inherent difficulty of controlling LLMs could result in unintended or inaccurate responses (Kittel et al., 2018; Stein et al., 2017; Stein et al., 2017).
In this work, we present MindfulDiary, a mobile conversational AI to support patients in journaling their daily experiences and thoughts through conversations. MindfulDiary incorporates LLMs to generate a response, prompting patients differently according to the conversational phase. The conversation records are automatically summarized and presented on a clinician dashboard so MHPs can obtain insights about the patient. As a multi-disciplinary research team, which included HCI researchers, AI engineers, and psychiatrists, we iteratively designed MindfulDiary and conducted a four-week field study with 28 psychiatric patients diagnosed with major depressive disorder (MDD). Through this study, we found that the versatility, narrative-building capability, and diverse perspectives provided by MindfulDiary assisted patients in consistently enriching their daily records. Furthermore, MindfulDiary supported patients in overcoming the challenges of detailed record-keeping and expression, often hindered by feelings of apathy and cognitive burdens. Notably, the enriched records from MindfulDiary provided MHPs with deeper insights, enhancing their understanding and empathy toward their patients. Based on the findings, we discuss the potential of LLMs' conversational abilities to aid in patient record-keeping within clinical mental health settings and suggestions for their responsible integration into these environments.
Our work contributes the following:
1. We present MindfulDiary, an LLM-driven journal designed to document psychiatric patients' daily experiences through naturalistic conversations, designed in collaboration with MHPs.
2. From a four-week field study involving 28 patients and five psychiatrists, we provide empirical findings on how MindfulDiary supported patients in keeping their daily logs and assisted psychiatrists in monitoring and comprehending patient states.
3. Drawing from the design process of MindfulDiary and the findings of our deployment study, we outline key considerations and share lessons learned for the design and implementation of LLM's conversational capability in clinical mental health settings.
## 2. Related Work
In this section, we cover related work in two parts: (1) journal as a patient-generated health data, and (2) conversational agent for mental health.
### Journal as a Patient-generated Health Data in Clinical Setting
PGHD, defined as _"health-related data, such as health history, symptoms, biometric readings, treatment history, lifestyle choices, and other pertinent details, created, recorded, or gathered by patients"_ (Wang et al., 2017), has increasingly become an essential tool in clinical settings to capture authentic, real-time insights into patients' health. Studies have shown that PGHD can enhance communication between patients and MHPs and offer contextual information about patients, thereby heightening MHPs' awareness of patient health outside regular clinical visits (Han et al., 2017; Wang et al., 2018; Wang et al., 2019).
Within the mental health domain, PGHD range from structured mental health assessments (_e.g._, anxiety, depression) to more unstructured data, including mood-related symptoms and social interactions tracking (_e.g._, social media use, number of calls)(Bahdan et al., 2016; Wang et al., 2018; Wang et al., 2019). Our study places particular emphasis on journals as a PGHD method in clinical mental health settings. Journaling refers to _"the practice of writing down one's symptoms and other information related to one's daily life to discuss them during clinical appointments"_(Wang et al., 2018; Wang et al., 2019). Using a journal as a PGHD can particularly be useful in mental health contexts as it can offer rich, self-documented insights, which could improve MHPs' understanding of their patients (Wang et al., 2019).
However, while the benefits of journaling are clear, the act itself is not always straightforward. Many people struggle with starting their entries, sticking to consistent journaling routines, and structuring their reflections (Wang et al., 2019). Further, writing about emotions and past experiences can be intricate, as individuals vary in their capacity to recognize, interpret, and articulate their feelings (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). For some, especially in psychotherapy, crafting a narrative that describes one's life journey can be a challenging process (Wang et al., 2019; Wang et al., 2019).
While previous research has extensively established the significance of employing journals as PGHD within clinical contexts, it is essential to acknowledge the inherent complexities tied to the journaling process. This becomes especially critical when contemplating its application within the specific demographic undergoing psychotherapy. With this background, our research explores the potential of LLMs in assisting the journaling process within the clinical mental health setting.
### Conversational Agent for Mental Health
The field of artificial intelligence (AI) has proposed some significant innovations in medical settings, such as aiding clinical decision-making and diagnosis (Wang et al., 2019; Wang et al., 2019). However, AIs have rarely been integrated into mental health contexts, partly because the field has relied on nuanced techniques and subjective data, such as patient relationships and emotion observation (Han et al., 2017; Wang et al., 2019). Recently, there has been a growing technological presence of Natural Language Processing (NLP) applications for supporting mental health, as NLP's capability for understanding human language holds a significant promise in this domain (Wang et al., 2019; Wang et al., 2019).
Conversational agents, also known as chatbots, have particularly stood out in the mental health domain (Bahdan et al., 2016; Wang et al., 2018). Studies have demonstrated the potential of chatbots in facilitating different types of therapy, such as cognitive behavioral
therapy (Luo et al., 2017; Wang et al., 2018; Wang et al., 2019), expressive writing (Zhou et al., 2019), behavioral reinforcement (Wang et al., 2018), and solution-focused therapy (Wang et al., 2018). Prior work has also shown that chatbots could ease the burden of disclosing sensitive information. Studies indicated that individuals may feel more comfortable communicating with chatbots because of the social stigma involved in communicating with human beings (Shen et al., 2019; Wang et al., 2019; Zhou et al., 2019). Furthermore, these approaches can help overcome temporal and spatial constraints, offering mental health support that is accessible at any time and anywhere (Luo et al., 2017).
Recently, chatbots powered by LLMs have gained attention (Zhou et al., 2019). Trained on vast amounts of text data, they demonstrate exceptional performance and versatility in natural open-domain conversation, surpassing previous NLP models in understanding intricate language patterns (Zhou et al., 2019; Wang et al., 2019). Yet, many chatbots used for mental health therapy still employ a retrieval-based approach (Beng et al., 2016; Chen et al., 2016). The basic principle of retrieval-based chatbots is to search for an appropriate response from a database based on input from users. Since the responses and expected flow that the system can provide are already hard-coded in the database, chatbots exhibit consistent and predictable behavior (Luo et al., 2017). However, if a user gives input that does not exist in the database, the chatbot may fail to respond. Thus, these systems cannot flexibly respond to unforeseen situations. In contrast, a generative approach driven by LLM has made it possible to develop chatbots that can produce human-like and contextualized outputs for even unforeseen input, enabling open-domain dialogues (Beng et al., 2016).
However, LLMs, employing a neural network known as a "transformer" to generate outputs, come with inherent challenges tied to their architecture (Zhou et al., 2019). One key issue is the explainability of the model output: it's challenging to discern how this 'black box' model interprets a given input prompt. As a result, designers struggle to predict both the LLM's understanding of the input and the subsequent messages it might produce (Zhou et al., 2019). For instance, a chatbot leveraging GPT-2 for mental health therapy occasionally generated non-word sentences or produced more negative outputs than positive ones (Zhou et al., 2019). Replika, an LLM-based application intended for mental well-being, has occasionally displayed harmful content and exhibited inconsistent conversational styles, undermining its role as a long-term companion (Zhou et al., 2019).
Considering the inherent challenges of LLMs and the sensitivity of the mental health domain, we must reflect carefully on how and to what extent the conversational capabilities of these models should be utilized. In this study, we emphasize the need to consider the opportunities and limitations of LLMs, as well as important aspects of the mental health domain, including safety, sensitivity, and experts' empirical knowledge. With this in mind, we designed MindfulDiary's state-based chatbot using both rule-based and generation-based approaches to ensure compliance with experts' guidelines while facilitating free-form conversations.
## 3. Formative Study: Focus Group Interview
To inform the design of MindfulDiary, we first conducted a Focus Group Interview (FGI) with MHPs. We recruited six MHPs (E1-6; two males and four females)--four clinical psychologists and two psychiatrists--from a local university hospital in Korea. We held two one-hour remote sessions via Zoom. In the FGI, we first provided an overview of language model technologies and LLM's natural language understanding and generation capabilities. We also described the current challenges of LLMs, such as potential errors and hallucinations. We then asked participants a focused set of questions on (1) the challenges MHPs currently face during patient treatment and counseling sessions and (2) their expectations and envisioned opportunities of LLMs' role in clinical mental health settings. The session was video recorded and later transcribed. We open-coded the transcripts to identify emerging themes. In the following, we cover the findings from the FGI.
### Challenges in Eliciting Responses from Patients with Depression
Participants indicated that eliciting disclosure of patients' inner thoughts during a limited consultation time requires significant effort. Many patients with depression experience difficulty describing and expressing their feelings and thoughts to providers due to a sense of apathy, a common psychiatric symptom of Major Depressive Disorder: _"In the consultation room, even if they sit like this, they often just remain silent for a long time."_ (E5) Thus, providers often end up spending a substantial amount of time asking standardized and repetitive questions about mood, sleep, and major events to understand patients' current states.
Participants also noted that they had their patients engage in paper-based diary writing programs, but most demonstrated low participation rates and low engagement: _"We tried the diary program on paper (in the inpatient ward), and several patients did write. What we saw was quite trivial, like, 'I just felt bad today!' But we learned there were significant events upon consultation, like having a big argument with other patients, which they did not record. Because patients with depression, or those who have had suicidal or self-harming incidents, often have a dulled state in expressing their emotions or feel apathetic, they tend to find such expressions very difficult."_ (E3)
Participants envisioned that language models have significant potential given their natural and flexible conversational abilities. They suggested that this ability could help patients explore their daily lives and engage in deep reflection, making the daily logging process more engaging by providing a sense of companionship.
### The Meaning of Daily Records as Perceived by MHPs
Our participants emphasized that patient records about their daily experiences can serve as crucial data for MHPs to understand them. E5 envisioned that such records could be utilized to understand a patient's automatic thoughts and reaction mechanisms concerning specific events. For instance, in a doctor's office, a patient with social phobia might briefly say, _"I feel scared in crowded places,"_ and the conversation might not delve much deeper. However, the true depth of this fear can be more accurately and vividly captured if the patient records their feelings immediately after exposure to a crowded place or even later that night.
Participants imagined that language model outputs about patient journaling patterns could provide valuable insights about what counseling styles they should use for individual patients: _"Observing how they respond to certain inputs can be valuable data. Some patients feel comforted by mere reassurance, while others prefer more direct, pointed feedback."_
From the FGI, we refined the initial concept of the MindfulDiary program. We leveraged the conversational abilities of LLMs to help patients document their daily experiences between clinical visits. MHPs had access to the collected data to inform their clinical decision-making. Furthermore, both MHPs and the research team concur that LLMs should not act solely as the primary intervention due to their inherent limitations but should function as supportive tools for clinical consultations. The subsequent section outlines the design and development process of our system.
## 4. MindfulDiary
Informed by the findings from FGI with MHPs, we designed and developed MindfulDiary, which consists of two main components: (1) a patient mobile app for daily record-keeping and (2) a clinician dashboard that allows professionals to access and use these daily records in a clinical setting (See Figure 1). Below, we present a fictional usage scenario to
demonstrate how the system works.
Jane, diagnosed with chronic anxiety, frequently grapples with panic attacks. To keep track of her daily experiences, her psychiatrist recommends trying MindfulDiary as part of her treatment plan.
Every evening, Jane records her daily activities, emotions, and thoughts using the MindfulDiary app. The AI leads the conversation by asking prompt questions about Jane's day. Additionally, the system transforms Jane's inputs into a journal-style format, in which she can organize thoughts and reflect on them later. She can easily read the summarized content whenever she wishes to look back at past events or thoughts.
Three weeks later, during a consultation, her psychiatrist uses the expert interface of MindfulDiary to review a data-driven summary of Jane's entries. The data helps the psychiatrist identify a pattern: Jane's anxiety often spikes during her work commute. Based on this insight, the psychiatrist refines advice and introduces specific coping strategies, fostering a more personalized approach to care.
### MindfulDiary App
The MindfulDiary app for patients aims to support people who might have difficulty journaling due to apathy and cognitive load through naturalistic conversation driven by an LLM. The app consists of a home screen containing an introduction and guide to the program (Figure 2(a)), a journal writing screen (Figure 2(b), 2(c)), and a screen to review the diary entries (Figure 2(d)).
#### 4.1.1. Journaling User Interface
Figure 3 illustrates the overall use flow of the journaling session, which begins with a Pre-Journaling Assessment (Figure 4-(i)) that asks the user to fill out a mental health questionnaire. The questionnaire comprised the modified PHQ-9 (Zhou et al., 2017) and a custom open-ended question inquiring about recent attempts of self-harm or suicide. This assessment prevents users who gave any indication of suicidal ideation or self-harm from journaling on the same day. (We cover this feature in detail in Section 5.4 Ethical Considerations.)

Figure 2. Main screens of the MindfulDiary app. (a) The main screen, (b) the journaling screen, (c) the summary screen shown when the user submitted the journal dialogue, and (d) the review screen displaying the user’s past journal.
On the next screen, the user converses with MindfulDiary, documenting the events of the day (See Figure 2b). After three turns, MindfulDiary provides a summary of the conversation as an essay. Users can edit this automatically generated summary any time. When the user ends the session by pressing the end button (Figure 2b, bottom), MindfulDiary displays daily mental health insights alongside the diary content on the summary screen (See Figure 2c). Users can also leave a reflection message there. Lastly, users can browse their past records in the Diary Review menu (See Figure 2d).
#### 4.1.2. Conversation Design
We designed the chatbot's conversational behavior based on insights from psychiatry literature (Shen et al., 2017; Senn et al., 2017), which covers foundational techniques and considerations for conducting clinical interviews. We also incorporated the hands-on clinical experiences of practicing psychiatrists.
As a result, we designed the conversation of a journaling session to follow a sequence of three stages: _Rapport building_, _Exploration_, and _Wrap-up_. The **Rapport Building** state is an ice-breaker, centered on casual exchanges about a user's day. In this state, the assistant also shares bits of information to encourage users' openness. As we progress to the **Exploration** state, the emphasis shifts to a comprehensive understanding of the user's daily events, feelings, and thoughts, facilitated by a mix of open-ended and closed-ended queries that ensure users remain engaged and in control of the dialogue. The conversation then transitions to the **Wrap-up**, emphasizing completion and ensuring users have fully voiced their experiences while the system remains empathetic and receptive to any lingering topics.
Figure 3. Use flow of MindfulDiary’s journaling session: (1) Pre-Journaling Assessment: Users undergo a mental health survey using the modified PHQ-9 (Shen et al., 2017) before using MindfulDiary; (2) Users converse with MindfulDiary, documenting their day; (3) Summary Presentation: After three turns, MindfulDiary presents a diary-styled summary of the conversation so far, which can also be edited by the user. Users can continue the conversation as they want. (4) Session Closure: Once all processes are completed, MindfulDiary displays today’s mental health and diary content, concluding the journaling session.

Besides the three main stages, we also incorporated the **Sensitive Topic** state that handles the most sensitive subjects, such as self-harm or suicidal ideation. When this state is triggered, psychiatrists receive instant notifications. This allows them to oversee the conversation in real-time and step in to assist the patient if necessary. Here, the system begins by empathizing with the user, recognizing their struggles, and offering a reassuring message. Following this, the system gently probes the depth of their suicidal or self-harm thoughts. If the user expresses intense or specific plans related to self-harm or suicide, the system urges them to seek prompt assistance, either at a hospital emergency room or via the local helpline.
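To make the stage-based design concrete, the sketch below shows one way a phase-driven prompting loop of this kind could be organized in Python (the language of the system's server). It is a minimal, hypothetical illustration: the stage names follow the description above, but the transition heuristics, the keyword list, and the per-stage instructions are our assumptions rather than the authors' actual rules.

```python
from enum import Enum, auto

class Stage(Enum):
    RAPPORT_BUILDING = auto()
    EXPLORATION = auto()
    WRAP_UP = auto()
    SENSITIVE_TOPIC = auto()

# Hypothetical per-stage instructions prepended to the LLM prompt.
STAGE_INSTRUCTIONS = {
    Stage.RAPPORT_BUILDING: "Break the ice with casual, warm questions about the user's day.",
    Stage.EXPLORATION: "Explore today's events, feelings, and thoughts with open- and closed-ended questions.",
    Stage.WRAP_UP: "Check whether anything is left unsaid and close the session empathetically.",
    Stage.SENSITIVE_TOPIC: ("Empathize and reassure, gently assess the severity of self-harm or suicidal "
                            "thoughts, and point to emergency resources if plans are specific."),
}

# Placeholder list; the deployed system relied on clinician-defined sensitive terms.
SENSITIVE_KEYWORDS = ["suicide", "self-harm"]

def next_stage(turn_count: int, user_message: str) -> Stage:
    """Rough transition heuristic; the actual system's stage logic was richer."""
    if any(word in user_message.lower() for word in SENSITIVE_KEYWORDS):
        # Entering this state would also notify psychiatrists in real time.
        return Stage.SENSITIVE_TOPIC
    if turn_count < 2:
        return Stage.RAPPORT_BUILDING
    if turn_count < 8:
        return Stage.EXPLORATION
    return Stage.WRAP_UP

def build_messages(stage: Stage, history: list[dict]) -> list[dict]:
    """Prepend the stage-specific instruction as the system message for the next LLM call."""
    return [{"role": "system", "content": STAGE_INSTRUCTIONS[stage]}] + history
```

In the deployed system, entering the Sensitive Topic state additionally triggered a real-time notification to psychiatrists, as described above.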
#### 4.1.3. Conversational Pipeline
Lengthy and complex input prompts for LLMs are known to cause poor task performance (Brocker et al., 2017) by partly omitting latent concepts (Stein
a test platform where the participant's clinician monitored the generated messages in real-time, approving them or sending better messages manually.
### Clinician Dashboard
The clinician dashboard (_c.f.,_ Supplementary video) is a desktop application designed to facilitate monitoring patient's journal entries and to provide analysis of the entries to help clinicians identify significant events, reactions, and emotions. The dashboard consists of the following components:
_User Engagement._ This section visualizes the participant's overall engagement with MindfulDiary, including the number of journals written, the date and time they were written, and their length. The modified PHQ-9 scores for each session are also visualized, allowing professionals to track the user's mental health trends using a validated tool.
_Journal._ This section displays the content of the journals written by patients. The information is presented in a card format, where each card offers a summary of the journal, including timestamps, total time taken to write the journal, and associated PHQ-9 score. The interaction logs between the patient and MindfulDiary are also provided in this section.
_Insights._ To assist professionals in browsing through the diary, this section visualizes (1) a word cloud to understand frequent terms that the participant used at a glance, (2) a summary of major events to highlight significant happenings and (3) summary of emotions to gauge the mood based on user input. When a specific period is selected for review, a comprehensive summary is generated. This summary relies on the combined power of _gpt-4_ and a Korean morphological analysis tool named Kiwi (Kiyi et al., 2017). Due to the limitations of language model-driven analysis, there might be occasional inaccuracies in the generated content. First-time users of this interface are alerted about possible inaccuracies. An in-interface tooltip also reminds users that the summarized outcomes might not be accurate.
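As a rough sketch of how the word-cloud portion of this view could be computed, the snippet below counts token frequencies across a patient's entries. It is illustrative only: the deployed dashboard tokenized Korean text with the Kiwi morphological analyzer and produced summaries with _gpt-4_, whereas this sketch simply splits on whitespace and uses a placeholder stop-word list.

```python
from collections import Counter

# Placeholder stop-word list; real Korean text requires a morphological analyzer such as Kiwi.
STOPWORDS = {"the", "a", "and", "to", "of"}

def word_frequencies(entries: list[str], top_k: int = 50) -> list[tuple[str, int]]:
    """Return the most frequent terms across journal entries, e.g. to drive a word cloud."""
    counts: Counter = Counter()
    for entry in entries:
        for token in entry.lower().split():
            token = token.strip(".,!?\"'")
            if token and token not in STOPWORDS:
                counts[token] += 1
    return counts.most_common(top_k)

# Example:
# word_frequencies(["I felt anxious at school today.", "School was stressful again."])
# -> [('school', 2), ...]
```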
### Technical Implementation
MindfulDiary's interface is developed using React, a JavaScript-based framework. The server, responsible for interfacing with the LLM and overseeing database operations, is implemented in Python. Google Firebase handles user authentication, data storage, and retrieval tasks. The conversational capabilities of MindfulDiary are powered by _gpt-4_, accessible through OpenAI's API1. We specifically used the gpt-4-0613 model. For parameter settings, we consistently set the temperature to 0.7 and both the presence penalty and frequency penalty to 0.5.
Footnote 1: [https://platform.openai.com/docs/guides/gpt/chat-completions-api](https://platform.openai.com/docs/guides/gpt/chat-completions-api)
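For illustration, a chat completion request with the configuration reported above might look like the following, using the pre-1.0 `openai` Python client that was current at the time of the study; the API key and message contents are placeholders, not the system's actual prompts.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-4-0613",
    temperature=0.7,
    presence_penalty=0.5,
    frequency_penalty=0.5,
    messages=[
        # Placeholder system prompt; the actual per-stage instructions are described in Section 4.1.2.
        {"role": "system", "content": "You are a supportive journaling assistant."},
        {"role": "user", "content": "Today was a long day at school."},
    ],
)
print(response["choices"][0]["message"]["content"])
```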
## 5. Field deployment study
Through the four-week field deployment study with 28 patients undergoing outpatient treatment, we aimed to explore how patients and MHPs utilize MindfulDiary and what opportunities and challenges arise from its real-world use. Figure 5 illustrates the process of field deployment study.
### Recruitment
We targeted outpatients from the Department of Mental Health at a University Hospital. Participants were selected based on specific criteria: (1) those who had been diagnosed with MDD and (2) those who did not exhibit heightened impulsive tendencies or harbor specific intentions towards self-harm or suicide. Eligible participants were identified through evaluations conducted by psychiatrists. Flyers and consent forms were distributed to eligible patients. For minors, the consent form process was adhered to only when they were accompanied by a guardian at the hospital.
We provided a compensation of approximately 60 USD (80,000 KRW) to participants who completed the entire four-week study process. For those who dropped out midway, we provided compensation based on the number of weeks they participated. Participants received approximately 11 USD (15,000 KRW) for every week that they were involved in (_e.g._, if someone participated for ten days, they would receive 11 USD). Participants were informed that consistent non-use of the system would be a criterion for study dropout. Specifically, if a participant did not engage with the MindfulDiary for three consecutive days, they would receive a notification from the researcher encouraging its use. However, continued non-use after this reminder would result in the participant being considered non-compliant with the study protocol, leading to their withdrawal from the study. This decision was made to ensure data reliability.
Through this process, a total of 36 individuals registered, of whom 8 dropped out, resulting in a final count of 28 participants who completed the 4-week field study. The majority of participants were adolescents and adults, with ages ranging from 12 to 28 years, with an average age of 17.6 (\(SD=3.26\)). Among the 28 participants, 11 were male, and 17 were female.
### Data Collection
_Patient-MindfulDiary Interaction Data._ With the consent of the participants, we collected all data from their interactions with the MindfulDiary along with the raw input content and outputs from the language model.
_Interviews with Patients._ We conducted interviews with each patient to delve deeply into their experiences and learn how they used MindfulDiary on a daily basis. At the end of the second and fourth weeks, we conducted 15-minute debriefing interviews with participants. Considering the characteristics of depression patients, who may struggle to focus for long periods of time, the interview session was divided into two shorter sessions.
_Interviews with Psychiatrists._ We interviewed psychiatrists who treated the patient participants to understand how they might use the system's data in actual clinical settings. We further gathered feedback from the psychiatrists on the opportunities and limitations of MindfulDiary, as well as suggestions for improvements.
Figure 5. Procedure of the Four-Week Field Deployment Study: A four-week exploration into the utilization of MindfulDiary by outpatient patients, encompassing daily use, and its integration into clinical decision-making. *Some participants did not have a follow-up visit during the experimental period
### Analysis
To explore participants' usage patterns with MindfulDiary, we first conducted a descriptive statistics analysis. To determine any shifts in participants' adherence over time, we examined weekly writing frequencies using a one-way repeated measures ANOVA (RM-ANOVA) with Greenhouse-Geisser correction. To gain a deeper qualitative understanding of the messages produced by MindfulDiary and interviews with patients and psychiatrists, we used open coding paired with thematic analysis (Beng et al., 2017). All interviews were audio-recorded and later transcribed. The first author open-coded the interview transcripts along with the interaction log data through multiple rounds of iteration. The research team then identified patterns through discussion to formulate overarching themes.
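For illustration, a weekly-adherence comparison like the one described above could be run with the `pingouin` package, which supports a Greenhouse-Geisser correction for repeated-measures ANOVA; the column names and data below are hypothetical, and this is not the authors' analysis script.

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant per week (weekly number of entries).
df = pd.DataFrame({
    "participant": ["P1"] * 4 + ["P2"] * 4 + ["P3"] * 4,
    "week": [1, 2, 3, 4] * 3,
    "entries": [6, 5, 4, 5, 7, 6, 6, 5, 3, 4, 4, 3],
})

aov = pg.rm_anova(
    data=df,
    dv="entries",          # dependent variable: weekly writing frequency
    within="week",         # repeated factor: study week 1-4
    subject="participant",
    correction=True,       # apply the Greenhouse-Geisser correction
)
print(aov)
```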
### Ethical Considerations
Our study design obtained the required approval from the Institutional Review Board (IRB) of the affiliated hospital.
Conducting this study, we are fully aware of the inherent risks associated with our research, particularly given the characteristics of participants diagnosed with MDD. To mitigate the risks, we first carefully screened participants, relying on evaluations conducted by psychiatrists. Individuals displaying heightened impulsive tendencies or harboring specific intentions towards self-harm or suicide were excluded from the study. In addition, participants were asked to take the PHQ-9 before interacting with MindfulDiary, along with an additional set of questions probing their recent attempts at self-harm or suicide. If a participant's response to question number 9 of the PHQ-9, regarding suicidal/self-harm thoughts, scored 'moderate or higher' or if any recent suicide attempt was verified, the system pivoted to provide content geared towards alleviating anxiety and reducing stress rather than proceeding with the standard program. In such a case, a real-time alert was also sent to psychiatrists. Lastly, if sensitive themes frequently surfaced in a participant's input during the study, their interactions with the program were temporarily halted. Psychiatrists subsequently re-evaluated such participants to assess the viability of their ongoing participation. During our experiment, in the case of P11, repeated mentions of suicide and self-harm were detected. Consequently, an expert contacted the participant, the experiment was suspended for three days, and after a re-evaluation in an outpatient clinic, we resumed program use with P11.
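A minimal sketch of the screening rule described above is given below; the threshold constant, argument names, and `alert_psychiatrists` helper are hypothetical placeholders for the study's actual implementation.

```python
MODERATE = 2  # hypothetical cutoff on PHQ-9 item 9 corresponding to "moderate or higher"

def alert_psychiatrists() -> None:
    # Placeholder: the deployed system sent real-time alerts to the psychiatrists on the team.
    print("ALERT: participant flagged during pre-journaling assessment")

def route_session(phq9_item9_score: int, recent_attempt_reported: bool) -> str:
    """Decide whether a participant proceeds with standard journaling or is diverted."""
    if phq9_item9_score >= MODERATE or recent_attempt_reported:
        alert_psychiatrists()
        # Serve anxiety-relief / stress-reduction content instead of the standard program.
        return "calming_content"
    return "standard_journaling"
```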
Further, to mitigate potential risks from the LLMs' outputs, we embraced an iterative design methodology. The system's interactions underwent repeated assessments to ensure it generated safe, non-harmful outputs. In addition, in the first week of each participant's system use, all interactions between participants and MindfulDiary were observed in real time. To facilitate this process, when a participant started the session, the research team received a notification email. This notification included real-time monitoring links and reports of the survey responses that participants answered before each session. After the first week, user interactions and MindfulDiary were reviewed within a 12-hour window. During the review process, if an interaction contained sensitive content (specifically, terms pre-defined as sensitive by psychiatrists), the psychiatrists on our research team assessed the situation and contacted the affected participants if necessary.
Lastly, given that we were handling the patients' personal and sensitive data, ensuring the secure protection and management of data was critical. Therefore, during the study, we utilized the Google Firebase authentication service to manage the user authentication process for participants. We were thus able to ensure that only authorized personnel had access to the data, and any attempts at unauthorized access could be promptly detected and managed. After the field study, all data was separated from personal identifiers to maintain anonymity.
## 6. Results
In this section, we report the results of the field study in four parts: (1) Journaling adherence, (2) Dialogue patterns, (3) Patients' perspectives on MindfulDiary, and (4) MHPs' perspectives of MindfulDiary for clinical settings.
### Journaling Adherence
Across four weeks, participants submitted 501 journal entries (17.90 entries per participant on average), or 0.62 entries on average per day (more than once every two days). Each journaling session lasted an average of 438 seconds (around 7 minutes) but with notable individual variability (\(SD=225.97\)). Each journal dialogue included messages with an average length of 105.6 syllables (\(SD=49.41\)). Our analysis did not reveal significant differences in either the participants' input length (\(F(1.735,46.85)=2.718\), \(p=.084\)) or writing time (\(F(2.417,65.25)=2.549\), \(p=.076\)) across the four different time points, as determined by the RM-ANOVA test. This suggests that users mostly retained a steady level of engagement during the four-week study.
### Dialogue Patterns
Participants and MindfulDiary exchanged a total of 4,410 messages (_i.e._, 2,205 pairs of the AI and participant's messages) during the field study. Each session consisted of 10.82 messages on average (\(SD=2.70\)). The majority of exchanges between the AI and participants were carried out to explore patients' daily lives and emotions, as well as for casual conversation. In terms of the stage of the conversation, 62% (2,732 messages) of the messages were from Exploration, 30% (1,220 messages) from Rapport building, and 6% (282 messages) from Sensitive topic. Only a small number of messages fell under Wrapping up (62 messages) or Not-selected (14 messages).
To understand the content that MindfulDiary generated, we examined its messages in depth. 72% of the AI messages took the form of questions, aiming to elicit responses about users' daily experiences and emotions. We identified and categorized the primary strategies that MindfulDiary employed to assist patients' journaling. There were four strategies employed by the LLM: _Emotional Exploration_, _Activity/Behavior Exploration_, _In-depth Follow-up & Countermeasures_, and _Future Plan Exploration_. For a comprehensive breakdown of these strategies, along with their descriptions and exemplar questions, refer to Table 1.
The average length of participants' responses was 29.42 syllables, with a median of 20 (SD=35.9). This suggests a right-skewed distribution, where many participants gave shorter responses and a smaller number provided considerably longer answers, causing high variation. The minimum response length was one character, and the maximum was 559 syllables. We further conducted a qualitative analysis of these responses, seeking to identify the themes present in users' interactions with the LLM. This allowed us to understand the scope and topics of the daily records that MindfulDiary collected from the patients.
Participants interacting with MindfulDiary conveyed a range of topics (see Table 2). They described a spectrum of _emotional states_, from negative feelings like exhaustion and anxiety to positive sentiments of pride and joy. _Events and activities_ were recounted, offering insights into their daily routines, such as walking during school times or decreased activity post-vacation. They also shared _thoughts and beliefs_, sometimes related to current events, revealing patterns linked to mental health, like feelings of exclusion and loneliness. Regarding _perceived health status_, comments spanned from immediate ailments, such as headaches, to long-term health challenges. Distorted perceptions about their body included content on excessive dieting. Specifically, participants frequently discussed medications, revealing not just their physical reactions but also their perceptions and behaviors toward them. Some expressed concerns over the
taste, while others mentioned adverse reactions from intake, like discomfort after swallowing multiple pills at once. Lastly, the realm of _relationships & interactions_ had participants highlighting both the challenges and supports in their interpersonal connections, revealing the significant impact these have on mental well-being, from conflicts and trust issues to moments of affirmation and encouragement.
Table 1: Categorization of the LLM's strategies for generating prompts to encourage user reflection, with descriptions and exemplar questions.

| Strategy | Description | Example |
| --- | --- | --- |
| Emotional Exploration | Messages that further inquire about the user's emotional state, mood, and condition. | "How did you feel after meeting her?", "Can you tell me how this situation makes you feel?" |
| Activity Exploration | Messages exploring the user's experiences or activities. These questions delve into more objective and factual content. | "What kind of exercise do you do?", "Lunch with a friend sounds nice. What did you eat?" |
| In-depth Follow-up | Messages that help delve into the root causes or reasons related to a mood, feeling, or specific situation. | "Since when have you felt this overshadowed mood?", "If you feel confused, what might be the reason?" |
| Future Plan Exploration | Messages inquiring about how the user reacted to or plans to respond to the events or emotions they mentioned. | "That situation must have been challenging. Have you considered any solutions?", "What have you done to alleviate the loneliness, even just a bit?" |

Table 2: Summary of participants' input messages: categories with descriptions and examples.

| Category | Description | Example |
| --- | --- | --- |
| Emotional states | The emotions that participants documented in their daily lives, encompassing a broad spectrum ranging from negative and depressive sentiments to positive ones. | _"I'm so exhausted, I feel like I'm reaching my limit soon," "I'm very worried, scared, and anxious"_ |
| Events and activities | Mentions of events, tasks, or activities they participated in or witnessed, such as exam periods or travel. | _"When I attended school, I got some walking in, but after vacation, I don't have much reason to go out, leading to a decreased activity level."_ |
| Thoughts and beliefs | The thoughts, values, beliefs, and convictions they usually held, including characteristic thought patterns related to mental health, such as distorted thinking. | _"I feel like someone is talking behind my back somewhere; they don't like my actions and seem to exclude me"_ |
| Perceived health status | Their physical state and health concerns, misconceptions about their health, and reactions to medications, revealing their perceptions and behaviors related to drug intake. | _"I'll starve and exercise to lose weight!"_, _"I just took my medicine, but it seems to be getting tasteless."_ |
| Relationships & interactions | Content about relationships with others: difficulties stemming from interpersonal relationships, as well as the support and affirmation received from those around them. | _"I hated seeing my brother being happy. Forcing a cheerful tone also irked me."_ |
### Patients' Perspectives on MindfulDiary
Overall, participants viewed MindfulDiary as a space where they could open up and share their stories, feeling a sense of empathy from the system. Participants particularly found the dialogue-driven interactions with MindfulDiary useful. One participant, P15, mentioned, _"If it was just about recording daily activities or emotions like a regular diary, it might have been less engaging, and I could've found it tedious or might not have persisted for long. But this felt like having a conversation with AI, which added an element of fun and kept me engaged in continuous use."_ Such a dialogue-driven journaling process aided participants in maintaining consistent records and helped in forming a habit consistent with our user engagement analysis. P7 stated, _"I liked chatting with the AI at first, so I kept using it. The more I used it, the more it became a habit."_
#### 6.3.1. Broad Conversational Range: The Versatility in Documenting Diverse Interests
Our participants appreciated the LLM's flexibility and naturalness in responding to various utterances, topics, and situations. Such broad conversational capabilities of the LLM provided participants with a space where they could document a variety of subjects tailored to individual interests and preferences. In our study, participants interacted with the LLM on diverse topics ranging from games, webcomics, novels, and movies to hobbies like making bead bracelets, allowing them to create richer and more personal records. P3 remarked, _"AI systems that I have used in the past could only respond to specific words, but it is amazing how this one can respond to all sorts of things."_
#### 6.3.2. Expanding views: Enriching Entries with Varied Perspectives
Participants also valued the diverse and new perspectives that LLM-generated responses offered, as those helped participants reflect on their daily events and emotions from various angles. This influence helped participants diverge from ruminating on depressive feelings. P12 mentioned, _" Sometimes when you note down emotions, that's the only thought that comes to mind. Beyond that, I don't remember much. Since MindfulDiary uses AI, my thoughts flow more easily, and I like it when it asks about different perspectives or topics."_.
#### 6.3.3. Probing for Depth: Prompt Questions in Detailed Reflection
MindfulDiary's question-driven journaling process was also valued by participants as it assisted them with the process of daily reflections and documentation. Compared to their past experiences of journaling, where they had to reflect on their daily life by themselves, participants appreciated that MindfulDiary made the journaling process less daunting. P27 said, _"Because I have to rely solely on my thoughts when I write alone, I sometimes get stuck. But when I was unsure about how to write, the AI helped me. I liked that part."_. The questions posed by MindfulDiary also guided participants in documenting their daily lives in a more detailed manner by asking their thoughts and feelings about a particular event. Such probing allowed for richer, more in-depth entries. P13 shared, _"I used to write diaries on my own and just wrote whatever came to mind. MindfulDiary, however, helped me write in more detail because of the specific questions."_
#### 6.3.4. Building Narratives: Structuring Daily Reflections with MindfulDiary
MindfulDiary's capabilities, such as generating contextualized follow-up questions and summarizing the conversation, made the process easier for participants who struggled to organize daily thoughts and events under psychotherapy (Han et al., 2017). In their past experiences, our participants expressed difficulties in journaling because of disjointed thoughts, a lack of clarity in ideas, or inconsistencies in their stories. However, with the support of the LLM in MindfulDiary, these challenges were addressed, motivating them to record their daily lives persistently. P13 remarked, _"I often had trouble putting sentences together. In the past, I would worry about writing the next part. But with this tool, I just tell the story of my day, and it seamlessly continues and wraps it up, presenting a well-structured diary entry. That's its biggest advantage."_
### MHPs' Perspectives on MindfulDiary for Clinical Mental Health Settings
#### 6.4.1. Expert Usage of MindfulDiary
In this section, we describe how MHPs utilized the expert interface and what expectations and limitations they perceived. For this purpose, we interviewed five psychiatrists who were in charge of outpatient treatment at the university hospital.
All of the psychiatrists emphasized the critical value of an expert interface based on information recorded in the daily lives of patients. Specifically, E3 highlighted the program's value in that MindfulDiary consistently aids in recording daily entries, allowing them to utilize more detailed patient data during outpatient visits. _"Patients, with the support of AI, can logically continue their narratives, ensuring more dialogue than a typical (paper-based) diary. This definitely aids me in my consultations."_ (E3). In this section, we present some examples of how MindfulDiary was integrated into clinical settings.
_Integrating MindfulDiary into Clinical Workflow_. First, we investigated how psychiatrists used MindfulDiary in their workflow. We did not provide guidance on how to use the clinician dashboard, but we informed psychiatrists that the LLM-based analysis may not be accurate and that it is therefore necessary to check the interaction log data. The psychiatrists reviewed MindfulDiary data from their patients every morning during the deployment study, during the time they typically spent reviewing chart data for the patients they were going to see that day. Depending on the severity and the focal concerns of the patient, psychiatrists spent about 5-10 minutes per patient reviewing the MindfulDiary data. Psychiatrists first observed trends, primarily through the PHQ-9 scores in the clinician dashboard, and then read summaries about overall events and documented emotions. If there were spikes or drops in the PHQ-9 or events/emotions that needed further verification, they checked the actual interaction log.
_Enhancing Understanding and Empathy toward Patients_. Psychiatrists indicated that MindfulDiary helped them gain a deeper understanding and empathy about their patients. They perceived that MindfulDiary served as a questioner that could elicit more objective and genuine responses from patients. Psychiatrists appreciated that the LLM was able to pose questions that might be sensitive or burdensome for them to ask, such as patients' negative perceptions of their parents. E4 said: _"There are times when it's challenging to counter a patient's narrative or offer an opposing perspective. For example, if a patient speaks very negatively about their mother, and we ask, Didn't she treat you well when you were younger?', the patient might react aggressively, thinking, 'Why is the therapist taking my mother's side?" However, since the LLM is a machine, such concerns are minimized."_.
_Insights from Everyday Perspectives Outside Clinical Visits_. Psychiatrists valued that MindfulDiary provided them with an understanding of patients' conditions that would be difficult to gain during outpatient visits. For instance, E1 appreciated that MindfulDiary provided them with insights into patients' positive feelings and experiences, which is typically difficult to obtain during clinical consultations. _"Usually, when patients come for a consultation, they talk about bad experiences. Few people come to psychiatry to say, 'I've been doing well.' Even if they have good things to say, they usually don't bring them up. But I was happy to see that there were many positive statements in these records, like 'I did that and felt good.' Especially in depression, the presence or absence of positive emotions is crucial. It's a good sign if they show such positive responses."_. E2 envisioned its potential application to medication management, which is another critical aspect of psychiatric care. He thought these records could be used as a window into understanding how patients react to and perceive medications. For patients undergoing drug therapy, _"If the primary treatment method is pills, but they don't seem to have an effective response or there's a decline in medication acceptance, I could potentially understand the reasons for it through this diary."_ (E2).
_Understanding Patient Progress Through Consistent Record-Keeping_. Feedback from patients highlighted that interactions with MindfulDiary made it easier for patients to maintain a consistent record, as it mitigated the challenges associated with recording. Psychiatrists perceived that having consistent daily data offered them opportunities to observe trends in a patient's condition. E2 said: _"From our perspective as clinicians, even though we might only see a patient once a month, having access to a record of how they've been throughout the month would allow us to track their progress, which is highly beneficial."_ In particular, the ability to examine changes not only through quantitative tools like the PHQ-9 but also using a qualitative approach can offer a comprehensive understanding and shed light on the mechanisms influencing a patient's mental health.
#### 6.4.2. Perceived Opportunities and Limitations of MindfulDiary
While MHPs generally appraised the utility of MindfulDiary positively, they also perceived potential limitations and shared considerations for integrating MindfulDiary into clinical settings.
_Significance of Tone and Manner in Patient Data Analysis_. Although patient data summarized and extracted in the expert interface effectively aided in understanding the patient, experts thought that the summarized texts would not convey the patient's tone, pace, and other nuances, which are integral to the Mental Status Examination (MSE) that clinicians utilize. However, MHPs identified the opportunity to perform such analysis from the raw data that patients entered. As the MSE measures objective and quantitative aspects, incorporating such an analysis could make significant improvements in understanding the patient. E1 said, _"In the same way as P14, understanding the tone of this patient may also be possible. That's because we use something called psychiatric MSE, where we observe more than just the patient's appearance, such as tone, pace, and more. Even a short analysis of one's linguistic behavior would be great."_
_Potential Misuses and Concerns around MindfulDiary_. In our field study, one patient participant perceived MindfulDiary as a channel to convey their intentions and situations to their psychiatrist. Specifically, the participant, P9, asked their psychiatrist, _"Have you seen what I wrote?"_, which indicated that the patient was actively attempting to share their current state and situation through MindfulDiary. Although such usage did not seem problematic per se, one psychiatrist raised concerns about the possibility that patients with borderline personality disorders might misuse MindfulDiary as a weapon to manipulate others, such as their providers and parents. _"In some cases, people self-harm out of genuine distress, but others do it to manipulate others, instilling guilt in them so they'll do what they want. There are some patients who write about their distress with sincerity, while there are some who exaggerate their distress in order to get attention."_ For patients exhibiting symptoms of schizophrenia or delusions, there was a concern that MindfulDiary's feature of revisiting past entries could act as a feedback loop, developing and amplifying their delusions. E2 said, _"This diary lets you revisit and organize your past actions. For schizophrenia patients with delusions or unique beliefs, referencing past writings might reinforce their pre-existing delusions. Reaffirming 'Yes, I'm right' can be problematic. The LLM's summaries could exacerbate these delusions if they emphasize distorted content."_
## 7. Discussion
In this study, we present MindfulDiary, an LLM-driven journal designed to document the daily experiences of psychiatric patients through naturalistic conversations. Here, we reflect on the opportunities presented by LLM-driven journaling for psychiatric patients and discuss considerations for integrating an LLM-driven patient system into the clinical setting.
### Guiding patient journaling through conversations offering diverse perspectives
Our study highlighted the potential of MindfulDiary in clinical settings, mainly where adherence to interventions is important (MindfulDiary, 2017). Core symptoms of depression, such as loss of energy, difficulty in carrying out mental processes, and feelings of apathy, often contribute to lower adherence to a professional's advice or intervention (Mindful, 2018). Clinicians who participated in our FGI also highlighted these challenges in motivating patients to utilize the diary writing program. Our findings demonstrated that MindfulDiary helped mitigate these challenges by transforming the conventional journaling process into engaging conversations. Using MindfulDiary, users were able to engage in conversations with the system by answering prompts and questions, which made them feel the journaling process was more accessible and intriguing. This active participation ensures that the users are not overwhelmed by the task and are guided in documenting their feelings and experiences more richly.
Depression often locks patients into negative and rigid thought patterns (Ball et al., 2017). Such patterns, and the resistance to changing established thought paradigms, can severely limit a patient's ability to perceive issues from multiple angles, leading to harsh self-judgment (Sandel, 2018). Our study highlighted that the varied perspectives offered by LLM-driven chatbots like MindfulDiary could help challenge such fixed viewpoints (Mindful, 2018). By prompting users to revisit their initial evaluations or by suggesting alternative viewpoints, these chatbots could help break the cycle of cognitive rigidity. While our research underscores the promising role of LLM-driven chatbots in assisting psychiatric patients' journaling process, it is essential to note that these are preliminary findings. More work is needed to substantiate these findings in a clinical context.
### Chatbot as a Mediator for Fostering Patient Understanding
Studies have suggested that sharing the data captured via chatbots with others, such as health professionals and family members, could further serve as an effective mediator that helps convey more truthful information (Sandel, 2018; D'Amico et al., 2018). For instance, patients consistently displayed deep self-disclosure through chatbots, whether or not they intended to share their inputs with health professionals (Sandel, 2018). Aligned with prior work on PGHD (Sandel, 2018; D'Amico et al., 2018), MHPs in our study also perceived that MindfulDiary shed light on patients' daily events, emotions, and thoughts that might have been difficult to gain through regular clinical visits. This data offered MHPs valuable insights into the patient's experiences and context. However, our study findings also point to the need for careful consideration. We observed that the act of sharing journal content with MHPs through MindfulDiary led some patients to exaggerate their conditions or needs. This case underscores that when integrating a system like MindfulDiary into a clinical context, not only the design of chatbots that encourages patient disclosure but also the complex patient-provider dynamics in clinical settings exert a significant influence. The growing prevalence of chatbots in mental health domains emphasizes the need for a holistic approach to their design and implementation. We highlight that engineers and MHPs need to collaborate closely, ensuring that these tools are not only technically sound but also tailored to meet the intricate dynamics of clinical settings (D'Amico et al., 2018).
### Considerations for Integrating LLM-driven Patient Program
In this section, we discuss the consideration for integrating LLM-driven patient programs into clinical mental health settings, drawing insights from the design and evaluation of MindfulDiary.
_Aligning domain experts' expectations of LLMs._ Developing and deploying MindfulDiary, we learned that aligning MHPs' expectations with the capabilities and limitations of LLMs involves significant challenges. The capability of generative language models to improve mental health is difficult to measure in comparison with AI models in other
medical domains, where objective metrics can determine performance. For instance, in medical imaging, AI can be evaluated based on its accuracy in identifying target diseases from MRI scans, using precise numerical percentages of correct identifications (Sen et al., 2016). On the other hand, in the realm of mental health chatbots, gauging success is more nuanced, as it involves subjective interpretations of emotional well-being and psychological improvement, which cannot be easily quantified or compared in the same straightforward manner. This challenge is amplified in mental health, where soft skills like rapport building and emotional observation are important (Sen et al., 2016). The use of LLMs in the mental health field is emerging, but little has been said about evaluating or defining the performance of models that are tailored to mental health. Our iterative evaluation process involving MHPs could inform researchers about how to develop and evaluate LLM-mediated mental health technology. When integrating into the clinical setting, this evaluation is also necessary for anticipating who the system would target and for what purpose it would be used. Hence, we advocate that engineers and researchers should carefully consider how to assist domain experts, who may lack AI expertise, in fully and accurately grasping the role and operation of LLM. It is also crucial for researchers and engineers to collaborate closely with these professionals to ensure the technology aligns with therapeutic needs and best practices (Sen et al., 2016).
_Tailored LLM Evaluation for Clinical Mental Health Domains._ The domain of mental health, which our study addresses, is characterized by the vulnerability of its target user group. The content discussed within this domain is often emotionally charged and sensitive. Therefore, prioritizing user safety becomes even more essential in this domain than in others. Considering the sensitivity of the domain, during our evaluation process, MHPs thoroughly tested the LLM's output by trying out conversations on various sensitive topics in both implicit and explicit ways, drawing upon their clinical experiences. The contents the MHPs input were much more diverse and wide-ranging than what engineers could generate during the development. Additionally, MHPs showed concern that the hallucinations of the language model could reinforce or expand the delusions of patients with delusional disorders. We highlight that developing evidence-based tests or benchmark sets to anticipate the behavior of the language models in collaboration with MHPs is critical when leveraging LLMs for clinical mental health settings.
_Role of MHPs in Integration._ Considering the limitations of current LLMs (Kang et al., 2017), it is critical to involve MHPs when deploying LLM-driven systems for patients in mental health contexts. While planning the field deployment study of MindfulDiary, we identified specific roles that MHPs could play. In the pre-use phase, MHPs should determine the suitability of users and facilitate the onboarding process with patients. During the mid-use phase, they should closely monitor interactions with the LLM and be prepared to intervene in cases of crises or unexpected use scenarios. Furthermore, they can offer or adjust treatments based on long-term data periodically. Additionally, they should regularly re-evaluate the continued use of the system. While some of these tasks should carefully be designed not to burden MHPs too much, it is important that LLMs do not make autonomous decisions about patients (e.g., diagnosis, prescription, or crisis management) but instead operate under professional oversight.
### Limitations and Future Work
Our recruitment method could impact the generalizability of our findings, as we recruited the patient participants for our field study from a single university hospital. Although we aimed to recruit patients with diverse types and levels of symptoms, our participants are not a representative sample of psychiatric patients. They were young (mostly adolescents) and consulted by a fixed group of psychiatrists. While this work is just a first step toward designing an LLM-driven journaling app for psychiatric patients, further investigation is necessary with participants from various backgrounds.
To implement our pipeline, we used OpenAI's GPT API, which provided the most capable LLM at the time of our study and was accessible via commercial API. As GPT models are continually updated, later models may not yield the same conversational behavior. To generalize the performance of our conversational pipeline design, future work is needed to compare multiple versions of MindfulDiary with different underlying LLMs.
## 8. Conclusion
In this paper, we designed MindfulDiary to assist psychiatric patients undergoing outpatient treatment with journaling in their daily lives. Keeping the clinical mental health setting in mind, our system was developed in collaboration with MHPs, from the initial concept building to the design of the LLM's conversation flow and evaluation. MindfulDiary leverages a stage-based LLM-driven chatbot, enabling patients to interact through prompt questions and answers while complying with guidelines grounded in MHPs' expertise and the literature. We conducted a field deployment study with 28 patients over four weeks. We found that the versatility, narrative-building capability, and diverse perspectives provided by MindfulDiary assisted patients in consistently enriching their daily records. The enriched records from MindfulDiary provided psychiatrists with deeper insights, enhancing their understanding and empathy toward their patients. We hope that this research provides a case study and insights into the development of an LLM-driven chatbot for mental health that is clinically relevant and reflects the needs and experiences of MHPs.
###### Acknowledgements.
We thank our study participants for their time and efforts. We also thank Eunkyung Jo and Yubin Choi for providing feedback on the early draft of this paper. This work was supported as a research internship at NAVER AI Lab.
|
2307.00786
|
An FPT Algorithm for Temporal Graph Untangling
|
Several classical combinatorial problems have been considered and analysed on
temporal graphs. Recently, a variant of Vertex Cover on temporal graphs, called
MinTimelineCover, has been introduced to summarize timeline activities in
social networks. The problem asks to cover every temporal edge while minimizing
the total span of the vertices (where the span of a vertex is the length of the
timestamp interval it must remain active in, minus one). While the problem has
been shown to be NP-hard even in very restricted cases, its parameterized
complexity has not been fully understood. The problem is known to be in FPT
under the span parameter only for graphs with two timestamps, but the
parameterized complexity for the general case is open. We settle this open
problem by giving an FPT algorithm that is based on a combination of iterative
compression and a reduction to the Digraph Pair Cut problem, a powerful problem
that has received significant attention recently.
|
Riccardo Dondi, Manuel Lafond
|
2023-07-03T07:05:25Z
|
http://arxiv.org/abs/2307.00786v1
|
# An FPT Algorithm for Temporal Graph Untangling
###### Abstract
Several classical combinatorial problems have been considered and analysed on temporal graphs. Recently, a variant of Vertex Cover on temporal graphs, called MinTimelineCover, has been introduced to summarize timeline activities in social networks. The problem asks to cover every temporal edge while minimizing the total span of the vertices (where the span of a vertex is the length of the timestamp interval it must remain active in, minus one). While the problem has been shown to be NP-hard even in very restricted cases, its parameterized complexity has not been fully understood. The problem is known to be in FPT under the span parameter only for graphs with two timestamps, but the parameterized complexity for the general case is open. We settle this open problem by giving an FPT algorithm that is based on a combination of iterative compression and a reduction to the Digraph Pair Cut problem, a powerful problem that has received significant attention recently.
Temporal Graphs, Vertex Cover, Graph Algorithms, Parameterized Complexity
## 1 Introduction
Temporal graphs are emerging as one of the main models to describe the dynamics of complex networks. They describe how relations (edges) change over a discrete time domain [11, 10], while the vertex set remains unchanged. The development of algorithms on temporal graphs has mostly focused on finding paths or walks and on analyzing graph connectivity [11, 19, 20, 6, 21, 7, 3, 16, 1, 5]. However, several classical problems in computer science have recently been extended to temporal graphs, and one of the most relevant problems in graph theory and theoretical computer science, Vertex Cover, has been considered in this context [2, 9, 18].
In particular, here we study a variant of Vertex Cover, called Network Untangling, introduced in [18]. Network Untangling has applications in discovering event timelines and summarizing temporal networks. It considers a sequence of temporal interactions between entities (e.g. discussions between users in a social network) and aims to explain the observed interactions with few (and short) _activity intervals_ of entities, such that each interaction is covered by at least one of the two entities involved (i.e. at least one of the two entities is active when an interaction between them is observed).
Network Untangling can be seen as a variant of Vertex Cover, where we search for a minimum cover of the interactions, called temporal edges. The size of this temporal vertex cover is based on the definition of the _span_ of a vertex, that is, the length of its activity interval. In particular, the span of a vertex is defined as the difference between the maximum and minimum timestamps where the vertex is active. Hence, if a vertex is active in exactly one timestamp, it has a span equal to 0.
Four combinatorial formulations of Network Untangling have been defined in [18], varying the definition of vertex activity (a single interval or \(h\geq 2\) intervals) and the objective function (minimization of the sum of vertex spans or minimization of the maximum vertex span). Here we consider the formulation, denoted by MinTimelineCover, where vertex activity is defined as a single interval and the objective function is the minimization of the sum of vertex spans. Hence, given a temporal graph, MinTimelineCover searches for a cover of the temporal edges that has minimum span and such that each vertex is active in one time interval.
The MinTimelineCover problem is known to be NP-hard [18]. The problem remains hard in very restricted cases: when each timestamp contains at most one temporal edge [4], when each vertex has at most two incident temporal edges in each timestamp and the temporal graph is defined over three timestamps [4], and when the temporal graph is defined over two timestamps [8]. Note that, since the span of a vertex active in exactly one timestamp is equal to 0, MinTimelineCover is trivially in P when the temporal graph is defined on a single timestamp, since in this case any solution of the problem has span 0. Furthermore, deciding whether there exists a solution of MinTimelineCover with span equal to 0 can be done in polynomial time via a reduction to 2-SAT [18].
MinTimelineCover has also been considered in the parameterized complexity framework. The definition of span leads to a problem where the algorithmic approaches applied to Vertex Cover cannot easily be extended to the parameter span of the solution. Indeed, in Vertex Cover, for each edge we know that at least one of the endpoints must be included in the solution, thus at least one of the two vertices contributes to the cost of the solution. This leads to the textbook FPT algorithm that branches over the endpoints of any edge. For MinTimelineCover, a vertex with span 0 may cover a temporal edge, as the vertex can be active only in the timestamp where the temporal edge is defined. This makes it more challenging to design FPT algorithms when the parameter is the span of the solution. In this case, MinTimelineCover is known to admit a parameterized algorithm only when the input temporal graph is defined over two timestamps [8], via a parameterized reduction to the Almost 2-SAT problem. However, the parameterized complexity of MinTimelineCover for the parameter span of the solution on general instances has been left open [8, 4]. The authors of [8] have also analyzed the parameterized complexity of the variants of Network Untangling proposed in [18], considering other parameters in addition to the span of the solution: the number of vertices of the temporal graph, the length of the time domain, and the number of intervals of vertex activity.
**Our contributions.** We solve the open question on the parameterized complexity of MinTimelineCover by showing that the problem is FPT in parameter \(k\), the span of a solution, even if the number of timestamps is unbounded. Our algorithm takes time \(O^{*}(2^{5k\log k})\), where the \(O^{*}\) notation hides polynomial factors. Our algorithm is divided into two phases, each using a different technique. First, given a temporal graph \(G\), we use a variant of iterative compression, where we start from a solution \(S\) of span at most \(k\) on a subgraph of \(G\) induced by a subset of vertices (taken across all timestamps), and then try to maintain such a solution after adding a new vertex of \(G\) to the graph under consideration. This requires us to reorganize which vertices involved in \(S\) should be in the solution or not, and in which timestamps. One challenge is that, since the number of such timestamps is unbounded, there are too many ways to decide how to include or not include the vertices that are involved in \(S\). We introduce the notion of a _feasible assignment_ (defined in Section 3.1), which allows us to compute how the vertices in \(S\) can be reorganized. There are only \(2^{O(k\log k)}\) ways of reorganizing the vertices in \(S\). We try each such feasible assignment \(X\)
and we must then find a temporal cover of the whole graph \(G\) that "agrees" with \(X\).
This leads to the second phase of the algorithm, which decides whether such an agreeing cover exists through a reduction to a variant of a problem called Digraph Pair Cut. In this problem, we receive a directed graph and forbidden pairs of vertices, and we must delete at most \(k\) arcs so that a specified source vertex does not reach both vertices of any forbidden pair. It is known that the problem can be solved in time \(O^{*}(2^{k})\). In this work, we need a version where the input specifies a set of deletable and undeletable arcs, which we call Constrained Digraph Pair Cut. The Digraph Pair Cut problem and its variants have played an important role in devising randomized kernels using matroids [15] and, more recently, in establishing a dichotomy in the complexity landscape of constraint satisfaction problems [12, 14]. Here, the problem is useful since it can model the implications of choosing a vertex in a solution or not and, in a more challenging way, allows implementing the notion of cost given by our definition of span. We hope that the techniques developed for this reduction can be useful for other variants of temporal graph cover.
**Overview of the algorithm.** Our approach is loosely inspired by some ideas from the FPT algorithm for two timestamps, which is a reduction to Almost 2-SAT [8]. In the latter problem, one is given a set of clauses with at most two variables each and must delete a minimum number of them so that those remaining are satisfiable. We do not use Almost 2-SAT directly, but its usage for two timestamps may help in understanding the origins of our techniques and the relevance of our reduction to Digraph Pair Cut.
The reduction from MinTimelineCover on two timestamps to Almost 2-SAT associates each vertex \(v_{i}\) with a variable \(x(v_{i})\), which is true when one should include \(v_{i}\) in a vertex cover and false otherwise; each edge \(u_{i}v_{i}\) is associated with a clause \(x(u_{i})\lor x(v_{i})\) (here, \(v_{i}\) represents the occurrence of vertex \(v\) at timestamp \(i\in\{1,2\}\)). This corresponds to enforcing the inclusion of \(u_{i}\) or \(v_{i}\) in our vertex cover, and we can include enough copies of this clause to make it undeletable. Since our goal is to minimize the number of base vertices \(v\) with both \(v_{1}\) and \(v_{2}\) in the cover, we also add a clause \(\neg x(v_{1})\vee\neg x(v_{2})\). Then there is a temporal cover of \(G\) of span at most \(k\) if and only if one can delete at most \(k\) clauses of the latter form to make all remaining clauses satisfiable. Even though this reduction produces clauses containing only positive or only negative literals, MinTimelineCover does not appear to be much simpler than Almost 2-SAT in terms of FPT algorithms, and studying the SAT formulation seems more approachable.
For \(T\geq 3\) timestamps, the clauses of the form \(x(u_{i})\lor x(v_{i})\) can still be used to model the vertex cover requirements, but there seems to be no obvious way to model the span of a cover. One would need to devise a set of clauses of size two such that choosing an interval of \(t\) vertices in a cover corresponds to deleting \(t-1\) negative clauses. Our idea is to extend current FPT algorithms for Almost 2-SAT to accommodate our cost function. In [17], the authors propose an iterative compression FPT algorithm that starts from a solution that deletes \(k+1\) clauses, and modifies it into a solution with \(k\) clauses, if possible. The algorithm relies on several clever but complicated properties of the dependency graph of the clauses (in which vertices are literals and arcs are implications implied by the clauses). This algorithm seems difficult to adapt to our problem. To our knowledge, the only other FPT algorithm for Almost 2-SAT is that of [15], which is achieved through a parameterized reduction to Digraph Pair Cut. At a high level, the idea is to start from an initial guess of an assignment for a well-chosen subset of variables, and then to construct the dependency graph of the clauses. A certain chain of implications is enforced by the initial guess, the vertex pairs to separate correspond to contradictory literals, and deleting arcs corresponds to deleting clauses. It turns out that, with some work, we can skip the Almost 2-SAT formulation and
reduce MinTimelineCover to (a variant of) Digraph Pair Cut directly by borrowing some ideas from this reduction. This is not immediate though. The first challenge is that the aforementioned "well-chosen initial guess" idea cannot be used in our context, and we must develop new tools to enumerate a bounded number of initial guesses from a partial solution (which we call feasible assignments). The second challenge is that our reduction to our variant of Digraph Pair Cut needs a specific gadget to enforce our cost scheme, while remaining consistent with the idea of modeling the dependency graph of the SAT instance corresponding to the cover problem at hand.
## 2 Preliminaries
For an integer \(n\), we denote \([n]=\{1,\ldots,n\}\) and, for two integers \(i\), \(j\), with \(i<j\), we denote \([i,j]=\{i,i+1,\ldots,j-1,j\}\). Temporal graphs are defined over a discrete time domain \(\mathcal{T}\), which is a sequence \(1,2,\ldots,T\) of timestamps. A temporal graph is also defined over a set of _base vertices_, which do not change over the time domain and are defined in all timestamps; they are associated with _vertices_, which are base vertices at specific timestamps. We use subscripts to denote the timestamp to which a vertex belongs, so, for a base vertex \(v\) and \(t\in[T]\), we use \(v_{t}\) to denote the occurrence of \(v\) in timestamp \(t\). A _temporal edge_ connects two vertices, associated with distinct base vertices, that belong to the same timestamp.
A temporal graph \(G=(V_{B},E,\mathcal{T})\) consists of
1. A time domain \(\mathcal{T}=\{1,2,\ldots,T\}\);
2. A set \(V_{B}\) of _base vertices_; \(V_{B}\) has a corresponding set \(V(G)\) of _vertices_, which consists of base vertices in specific timestamps, defined as follows: \[V(G)=\{v_{t}:v\in V_{B}\wedge t\in[T]\}.\]
3. A set \(E=E(G)\) of temporal edges, which satisfies: \[E\subseteq\{u_{t}v_{t}:u,v\in V_{B},t\in[T]\wedge u\neq v\}.\]
For a directed (static) graph \(H\), we denote by \((u,v)\) an arc from vertex \(u\) to vertex \(v\) (we consider only directed static graphs, not directed temporal graphs).
Given a temporal graph \(G=(V_{B},E,\mathcal{T})\) and a set of base vertices \(B\subseteq V_{B}\), we define the set \(\tau(B)\) of all vertices of \(B\) across all times:
\[\tau(B)=\{v_{t}:v\in B\wedge t\in[T]\}.\]
If \(B=\{v\}\), we may write \(\tau(v)\) instead of \(\tau(\{v\})\).
Given a set \(W\subseteq V(G)\), we denote by \(G[W]\) the subgraph induced by vertices \(W\), i.e. \(V(G[W])=W\) and \(E(G[W])=\{u_{t}v_{t}\in E:u_{t},v_{t}\in W\}\). For a subset \(W_{B}\subseteq V_{B}\) of base vertices, we denote \(G[W_{B}]=G[\tau(W_{B})]\). We also use the notation \(G-W_{B}=G[V_{B}\setminus W_{B}]\). Observe that \(G[W_{B}]\) and \(G-W_{B}\) are temporal graphs over the same time domain as \(G\).
In order to define the problem we are interested in, we need to define the _assignment_ of a set of base vertices.
Consider a temporal graph \(G=(V_{B},E,\mathcal{T})\) and a set \(W_{B}\subseteq V_{B}\) of base vertices. An _assignment_ of \(W_{B}\) is a subset \(X\subseteq\tau(W_{B})\) such that if \(u_{p}\in X\) and \(u_{q}\in X\), with \(p,q\in[T]\), then \(u_{t}\in X\), for each \(t\in[T]\) with \(p\leq t\leq q\). For a base vertex \(u\)
such that there exists \(t\in[T]\) with \(u_{t}\in X\), we denote by \(\delta(u,X)\) and \(\Delta(u,X)\), respectively, the minimum and maximum timestamp such that \(u_{\delta(u,X)},u_{\Delta(u,X)}\in X\). If no such \(t\) exists, then \(\delta(u,X)=\Delta(u,X)=0\).
If \(W_{B}\) is clear from the context or not relevant, then we may say that \(X\) is an assignment, without specifying \(W_{B}\). Note that, given an assignment \(X\) and a set \(\tau(v)\), for some \(v\in V_{B}\), then \(X\cap\tau(v)=\{v_{t}:v_{t}\in X\wedge v_{t}\in\tau(v)\}\) contains vertices for \(v\) that belong to a contiguous interval of timestamps. Consider a set \(I\subseteq[T]\) of timestamps. An assignment \(X\)_intersects_\(I\) if there exists \(v_{t}\in X\) such that \(t\in I\).
Now, we give the definition of _temporal cover_.
Given a temporal graph \(G=(V_{B},\mathcal{T},E)\), a _temporal cover_ of \(G\) is an assignment \(X\) of \(V_{B}\) such that the following properties hold:
1. For each \(v\in V_{B}\) there exists at least one \(v_{t}\in X\), for some \(t\in\mathcal{T}\).
2. For each \(u_{t}v_{t}\in E\), with \(t\in[T]\), at least one of \(u_{t}\), \(v_{t}\) is in \(X\).
For a temporal cover \(X\) of \(G\), the _span_ of \(v\) in \(X\) is defined as: \(\text{{sp}}(v,X)=\Delta(v,X)-\delta(v,X)\). Note that if a temporal cover \(X\) contains, for a base vertex \(v\in V_{B}\), a single vertex \(v_{t}\), then \(\text{{sp}}(v,X)=0\). The span of \(X\), denoted by \(\text{{sp}}(X)\), is then defined as:
\[\text{{sp}}(X)=\sum_{v\in V_{B}}\text{{sp}}(v,X).\]
Now, we are able to define MinTimelineCover (an example is presented in Fig. 1).
**Problem 1**.: (MinTimelineCover)
**Input:** A temporal graph \(G=(V_{B},\mathcal{T},E)\) and an integer \(k\).
**Question:** Does there exist a temporal cover of \(G\) of span at most \(k\)?
A temporal cover \(S\subseteq V(G)\) of span at most \(k\) will sometimes be called a _solution_. Our goal is to decide whether MinTimelineCover is FPT in parameter \(k\).
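To make these definitions concrete, the following Python sketch (ours, not part of the original formulation) represents a temporal graph by its list of temporal edges and checks whether a candidate assignment is a temporal cover, reporting its span. The names `span`, `is_temporal_cover`, and the toy instance are illustrative assumptions.

```python
def span(cover):
    """Total span of an assignment, given as {base_vertex: set_of_timestamps}."""
    return sum(max(ts) - min(ts) for ts in cover.values() if ts)

def is_temporal_cover(base_vertices, temporal_edges, cover):
    """Check the two conditions of a temporal cover plus the interval property of an
    assignment: every base vertex is active somewhere, each activity set is a
    contiguous interval, and every temporal edge (u, v, t) has u or v active at t."""
    for v in base_vertices:
        ts = cover.get(v, set())
        if not ts:
            return False                                # condition 1 violated
        if set(range(min(ts), max(ts) + 1)) != ts:
            return False                                # not a single interval
    return all(t in cover.get(u, set()) or t in cover.get(v, set())
               for (u, v, t) in temporal_edges)         # condition 2

# A toy instance (not the one of Figure 1): three base vertices over T = 4.
edges = [("a", "b", 2), ("b", "c", 2), ("a", "c", 3)]
X = {"a": {3}, "b": {2}, "c": {2, 3}}
print(is_temporal_cover(["a", "b", "c"], edges, X), span(X))  # True 1
```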
## 3 An FPT Algorithm
In this section we present our FPT algorithm, which consists of two parts:
1. The iterative compression technique.
2. A reduction to the Constrained Digraph Pair Cut problem.
Before presenting the main steps of our algorithm, we present the main idea and some definitions. Recall that our parameter, that is the span of a solution of MinTimelineCover, is denoted by \(k\).
Consider a temporal graph \(G\) and assume we have a temporal cover \(S\) of span at most \(k\) of the subgraph \(G-\{w\}\), for some base vertex \(w\in V_{B}\). The idea of the iterative compression step is, starting from \(S\), to show how to decide in FPT time whether there exists a solution of MinTimelineCover for \(G\). This is done by solving a subproblem, called Restricted Timeline Cover, where we must modify \(S\) to consider \(w\). A solution to this subproblem is computed by branching on the assignments of base vertices having a positive span in \(S\) and on \(w\), and then reducing the problem to Constrained Digraph Pair Cut. Restricted Timeline Cover is defined as follows.
**Problem 2**.: (Restricted Timeline Cover)
_Input:_ A temporal graph \(G=(V_{B},\mathcal{T},E)\), a vertex \(w\in V_{B}\), and a temporal cover \(S\) of \(G-\{w\}\) of span at most \(k\).
_Question:_ Does there exist a temporal cover of \(G\) of span at most \(k\)?
For technical reasons that will become apparent later, we will assume that the temporal graph contains no edge at timestamps \(1\) and \(T\), i.e. \(G[\{v_{1},v_{T}:v\in V_{B}\}]\) is an edgeless graph (as in Fig. 1). It is easy to see that if this is not already the case, we can add two such "dummy" timestamps in which \(G\) does not contain any temporal edge. Indeed, since there are no temporal edges in these two timestamps, \(G\) has a temporal cover of span at most \(k\) if and only if the same graph with dummy timestamps has a temporal cover of span at most \(k\).
Informally, if we are able to solve Restricted Timeline Cover in FPT time, then we can obtain an FPT algorithm for MinTimelineCover as well. Indeed, we can first compute a temporal cover on a small subset of base vertices (for example a single vertex), and then we can add, one at a time, the other vertices of the graph. This requires at most \(|V_{B}|\) iterations, and each time a vertex is added, we compute a solution of Restricted Timeline Cover to check whether it is possible to find a temporal cover of span at most \(k\) after the addition of a vertex.
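The outer loop just described can be phrased operationally. The following Python sketch is our own schematic rendering, where `restricted_solver` is a placeholder for the FPT subroutine for Restricted Timeline Cover developed in the next subsections (assumed to return a cover of span at most \(k\) or `None`); it is not implemented here.

```python
def min_timeline_cover(base_vertices, temporal_edges, k, restricted_solver):
    """Iterative-compression outer loop: add base vertices one at a time and, at each
    step, either repair the current cover via Restricted Timeline Cover or conclude
    that no temporal cover of span at most k exists."""
    order = list(base_vertices)
    S = set()                        # trivial solution for the subgraph on the first base vertex
    for i in range(1, len(order)):
        w = order[i]
        kept = set(order[:i + 1])
        edges_i = [(u, v, t) for (u, v, t) in temporal_edges
                   if u in kept and v in kept]
        S = restricted_solver(kept, edges_i, w, S, k)   # cover of span <= k, or None
        if S is None:
            return None
    return S
```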
### Iterative Compression
We now present our approach based on iterative compression to solve the Restricted Timeline Cover problem. Given a solution \(S\) for \(G-\{w\}\), we focus on the vertices of \(V_{B}\) that have a positive span in \(S\) and vertex \(w\). An example of our approach, that illustrates the sets of base vertices and vertices used by the algorithm, is presented in Fig. 2.
Consider the input of Restricted Timeline Cover that consists of a temporal graph \(G=(V_{B},\mathcal{T},E)\), a vertex \(w\in V_{B}\), and a temporal cover \(S\) of \(G-\{w\}\) of span at most \(k\).
Figure 1: An example of MinTimelineCover on a temporal graph \(G\) consisting of four base vertices and six timestamps. For each timestamp, we draw the temporal edges of \(G\), for example for \(t=2\), the temporal edges are \(v_{2}u_{2}\), \(v_{2}w_{2}\), \(u_{2}w_{2}\), \(z_{2}w_{2}\). Also note that in \(t=1\) and \(t=6\) no temporal edge is defined. A temporal cover \(X=\{v_{5},u_{2},u_{3},u_{4},z_{3},z_{4},w_{2}\}\) is represented with grey rectangles. Note that \(\delta(v,X)=\Delta(v,X)=5\), \(\delta(u,X)=2\), \(\Delta(u,X)=4\), \(\delta(z,X)=3\), \(\Delta(z,X)=4\), \(\delta(w,X)=\Delta(w,X)=2\). It follows that \(sp(X)=3\).
Define the following sets associated with \(S\):
\[V_{S}=\{v\in V_{B}:\exists p,q\in[T],p<q,\text{ such that }v_{p},v_{q} \in S\ \}\cup\{w\}\] \[V^{\prime}_{S}=\{v_{t}:v_{t}\in S,v\in V_{S}\setminus\{w\}\}\cup\{w_ {t}:t\in[T]\}.\]
The set \(V_{S}\) is defined as the set of base vertices having span greater than \(0\) in \(S\), plus vertex \(w\). \(V^{\prime}_{S}\) contains the vertices in \(V(G)\) associated with \(V_{S}\), in particular: (1) the vertices corresponding to the base vertices in \(V_{S}\setminus\{w\}\) that are included in \(S\) and (2) vertices corresponding to the base vertex \(w\) in every timestamp.
Define the following set \(I_{S}\) of timestamps associated with \(V_{S}\setminus\{w\}\):
\[I_{S}=\{t\ \in[T]:u_{t}\in V^{\prime}_{S}\text{ for some }u\in V_{S}\setminus\{w\}\ \}.\]
Essentially, \(I_{S}\) contains those timestamps where the base vertices of \(V_{S}\setminus\{w\}\), that is, those of span greater than zero, have associated vertices in \(S\). These timestamps are essential for computing a solution of Restricted Timeline Cover, that is, for deciding whether there exists a temporal cover of \(G\) of span at most \(k\) starting from \(S\). We now define the sets of base vertices and vertices associated with \(S\) that have a span equal to \(0\):
\[Z_{S}=V_{B}\setminus V_{S}\qquad Z^{\prime}_{S}=S\setminus V^{\prime}_{S}.\]
First, we show two easy properties of \(S\) and \(I_{S}\) on the temporal graph \(G-\{w\}\).
**Lemma 4**.: _Let \(S\) be a solution of MinTimelineCover on instance \(G-\{w\}\) and let \(I_{S}\) be the associated set of timestamps. Then \(|I_{S}|\leq 2k\)._
**Lemma 5**.: _Let \(S\) be a solution of MinTimelineCover on instance \(G-\{w\}\). Then, \(sp(Z^{\prime}_{S})=0\). Moreover, \(Z^{\prime}_{S}\) covers each temporal edge of \(G-\{w\}\) not covered by \(V^{\prime}_{S}\setminus\tau(w)\)._
Now, we introduce the concept of feasible assignment, which is used to "guess" how \(S\) is rearranged in a solution of Restricted Timeline Cover. Recall that an assignment \(X\) intersects a set \(I_{S}\) of timestamps if there exists \(v_{t}\in X\) such that \(t\in I_{S}\).
**Definition 6**.: [Feasible assignment] Consider an instance of Restricted Timeline Cover that consists of a temporal graph \(G=(V_{B},\mathcal{T},E)\), a vertex \(w\in V_{B}\), a temporal cover \(S\) of \(G-\{w\}\) of span at most \(k\), and sets \(V_{S},V^{\prime}_{S}\) and \(I_{S}\) associated with \(S\). We say that an assignment \(X\subseteq\tau(V_{S})\) of \(V_{S}\) is a _feasible assignment_ (with respect to \(G,S\), and \(I_{S}\)) if all of the following conditions hold:
1. the span of \(X\) is at most \(k\);
2. every edge of \(G[V_{S}]\) is covered by \(X\);
3. \(X\cap\tau(w)\) is a non-empty assignment of \(\{w\}\);
4. for every \(v\in V_{S}\setminus\{w\}\), at least one of the following holds: (1) \(X\cap\tau(v)\) is empty; (2) \(X\cap\tau(v)\) is an assignment of \(\{v\}\) that intersects with \(I_{S}\); or (3) \(X\cap\tau(v)\) contains a vertex \(v_{t}\) such that \(v_{t}w_{t}\in E\) and \(w_{t}\notin X\cap\tau(w)\).
Given a feasible assignment \(X\), we denote
\[M_{S}(X)=\{v\in V_{S}:X\cap\tau(v)\neq\emptyset\}\qquad\ N_{S}(X)=\{v\in V_{S} :X\cap\tau(v)=\emptyset\}\]
Informally, notice that point \(4\) considers the possible cases for a feasible assignment of the vertices of a base vertex \(v\in V_{S}\setminus\{w\}\): none of the associated vertices in \(I_{S}\) belongs to the computed solution (case 4.(1)), or some of its associated vertices in \(I_{S}\) belong to the
solution, cases 4.(2) and 4.(3), where the latter case is forced by the need to cover a temporal edge \(v_{t}w_{t}\), with \(t\in I_{S}\), not covered by \(w_{t}\).
Note that \(M_{S}(X)\) and \(N_{S}(X)\) form a partition of \(V_{S}\). Also note that \(G,S\), and \(I_{S}\) are fixed in the remainder, so we assume that all feasible assignments are with respect to \(G,S\), and \(I_{S}\) without explicit mention. We now relate feasible assignments to temporal covers.
**Definition 7**.: Let \(X^{*}\) be a temporal cover of \(G\) and let \(X\) be a feasible assignment. We say that \(X^{*}\)_agrees_ with \(X\) if:
* for each \(v\in M_{S}(X)\), \(X^{*}\cap\tau(v)=X\cap\tau(v)\);
* for each \(v\in N_{S}(X)\) and each \(t\in I_{S}\), \(X^{*}\) contains every neighbor \(u_{t}\) of \(v_{t}\) such that \(u_{t}\in\tau(Z_{S})\).
The intuition of \(X^{*}\) agreeing with \(X\) is as follows. For \(v\in M_{S}(X)\), \(X\) "knows" which vertices of \(\tau(v)\) should be in the solution, and we require \(X^{*}\) to contain exactly those. For \(v\in N_{S}(X)\), we interpret that \(X\) does not want any vertex \(v_{t}\) with \(t\in I_{S}\). Thus, to cover the edges incident to \(v_{t}\) that go outside of \(V_{S}\), we require \(X^{*}\) to contain the other endpoint. Note an important subtlety: we act "as if" \(X^{*}\) should not contain \(v_{t}\) or other vertices of \(N_{S}(X)\) with timestamp in \(I_{S}\), but the definition does not forbid it. Hence, \(X^{*}\)_can_ contain a vertex of \(N_{S}(X)\) in some timestamps of \(I_{S}\), as long as \(X^{*}\) also contains its neighbors (in \(I_{S}\)) outside \(V_{S}\).
The main purpose of feasible assignments and agreement is as follows.
**Lemma 8**.: _Let \(X^{*}\) be a temporal cover of \(G\) of span at most \(k\). Then there exists a feasible assignment \(X\) such that \(X^{*}\) agrees with \(X\)._
Proof.: Construct \(X\subseteq X^{*}\) as follows: add \(X^{*}\cap\tau(w)\) to \(X\), and for \(v\in V_{S}\setminus\{w\}\), add \(X^{*}\cap\tau(v)\) to \(X\) if and only if \(X^{*}\cap\tau(v)\) intersects with the set \(I_{S}\), or if it contains a vertex \(v_{t}\) incident to an edge \(v_{t}w_{t}\in E\) such that \(w_{t}\notin X^{*}\cap\tau(w)\). Note that since \(X^{*}\) is an assignment of \(V_{B}\), \(X\) is an assignment of \(V_{S}\).
We first focus on arguing that \(X\) satisfies each condition of a feasible assignment (Definition 6). For Condition 1, since \(X^{*}\) has span at most \(k\) and \(X\subseteq X^{*}\), it is clear that \(X\) also has span at most \(k\). For Condition 3, \(X^{*}\cap\tau(w)\) is non-empty by the definition of a temporal cover, and we added \(X^{*}\cap\tau(w)\) to \(X\). For Condition 4, we explicitly require in our construction of \(X\) that for each \(v\in V_{S}\setminus\{w\}\), if \(X\cap\tau(v)\) is non-empty, then it is equal to \(X^{*}\cap\tau(v)\) and it either intersects with \(I_{S}\) or covers an edge not covered by \(X\cap\tau(w)=X^{*}\cap\tau(w)\).
Let us focus on Condition 2. Let \(u_{t}v_{t}\in E(G[V_{S}])\). If \(u=w\), then if we did not add \(w_{t}\) to \(X\), \(X^{*}\) must contain \(v_{t}\) and we added \(X^{*}\cap\tau(v)\) to \(X\), thereby covering the edge. The same holds if \(v=w\). Assume \(u\neq w,v\neq w\), and suppose without loss of generality that \(X^{*}\) contains \(u_{t}\) to cover the edge. Suppose for contradiction that \(X\) does not cover \(u_{t}v_{t}\). Then we did not add \(X^{*}\cap\tau(u)\) to \(X\), which implies that \(X^{*}\cap\tau(u)\) does not intersect with \(I_{S}\). In particular, \(t\notin I_{S}\). Recall that \(S\), the temporal cover of \(G-\{w\}\), only intersects with \(\tau(u)\) and \(\tau(v)\) in timestamps contained in \(I_{S}\). Hence, \(S\) cannot cover \(u_{t}v_{t}\), a contradiction. We deduce that \(X\) covers every edge. Therefore, \(X\) is a feasible assignment.
It remains to show that \(X^{*}\) agrees with \(X\). For \(v\in M_{S}(X)\), \(X^{*}\cap\tau(v)=X\cap\tau(v)\) by the construction of \(X\). For \(v\in N_{S}(X)\), there is no \(v_{t}\in X^{*}\) with \(t\in I_{S}\), as otherwise we would have added \(X^{*}\cap\tau(v)\) to \(X\). For every such \(v_{t}\), \(X^{*}\) must contain all of its neighbors in \(\tau(Z_{S})\) to cover the edges, as required by the definition of agreement.
It remains to show that the number of feasible assignments is bounded and that they can be enumerated efficiently. We first show how the enumeration can be carried out: start with \(X\) as an empty set and then apply the following steps:
1. Branch into every non-empty assignment \(X_{w}\) of \(\{w\}\) of span at most \(k\). In each branch, add the chosen subset \(X_{w}\) to \(X\);
2. For every edge \(v_{t}w_{t}\in E(G[V_{S}])\) such that \(w_{t}\notin X_{w}\), add \(v_{t}\) to \(X\);
3. For every \(v\in V_{S}\setminus\{w\}\), such that \(X\cap\tau(v)=\emptyset\) at this moment, branch into \(|I_{S}|+1\) options: either add no vertex of \(\tau(v)\) to \(X\), or choose a vertex \(v_{t}\) and add it to \(X\), where \(t\in I_{S}\);
4. For every \(v\in V_{S}\setminus\{w\}\) such that \(X\cap\tau(v)\neq\emptyset\) at this moment, branch into every assignment \(X_{v}\) of \(\{v\}\) of span at most \(k\) that contains every vertex of \(X\cap\tau(v)\) (if no such assignment exists, abort the current branch). For each such branch, add every vertex of \(X_{v}\setminus X\) to \(X\).
**Theorem 9**.: _The above steps enumerate every feasible assignment in time \(O(2^{4k\log k}T^{3}n)\), where \(n=|V_{B}|\)._
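For intuition on the branching over activity intervals in Steps 1 and 4, the sketch below (an illustration of ours, with a hypothetical function name) enumerates, for a single base vertex, all activity intervals of span at most \(k\) within the time domain; there are \(O(Tk)\) such intervals.

```python
def interval_assignments(T, k):
    """All intervals [l, r] inside 1..T whose span r - l is at most k; these are
    exactly the candidate non-empty single-vertex assignments of span at most k."""
    for l in range(1, T + 1):
        for r in range(l, min(l + k, T) + 1):
            yield (l, r)

print(len(list(interval_assignments(T=6, k=2))))  # 15 candidate intervals
```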
### Reducing to Constrained Digraph Pair Cut
Our objective is now to list every feasible assignment and, for each of them, to verify whether there is a temporal cover that agrees with it. More specifically, consider a feasible assignment \(X\subseteq\tau(V_{S})\). Our goal is to decide whether there is a temporal cover \(X^{*}\) of span at most \(k\) that agrees with \(X\). Since we branch over every possible feasible assignment \(X\), if there is a temporal cover \(X^{*}\) of \(G\) of span at most \(k\), then by Lemma 8 our enumeration will eventually consider an \(X\) that \(X^{*}\) agrees with, and hence we will be able to decide the existence of \(X^{*}\).
We show that finding \(X^{*}\) reduces to the Constrained Digraph Pair Cut problem, as we define it below. For a directed graph \(H\), we denote its set of arcs by \(A(H)\) (to avoid confusion with \(E(G)\), which is used for the edges of an undirected graph \(G\)). For \(F\subseteq A(H)\), we write \(H-F\) for the directed graph with vertex set \(V(H)\) and arc set \(A(H)\setminus F\).
[Constrained Digraph Pair Cut]
**Input:** A directed graph \(H=(V(H),A(H))\), a source vertex \(s\in V(H)\), a set of vertex pairs \(P\subseteq\binom{V(H)}{2}\) called _forbidden pairs_, a subset of arcs \(D\subseteq A(H)\) called _deletable arcs_, and an integer \(k^{\prime}\).
**Output:** Does there exist a set of arcs \(F\subseteq D\) of \(H\) such that \(|F|\leq k^{\prime}\) and such that, for each \(\{u,v\}\in P\), at least one of \(u\), \(v\) is not reachable from \(s\) in \(H-F\)?
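As a sanity check on this definition, the following Python sketch (ours) verifies a candidate solution \(F\) with a plain breadth-first reachability test; it is only a verifier, not the \(O^{*}(2^{k^{\prime}})\) algorithm of [15]. The function names are illustrative assumptions.

```python
from collections import defaultdict, deque

def reachable_from(arcs, s):
    """Vertices reachable from s in the digraph given by its arc list."""
    adj = defaultdict(list)
    for u, v in arcs:
        adj[u].append(v)
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def is_cdpc_solution(arcs, s, pairs, deletable, k, F):
    """F must use only deletable arcs, have size at most k, and leave no
    forbidden pair fully reachable from s in H - F."""
    F = set(F)
    if not F <= set(deletable) or len(F) > k:
        return False
    R = reachable_from([a for a in arcs if a not in F], s)
    return all(u not in R or v not in R for (u, v) in pairs)
```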
It is known that Constrained Digraph Pair Cut can be solved in time \(O^{*}(2^{k^{\prime}})\)[15], but a few remarks are needed before proceeding. In [15], the authors only provide an algorithm for the _vertex-deletion_ variant, and do not consider deletable/undeletable arcs. It is easy to make an arc undeletable by adding enough parallel paths between the two endpoints, and we show at the end of the section that our formulation of Constrained Digraph Pair Cut reduces to the simple vertex-deletion variant. The vertex-deletion variant also admits a randomized polynomial kernel, and other FPT results are known for weighted arc-deletion variants [13].
So let us fix a feasible assignment \(X\) for the remainder of the section. We will denote \(M_{S}=M_{S}(X)\) and \(N_{S}=N_{S}(X)\). We also consider the following set of vertices associated with \(N_{S}\):
\[N_{S}^{\prime}=\{v_{2}:v\in N_{S}\}\qquad N_{S}^{\prime\prime}=\{v_{t}\in\tau(N_{S}):t\in I_{S}\}.\]
For each base vertex \(v\in N_{S}\), we need \(N_{S}^{\prime}\) to contain one vertex of \(\tau(v)\) with timestamp in \([2,T-1]\), so we choose \(v_{2}\) arbitrarily. Then, \(N_{S}^{\prime\prime}\) contains those vertices \(v_{t}\), with \(t\in I_{S}\), not chosen by the
feasible assignment \(X\). Note that, according to our definition of agreement, a solution \(X^{*}\) should contain all the neighbors of \(N_{S}^{\prime\prime}\) vertices that are in \(Z_{S}\). Recall that we have defined \(Z_{S}=V_{B}\setminus V_{S}\) and \(Z_{S}^{\prime}=S\setminus V_{S}^{\prime}\). By Lemma 5 we know that \(Z_{S}^{\prime}\) covers each temporal edge of \(G[V_{B}\setminus\{w\}]\) not covered by \(S\cap V_{S}^{\prime}\), and that \(sp(Z_{S}^{\prime})=0\). We may assume that for each \(v\in Z_{S}\), there is exactly one \(t\in[T]\) such that \(v_{t}\in Z_{S}^{\prime}\) (there cannot be more than one since \(Z_{S}^{\prime}\) has span \(0\), and if there is no such \(t\), we can add any \(v_{t}\) without affecting the span). Furthermore, we will assume that for each \(v\in Z_{S}\), the vertex \(v_{t}\) in \(Z_{S}^{\prime}\) is neither \(v_{1}\) nor \(v_{T}\). Indeed, since we assume that the first and last timestamps of \(G\) have no edges, if \(v_{t}=v_{1}\) or \(v_{t}=v_{T}\), then \(v_{t}\) covers no edge and we may safely replace it with another vertex of \(\tau(v)\).
The following observation will be useful for our reduction to Constrained Digraph Pair Cut.
**Observation 10**.: _Let \(u_{t}v_{t}\in E(G)\) be such that \(u\in N_{S}\) and \(v\notin M_{S}\). Then \(v\in Z_{S}\) and, if \(u_{t}\notin N_{S}^{\prime\prime}\), we have \(v_{t}\in Z_{S}^{\prime}\)._
Now, given a feasible assignment \(X\subseteq\tau(V_{S})\) and the sets \(M_{S}\), \(N_{S}\), \(N_{S}^{\prime}\), \(N_{S}^{\prime\prime}\), \(Z_{S}\), and \(Z_{S}^{\prime}\), we present our reduction to the Constrained Digraph Pair Cut problem. We construct an instance of this problem, consisting of a directed graph \(H=(V(H),A(H))\), a set of forbidden (unordered) pairs \(P\subseteq\binom{V(H)}{2}\), and a set of deletable arcs \(D\subseteq A(H)\), by applying the following steps. The second step of the construction is the most important and is shown in Figure 3. The intuition behind these steps is provided afterwards.
1. add to \(H\) the source vertex \(s\);
2. for each \(v\in Z_{S}\cup N_{S}\), let \(v_{i}\) be the vertex of \(Z_{S}^{\prime}\cup N_{S}^{\prime}\) corresponding to \(v\), where \(i\in[2,T-1]\). Add to \(H\) the vertices \(v_{1}^{+},\ldots,v_{i-1}^{+},v_{i}^{-},v_{i+1}^{+},\ldots,v_{T}^{+}\), the vertices \(b_{v,j},c_{v,j},d_{v,j}\), for \(j\in[T]\setminus\{i\}\), and the set of arcs shown in Figure 3, that is, there are arcs \((v_{j}^{+},b_{v,j})\), \((v_{j}^{+},c_{v,j})\), \((c_{v,j},d_{v,j})\), \((d_{v,j},v_{i}^{-})\), for each \(j\in[T]\setminus\{i\}\), and four directed paths: (1) from \(b_{v,i-1}\) to \(b_{v,1}\), (2) from \(c_{v,1}\) to \(c_{v,i-1}\), (3) from \(b_{v,i+1}\) to \(b_{v,T}\) and (4) from \(c_{v,T}\) to \(c_{v,i+1}\). Add to \(D\) the set of deletable arcs \((c_{v,j},d_{v,j})\), for \(j\in[T]\setminus\{i\}\). Then add the following pairs to \(P\): 1. \(\{d_{v,h},b_{v,j}\}\), with \(1\leq h<j\leq i-1\); 2. \(\{d_{v,h},b_{v,j}\}\), with \(i+1\leq j<h\leq T\);
Figure 2: An example of application of iterative compression (timestamps 1 and 6 are not shown as they are edgeless). In the left part, we represent solution \(S=\{v_{2},v_{3},u_{3},u_{4},z_{4}\}\), where the vertices in \(S\) are highlighted with grey rectangles. Note that \(I_{S}=\{2,3,4\}\), \(V_{S}=\{v,u\}\), \(V_{S}^{\prime}=\{v_{2},v_{3},u_{3},u_{4}\}\), \(Z_{S}=\{z\}\), \(Z_{S}^{\prime}=\{z_{4}\}\). In the right part, we represent in grey a feasible assignment \(X\) associated with \(S\), \(X=\{u_{2},u_{3},u_{4}\}\); in light grey we highlight \(N_{S}^{\prime}=\{v_{2}\}\). The sets associated with \(S\) and \(X\) are: \(M_{S}=\{u\}\), \(N_{S}=\{v\}\), \(N_{S}^{\prime}=\{v_{2}\}\), \(N_{S}^{\prime\prime}=\{v_{2},v_{3},v_{4}\}\). The reduction to Constrained Digraph Pair Cut eventually leads to the solution of MinTimelineCover represented in Fig. 1.
3. \(\{c_{v,h},d_{v,j}\}\), with \(1\leq h\leq i-1\leq i+1\leq j\leq T\);
4. \(\{c_{v,h},d_{v,j}\}\), with \(1\leq j\leq i-1\leq i+1\leq h\leq T\). Note that we have created \(T+3(T-1)=4T-3\) vertices in \(H\) in this step. The subgraph of \(H\) induced by these vertices will be called the _gadget corresponding to_\(v\).
3. for each temporal edge \(u_{t}v_{t}\in E(G)\) such that \(u_{t},v_{t}\in\tau(Z_{S})\cup(\tau(N_{S})\setminus N_{S}^{\prime\prime})\), there are three cases. First note that at least one of \(u_{t}\) or \(v_{t}\) is in \(Z_{S}^{\prime}\). Indeed, if \(u,v\in Z_{S}\), this is because an element of \(Z_{S}^{\prime}\) must cover the temporal edge, and if \(u\in N_{S}\), then \(v_{t}\in Z_{S}^{\prime}\) by Observation 10 (or if \(v\in N_{S},u_{t}\in Z_{S}^{\prime}\)). The subcases are then: 1. if \(u_{t},v_{t}\in Z_{S}^{\prime}\cup N_{S}^{\prime}\), add the pair \(\{u_{t}^{-},v_{t}^{-}\}\) to \(P\); 2. if \(u_{t}\in Z_{S}^{\prime}\cup N_{S}^{\prime},v_{t}\notin Z_{S}^{\prime}\cup N_{S}^ {\prime}\), add the arc \((u_{t}^{-},v_{t}^{+})\) to \(H\); 3. if \(v_{t}\in Z_{S}^{\prime}\cup N_{S}^{\prime},u_{t}\notin Z_{S}^{\prime}\cup N_{S}^ {\prime}\), add the arc \((v_{t}^{-},u_{t}^{+})\) to \(H\);
4. for each temporal edge \(u_{t}v_{t}\in E(G)\) such that \(u_{t}\in(\tau(M_{S})\setminus X)\cup N_{S}^{\prime\prime}\) and \(v_{t}\in\tau(Z_{S})\), there are two cases: 1. if \(v_{t}\notin Z_{S}^{\prime}\), add the arc \((s,v_{t}^{+})\) to \(H\); 2. if \(v_{t}\in Z_{S}^{\prime}\), add the pair \(\{s,v_{t}^{-}\}\) to \(P\). Define \(k^{\prime}=k-sp(X)\). This concludes the construction. We will refer to the elements 1, 2, 3, 4 of the above enumeration as the _Steps_ of the construction. Note that the only deletable arcs in \(D\) are the arcs \((c_{v,j},d_{v,j})\) introduced in Step 2. From here, the interpretation of \(H\) is that if we delete arc set \(F\), then
1. (p1) For \(v_{t}\notin Z_{S}^{\prime}\cup N_{S}^{\prime}\), we should include \(v_{t}\) in \(X^{*}\) if and only if \(s\) reaches \(v_{t}^{+}\) in \(H-F\);
2. (p2) For \(v_{t}\in Z_{S}^{\prime}\cup N_{S}^{\prime}\), we should include \(v_{t}\) in \(X^{*}\) if and only if \(s\) does _not_ reach \(v_{t}^{-}\) in \(H-F\).
The idea behind the steps of the construction is then as follows (and is somewhat easier to describe in the reverse order of steps). Step 4 describes an initial set of vertices that \(s\) is
forced to reach, which correspond to vertices that are forced to be in \(X^{*}\). A vertex \(v_{t}\) in \(\tau(Z_{S})\) is forced in \(X^{*}\) if it is in an edge \(u_{t}v_{t}\) with \(u_{t}\in\tau(M_{S})\) but \(u_{t}\notin X\). By our definition of agreement, \(v_{t}\) is also forced if \(u_{t}\in N^{\prime\prime}_{S}\). Step 4 handles both situations: if \(v_{t}\notin Z^{\prime}_{S}\), we force \(s\) to reach \(v_{t}^{+}\) with the arc \((s,v_{t}^{+})\), which is not deletable. If \(v_{t}\in Z^{\prime}_{S}\), then \(v_{t}^{-}\in V(H)\), and \(s\) is forced to _not_ reach \(v_{t}^{-}\) by adding \(\{s,v_{t}^{-}\}\) to \(P\). By (p1) and (p2), both cases correspond to including \(v_{t}\) in \(X^{*}\). Then, Step 3 ensures that each temporal edge is "covered": for a temporal edge \(u_{t}v_{t}\), a pair of the form \(\{u_{t}^{-},v_{t}^{-}\}\) in \(P\) requires that \(s\) does not reach one of the two, i.e. that we include one of them in \(X^{*}\), and an undeletable arc of the form \((u_{t}^{-},v_{t}^{+})\) enforces that if \(s\) reaches \(u_{t}^{-}\) (i.e. \(u_{t}\notin X^{*}\)), then \(s\) reaches \(v_{t}^{+}\) (i.e. \(v_{t}\in X^{*}\)). The reason why \(Z^{\prime}_{S}\) is needed in our construction is that each such edge has at least one corresponding negative vertex, so that no other case needs to be considered in Step 3.
Finally, Step 2 enforces the number of deleted arcs to correspond to the span of a solution. That is, it ensures that if we want to add to \(X^{*}\) a set of \(h\) vertices of base vertex \(v\in Z_{S}\) to our solution of Restricted Timeline Cover (so with a span equal to \(h-1\)), then we have to delete \(h-1\) deletable arcs of the corresponding gadget of \(H\) in order to obtain a solution to Constrained Digraph Pair Cut (and vice-versa). Indeed, consider the gadget in Fig. 3. If \(v_{i}\) is not included in \(X^{*}\), then in the gadget \(s\) reaches \(h\) positive vertices \(v_{l}^{+},\ldots,v_{r}^{+}\) (and \(v_{i}^{-}\)). It follows that vertices \(b_{v,l},\ldots,b_{v,r}\), \(c_{v,l},\ldots,c_{v,r}\) and \(d_{v,l},\ldots,d_{v,r}\) are all reachable from \(s\). The pairs \(\{d_{v,x},b_{v,y}\}\) defined at Step 2, where either \(l\leq x\leq y\leq r-1\) if \(r<i\), or \(l+1\leq x\leq y\leq r\) if \(l>i\), ensures that arcs \((c_{v,j},d_{v,j})\), with \(j\in[l,r-1]\) in the former case or with \(j\in[l+1,r]\) in the latter case, are deleted.
If \(v_{i}\) is included in \(X^{*}\), then in the gadget \(s\) reaches \(h-1\) positive vertices \(v_{l}^{+},\ldots,v_{r}^{+}\), with \(i\in[l,r]\), and must not reach negative vertex \(v_{i}^{-}\). It follows that vertices \(b_{v,l},\ldots,b_{v,r}\), \(c_{v,l},\ldots,c_{v,r}\) and \(d_{v,l},\ldots,d_{v,r}\) are all reachable from \(s\). Then \(h-1\) arcs \((c_{v,j},d_{v,j})\), with \(j\in[l,r]\setminus\{i\}\), must be deleted, due to the pairs \(\{d_{v,x},b_{v,y}\}\), \(\{c_{v,x},d_{v,y}\}\) defined at Step 2.
Note that Step 2 is the reason we added dummy timestamps \(1\) and \(T\). If \(v_{1}\) or \(v_{T}\) were allowed to be in \(Z^{\prime}_{S}\cup N^{\prime}_{S}\), we would need a different gadget for these cases as they behave a bit differently, along with more cases in the proofs. Adding the edgeless timestamps lets us bypass these cases. We now proceed with the details.
**Lemma 11**.: _There exists a solution of Restricted Timeline Cover that agrees with \(X\) if and only if there is \(F\subseteq D\) with \(|F|\leq k^{\prime}\) such that \(s\) does not reach both vertices of any forbidden pair in \(H-F\). Moreover, given such a set \(F\), a solution of Restricted Timeline Cover can be computed in polynomial time._
Sketch of the proof. (\(\Rightarrow\)) Suppose that there exists a solution \(X^{*}\) of Restricted Timeline Cover that agrees with \(X\). By definition of Restricted Timeline Cover, \(X^{*}\) has span at most \(k\). Note that for \(v\in M_{S}\), the agreement requires that \(X^{*}\cap\tau(v)=X\cap\tau(v)\), and so the span of \(v\) in \(X^{*}\) is the same as the span of \(v\) in \(X\). Thus
\[\sum_{v\in Z_{S}\cup N_{S}}sp(v,X^{*})\leq k-sp(X)=k^{\prime}.\]
We may assume that for every \(v\in V_{B}\), at least one of \(v_{2},\ldots,v_{T-1}\) is in \(X^{*}\), as otherwise we add one arbitrarily without affecting the span (if only \(v_{1}\) or \(v_{T}\) is in \(X^{*}\), remove it first). For each \(v\in Z_{S}\cup N_{S}\), consider the gadget corresponding to \(v\) in \(H\) and delete some of its dashed arcs as follows (we recommend referring to Figure 3).
First, if \(X^{*}\) contains only one vertex of \(\tau(v)\), no action is required on the gadget. So assume that \(X^{*}\cap\tau(v)\) has at least two vertices; in the following we denote by \(v_{l}=v_{\delta(v,X^{*})}\) and \(v_{r}=v_{\Delta(v,X^{*})}\) the vertices associated with \(v\) having minimum and maximum timestamp,
respectively, contained in \(X^{*}\). We assume that \(l,r\in[2,T-1]\) and \(l<r\). Note that \(X^{*}\cap\tau(v)=\{v_{l},v_{l+1},\ldots,v_{r}\}\).
Let \(v_{i}\in Z_{S}^{\prime}\cup N_{S}^{\prime}\), where \(i\in[2,T-1]\). Then
* suppose that \(l,r\in[2,i-1]\), then: delete every arc \((c_{v,q},d_{v,q})\), with \(l\leq q\leq r-1\)
* suppose that \(l,r\in[i+1,T-1]\), then: delete every arc \((c_{v,q},d_{v,q})\), with \(l+1\leq q\leq r\)
* suppose that \(l\in[2,i]\) and \(r\in[i,T-1]\), then: delete every arc \((c_{v,q},d_{v,q})\), with \(l\leq q\leq i-1\), and delete every arc \((c_{v,q},d_{v,q})\), with \(i+1\leq q\leq r\).
We see that, by construction, for all \(v\in Z_{S}\cup N_{S}\), the number of arcs deleted in the gadget corresponding to \(v\) is equal to the number of vertices in \(X^{*}\cap\tau(v)\) minus one, that is, the span of \(v\) in \(X^{*}\). Since these base vertices have total span at most \(k^{\prime}\), it follows that we deleted at most \(k^{\prime}\) arcs from \(H\). Denote by \(H^{\prime}\) the graph obtained after deleting the aforementioned arcs. We argue that in \(H^{\prime}\), \(s\) does not reach both vertices of any forbidden pair. To this end, we claim the following.
\(\rhd\) Claim 12. For \(v\in Z_{S}\cup N_{S}\) and \(t\in[T]\), if \(s\) reaches \(v_{t}^{+}\) in \(H^{\prime}\), then \(v_{t}\in X^{*}\), and if \(s\) reaches \(v_{t}^{-}\) in \(H^{\prime}\), then \(v_{t}\notin X^{*}\).
Now, armed with the above claim, we can prove that in \(H^{\prime}\), \(s\) does not reach both vertices of a forbidden pair \(q\in P\), thus concluding this direction of the proof.
(\(\Leftarrow\)) Suppose that there is a set \(F\subseteq D\) with at most \(k^{\prime}\) arcs such that \(s\) does not reach both vertices of any forbidden pair in \(H-F\). Denote \(H^{\prime}=H-F\). We construct \(X^{*}\) from \(F\), which will also show that it can be reconstructed from \(F\) in polynomial time. Define \(X^{*}\subseteq V(G)\) as follows:
* for each \(v\in M_{S}\), add every element of \(X\cap\tau(M_{S})\) to \(X^{*}\);
* for each \(v_{t}\in V(G)\setminus\tau(M_{S})\), we add \(v_{t}\) to \(X^{*}\) if and only if one of the following holds: (1) \(v_{t}^{+}\in V(H)\) and \(s\) reaches \(v_{t}^{+}\) in \(H^{\prime}\); or (2) \(v_{t}^{-}\in V(H)\), and \(s\) does _not_ reach \(v_{t}^{-}\) in \(H^{\prime}\);
* for each \(v_{j},v_{h}\in X^{*}\) with \(j<h\), add \(v_{t}\) to \(X^{*}\) for each \(t\in[j+1,h-1]\).
Note that \(X^{*}\) agrees with \(X\). Indeed, for \(v\in M_{S}\), there is no gadget corresponding to \(v\) in the construction and thus we only add \(X\cap\tau(v)\) to \(X^{*}\). For \(u\in N_{S}\), consider \(u_{t}\in N_{S}^{\prime\prime}\) and a neighbor \(v_{t}\) of \(u_{t}\) in \(\tau(Z_{S})\). If \(v_{t}\notin Z_{S}^{\prime}\), Step 4 adds an undeletable arc from \(s\) to \(v_{t}^{+}\), hence \(s\) reaches that vertex and we put \(v_{t}\) in \(X^{*}\). If \(v_{t}\in Z_{S}^{\prime}\), Step 4 adds \(\{s,v_{t}^{-}\}\) to \(P\), and thus \(s\) does not reach \(v_{t}^{-}\) in \(H^{\prime}\), and again we add \(v_{t}\) to \(X^{*}\). Therefore, we add all the \(\tau(Z_{S})\) neighbors of \(u_{t}\) to \(X^{*}\), and so it agrees with \(X\). We can prove that \(X^{*}\) covers every temporal edge of \(G\) and that \(sp(X^{*})\leq k\).
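Operationally, the (\(\Leftarrow\)) direction extracts \(X^{*}\) from \(F\) by one reachability computation followed by rules (p1) and (p2) and an interval-closing step. The Python sketch below is our illustration of this extraction, not the authors' code; the maps `plus` and `minus`, which send a vertex \(v_{t}\) of \(G\) to its node \(v_{t}^{+}\) or \(v_{t}^{-}\) of \(H\) when that node exists, are assumed to be produced by the construction, and vertices of \(G\) are encoded as `(base_vertex, timestamp)` pairs.

```python
from collections import defaultdict, deque

def extract_cover(H_arcs, F, s, X_feasible, plus, minus):
    """Rebuild a temporal cover X* from a cut F: keep X on tau(M_S), apply rules
    (p1)/(p2) to the remaining vertices, then close each base vertex's activity
    into one contiguous interval."""
    deleted = set(F)
    adj = defaultdict(list)
    for u, v in H_arcs:
        if (u, v) not in deleted:
            adj[u].append(v)
    reach, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in reach:
                reach.add(v)
                queue.append(v)
    cover = set(X_feasible)                                            # choices of X on tau(M_S)
    cover |= {vt for vt, node in plus.items() if node in reach}        # rule (p1)
    cover |= {vt for vt, node in minus.items() if node not in reach}   # rule (p2)
    by_base = defaultdict(set)
    for v, t in cover:
        by_base[v].add(t)
    return {(v, t) for v, ts in by_base.items()
            for t in range(min(ts), max(ts) + 1)}
```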
### Wrapping up
Before concluding, we must show that we can use the results of [15] to obtain an FPT algorithm for Constrained Digraph Pair Cut as we have defined it. As mentioned, the FPT algorithm in [15] studies the vertex-deletion variant and does not consider undeletable elements, but this is mostly a technicality. Roughly speaking, in our variant, it suffices to replace each vertex with enough copies of the same vertex, and to replace each deletable arc \((u,v)\) with a new vertex, adding arcs from the \(u\) copies to that vertex and arcs from that vertex to the \(v\) copies. Deleting \((u,v)\) corresponds to deleting that new vertex. For undeletable arcs, we apply the same process but repeat it \(k^{\prime}+1\) times.
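The translation just outlined can be sketched as follows; this is our rough rendering of the idea, not the construction of [15] verbatim, and all names are illustrative. Every original vertex gets \(k^{\prime}+1\) parallel copies, a deletable arc becomes a single new middle vertex, and an undeletable arc becomes \(k^{\prime}+1\) parallel middle vertices, so a budget of \(k^{\prime}\) vertex deletions can only ever pay for the designated deletable arcs.

```python
def to_vertex_deletion(vertices, arcs, deletable, s, pairs, k):
    """Turn an arc-deletion instance of Constrained Digraph Pair Cut into a plain
    vertex-deletion instance, following the outline above."""
    copies = {v: [("copy", v, i) for i in range(k + 1)] for v in vertices}
    new_arcs, deletable_vertices, fresh = [], set(), 0
    for (u, v) in arcs:
        width = 1 if (u, v) in deletable else k + 1
        for _ in range(width):
            mid = ("mid", fresh)
            fresh += 1
            if (u, v) in deletable:
                deletable_vertices.add(mid)
            new_arcs += [(uc, mid) for uc in copies[u]]
            new_arcs += [(mid, vc) for vc in copies[v]]
    # one representative copy per original vertex is used for the source
    # and for the forbidden pairs
    new_s = copies[s][0]
    new_pairs = [(copies[u][0], copies[v][0]) for (u, v) in pairs]
    return new_arcs, new_s, new_pairs, deletable_vertices
```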
The Constrained Digraph Pair Cut problem can be solved in time \(O^{*}(2^{k})\), where \(k\) is the number of arcs to delete.
We are able now to prove the main result of our contribution.
**Theorem 14**.: _MinTimelineCover on a temporal graph \(G=(V_{B},E,\mathcal{T})\) can be solved in time \(O^{*}(2^{5k\log k})\)._
Proof.: First, we discuss the correctness of the algorithm we presented. Assume that we have an ordering on the base vertices of \(G\) and that \(v\) is the first vertex of this ordering. A solution \(S\) of MinTimelineCover on \(G[\{v\}]\) is equal to \(S=\emptyset\).
Then, for each \(i\in[1,|V_{B}|-1]\), let \(G_{i}\) be the temporal graph induced by the first \(i\) base vertices and let \(w\) be the \((i+1)\)-th base vertex. Given a solution \(S\) of MinTimelineCover on instance \(G_{i}\) of span at most \(k\), we can decide whether there exists a solution of MinTimelineCover on instance \(G_{i+1}\) by computing whether there exists a solution \(X^{*}\) of the Restricted Timeline Cover problem on instance \(G_{i+1}\), \(w\), \(S\). By Lemma 8 and Theorem 9, if there exists such an \(X^{*}\), then there exists a feasible assignment \(X\) with which \(X^{*}\) agrees, and our enumeration considers it. By Lemma 11 we can compute, via the reduction to Constrained Digraph Pair Cut, whether there exists a solution of Restricted Timeline Cover on instance \(G_{i+1}\), \(w\), \(S\), and if so obtain such a solution (if no such solution \(X^{*}\) exists, then Lemma 11 also guarantees that we never return a solution, since every feasible assignment \(X\) that we enumerate leads to a negative instance of Constrained Digraph Pair Cut). Thus the Restricted Timeline Cover subproblem is solved correctly, and once it is solved on \(G_{|V_{B}|}\), we have a solution to MinTimelineCover.
Now, we discuss the complexity of the algorithm. We must solve Restricted Timeline Cover \(|V_{B}|\) times. For each iteration, by Theorem 9 we can enumerate the feasible assignments in \(O(2^{4k\log k}T^{3}n)\) time. For each such assignment, the reduction from Restricted Timeline Cover to Constrained Digraph Pair Cut requires polynomial time, and each generated instance can be solved in time \(O^{*}(2^{k})\). The time dependency on \(k\) is thus \(O^{*}(2^{4k\log k}\cdot 2^{k})\), which we simplify to \(O^{*}(2^{5k\log k})\).
## 4 Conclusion
We have presented a randomized FPT algorithm for the MinTimelineCover problem, a variant of Vertex Cover on temporal graphs recently introduced to summarize timeline activities in social networks. We point out some relevant future directions on this topic: (1) to improve, if possible, the time complexity of MinTimelineCover by obtaining a single-exponential time algorithm (of the form \(O^{*}(c^{k})\)); (2) to establish whether MinTimelineCover admits a polynomial kernel, possibly randomized (which it might, since Constrained Digraph Pair Cut famously admits a randomized polynomial kernel); and (3) to extend the approach to other variants of Network Untangling.
|
2305.07848
|
Meta-Polyp: a baseline for efficient Polyp segmentation
|
In recent years, polyp segmentation has gained significant importance, and
many methods have been developed using CNN, Vision Transformer, and Transformer
techniques to achieve competitive results. However, these methods often face
difficulties when dealing with out-of-distribution datasets, missing
boundaries, and small polyps. In 2022, Meta-Former was introduced as a new
baseline for vision, which not only improved the performance of multi-task
computer vision but also addressed the limitations of the Vision Transformer
and CNN family backbones. To further enhance segmentation, we propose a fusion
of Meta-Former with UNet, along with the introduction of a Multi-scale
Upsampling block with a level-up combination in the decoder stage to enhance
the texture, also we propose the Convformer block base on the idea of the
Meta-former to enhance the crucial information of the local feature. These
blocks enable the combination of global information, such as the overall shape
of the polyp, with local information and boundary information, which is crucial
for the decision of the medical segmentation. Our proposed approach achieved
competitive performance and obtained the top result in the State of the Art on
the CVC-300 dataset, Kvasir, and CVC-ColonDB dataset. Apart from Kvasir-SEG,
others are out-of-distribution datasets. The implementation can be found at:
https://github.com/huyquoctrinh/MetaPolyp-CBMS2023.
|
Quoc-Huy Trinh
|
2023-05-13T06:27:33Z
|
http://arxiv.org/abs/2305.07848v3
|
# Meta-Polyp: a baseline for efficient Polyp segmentation
###### Abstract
In recent years, polyp segmentation has gained significant importance, and many methods have been developed using CNN, Vision Transformer, and Transformer techniques to achieve competitive results. However, these methods often face difficulties when dealing with out-of-distribution datasets, missing boundaries, and small polyps. In 2022, MetaFormer was introduced as a new baseline for vision, which not only improved the performance of multi-task computer vision but also addressed the limitations of the Vision Transformer and CNN family backbones. To further enhance segmentation, we propose a fusion of MetaFormer with UNet, along with the introduction of a Multi-scale Upsampling block with a level-up combination in the decoder stage to enhance the texture; we also propose the Convformer block, based on the idea of the MetaFormer, to enhance the crucial information of the local features. These blocks enable the combination of global information, such as the overall shape of the polyp, with local information and boundary information, which is crucial for the decision of the medical segmentation. Our proposed approach achieved competitive performance and obtained the top result in the State of the Art on the CVC-300, Kvasir, and CVC-ColonDB datasets. Apart from Kvasir-SEG, the others are out-of-distribution datasets. The implementation can be found at: [https://github.com/huyquoctrinh/MetaPolyp-CBMS2023](https://github.com/huyquoctrinh/MetaPolyp-CBMS2023)
MetaFormer, Multi-scale Upsampling, UNet, polyp segmentation
## Acknowledgement
This research is supported by research funding from the Faculty of Information Technology, University of Science, Vietnam National University - Ho Chi Minh City.
## I Introduction
Colorectal cancer is a significant health problem that poses a serious threat to human health and society. Polyps are growths that form in the colon or rectum, and they can develop into cancer over time. Early diagnosis of polyps is a crucial aspect of preventive healthcare, as it can significantly improve the prognosis and treatment outcomes of patients with colorectal cancer [1]. Detecting and removing polyps before they become cancerous is essential in preventing the development of the disease. Therefore, early polyp diagnosis is crucial: it can prevent the progression of colorectal cancer and its widespread impact on society. As polyps can develop over time and some can become cancerous, early detection and removal are critical in preventing the progression of the disease. By identifying and removing polyps early, patients have a much higher chance of a successful outcome, and the overall impact of colorectal cancer can be reduced [1].
In recent years, early diagnosis has played a crucial role in the treatment of polyps and the prevention of colorectal cancer. However, despite its importance, the accuracy of early diagnosis is still limited by various external factors [2]. Therefore, polyp segmentation has become an integral part of the diagnostic process. In recent years, several Deep Learning approaches have demonstrated their effectiveness in segmenting polyp images, with some achieving competitive results in state-of-the-art performance. These approaches include UNet [3], PraNet [4], UNet++ [5], and ResUNet [5]. However, these methods often face a challenge in capturing the global information of polyp objects. While CNN models excel at capturing local information, they struggle to capture the overall shape of polyp objects, which is critical for accurate segmentation. This deficiency is a significant factor in the missed segment areas that are essential outputs for segmentation tasks [6]. To address these problems, many Vision Transformer approaches [7] have shown promising results, since they can capture global information, and deep supervision losses are also promising for improving the boundary features of the segmentation result. However, the deficiency of previous methods lies in their parameter count; moreover, the lack of local and global information [8] learned by the model can lead to oversized polyp predictions or missing texture in segmentation masks, which remains a challenge for the segmentation problem [9]. Moreover, the texture is not captured effectively in the preceding Upsampling layer [3] due to the loss of resolution in the upsampled output.
In late 2022, a new approach called MetaFormer was proposed as a baseline for combining CNN [9] and Transformer models [8]. MetaFormer [7] allows for the capture of both local and global information by utilizing downsampling via convolution to capture local features and a Transformer encoder to capture global features in later stages. This approach has been shown to improve performance in various tasks.
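As a reference point, a MetaFormer block can be sketched in a few lines of Keras-style Python: the block is a residual token mixer followed by a residual channel MLP, and swapping the token mixer (a convolution in the early, local stages; attention in later stages) yields the Convformer-like or Transformer-like variants discussed in this paper. This is an illustrative sketch under our own naming and layer choices, not the authors' implementation.

```python
from tensorflow.keras import layers

def metaformer_block(x, token_mixer, mlp_ratio=4):
    """Generic MetaFormer block: norm -> token mixer -> residual add,
    then norm -> channel MLP -> residual add."""
    dim = x.shape[-1]
    y = layers.LayerNormalization(epsilon=1e-6)(x)
    x = layers.Add()([x, token_mixer(y)])
    y = layers.LayerNormalization(epsilon=1e-6)(x)
    y = layers.Dense(dim * mlp_ratio, activation="gelu")(y)
    y = layers.Dense(dim)(y)
    return layers.Add()([x, y])

# e.g. a convolutional token mixer for the early, local stages:
conv_mixer = lambda t: layers.SeparableConv2D(t.shape[-1], 7, padding="same")(t)
```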
In our paper, we propose a Polyp MetaFormer that combines MetaFormer and UNet with a Multi-scale Upsampling block and our Level-up Upsampling technique. Our technique enhances the quality of texture in the decoder stage of UNet, which addresses the weakness of UNet regarding texture missing in the Upsampling stage and improves the segmentation results of the entire architecture. Our proposed method shows competitive results on state-of-the-art datasets, benchmarking our model against the weaknesses of other approaches.
To summarize, our contribution consists of three main ideas:
- We propose a MetaFormer baseline with our Convformer block arranged in four MetaFormer-style stages to capture a fusion of global and local features in the encoder stage.
- We propose the Level-up Upsampling technique to mitigate the texture loss in the decoder stage of UNet.
- We demonstrate the effectiveness of the method on out-of-distribution datasets and obtain competitive results with respect to the state-of-the-art.
## II Related Work
### _Early diagnosis_
Colorectal cancer, arising from polyp growth in the colon or rectum, is a significant health concern with severe implications [1]. Early identification of polyps is vital in improving prognosis and treatment outcomes [1]. Detecting and removing polyps prior to malignancy is crucial for preventing disease progression. Early diagnosis of polyps is paramount in averting the extensive consequences of colorectal cancer [10]. However, current black-box methods lack explanatory transparency, posing challenges in the medical imaging field [1].
### _Polyps segmentation_
Endoscopic image segmentation is a well-studied research field [10]. Early research relied on handcrafted descriptors and a machine learning (ML) classifier that distinguished lesions from the background based on attributes like color, shape, texture, and edges [11]. In recent years, deep learning and convolutional neural networks have led to many new segmentation techniques, such as UNet [3]. The UNet [3] model is considered groundbreaking as it was the first to introduce skip connections in the encoder-decoder architecture for medical segmentation tasks. This innovative technique allows for the combination of both shallow and deep features to improve the accuracy and reliability of the segmentation process. Since its inception, numerous studies and research have been conducted to further enhance the performance of this technique in the segmentation field. As a result, many advancements have been made, and the UNet [3] model has become a crucial tool for medical imaging professionals and researchers in their pursuit of more accurate and efficient segmentation methods [12, 13].
### _MetaFormer_
It has been observed that the abstracted architecture of the Transformer, known as MetaFormer [7], plays a crucial role in achieving high levels of performance. This innovative architecture has demonstrated its effectiveness in various applications, particularly in natural language processing (NLP) and image recognition. By leveraging the powerful capabilities of MetaFormer [7], researchers and developers have been able to achieve competitive results and make significant advancements in their respective fields.
## III Methods
### _General architecture_
We develop a network built on an encoder-decoder architecture, combining the ConvFormer and Transformer blocks from the MetaFormer baseline in the encoder stage. In addition, we propose the Level-up Upsampling stage and use the Multi-scale Upsampling block to improve the performance of the upsampling layers in the decoder stage. The full architecture is shown in Fig. 1. The input \(X\in R^{Width\times Height\times 3}\) has shape \(Width\times Height\times 3\), and the encoder extracts features \(X_{i}\in R^{\frac{Width}{2^{i+1}}\times\frac{Height}{2^{i+1}}\times F_{i}}\), where \(F_{i}\in\{64,128,320,512\}\) and \(i\in\{1,2,3,4\}\) denotes the number of filters at step \(i\) of the encoder and decoder stages. In the decoder stage, although the feature is upsampled by a factor of 2 at each step through a 2D transposed convolution, the feature at step \(i\) is also upsampled by a factor of 4 by our Multi-scale Upsampling block and merged into step \(i+2\), enriching the feature while upsampling the previous ones. The decoder then generates a mask of shape \(Width\times Height\times 64\), and a \(1\times 1\) _convolution layer_ maps the feature map from 64 filters to 1. In the first two stages, the emphasis is on acquiring significant local features, which is why the ConvFormer encoder is employed; in the later stages, the global information of the object becomes more important, so the Transformer encoder is used in the last two stages to capture the global context effectively.
### _ConvFormer Encoder_
The MetaFormer baseline [7] investigates how existing token mixers can achieve exceptional performance. Rather than inventing new token mixers, our work builds on the MetaFormer architecture. The ConvFormer encoder in MetaFormer follows a four-step process: the first step generates the token mixers through depthwise and separable convolutions.
\[Convolutions(X)=Conv_{pw2}(Conv_{dw}(\sigma(Conv_{pw1}(X)))) \tag{1}\]
\[X^{\prime}=X+Convolutions(Norm(X)) \tag{2}\]
\[X^{\prime\prime}=X^{\prime}+\sigma(Norm(X^{\prime})W_{1})W_{2} \tag{3}\]
Eq. (1), as introduced in [7], uses \(Conv_{pw1}\) and \(Conv_{pw2}\) to denote pointwise convolutions, while \(Conv_{dw}\) denotes the depthwise convolution. The input is normalized before the token mixer is applied, and a skip connection adds the result back to the input, as shown in Eq. (2). The output is then processed by the channel MLP layer with learnable weights \(W_{1}\) and \(W_{2}\), as shown in Eq. (3), where \(\sigma(\cdot)\) denotes the activation function used in the ConvFormer block. Using the ConvFormer concept in the encoder helps the model focus on learning the important texture.
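To make the structure of Eqs. (1)-(3) concrete, a minimal Keras-style sketch of one ConvFormer block is given below. It is a simplified illustration rather than our exact implementation: the depthwise kernel size (7), the expansion ratio (2), and the GELU activation are assumptions chosen for readability.

```python
import tensorflow as tf
from tensorflow.keras import layers

def convformer_block(x, dim, expansion=2):
    # Token mixer of Eq. (1): pointwise -> activation -> depthwise -> pointwise,
    # applied to the normalized input and added back via a skip connection (Eq. 2).
    shortcut = x
    h = layers.LayerNormalization(epsilon=1e-6)(x)
    h = layers.Conv2D(dim * expansion, 1)(h)           # Conv_pw1
    h = layers.Activation("gelu")(h)                   # sigma(.)
    h = layers.DepthwiseConv2D(7, padding="same")(h)   # Conv_dw
    h = layers.Conv2D(dim, 1)(h)                       # Conv_pw2
    x = shortcut + h

    # Channel MLP of Eq. (3): two learnable projections W1, W2 with pre-norm and skip.
    shortcut = x
    h = layers.LayerNormalization(epsilon=1e-6)(x)
    h = layers.Dense(dim * expansion)(h)               # W1
    h = layers.Activation("gelu")(h)
    h = layers.Dense(dim)(h)                           # W2
    return shortcut + h

# Example usage on a stage-1 feature map (hypothetical input shape):
# features = tf.keras.Input(shape=(88, 88, 64))
# out = convformer_block(features, dim=64)
```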
### _Convformer Block_
In the Convformer block, following the idea of MetaFormer [7], we design our own module (Fig. 1). It differs from the ConvFormer encoder but retains the spirit of the original ConvFormer block from MetaFormer [7]: it can capture global information while also including local features through a pointwise convolution. A self-attention mechanism [14] is then applied, and its weights \(W\in R^{Width\times Height\times Filter}\) are kept to generate an attention mask that highlights the crucial parts of the local information. The local features are added to the attention mask, and a channel MLP layer helps the model learn the fusion of the local information and its most important components.
### _Transformer Encoder_
The Transformer encoder shares a similar concept with the ConvFormer encoder, but with a different token mixer. Instead of using Convolution Block to create the token mixer, the Transformer block uses a classic self-attention mechanism to create an attention mask, which is used as the token mixer. The self-attention mechanism allows the model to attend to different parts of the input sequence and identify relevant features. The attention mask is generated based on the similarity between the input tokens and is used to weight the contribution of each token to the final output. This mechanism allows the model to capture long-range dependencies and contextual information.
\[X^{\prime}=X+SelfAttention(Norm(X)) \tag{4}\]
\[X^{\prime\prime}=X^{\prime}+\sigma(Norm(X^{\prime})W_{1})W_{2} \tag{5}\]
In Eq. (4), \(SelfAttention\) denotes the self-attention mechanism, and the output follows Eq. (5), where the skip connection is applied. The output is then transformed by the two learnable weight matrices \(W_{1}\) and \(W_{2}\) of the channel MLP layer.
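Analogously, a minimal sketch of the Transformer encoder block of Eqs. (4)-(5) is shown below; the number of heads and the MLP expansion ratio are illustrative assumptions, and the input is assumed to be a flattened sequence of spatial tokens.

```python
from tensorflow.keras import layers

def transformer_block(x, dim, num_heads=8, mlp_ratio=4):
    # x: (batch, tokens, dim) sequence of flattened spatial tokens.
    # Token mixer of Eq. (4): self-attention on the normalized input plus a skip connection.
    shortcut = x
    h = layers.LayerNormalization(epsilon=1e-6)(x)
    h = layers.MultiHeadAttention(num_heads=num_heads, key_dim=dim // num_heads)(h, h)
    x = shortcut + h

    # Channel MLP of Eq. (5): learnable projections W1 and W2 with pre-norm and skip.
    shortcut = x
    h = layers.LayerNormalization(epsilon=1e-6)(x)
    h = layers.Dense(dim * mlp_ratio, activation="gelu")(h)  # W1 followed by sigma(.)
    h = layers.Dense(dim)(h)                                 # W2
    return shortcut + h
```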
### _Level-Upsampling technique_
The Upsampling block consists of two stages: a feature-extraction stage and an upsampling stage.
\[X_{decoded}=Conv(UpSampling(X)) \tag{6}\]
\[X^{\prime}=\sigma(X+X_{decoded}) \tag{7}\]
Eqs. (6)-(7) describe the Multi-scale block with input tensor \(X\in R^{W\times H\times Filters}\). In this block, we extract features from the input tensor with convolution layers (other kernel sizes or convolution components can also be used), and then apply a skip connection; \(\sigma(\cdot)\) is the activation applied to the output of this step.
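A minimal sketch of the Multi-scale Upsampling block of Eqs. (6)-(7) is given below, assuming bilinear upsampling, a 3x3 convolution, and a ReLU activation; the shapes of the upsampled feature and of the target decoder feature are assumed to match.

```python
from tensorflow.keras import layers

def multiscale_upsampling_block(x, target_feature, filters, scale=4):
    # Eq. (6): upsample the step-i feature by `scale` and extract it with a convolution.
    decoded = layers.UpSampling2D(size=scale, interpolation="bilinear")(x)
    decoded = layers.Conv2D(filters, 3, padding="same")(decoded)

    # Eq. (7): merge with the feature of decoder step i+2 through a skip connection
    # and apply the activation sigma(.).
    merged = layers.Add()([target_feature, decoded])
    return layers.Activation("relu")(merged)
```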
## IV Experimental evaluation
### _Dataset_
In the experiments, we follow the merged dataset of the CVC-ClinicDB and Kvasir-SEG datasets [15] used in the PraNet [4] experimental setup;
Fig. 1: General architecture of Polyp MetaFormer block
this training set is widely used in experiments on later methods. The dataset contains two subsets: Kvasir-SEG [15] (900 training samples) and CVC-ClinicDB [16] (550 training samples).
For benchmarking, we choose four datasets: Kvasir-SEG [15], CVC-ColonDB [17], CVC-300 [16], and ETIS [18]. Apart from Kvasir-SEG [15], all of them are out-of-distribution datasets.
For research and study purposes, we also split the merged dataset into three parts for training, validation, and testing, which make up 60%, 20%, and 20% of the data, respectively, and run experiments on all three parts. This split is used to evaluate our model before benchmarking on the various datasets.
### _Augmentation_
While training the model, we use augmentation techniques to increase the amount of data. Augmentation also has a beneficial effect on the dataset's domain by making the data distribution more diverse [19]. To enrich the dataset, we apply Center Crop [20], Random Rotate [20], GridDistortion [20], and Horizontal and Vertical Flip [20]. Moreover, some advanced augmentation methods are applied to improve the distribution of features in the data samples.
CutOut augmentation [21] is also applied in our experiments: this method adds noise to the image by setting all pixel values inside a randomly placed rectangular area to 0, and the same area is removed from the corresponding mask. Moreover, CutMix augmentation is used, in which a patch from another image is pasted into the original image and the corresponding masks are combined in the same way.
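A minimal NumPy sketch of the two advanced augmentations on image-mask pairs is given below; the patch-size fraction is an illustrative assumption, not the value used in our experiments.

```python
import numpy as np

def cutout_pair(image, mask, box_frac=0.25, rng=None):
    """Zero out a random rectangular patch in both the image and its mask."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image.shape[:2]
    bh, bw = int(h * box_frac), int(w * box_frac)
    y, x = rng.integers(0, h - bh), rng.integers(0, w - bw)
    image, mask = image.copy(), mask.copy()
    image[y:y + bh, x:x + bw] = 0
    mask[y:y + bh, x:x + bw] = 0
    return image, mask

def cutmix_pair(image_a, mask_a, image_b, mask_b, box_frac=0.25, rng=None):
    """Paste a random patch of sample B into sample A, for both image and mask."""
    rng = rng if rng is not None else np.random.default_rng()
    h, w = image_a.shape[:2]
    bh, bw = int(h * box_frac), int(w * box_frac)
    y, x = rng.integers(0, h - bh), rng.integers(0, w - bw)
    image_a, mask_a = image_a.copy(), mask_a.copy()
    image_a[y:y + bh, x:x + bw] = image_b[y:y + bh, x:x + bw]
    mask_a[y:y + bh, x:x + bw] = mask_b[y:y + bh, x:x + bw]
    return image_a, mask_a
```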
### _Loss function_
We utilize the Jaccard Loss Function [22] with the following formula
\[JaccardLoss(y,\hat{y})=\alpha\times\left(1-\frac{\alpha+\sum_{c}^{C}y_{c}\times\hat{y}_{c}}{\alpha+\sum_{c}^{C}\left(y_{c}+\hat{y}_{c}-y_{c}\times\hat{y}_{c}\right)}\right) \tag{8}\]
This loss function improves the segmentation process and helps control the model's performance on fine tissue regions. The Jaccard loss [22] corresponds to the IoU metric, where \(y\) is the true label and \(\hat{y}\) the predicted label, both represented as one-hot vectors over the \(C\) classes. To prevent exploding gradients, a smoothing factor \(\alpha\) is added, which helps stabilize training.
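A minimal TensorFlow sketch of the smoothed Jaccard loss of Eq. (8) for a single foreground channel is shown below; collapsing the per-class sum to one channel is a simplifying assumption of this sketch.

```python
import tensorflow as tf

def jaccard_loss(y_true, y_pred, alpha=0.7):
    # Smoothed Jaccard (IoU) loss following Eq. (8); alpha stabilizes the gradient.
    y_true = tf.cast(y_true, y_pred.dtype)
    intersection = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true + y_pred - y_true * y_pred)
    return alpha * (1.0 - (alpha + intersection) / (alpha + union))
```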
### _Implementation details_
All architectures were implemented using the Keras framework with TensorFlow as the backend. The input volumes are normalized to [-1, 1], and all networks were trained with the augmentation methods described above. We used Adam optimization [23] with an initial learning rate of 1e-4, followed by a Cosine Annealing learning rate schedule to stabilize the training process. The smoothing factor \(\alpha\) in the Jaccard loss is \(0.7\). We performed our experiments on a single NVIDIA Tesla A100 40GB GPU with a batch size of 128; training on the entire dataset takes around 6 hours. Finally, we trained all models for 300 epochs.
### _Metrics_
We use the IoU and Dice coefficient metrics to evaluate our method's performance. The metrics compare the predicted masks with the ground-truth masks on the test dataset.
The IoU [24] is computed as:
\[IoU=\frac{\text{Area of Overlap}}{\text{Area of Union}} \tag{9}\]
The area of overlap is the common area of the predicted and ground-truth masks, and the area of union is the total area covered by the two masks.
The Dice coefficient [25], which relates the common area of the two masks to their total size, has the following formula:
\[DiceCoefficient=\frac{2\,|X\cap Y|}{|X|+|Y|} \tag{10}\]
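For completeness, a small NumPy sketch of both evaluation metrics on binary masks:

```python
import numpy as np

def iou_and_dice(pred_mask, true_mask, eps=1e-7):
    """IoU (Eq. 9) and Dice coefficient (Eq. 10) for binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    iou = intersection / (union + eps)
    dice = 2.0 * intersection / (pred.sum() + true.sum() + eps)
    return iou, dice
```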
## V Result
### _Quantitative Results_
To put our results in context, we benchmark our model against previous methods, from UNet [3] in 2015 to FCB-SwinV2 Transformer [26], which topped the state-of-the-art in early 2023. Tables I and II report the results on the Kvasir-SEG [15] and CVC-300 [16] datasets, which are competitive with the state-of-the-art. In addition, to assess the limitations of our approach, we run experiments on
Fig. 3: Example of CutOut augmentation for Kvasir-SEG
Fig. 2: Polyps and corresponding masks from Kvasir-SEG
the ETIS dataset [18] and the CVC-ColonDB dataset [17]; the corresponding results are reported in Tables III and IV.
Overall, as shown in Tables I, II, and IV, our method achieves state-of-the-art results on the Kvasir-SEG [15], CVC-ColonDB [17], and CVC-300 [16] datasets. These results confirm the effectiveness of the proposed Convformer and Multi-scale Upsampling blocks. However, the method remains weaker on small objects, as illustrated by the results on the ETIS dataset [18] in Table III.
### _Qualitative visualization_
Fig. 4 visualizes the masks generated by our method in comparison with methods from previous years, using the Kvasir dataset [32]. The results show that our method improves on the weakness of previous methods in identifying the shape of difficult polyp objects; however, when many polyps appear in a single image, our method can perform worse than some earlier methods.
### _Ablation study_
We compare the MetaFormer UNet with the combination of Meta-UNet and our proposed blocks to evaluate the effectiveness of our method. The Multi-scale Upsampling block leads to significant improvements in the performance of the MetaFormer UNet on the Kvasir dataset, while the Convformer block helps the model learn local features together with the captured
Fig. 4: Comparison of results from various methods
global features. The increase in mean Intersection over Union (mIoU) from 0.877 to 0.921 with the addition of the Multi-scale Upsampling and Convformer blocks indicates that these components effectively improve the model's ability to capture fine details and produce more accurate segmentation results. Furthermore, the best proposed model reaching an mIoU of 0.921 on the Kvasir dataset suggests that the MetaFormer UNet with the Multi-scale Upsampling and Convformer blocks is capable of state-of-the-art performance on this task. The computational cost and other practical considerations of the Multi-scale Upsampling block, as well as potential trade-offs in other performance metrics, should be weighed when deciding whether to incorporate it into other models or applications.
## VI Conclusion
In conclusion, we propose a MetaFormer baseline combined with UNet, our Multi-scale Upsampling block, and the Level-up Upsampling technique for polyp segmentation. Our approach addresses the shortcoming of previous methods, namely the incomplete fusion of local and global features, helping the model capture both the full shape and the internal texture of the mask. Our results achieve state-of-the-art performance on the Kvasir-SEG, CVC-ColonDB, and CVC-300 datasets, demonstrating that the proposed modules remedy weaknesses of earlier methods. On the other hand, some limitations remain: small polyps and images containing multiple polyps still lower our method's performance. Nevertheless, this is a promising method for medical segmentation and can be improved in the future.
|
2301.03101
|
Massive MIMO and NOMA Bits-per-Antenna Efficiency under Power Allocation
Policies
|
A comparative resource allocation analysis in terms of received
bits-per-antenna spectral efficiency (SE) and energy efficiency (EE) in
downlink (DL) single-cell massive multiple-input multiple-output (mMIMO) and
non-orthogonal multiple access (NOMA) systems considering a BS equipped with
many ($M$) antennas, while $K$ devices operate with a single antenna, and
the loading of devices $\rho = \frac{K}{M}$ ranging in $0<\rho\leq 2$ is
carried out under three different power allocation (PA) strategies: the inverse of the
channel power allocation (PICPA), a modified water-filling ($\Delta$-WF)
allocation method, and the equal power allocation (EPA) reference method. Since
the two devices per cluster are overlapped in the power domain in the NOMA
system, the channel matrix requires transformation to perform the zero-forcing
(ZF) precoding adopted in mMIMO. Hence, NOMA operating under many antennas can
favor a group of devices with higher array gain, overcoming the mMIMO and
operating conveniently in the higher loading range $0.6<\rho<2.0$. In such a
scenario, a more realistic and helpful metric consists of evaluating the area
under SE and EE curves, by measuring the bit-per-antenna and
bit-per-antenna-per-watt efficiency, respectively. Our numerical results
confirm a superiority of NOMA w.r.t. mMIMO of an order of 3x for the SE-area
and 2x for the EE-area metric.
|
Thiago Alves Bruza Alves, Taufik Abrão
|
2023-01-08T20:33:42Z
|
http://arxiv.org/abs/2301.03101v1
|
# Massive MIMO and NOMA Bits-per-Antenna Efficiency under Power Allocation Policies
###### Abstract
A comparative resource allocation analysis in terms of received bits-per-antenna spectral efficiency (SE) and energy efficiency (EE) in downlink (DL) single-cell massive multiple-input multiple-output (mMIMO) and non-orthogonal multiple access (NOMA) systems considering a BS equipped with many \((M)\) antennas, while \(K\) devices operate with a single antenna, and the loading of devices \(\rho=\frac{K}{M}\) ranging in \(0<\rho\leq 2\) is carried out under three different power allocation (PA) strategies: the inverse of the channel power allocation (PICPA), a modified water-filling (\(\Delta\)-WF) allocation method, and the equal power allocation (EPA) reference method. Since the two devices per cluster are overlapped in the power domain in the NOMA system, the channel matrix requires transformation to perform the zero-forcing (ZF) precoding adopted in mMIMO. Hence, NOMA operating under many antennas can favor a group of devices with higher array gain, overcoming the mMIMO and operating conveniently in the higher loading range \(0.6<\rho<2.0\). In such a scenario, a more realistic and helpful metric consists in evaluating the area under the SE and EE curves, by measuring the bit-per-antenna and bit-per-antenna-per-watt efficiency, respectively. Our numerical results confirm a superiority of NOMA w.r.t. mMIMO of an order of 3x for the SE-area and 2x for the EE-area metric.
Non-Orthogonal Multiple Access (NOMA); massive Multiple-Input Multiple-Output (mMIMO); Energy Efficiency (EE); Spectral Efficiency (SE).
## I Introduction
The beyond Fifth Generation (5G) of wireless communication systems must allow ultra-dense connections with vastly heterogeneous requirements. The challenges in networks persist, including the Spectral Efficiency (SE) and the Energy Efficiency (EE) joint improvement, the increase in the SE-EE trade-off, and Quality of Service (QoS), always aiming to meet the growing number of devices connected to the network. Among the proposals to solve these challenges, the massive Multiple-Input Multiple-Output (mMIMO) system is the primary proposed system that allows the increase of the link capacity, exploring the propagation of multiple paths with the use of a large number of antennas at the Base Station (BS) [1, 2]. Another relevant enabling technology is the Non-Orthogonal Multiple Access (NOMA), which explores the power domain as an alternative way in terms of multiple access technology, helping to mitigate the spectrum exhaustion problem and serving more than one device per resource block [3].
Although mMIMO is classified as an orthogonal technique in many works, allocating the signals of devices in the same resource block, made possible by spatial diversity, allows us to classify it as a non-orthogonal technique as well [4]. There is a vast literature demonstrating the superior spectral efficiency of NOMA when compared to orthogonal multiple access (OMA) techniques [5]. Previous attempts to improve communication system performance by combining MIMO (with a small number of antennas \(M\)) and NOMA have been discussed in [6, 7, 8, 9].
Studies comparing NOMA and mMIMO in a single cell are presented in [4, 10, 11]. The acquisition of channel state information (CSI) through pilots for the NOMA system is proposed in [10]. In [11], the application of NOMA to the mMIMO scheme is proposed, and better results are achieved in the proposed comparison. Moreover, [4] analyzes the performance of NOMA and mMIMO under line-of-sight and non-line-of-sight conditions.
The canonical mMIMO refers to the systems with BSs formed by a large number of antennas \(M\) when compared to the number of actives devices, \(K\), succinctly \(M\gg K\) is considered a mMIMO setup. The typical NOMA improves the SE by superposing the signals of the selected devices to form a cluster in the power domain, multiplexing it over the same signal and served by the same beamforming. Nonetheless, the success of NOMA depends on the Successive Interference Cancellation (SIC).
Power-domain NOMA can be a candidate technology in dense networks [12]. To improve performance and minimize the impact of assuming perfect SIC [13], devices are divided into two groups. After grouping into pairs, each cluster consists of two devices with a large difference in channel conditions. The device with the better channel condition can decode the signal sent to the device with the worse channel condition, so the interference can be eliminated by SIC. The use of NOMA at a BS equipped with a large number of antennas was investigated in terms of SE in [4, 10]; we adopt a similar system configuration, increase the loading up to two times the number of BS antennas, and analyze the SE, EE, and SE-EE trade-off.
The EE metric is a popular figure of merit for analyzing the balance between power consumption and data rate. EE is the ratio between the effectively transmitted data rate and the total power expended during the transmission process, including instantaneous and static components. With the EE metric, it is possible to evaluate how efficiently a system uses the limited energy resource to communicate data and to optimize this ratio; it can also reveal the tendency of energy consumption when fairness among devices is pursued.
Zero forcing (ZF) is a simple and popular interference-suppressing beamforming scheme under the perfect CSI condition, and it achieves satisfactory performance in realistic situations with imperfect CSI; in this work we adopt perfect CSI, for
which pilots are needed. Adopting NOMA with a large number of antennas requires defining an equivalent channel for interference mitigation; according to the NOMA principle, the equivalent channel matrix becomes smaller than the original one because the power domain is exploited in NOMA.
Various transmission topologies already address the EE problem in mMIMO, finding the optimal number of antennas, the number of devices per cell, and the maximal EE [2, 14]. An EE analysis of the NOMA system is carried out in [15], demonstrating its superiority over conventional orthogonal multiple access (OMA) systems. Recent research seeks to improve NOMA performance: in [16], a minimum pairing distance is defined and compared to OMA, while [17] presents a comparison between OMA and a cell-free system equipped with mMIMO-NOMA. An EE analysis of Terahertz (THz)-NOMA-Multiple-Input Multiple-Output (MIMO) was proposed in [18], but the number of active devices is still much smaller than the number of BS antennas. Finally, [19] is a survey of power-domain NOMA that makes clear the lack of EE analyses comparing NOMA with many antennas against mMIMO.
Recent works propose deploying NOMA combined with other techniques to obtain a more effective transmission scheme; e.g., in [20] NOMA and mMIMO are jointly considered in a two-tier network to accommodate colossal traffic. Furthermore, in [21], the authors apply NOMA to _Distributed Antenna Systems_ (DAS), aiming at better performance than conventional NOMA or DAS alone, while [22] offers an in-depth survey of the state-of-the-art power-domain NOMA variants and systematizes several open issues and research challenges of NOMA-based applications. The NOMA system presents drawbacks, such as hardware (including SIC) complexity, channel feedback, receiver design, and the need for careful power and pilot allocation strategies [12, 19, 23].
This work focuses on revealing the advantages of the mMIMO scheme _versus_ the NOMA scheme with a massive number of BS antennas while varying the loading of devices, _i.e._, the ratio of the number of mobile devices to the number of BS antennas, \(\rho=\frac{K}{M}\), and changing the PA strategy. Besides, we adopt a realistic model for the system's power consumption as in [2], adapted to our needs, aiming at providing a suitable analysis of the system resource allocation.
_Contributions:_ the contributions of this work are fourfold. **a)** an extensive and comparative analysis on the spectral efficiency (SE) performance of mMIMO system against NOMA system, varying the system loading under specific (three different) power allocation methods and making use of the area under the SE (\(\mathcal{S}^{\rm{system}}\)) curve of the system as an effective, useful and fair metric of performance and efficiency; **b)** we develop an energy efficiency (EE) analysis using a detailed model of energy consumption, with fixed and variable terms related to circuitry power consumption with number of antennas and devices, respectively, providing an extensive and comparative analysis on both the NOMA and mMIMO systems under realistic operation scenarios and making use of the area under the EE (\(\mathcal{E}^{\rm{system}}\)) curve of system; **c)** an analysis on the SE-EE trade-off is developed considering a wide range of loading of devices, verifying the fairness between devices; **d)** finally, under mild conditions, we provide evidences for the NOMA's ability to serve a greater number of devices than mMIMO system.
The remainder of the paper is organized as follows. Section II describes the system models for NOMA and mMIMO adopted in this work. In Section III we present the proposed EE-SE formulation for NOMA and massive MIMO systems. Numerical results are analyzed in IV. Section V concludes the paper.
_Notation._ In this work, boldface lower case and upper case characters denote vectors and matrices, respectively. The operator \((x)^{+}=\max(0,x)\). The operators \([\cdot]^{\rm T}\), \(\mathbb{E}[\cdot]\) and \(|\cdot|\) denote transpose, expectation and cardinality, respectively. A random vector \(\mathbf{x}\sim\mathcal{CN}\left\{0,\mathbf{I}_{m}\right\}\) is circularly symmetric Gaussian distributed with mean \(0\) and covariance matrix \(\mathbf{I}_{m}\). \(\mathbf{I}_{m}\) is \(m\times m\) identity matrix.
## II System Models
Let us consider a multi-user single-cell downlink (DL) transmission operating in time division duplex (TDD) with \(K\) single-antenna active devices communicating with one BS, which is equipped with \(M\) transmit antennas under non-line-of-sight (NLOS) propagation. The set \(\mathcal{K}\) contains the \(K\) devices, which are randomly distributed in a disk of radius \(d_{\max}\); the disk is formed by two sub-regions with the same number of devices in each sub-area, identified as \(\mathcal{K}_{H}\) and \(\mathcal{K}_{L}\). The first subset, \(\mathcal{K}_{H}\), contains the indexes of the devices with the higher channel coefficients, sorted in descending order, while the second subset, \(\mathcal{K}_{L}\), is formed by the devices with the lower channel coefficients, sorted in ascending order; the indexes \(k\in\mathcal{K}_{H}\) and \(k\in\mathcal{K}_{L}\) are such that:
\[\mathcal{K}=\mathcal{K}_{H}\cup\mathcal{K}_{L},\quad\text{where} \tag{1}\] \[\mathcal{K}_{H}=\{1,...,K/2\}\;\text{ and }\mathcal{K}_{L}=\{K/2+1,...,K\}.\]
The channel vector of device \(k\) can be described as:
\[\mathbf{h}_{k}=\sqrt{\beta_{k}}\mathbf{h}_{k}^{\prime},\quad k=1,...,K, \tag{2}\]
where \(\beta_{k}\) is the large-scale fading coefficient and satisfy
\[\beta_{j}>\beta_{i},\quad\forall j\in\mathcal{K}_{H},\;\;\forall i\in \mathcal{K}_{L}. \tag{3}\]
Herein, the pathloss model in [dB] is defined as:
\[\beta_{k}=\beta_{0}+10\cdot\xi\cdot\log_{10}(d_{k}), \tag{4}\]
where \(d_{k}\) is the distance of user \(k\) to BS, \(\xi\) is the pathloss coefficient, and \(\beta_{0}\) is the attenuation at the distance of reference.
In each coherence interval, \(\mathbf{h}_{k}^{\prime}\) in (2) for device \(k\) is an independent random small-scale fading realization from an independent Rayleigh fading distribution, \(\mathbf{h}_{k}^{\prime}\sim\mathcal{CN}(0,\mathbf{I}_{M}),k=1,...,K\). The transmitted signal \(\mathbf{x}_{k}\in\mathbb{C}^{M}\) is the beamformed data symbol of device \(k\):
\[\mathbf{x}_{k}=\mathbf{g}_{k}\sqrt{p_{k}}s_{k}, \tag{5}\]
where \(\mathbf{g}_{k}\) is a normalized beamforming vector, \(p_{k}\) the normalized transmission power, and \(s_{k}\sim\mathcal{CN}(0,1)\) the data symbol of device \(k\), with symbol period \(T_{s}\). The signal received at the \(k\)-th device is:
\[y_{k}=\mathbf{h}_{k}^{\mathsf{T}}\sum_{k^{\prime}=1}^{K}\mathbf{x}_{k^{\prime}}+n_{k}=\sqrt{\beta_{k}}\,\mathbf{h}_{k}^{\prime\mathsf{T}}\mathbf{g}_{k}\sqrt{p_{k}}\,s_{k}+\sqrt{\beta_{k}}\,\mathbf{h}_{k}^{\prime\mathsf{T}}\sum_{k^{\prime}\neq k}^{K}\mathbf{g}_{k^{\prime}}\sqrt{p_{k^{\prime}}}\,s_{k^{\prime}}+n_{k}, \tag{6}\]
where \(n_{k}\sim\mathcal{CN}(0,1)\) is the additive noise. Notice that this modeling applies to both NOMA and mMIMO systems, but beamforming is selected differently, and this topic will be addressed in the next sections.
### _Prior Actions_
Because the BS needs to know _a priori_ crucial information related to the channel and devices distributed in the cell, including device location, rate demanded, and channel coefficient, such required _a priori_ information may differ depending on the multiple access scheme considered [12].
The configuration of the mMIMO and NOMA systems is carried out with the guarantee that the devices' requirements are met, and this initial step is assumed to be completed successfully. Subsection III-E briefly discusses the preliminary information required to proceed with the different power allocation (PA) procedures in both mMIMO and NOMA systems.
### _Pilot Overhead for Channel State Information_
Fig. 1 compares the pilot-data transmission structure along one channel coherence interval for the mMIMO and NOMA systems considered. Notice that \(T_{\mathrm{s}}\) is the time required to transmit a data symbol (Data), and the channel coherence time interval \(\mathrm{T}\) is assumed to be a multiple of the data symbol period, \(\mathrm{T}=\iota\cdot T_{\mathrm{s}}\). The power allocated to each pilot in the training step is assumed to be sufficient, while the number of pilots and the portion of the coherence interval dedicated to data transmission are assumed to be the same in both systems.
Notice that in the NOMA transmission, the pilot transmission step is split into two portions: half for the UL pilots and half for receiving the DL pilots. This happens because, to perform SIC, the cell-center devices need to learn the effective channels established by the beamforming. Additionally, the beamforming vectors are based on the cell-center devices, which limits the rates achieved by the cell-edge devices. On the other hand, in the mMIMO scheme, a significant advantage is that there is no need for DL pilots, since the effective channels created by the beamforming are highly predictable, _i.e._, with nearly deterministic gain and phase due to the channel hardening effect [4].
**Assumption 1**: _In the NOMA system, the power allocated to each downlink pilot is sufficient to reach the destination device._
### _Beamforming for NOMA and mMIMO systems_
In the mMIMO system, each device is served by a single beamforming vector. The ZF technique is a popular interference-suppressing beamforming scheme in mMIMO since it eliminates all inter-user interference using an individual beamformer for each device, while favorable propagation facilitates such interference suppression in massive MIMO configurations. Besides, to perform ZF precoding in the NOMA system, it is essential to understand the NOMA _user-pairing_ concept.
_User-pairing:_ Inherent to the NOMA system, user clustering can be performed in several ways after the _user-sorting_ step and the classification of users into center-user and edge-user subsets. Since the SE of NOMA is directly proportional to the difference between the pathlosses of the paired users, a natural choice consists in pairing users with pathloss differences as large as possible [6]:
\[\Delta\beta_{k}=\beta_{k}-\beta_{K+1-k}, \tag{7}\]
forming cluster \(k\) for \(k=1,...,K/2\). With the pairs formed, careful beamforming vector selection is required. Hence, in NOMA we assume that the _beamforming vector_ for paired users is the same, _i.e._, \(\mathbf{g}_{k}=\mathbf{g}_{K+1-k}\) for all \(k=1,...,K/2\).
**Assumption 2**: _In user-pairing procedure, we assume that the paired users are aligned with the BS so that the same beamforming can serve all paired users simultaneously. Hence, by admitting that each pair of devices is spatially aligned with the BS, and using localizing tools described, for instance, in [23, 24], one should assume further a priori user-pairing step in NOMA systems._
**Assumption 3**: _In NOMA system, beamforming serves more than one aligned device simultaneously; specifically, in this paper, two aligned devices per cluster are admitted according to the user-pairing step, while eliminating the inter-cluster interference (favorable propagation) under adopted perfect CSI conditions._
In this work we adopt the linear ZF precoding as defined by the vector:
\[\mathbf{g}_{k}=\mathbf{h}^{\prime}_{k}(\mathbf{h}^{\prime\mathsf{T}}_{k} \mathbf{h}^{\prime}_{k})^{-1}, \tag{8}\]
and satisfying \(\mathbf{h}^{\prime\mathsf{T}}_{i}\mathbf{g}_{k}=0,\,\forall\;i\neq k\), _i.e._, the favorable propagation effect between users belonging to distinct clusters.
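As a concrete illustration of Eq. (8), the following NumPy sketch builds the ZF precoding vectors for all devices at once from the small-scale channel matrix; the column normalization and the use of the conjugate transpose for complex channels are assumptions of this sketch.

```python
import numpy as np

def zf_precoder(H):
    """Zero-forcing precoders for a K x M channel matrix H (rows are h_k'^T).

    Returns an M x K matrix G whose k-th column g_k satisfies h_i'^T g_k = 0
    for all i != k, cf. Eq. (8), with each beamformer normalized to unit power.
    """
    G = H.conj().T @ np.linalg.inv(H @ H.conj().T)    # pseudo-inverse of H
    G /= np.linalg.norm(G, axis=0, keepdims=True)     # normalize each beamformer
    return G
```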
## III SE-EE in NOMA and mMIMO systems
We now discuss the SE and EE formulations for the NOMA and mMIMO systems. The operation of the NOMA system requires pairing devices such that the channel coefficients of the devices in the same cluster are sufficiently different, enabling the use of the power domain. As already mentioned, the interference cancellation process via beamforming presents particular issues that we address below.
### _Data Rates in NOMA with ZF_
Devices are divided into two sets as described in Eq. (1); these groups are represented in Eq. (9) by their large-scale fading coefficients and are grouped into pairs forming clusters as in Fig. 2, where cluster \(k\) is formed by one device from the cell-center set \(\mathcal{K}_{H}\) and one device from the cell-edge set \(\mathcal{K}_{L}\). Hence, the devices are grouped into two subsets:
\[\mathcal{K}_{H}=\{\beta_{1}>\beta_{2}>\ldots>\beta_{K/2}\},\quad\text{(center devices set)} \tag{9}\] \[\mathcal{K}_{L}=\{\beta_{K}<\beta_{K-1}<\ldots<\beta_{K/2+1}\}.\quad\text{(edge devices set)}\]
The _user-pairing_ adopted in Eq. (9) is the same as proposed in [6], creating the largest possible difference in channel coefficients for devices not yet paired.
**Assumption 4**: _In this paper, we assume perfect SIC, and only one perfect SIC stage per cluster is performed, since just 2 devices per cluster are admitted._
Figure 1: Coherence time interval structure: the training and data transmission structure for mMIMO and NOMA schemes under TDD NLOS setup.
The instantaneous Signal to Interference plus Noise Ratio (SINR) of devices in cluster \(k\) is defined as:
\[\mathsf{SINR}_{k}=\frac{\beta_{k}p_{k}|\mathbf{h}_{k}^{\prime\mathsf{T}}\mathbf{g}_{k}|^{2}}{\beta_{k}\sum_{k^{\prime}\neq k}^{K}p_{k^{\prime}}|\mathbf{h}_{k}^{\prime\mathsf{T}}\mathbf{g}_{k^{\prime}}|^{2}+1}. \tag{10}\]
In each cluster, the cell-edge devices treat the interference as noise and decode their data symbols, whereas the cell-center device can decode the data symbols of the cell-edge device and perform SIC, hence effectively removing the interference due to the cell-edge device under _Assumption 3_.
To perform SIC, the cell-center device needs to be able to decode data signal intended for the cell-edge device, _i.e._, the ergodic SINR of the cell-edge device, \(\mathrm{SINR}_{K+1-k}\), at device \(k\), defined as \(\mathrm{SINR}_{k,K+1-k}\), must be greater than or equal to the ergodic SINR of the \(k\)-th cell-center device. Hence, given the uplink (mMIMO and NOMA) and downlink (NOMA) pilot overhead and assuming perfect CSI in all receivers, and admitting _Assumption 4_, the following condition must be satisfied [4, 10]:
\[\mathbb{E}[\mathsf{SINR}_{k,K+1-k}]\;\geq\;\mathbb{E}[\mathsf{SINR}_{k}], \tag{11}\]
where
\[\mathsf{SINR}_{k,K+1-k}=\frac{\beta_{k}p_{K+1-k}|\mathbf{h}_{k}^{\prime\mathsf{T}}\mathbf{g}_{k}|^{2}}{\beta_{k}\sum_{k^{\prime}\neq K+1-k}^{K}p_{k^{\prime}}|\mathbf{h}_{k}^{\prime\mathsf{T}}\mathbf{g}_{k^{\prime}}|^{2}+1}. \tag{12}\]
Herein, the condition in (11) must be satisfied by selecting the transmit powers appropriately.
The achievable ergodic rate of devices in cluster \(k\), _i.e._ device \(k\) in \(\mathcal{K}_{H}\) subset and device \(K+1-k\) in \(\mathcal{K}_{L}\) subset, under Assumptions 1-4, is given by the ergodic rate contribution of user-center device:
\[\mathsf{R}_{k}^{\mathrm{NOMA}}=\tau\mathbb{E}\left[\log_{2}\left(1+\mathsf{ SINR}_{k}\right)\right],\qquad\forall k\in\mathcal{K}_{H} \tag{13}\]
in \([\mathrm{bits}/\mathrm{s}/\mathrm{Hz}]\), and for the user-edge device:
\[\mathsf{R}_{K+1-k}^{\mathrm{NOMA}}=\tau\mathbb{E}\left[\log_{2}\left(1+ \mathsf{SINR}_{K+1-k}\right)\right],\qquad\forall k\in\mathcal{K}_{L} \tag{14}\]
where \(\tau=\left(1-\frac{K\cdot T_{\mathrm{s}}}{\mathrm{T}}\right)\) is the portion of each channel coherence interval (T) that is used for data transmission.
Assuming perfect channel state information, ZF precoding for inter-clusters interference elimination, and using random matrix theory results [25], the \(k\)-th cluster NOMA achievable rate is obtained plugging eq. (10), (13) and (14):
\[\mathsf{R}_{\mathrm{d}k}^{\mathrm{NOMA}}=\tau\mathbb{E}\left[\log_{2}\left(1+\bar{M}\beta_{k}p_{k}\right)\right]+\tau\mathbb{E}\left[\log_{2}\left(1+\frac{\beta_{K+1-k}p_{K+1-k}}{\beta_{K+1-k}p_{k}+1}\right)\right], \tag{15}\]
\(\forall k\in\mathcal{K}\) and \(\bar{M}=M+1-K/2\). Hence, the NOMA system can operate until \(K<2M-1\). A detailed derivation of the expressions in this section can be found in [4] and [10].
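For illustration, a small NumPy sketch of the per-realization cluster rate of Eq. (15) is given below; the expectation in Eq. (15) would be obtained by averaging this quantity over channel and device-location realizations.

```python
import numpy as np

def noma_cluster_rate(beta_c, beta_e, p_c, p_e, M, K, tau):
    """Rate of one NOMA cluster, cf. Eq. (15).

    beta_c, p_c: large-scale fading and power of the cell-center device;
    beta_e, p_e: the same for the paired cell-edge device.
    """
    M_bar = M + 1 - K / 2                                      # NOMA array gain
    r_center = np.log2(1.0 + M_bar * beta_c * p_c)
    r_edge = np.log2(1.0 + beta_e * p_e / (beta_e * p_c + 1.0))
    return tau * (r_center + r_edge)
```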
### _Data Rates in mMIMO with ZF_
In the mMIMO system with ZF precoding the ergodic achievable rate for device \(k\) is given by:
\[\mathsf{R}_{k}^{\mathrm{M-MIMO}}=\tau\mathbb{E}\left[\log_{2}\left(1+\mathsf{ SINR}_{k}\right)\right],\quad[\mathrm{bits}/\mathrm{s}/\mathrm{Hz}] \tag{16}\]
where \(\mathsf{SINR}_{k}\) is defined in (10). Hence, the above mMIMO achievable rate equation becomes:
\[\mathsf{R}_{k}^{\mathrm{M-MIMO}}=\tau\mathbb{E}\left[\log_{2}\left(1+(M-K)p _{k}\beta_{k}\right)\right],\quad[\mathrm{bits}/\mathrm{s}/\mathrm{Hz}], \tag{17}\]
where \((M-K)\) is obtained using random matrix theory, representing the coherent array gain of the received signal [25]. Under linear precoding and combiners, the mMIMO system operates consistently when \(K<M\). Finally, the _average system sum-rate_ (avg-sum-rate) is defined simply by:
\[\mathcal{R}^{\mathrm{M-MIMO}}=\sum_{k=1}^{K}\mathsf{R}_{k}^{\mathrm{M-MIMO}} \qquad\text{and}\qquad\mathcal{R}^{\mathrm{NOMA}}=\sum_{k=1}^{K_{H}}\mathsf{ R}_{\mathrm{d}k}^{\mathrm{NOMA}}.\]
The mMIMO system equations have been thoroughly investigated in literature and can be found in [25] and [26].
### _Energy Efficiency_
EE metric is the ratio of the number of effective bits of information received over the total energy consumed by the overall system to transmit and receive/decode such information. The system data rate can determine the number of effective information bits received at the destination. Power consumption required for processing the signal at the transmitter and receiver side is often neglected; in this sense, it is calculated just as proportional to the radiated transmitted power. The growth in the number of antennas in the BS and the increased number of devices in 5G systems can lead to unattainable EE goals. In general, the average EE can be expressed as:
\[\mathsf{EE}=\frac{\sum_{k=1}^{K}R_{k}}{P_{\mathrm{TOT}}},\qquad[\mathrm{bits}/\mathrm{Joule}/\mathrm{Hz}], \tag{18}\]
where \(P_{\mathrm{TOT}}\) is the total power consumption across the communication system. It should account for the transmission power consumption, RF power amplifier inefficiency, baseband signal processing, and cooling, among others. Therefore, a more realistic and detailed energy consumption model is required.
Based on [2], the adopted power consumption model in our work considers two power terms: _a)_ fixed-term; _b)_ terms scaled with the number of antennas \(M\) and the number of devices \(K\). The scaled terms occur because of the transceiver chains, coding/decoding, channel estimation, and precoding. Let the computational efficiency be \(L\) operations per joule in BS. We describe it as follows:
_RF Power_: \(P_{\mathrm{RF}}\) is the power consumed to transmit the signal to the active devices so that the SINR target is achieved, and \(0<\varpi\leq 1\) is the efficiency of the power amplifier.
Figure 2: System model indicating the pairing formation in the NOMA system. Both mMIMO and NOMA systems deploy the same massive number of antennas at the base station, \(M\).
_Fixed consumption_: \(P_{\mathrm{FIXED}}\) is the power consumed at the BS that is independent of the number of transmit antennas and devices in the cell; it is formed by a term \(P_{0}\), which includes the power consumption of the backhaul infrastructure, control signaling, and baseband processor, and a term \(P_{\mathrm{NN}}\) for the single oscillator used at the BS.
_Dependence only on K_: \(P_{\mathrm{K}}\) is formed by the power required for coding and modulating the information symbols for the devices, represented by \(P_{\mathrm{COD}}\); the power required to decode the \(K\) sequences of information symbols, defined by \(P_{\mathrm{REC}}\); and the received power, represented by \(P_{\mathrm{RX}}\), all of which scale with \(K\). In addition, a portion of the ZF precoding cost [27] depends only on \(K^{3}\):
\[P_{\text{K}}=K(P_{\text{\tiny{COD}}}+P_{\text{\tiny{REC}}}+P_{\text{\tiny{RX} }})+K^{3}\frac{2}{3LT}\]
_Dependence only on M_: the term \(P_{\mathrm{M}}\) accounts for the power \(P_{\mathrm{TX}}\) consumed per transmit antenna, hence
\[P_{\text{M}}=MP_{\text{\tiny{TX}}}\]
_Dependence on K and M_: the term \(P_{\text{\tiny{KM}}}\) is the cost of the ZF precoding (due to LU-based matrix inversion) [27], which depends on the number of devices, the number of antennas, and the vector information symbol.
\[P_{\text{\tiny{KM}}}=MK\frac{3+T}{TL}+MK^{2}\frac{2}{TL}\]
Adding the portions, we obtain the overall power consumption of the system:
\[P_{\mathrm{TOT}}=\frac{P_{\mathrm{RF}}}{\varpi}+P_{\mathrm{FIXED}}+P_{\mathrm{K}}+P_{\mathrm{M}}+P_{\mathrm{KM}}\quad[\text{W}]. \tag{19}\]
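A compact sketch of the consumption model of Eq. (19) and the EE of Eq. (18) is given below; the numeric defaults for the circuit-power terms are placeholders for illustration, not the values used in our simulations.

```python
def total_power(P_RF, M, K, T, L, eff=0.39,
                P_fixed=10.0, P_cod=0.1, P_rec=0.8, P_rx=0.2, P_tx=1.0):
    """Total consumed power following Eq. (19), in Watts."""
    P_K = K * (P_cod + P_rec + P_rx) + (2.0 / (3.0 * L * T)) * K ** 3
    P_M = M * P_tx
    P_KM = M * K * (3.0 + T) / (T * L) + 2.0 * M * K ** 2 / (T * L)
    return P_RF / eff + P_fixed + P_K + P_M + P_KM

def energy_efficiency(sum_rate, p_tot):
    """EE of Eq. (18): effectively received bits per Joule per Hz."""
    return sum_rate / p_tot
```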
### _Power Allocation Strategies_
In the sequel, we present three well-known and frequently applied power allocation strategies. Due to the inherent characteristics of NOMA, we propose modifications to the classical water-filling (WF) algorithm to enable its application in the NOMA system. These modifications, namely \(\Delta\)-WF, ensure that the pairing of devices is not undone by dropping a single device of a pair. To guarantee a certain level of power disparity between the paired devices, the \(\Delta\)-WF power allocation procedure in the NOMA system has two steps: a) first, we allocate power to the clusters; b) then, we allocate power between the paired devices. Thereby, we can analyze the behavior of both systems and compare their results.
Notice that both the mMIMO and NOMA systems deploy the same massive number of antennas at the base station, \(M\). Hence, due to the _channel hardening_ effect [1, 25] inherent to massive MIMO configurations, the small-scale fading vanishes across the \(M\) antennas when the linear ZF precoding with the vectors of Eq. (8) is employed. Therefore, one can consider just the _pathloss coefficients_ \(\beta_{k}\) as the main parameter in the power allocation policies of systems based on a massive number of antennas.
#### III-D1 Equal Power Allocation (EPA)
Equal power allocation (EPA) is deployed as a simple, naive strategy, where all devices are served with the same power. In mMIMO, all devices receive the same transmission power regardless of their distance from the BS. In NOMA, the power allocation has two steps: first, the power is allocated equally among the clusters, and then it is split equally between the devices of each cluster. The EPA strategy applied to mMIMO is defined by:
\[p_{k}=\frac{P_{\text{\tiny{RF}}}}{K}\quad[\text{W}],\quad\forall\,k\in\mathcal{ K}. \tag{20}\]
The EPA procedure applied to NOMA is composed of two steps. In the first step, the reference power of each cluster is defined simply as:
\[p_{\text{ref}}^{\text{cl}}=\frac{2\cdot P_{\text{RF}}}{K}\quad[\text{W}],\quad\forall\,k\in\mathcal{K}_{H}. \tag{21}\]
In the second step, the power allocation among the devices in the same cluster is defined as:
\[p_{k}^{\text{cl-}k}=p_{K+1-k}^{\text{cl-}k}=\frac{p_{\text{ref}}^{\text{cl}}}{2}. \tag{22}\]
#### III-D2 Proportional Channel Inversion Power Allocation (PICPA)
is another power allocation technique adopted in this study. Unlike the EPA technique, which applies the same power to all devices, PICPA applies more power to devices with the worst channel conditions, favoring fairness across the devices. Such power allocation penalizes the average sum rate in favor of fairness among all users.
The Proportional to the Inverse of the Channel Power Allocation (PICPA) strategy applied to mMIMO can be defined as:
\[p_{k}=P_{\text{\tiny{RF}}}\frac{\beta_{k}^{-1}}{\sum_{k=1}^{K}\beta_{k}^{-1}} \quad[\text{W}],\quad\forall k\in\mathcal{K}, \tag{23}\]
while the PICPA procedure applied to NOMA follows two steps; in the first step, the power is allocated equally among the \(K/2\) clusters:
\[p_{\text{ref}}^{\text{cl}}=\frac{2\cdot P_{\text{RF}}}{K}\quad[\text{W}],\quad\forall k\in\mathcal{K}_{H}, \tag{24}\]
after that, the power of each device within the \(k\)-th cluster is defined by allocating more power to the device with smaller large-scale fading \(\beta_{k}\):
\[p_{k}^{\text{cl-}k}=p_{\text{ref}}^{\text{cl}}\frac{\beta_{K+1-k}}{\beta_{k}-\beta_{K+1-k}},\quad\text{and}\quad p_{K+1-k}^{\text{cl-}k}=p_{\text{ref}}^{\text{cl}}-p_{k}^{\text{cl-}k}, \tag{25}\]
where \(p_{k}^{\text{cl-}k}\) is the power allocated to device \(k\) in the \(k\)-th cluster, and \(p_{K+1-k}^{\text{cl-}k}\) is the power allocated to device \((K+1-k)\), also belonging to the \(k\)-th cluster.
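A NumPy sketch of PICPA for both systems, following Eqs. (23)-(25), is shown below; it assumes the paired coefficients satisfy \(\beta_{k}>2\beta_{K+1-k}\) (favored in practice by the user-pairing step) so that both per-device powers remain positive.

```python
import numpy as np

def picpa_mmimo(P_RF, beta):
    """PICPA for mMIMO, Eq. (23): power proportional to the inverse channel gain."""
    inv = 1.0 / beta
    return P_RF * inv / inv.sum()

def picpa_noma(P_RF, beta_center, beta_edge):
    """Two-step PICPA for NOMA, Eqs. (24)-(25), one entry per cluster."""
    n_clusters = len(beta_center)
    p_ref = np.full(n_clusters, P_RF / n_clusters)               # Eq. (24): 2*P_RF/K
    p_center = p_ref * beta_edge / (beta_center - beta_edge)     # Eq. (25)
    p_edge = p_ref - p_center
    return p_center, p_edge
```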
#### III-D3 Classical Water-Filling (WF) Algorithm
The application of the water-filling (WF) algorithm in the mMIMO system results in an optimal (maximum) system sum-rate solution; however, some devices are dropped out of service due to their deteriorated channel conditions. The WF power allocation strategy for mMIMO is described as:
\[\mu= \frac{1}{|\mathcal{K}|}\left(P_{\text{\tiny{RF}}}+\sum_{k=1,k\in \mathcal{K}}^{|\mathcal{K}|}\frac{1}{\beta_{k}}\right), \tag{26}\] \[P_{\text{\tiny{RF}}}=\sum_{k=1,k\in\mathcal{K}}^{|\mathcal{K}|}p_{k }\,,\quad\text{where}\quad p_{k}=\left(\mu-\frac{1}{\beta_{k}}\right)^{+}, \forall k\in\mathcal{K}\] \[\text{and}\quad\mathbf{p}=[p_{1},p_{2},...,p_{|\mathcal{K}|}],\]
with the operator \((z)^{+}=\max(0,z)\). Notice that the constraint on the total available power is set to \(P_{\text{RF}}\) [W]. Algorithm 1 describes the classical WF procedure.
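A NumPy sketch of the iterative procedure implied by Eq. (26) (classical WF, Algorithm 1) is given below: devices whose allocation falls below the water level are excluded and the level is recomputed over the remaining devices.

```python
import numpy as np

def water_filling(P_RF, beta):
    """Classical WF of Eq. (26); dropped devices keep zero power."""
    active = np.arange(len(beta))
    p = np.zeros(len(beta))
    while True:
        mu = (P_RF + np.sum(1.0 / beta[active])) / len(active)   # water level
        p_act = mu - 1.0 / beta[active]
        if np.all(p_act >= 0):
            p[active] = p_act
            return p
        active = active[p_act > 0]        # exclude devices below the water level
```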
On the other hand, directly applying the WF algorithm to the NOMA system harms the pair formation, _i.e._, devices in the \(\mathcal{K}_{L}\) set are effectively dropped from service, undoing the pairs. We therefore propose the following modification of classical WF to allow a fair comparison.
#### III-D4 \(\Delta\)-WF for NOMA
Applying classical WF to NOMA in the same way as to mMIMO breaks some of the formed pairs, since some devices are dropped from the system after the WF step, making the NOMA power difference (\(\Delta\)) between the devices of the same cluster vanish. Hence, we modify the classical WF procedure for NOMA accordingly; the steps of the \(\Delta\)-WF algorithm are described as follows.
In NOMA, the power allocation has two steps; in the first step, the allocation is performed between clusters. Hence, to prevent the formed pairs from being broken, we propose applying WF based on the large-scale fading differences of the paired devices, as defined in Eq. (7): \(\Delta\beta_{k}=(\beta_{k}-\beta_{K+1-k})\), across the \(\mathcal{K}_{H}\) and \(\mathcal{K}_{L}\) subsets of Eq. (9). In the second step, the power is allocated between the devices inside each cluster, assuming perfect successive interference cancellation (SIC); for this to be possible, the condition in Eq. (11) must be satisfied. The new water level in the modified \(\Delta\)-WF power allocation strategy for NOMA is defined by
\[\mu=\frac{1}{|\mathcal{K}_{H}|}\left(P_{\text{RF}}+\sum_{k=1,k\in\mathcal{K}_{H}}^{|\mathcal{K}_{H}|}\frac{1}{\Delta\beta_{k}}\right), \tag{27}\] \[P_{\text{RF}}=\sum_{k=1,k\in\mathcal{K}_{H}}^{|\mathcal{K}_{H}|}p_{\text{cl},k},\qquad\forall k\in\mathcal{K}_{H}\] \[\text{where}\quad p_{\text{cl},k}=\left(\mu-\frac{1}{\Delta\beta_{k}}\right)^{+},\] \[\text{and}\quad\mathbf{p}_{\text{cl}}=[p_{\text{cl},1},p_{\text{cl},2},\ldots,p_{\text{cl},|\mathcal{K}_{H}|}],\]
In the second step, the power allocation to both devices in the \(k\)-th cluster is defined as:
\[p_{K+1-k}^{\text{cl-}k}=p_{k}^{\text{cl-}k}=\frac{p_{\text{cl},k}}{2} \tag{28}\]
Algorithm 2 summarizes the proposed \(\Delta\)-WF power allocation procedure, aiming to improve the SE of NOMA systems.
```
Input: \(\mathcal{K}_{H}\), \(\mathcal{K}_{L}\), \(P_{\text{RF}}\)
1 NP \(\leftarrow\) nonempty (to enter the loop); while (NP \(\neq\varnothing\)) do
2   solve Eq. (27) \(\rightarrow\mathbf{p}_{\text{cl}}\);
3   NP \(\leftarrow\) identify null positions in \(\mathbf{p}_{\text{cl}}\);
4   \(\mathcal{K}_{H}\leftarrow\mathcal{K}_{H}/\{k\}_{\text{NP}}\): exclude from \(\mathbf{p}_{\text{cl}}\) the clusters labeled as NP
5 end while
Output: \(\mathbf{p}_{\text{cl}}=[p_{\text{cl},1},p_{\text{cl},2},\ldots,p_{\text{cl},|\mathcal{K}_{H}|}]\)
```
**Algorithm 2**\(\Delta\)-WF (modified) for NOMA systems
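A NumPy sketch of Algorithm 2 is shown below: water-filling is performed over the pathloss differences of the paired devices, null clusters are excluded iteratively, and the resulting cluster power is split equally inside each cluster as in Eq. (28).

```python
import numpy as np

def delta_wf_noma(P_RF, beta_center, beta_edge):
    """Delta-WF of Eqs. (27)-(28); returns the per-device powers of each cluster."""
    delta = beta_center - beta_edge                  # pathloss differences, Eq. (7)
    active = np.arange(len(delta))
    p_cluster = np.zeros(len(delta))
    while True:
        mu = (P_RF + np.sum(1.0 / delta[active])) / len(active)  # water level, Eq. (27)
        p_act = mu - 1.0 / delta[active]
        if np.all(p_act >= 0):
            p_cluster[active] = p_act
            break
        active = active[p_act > 0]                   # drop the whole pair (cluster)
    return p_cluster / 2.0, p_cluster / 2.0          # Eq. (28): per-device powers
```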
**Complexity analysis:** In a comparative complexity analysis, the \(\Delta\)-WF algorithm for power allocation in the NOMA system (Algorithm 2) performs only two simple additional operations compared to the classical WF procedure (Algorithm 1): a) the subtraction \((\beta_{k}-\beta_{K+1-k})\) in Eq. (27); and b) the division by two in Eq. (28). Besides, the NOMA and mMIMO systems require different _a priori_ information to proceed with the PA procedure.
### _Prior Information for Power Allocation Step_
For implementing the PA policies, prior information is required at the BS, as defined in Table I. Some of this necessary information can be obtained through the dedicated _pilot_ transmission step, at the cost of some overhead, as described in Section II-B. Moreover, a preliminary step is known, in which the _spatial localization_ and _path loss_ estimation of all devices must be realized. With such a _prior_ information availability, the _user-sorting_ and _user-pairing_ steps can be performed.
## IV Numerical Results
The numerical evaluations for the proposed analyses of the NOMA _and_ mMIMO systems are presented in this section. The simulation system and channel parameter values used throughout this section are listed in Table II. The BS is located at the center of the cell and is equipped with a massive number \(M\) of transmit antennas in a typical non-line-of-sight (NLOS) channel propagation scenario, while the devices are randomly distributed in the cell area and split into the two subsets \(\mathcal{K}_{L}\) and \(\mathcal{K}_{H}\), as illustrated in Fig. 2. In all simulations, we consider a block fading model where the time-frequency resources are divided into coherence time intervals (T), in which the channels remain constant and frequency flat; T is measured in multiples of the symbol period (\(T_{s}\)). The system and channel scenarios have been simulated using Matlab 2019 running on one Intel HD Graphics 6000 GPU, an Intel(R) Dual-Core(TM) i5 CPU @ 1.6 GHz, and 8 GB of RAM.
\begin{table}
\begin{tabular}{l|l} \hline
**Parameter** & **Value** \\ \hline
BS antennas & \(M=64,128,256\) \\
Max. \# devices in the cell & \(K=2M\) (NOMA) \\
Cell loading & \(\rho=K/M\) \\
Total RF power available & \(P_{\text{RF}}=1\) W \\
NOMA pairs / clusters & \(N=K/\zeta=K/2\) \\
NOMA devices per cluster & \(\zeta=2\) \\
\# antennas per device & 1 \\
Cell edge length & \(d_{\max}=350\) m \\
Strong device position & \([d_{\min};d_{1}]\in[50;100]\) m \\
Weak device position & \([d_{2};d_{\max}]\in[150;350]\) m \\
Array gain, mMIMO device & \(M-K\) \\
Array gain, NOMA \(k\in\mathcal{K}_{H}\) & \(M+1-K/2\) \\
Data symbol period & \(T_{s}\) \\
Coherence time interval & \(\mathrm{T}=512\cdot T_{s}\), \(\iota=512\) \\ \hline
\multicolumn{2}{l}{**Channel**} \\ \hline
Pathloss exponent & \(\xi=3.78\) \\
Attenuation at the reference distance \(d_{0}\) & \(\beta_{0}=130\) [dB] \\
\# Monte-Carlo realizations & 1000 \\ \hline
\end{tabular}
\end{table}
Table II: Simulation Parameters
### _Spectral Efficiency Comparison_
The mMIMO and NOMA SE performance analysis is carried out in this subsection by increasing the number of devices two by two until the loading limit \(\rho=2\). The results consider \(M=64\), \(128\), and \(256\) BS antennas. Fig. 3(a) shows the SE achieved when the available RF power is allocated following the EPA strategy, where each device receives the same power. The mMIMO system outperforms NOMA in all configurations when the loading is \(\rho<0.6\); however, the NOMA system achieves a higher SE than mMIMO for each \(M\) scenario when the loading of devices increases beyond \(\rho>0.6\). The maximum avg-SE is \(373\) [bits/s/Hz], attained with ZF-NOMA, \(M=256\) antennas, and \(\rho\approx 0.76\). Besides, one can infer that mMIMO does not work with a loading higher than \(1\), since the array gain reaches zero at full loading, while NOMA operates suitably until the number of devices reaches \(M\cdot\zeta\), where \(\zeta\) is the number of devices per cluster.
Fig. 3(b) depicts the SE achieved by the mMIMO and NOMA systems when the PICPA method is applied to allocate the available RF power among the devices. The avg-SE of the mMIMO system exceeds the NOMA counterpart until the loading \(\rho\) surpasses \(\approx 0.62\) for the three BS antenna configurations, \(M=64,128\), and \(256\). This PA technique provides more power to the devices with the worst channel conditions, which keeps the maximum SE below that attained with EPA.
Fig. 3(c) depicts the conventional WF algorithm applied to mMIMO. Under such a power allocation approach, we highlight that forming pairs is unfeasible in the NOMA system. Indeed, the WF algorithm can maximize the system SE since it allocates more power to devices with better channel conditions, while devices under bad channel conditions (below the water level) are dropped out of service.
The classical WF algorithm has therefore been adapted to the NOMA system so that devices are always dropped out in pairs. This adaptation yields substantial avg-sum-rate improvements for low \(M\) compared with classical WF PA in mMIMO. The \(\Delta\)-WF power allocation procedure preserves the pair-clustering structure of the NOMA system, allocating more power to the cluster with the larger difference between large-scale fading coefficients. Fig. 3(c) shows a maximum avg-SE of \(\approx 361\) [bits/s/Hz], achieved at \(\rho=1\) (\(K\approx 256\) devices) when the modified WF is deployed in the NOMA system. Moreover, when the number of BS antennas is lower (\(M=64\) or \(128\)), NOMA attains a higher peak than mMIMO; e.g., for \(M=64\) antennas the mMIMO SE peaks at a loading of \(\rho\approx 0.7\), while the NOMA SE peaks at \(\rho\approx 1.2\). However, as the number of BS antennas grows, the NOMA SE advantage shrinks.
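For reference, a minimal Python sketch of the classical water-filling allocation on which \(\Delta\)-WF builds is given below; the pair-preserving \(\Delta\)-WF adaptation itself is only described qualitatively in the text and is not reproduced here.

```python
import numpy as np

def water_filling(gains, p_total):
    """Classical water-filling: maximize sum log2(1 + p_k * g_k) s.t. sum p_k = p_total.

    Devices whose inverse gain lies above the water level receive zero power
    (they are dropped), which is the behaviour discussed for WF in mMIMO.
    """
    inv_g = 1.0 / np.asarray(gains, dtype=float)
    order = np.argsort(inv_g)                 # best channels first
    inv_sorted = inv_g[order]
    powers = np.zeros_like(inv_g)
    for n in range(len(inv_sorted), 0, -1):
        level = (p_total + inv_sorted[:n].sum()) / n    # candidate water level
        if level > inv_sorted[n - 1]:                   # all n kept devices get p_k > 0
            powers[order[:n]] = level - inv_sorted[:n]
            break
    return powers
```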
**Number of active devices after the PA procedure**. Fig. 4 shows the number of active devices after applying the PA methods: with the EPA and PICPA algorithms, all devices remain active. However, in the classical WF mMIMO system, when the loading increases beyond \(\rho\approx 0.25\), half of the devices are dropped out; with \(\Delta\)-WF NOMA PA, the percentage of active devices is always higher, e.g., above 70% for \(\rho\approx 1.1\) and \(M=256\) antennas, the worst case.
Fig. 5 summarizes the avg-sum-rate surfaces in terms of SE \(\times\rho\times M\) achieved by NOMA with EPA, mMIMO with WF, and NOMA with the modified \(\Delta\)-WF. In the low-loading regime, \(\rho<0.65\), the classical WF PA in ZF-mMIMO achieves the best results (pink surface). When the number of antennas is as low as \(M=64\) and \(\rho\) lies between \(0.7\) and \(1.8\), the EPA PA applied to NOMA (green surface) achieves superior results. Moreover, when \(M=128\) and \(0.8<\rho<1.6\), ZF-NOMA-EPA achieves superior results (green surface). For a larger number of BS antennas, _i.e._, \(M=256\), ZF-NOMA-EPA achieves superior SE only over a short loading range, \(0.86<\rho<0.97\). Finally, when \(\rho>0.97\), the modified \(\Delta\)-WF achieves competitive results (blue surface).
Fig. 3: The average sum-rate with the loading of devices \(0<\rho\leq 2\), considering four power allocation methods: EPA, PICPA, WF, and \(\Delta\)-WF. The average is obtained over 1000 random device locations.
### _Jain's Fairness Index_
Another important analysis concerns the fairness among devices, _i.e._, the disparity in the transmission rates achieved by the active devices in the cell. For this measure, we use Jain's fairness index as described in [28], defined as:
\[\mathcal{F}=\frac{\left(\sum_{k=1}^{K}R_{k}\right)^{2}}{K\sum_{k=1}^{K}R_{k}^{2}}. \tag{29}\]
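Equation (29) translates directly into a few lines of code; the sketch below additionally assumes that the index is evaluated over the rates of the active devices only, which is not stated explicitly in this excerpt.

```python
import numpy as np

def jain_fairness(rates):
    """Jain's fairness index, eq. (29): F = (sum R_k)^2 / (K * sum R_k^2).

    F = 1 when all devices get the same rate and F -> 1/K when a single device
    takes the whole sum rate.  Devices dropped by the PA (R_k = 0) are excluded
    here (an assumption).
    """
    r = np.asarray(rates, dtype=float)
    r = r[r > 0]                              # keep active devices only
    if r.size == 0:
        return 0.0
    return float(r.sum() ** 2 / (r.size * np.sum(r ** 2)))
```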
Fig. 6 depicts the fairness curves attained by the NOMA and mMIMO systems with the EPA, PICPA, WF, and \(\Delta\)-WF PA procedures as the loading of devices grows up to \(\rho=2\). Fig. 6(a) shows Jain's fairness index when the EPA policy is used: the mMIMO system achieves better \(\mathcal{F}\) values than NOMA for \(\rho<1\); on the other hand, NOMA attains \(\mathcal{F}\approx 0.5\) for almost every loading of devices, independently of \(M\).
Fig. 6(b) shows Jain's fairness index when the PICPA method is applied. Although the SE results are lower under this PA method, mMIMO obtains the best fairness, keeping \(\mathcal{F}\) consistently above 0.85, while NOMA remains below 0.5.
Jain's fairness index under WF and \(\Delta\)-WF is depicted in Fig. 6(c). It is intrinsic to these algorithms to allocate more power to devices with better channel conditions, which degrades the fairness between devices. Under this method, a significant influence of the number of BS antennas \(M\) on the fairness result can be observed.
### _Energy Efficiency Comparison_
Energy efficiency (EE) is another important figure of merit used to compare the systems' performance. In this section, a power consumption model comprising a fixed circuitry part and a part that varies with the number of antennas \(M\) and the number of devices \(K\) is adopted, following eq. (19).
Table III [2] presents the parameter values adopted for the EE analysis and comparison discussed in this subsection. Fig. 7 depicts the EE performance with the EPA, PICPA, WF, and \(\Delta\)-WF PA procedures, considering the same three numbers of antennas. Fig. 7(a) shows the EE performance with EPA, in which all devices receive the same power. The avg-EE of mMIMO exceeds that of NOMA by around 13% to 20%. It can also be observed that adding antennas at the BS increases the power consumption, degrading the EE result. Again, for device loadings \(0.6<\rho\leq 2.0\), NOMA outperforms the mMIMO system.
Fig. 7(b) shows the EE performance with PICPA. In this method, more power is allocated to devices with poor channel coefficients, resulting in poor EE performance for both the NOMA and mMIMO systems, which attain maxima of 0.25 and 0.16 [bits/W] for mMIMO and NOMA, respectively. The maximum EE attained by mMIMO is generally around 50% higher than that of NOMA. However, for device loadings \(\rho\geq 0.65\), NOMA surpasses the mMIMO EE performance.
Fig. 7(c) depicts the EE performance of mMIMO with classical WF and of NOMA with the \(\Delta\)-WF algorithm. The results confirm the superior energy efficiency of mMIMO within the range where it operates consistently, _i.e._, \(0<\rho<1\). Notice that the maximum EE achieved by mMIMO is about 70% higher than that of NOMA for the different numbers of BS antennas. Finally, NOMA becomes more energy efficient than mMIMO only when the loading of devices is high, \(\rho>0.95\).
In all analyzed system scenarios, mMIMO equipped with the classical WF PA procedure achieves the highest maximum EE, and it attains better EE results than NOMA for \(\rho<1\). On the other hand, NOMA can serve a significantly larger number of devices (twice as many) than mMIMO.
\begin{table}
\begin{tabular}{l|l} \hline
**Parameter** & **Value** \\ \hline Backhaul Infrastructure & \(P_{b}\) = 2 W \\ \hline Single oscillator & \(P_{syn}\) = 2 W \\ \hline Coding and modulation & \(P_{COD}\) = 4 W per device \\ \hline Decoding and demodulation & \(P_{DEC}\) = 0.5 W per device \\ \hline Receive power & \(P_{RX}\) = 0.3 W per device \\ \hline Transmitted power & \(P_{TX}\) = 1 W per antenna \\ \hline Efficiency of Power Amplifier & \(\tau=0.3\) \\ \hline Operations/Joule & L = \(10^{9}\) oper. per joule \\ \hline \end{tabular}
\end{table}
Table III: Adopted Parameters values for EE analysis[2]
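Equation (19) itself is not reproduced in this excerpt, so the Python sketch below only illustrates how the Table III entries could enter a generic fixed-plus-per-antenna-plus-per-device consumption model; the exact breakdown used in the paper may differ.

```python
def energy_efficiency(sum_rate_bps, M, K, P_rf=1.0):
    """Energy efficiency [bit/Joule] under an assumed generic consumption model.

    Combines the Table III entries (backhaul, oscillator, per-device RF chains,
    PA efficiency tau, computational cost via L operations/Joule) in the usual
    fixed + per-antenna + per-device form.  Illustrative only; eq. (19) is not
    reproduced in this excerpt.
    """
    P_b, P_syn = 2.0, 2.0                    # backhaul, local oscillator [W]
    P_cod, P_dec, P_rx = 4.0, 0.5, 0.3       # per-device coding / decoding / receive [W]
    P_tx_chain = 1.0                          # per-antenna transmit circuitry [W]
    tau, L = 0.3, 1e9                         # PA efficiency, operations per Joule
    p_total = (P_rf / tau                     # radiated power scaled by PA efficiency
               + P_b + P_syn
               + M * P_tx_chain
               + K * (P_cod + P_dec + P_rx)
               + sum_rate_bps / L)            # processing power assumed to grow with throughput
    return sum_rate_bps / p_total
```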
Figure 4: The average of active devices after PA procedure _versus_ loading of devices in the range \(0<\rho\leq 2\). The average is obtained over 1000 random devices locations.
Figure 5: Average sum rate with loading \(\rho\) and M.
Figure 6: The fairness of NOMA _and_ mMIMO system under three power allocation procedures: (a) EPA; b) PICPA; (c) WF and \(\Delta\)-WF. The average is obtained over 1000 random device locations.
Figure 7: EE for NOMA _vs._ mMIMO under three power allocation procedures: (a) EPA; b) PICPA; (c) WF and \(\Delta\)-WF. Average EE obtained over 1000 random devices locations.
Fig. 8 summarizes the best EE results in a surface plot, with mMIMO under WF outperforming NOMA across the entire loading range where it operates consistently. For device loadings \(\rho>1\), NOMA operates with lower EE up to the loading \(\rho=2\). Moreover, for the smallest number of BS antennas, NOMA with EPA outperforms NOMA with the modified \(\Delta\)-WF; as the number of BS antennas increases, however, NOMA with \(\Delta\)-WF achieves marginally superior EE results.
### _Area Under Curves SE and EE_
For a fair comparison, one can consider the average SE and EE over the whole range of device loadings, normalized per antenna, attainable by the NOMA and mMIMO systems. Hence, let us consider the corresponding areas under the SE and EE curves in Fig. 3 and Fig. 7, such that:
\[\mathcal{S}_{M}^{\mathrm{sys}}=\frac{1}{M}\cdot\int_{0}^{\rho=2}\overline{\mathrm{SE}}(\rho)\ d\rho\qquad\left[\frac{\mathrm{bits/antenna}}{\mathrm{s}\cdot\mathrm{Hz}}\right]\]
and
\[\mathcal{E}_{M}^{\mathrm{sys}}=\frac{1}{M}\cdot\int_{0}^{\rho=2}\overline{\mathrm{EE}}(\rho)\ d\rho\qquad\left[\frac{\mathrm{bits/antenna}}{\mathrm{Joule}\cdot\mathrm{Hz}}\right],\]
respectively, where \(\mathrm{sys}\in\{\mathrm{NOMA},\mathrm{mMIMO}\}\), \(\overline{\mathrm{SE}}(\rho)\) is the average overall system sum-rate, and \(\overline{\mathrm{EE}}(\rho)\) is the average overall system energy efficiency achieved under a specific loading of devices \(\rho\). Hence, comparing the values of the corresponding areas under the SE and EE curves of Fig. 3 and Fig. 7, we obtain Fig. 9.
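The two area metrics can be approximated from the simulated curve samples by simple trapezoidal integration, as in the sketch below; the curve in the usage example is a placeholder, not the paper's data.

```python
import numpy as np

def area_under_curve(rho, values, M):
    """Per-antenna area under an SE(rho) or EE(rho) curve over 0 < rho <= 2.

    Approximates the S and E metrics above by trapezoidal integration of the
    simulated samples (rho[i], values[i]).
    """
    return float(np.trapz(values, rho) / M)

# Usage with a placeholder curve shape (not the paper's data):
rho = np.linspace(0.05, 2.0, 40)
se_curve = 200 * rho * np.exp(-rho)
print(area_under_curve(rho, se_curve, M=64))
```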
From the SE perspective, and considering the EPA policy, the highest area-under-SE-curve ratio is achieved when the number of BS antennas is \(M=64\):
\[\mathcal{S}_{\mathrm{M=64}}^{\mathrm{NOMA}}\approx 2.7\cdot\mathcal{S}_{ \mathrm{M=64}}^{\mathrm{mMIMO}}.\]
Notice that when the number of antennas \(M\) grows, the ratio above decreases. In the same way, considering WF policy, the gain trend remains. In contrast, considering the PICPA policy, the ratio practically remains the same.
Furthermore, from the EE perspective and under the EPA policy, the highest ratio is achieved when the BS is equipped with \(M=256\) antennas:
\[\mathcal{E}_{\mathrm{M=256}}^{\mathrm{NOMA}}\approx 1.8\cdot\mathcal{E}_{ \mathrm{M=256}}^{\mathrm{mMIMO}}.\]
One can conclude that, in almost all scenarios, NOMA is more spectrally and energetically efficient than mMIMO over the extensive device-loading range \(0<\rho\leq 2\): on average, roughly 80% more efficient in terms of energy efficiency and 170% more efficient in terms of spectral efficiency.
### _Resource Efficiency (SE-EE Trade-off)_
NOMA and mMIMO are analyzed in terms of the SE-EE trade-off, namely the _resource efficiency_ (RE), considering device loadings increasing up to 2. From Fig. 10, one can graphically identify the loading range that maximizes the SE-EE trade-off for each BS antenna configuration \(M\). The left y-axis depicts the avg-SE, and the right y-axis shows the avg-EE. Table IV summarizes the optimal loading of devices that maximizes the SE-EE trade-off, together with the percentage of active users after power allocation and Jain's fairness index. Fig. 10(a) shows the results for \(M=64\): NOMA with EPA achieves the SE-EE trade-off at the highest loading of devices and with the highest SE in the trade-off; on the other hand, mMIMO with classical WF achieves the highest SE-EE trade-off, although the fraction of active devices is around 0.5. Fig. 10(b) depicts the results for \(M=128\): NOMA with EPA achieves the SE-EE trade-off at the highest loading of devices, whereas mMIMO with classical WF, at a lower loading, achieves higher SE and EE values in the trade-off with half of the devices active. Fig. 10(c) shows the results for \(M=256\): NOMA with \(\Delta\)-WF achieves the SE-EE trade-off at the highest loading of devices, and once more mMIMO with WF achieves a higher SE-EE trade-off with 47% of the devices active. These results demonstrate that increasing the number of BS antennas improves the SE result but, on the other hand, worsens the EE result.
Figure 8: EE with loading \(\rho\) and M.
Figure 9: The area under the curves of the NOMA _and_ mMIMO systems under three power allocation procedures: (a) SE curves; (b) EE curves. The average is obtained over 1000 random device locations.
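Purely as an illustration of how such a trade-off point could be located numerically, the sketch below normalizes both curves and looks for their crossing; the excerpt reads the operating point off Fig. 10 graphically and does not give a formula, so this criterion is an assumption.

```python
import numpy as np

def tradeoff_loading(rho, se, ee):
    """Locate the loading where normalized SE and EE curves cross.

    Assumes (for illustration only) that the SE-EE trade-off point is taken
    where the two curves, each normalized to its own maximum, intersect.
    """
    se_n = np.asarray(se, dtype=float) / np.max(se)
    ee_n = np.asarray(ee, dtype=float) / np.max(ee)
    idx = int(np.argmin(np.abs(se_n - ee_n)))
    return rho[idx]
```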
## V Conclusion and Future Works
This work presented a comparative SE and EE analysis of mMIMO and NOMA in the single-cell downlink, with the BS equipped with three antenna configurations. From the SE perspective, mMIMO with the classical WF algorithm achieved better results at low and medium loadings. On the other hand, for higher system loadings, NOMA achieves better results in the range \(0.6<\rho\leq 2\).
The PA methods applied to the NOMA system (EPA, PICPA, and \(\Delta\)-WF) result in different SE performance. Indeed, when the channel hardening condition is fully attained and the number of BS antennas increases (\(M=128\) and \(256\)), the best SE results are attained with the proposed \(\Delta\)-WF algorithm, although, as expected, the fairness index is degraded.
Under the EE perspective, the mMIMO achieved better results when employing the three EPA, PICPA, and WF PA methods under \(K<M\). However, the NOMA can operate under higher system loading, _i.e._, \(K<2M-1\).
In terms of the area-under-SE-curve and area-under-EE-curve metrics, \(\mathcal{S}\) and \(\mathcal{E}\), respectively, the NOMA system attained better results, owing to its ability to serve a larger number of users than mMIMO. These numerical results confirm NOMA's ability to operate with a high loading of devices. On the other hand, high fairness could not be achieved with NOMA in the analyzed scenarios.
From the perspective of the SE-EE trade-off, mMIMO achieved the best results because of its superior EE; the optimum was always reached at a device loading of \(\rho=0.6\) in all \(M\) setups.
NOMA systems present exciting features and have been intensively investigated as a promising technique for future wireless generations. As future work, hybrid NOMA systems and alternative techniques such as _rate-splitting multiple access_ (RSMA) may further improve the overall EE of massive MIMO systems.
## Acknowledgement
This work was partly supported by the National Council for Scientific and Technological Development (CNPq) of Brazil under Grant 310681/2019-7, partly by CAPES-Brazil (Finance Code 001), and by Londrina State University - Parana State Government (UEL).
|
2304.09594
|
Formality of Sphere Bundles
|
We study the formality of orientable sphere bundles over connected compact
manifolds. When the base manifold is formal, we prove that the formality of the
bundle is equivalent to the vanishing of the Bianchi-Massey tensor introduced
by Crowley-Nordstr\"{o}m. As an example, this implies that the unit tangent
bundle over a formal manifold can only be formal when the base manifold has
vanishing Euler characteristic or a rational cohomology ring generated by one
element. When the base manifold is not formal, we give an obstruction to the
formality of sphere bundles whose Euler class is reducible.
|
Jiawei Zhou
|
2023-04-19T11:59:55Z
|
http://arxiv.org/abs/2304.09594v3
|
# Formality of Sphere Bundles
###### Abstract
We study the formality of orientable sphere bundles over connected compact manifolds. When the base manifold is formal, we prove that the formality of the bundle is equivalent to the vanishing of the Bianchi-Massey tensor introduced by Crowley-Nordstrom. As an example, this implies that the unit tangent bundle over a formal manifold can only be formal when the base manifold has vanishing Euler characteristic or a rational cohomology ring generated by one element. When the base manifold is not formal, we give an obstruction for the formality of sphere bundles whose Euler class is reducible.
## 1 Introduction
In rational homotopy theory, the equivalence relation between topological spaces is generated by continuous maps \(f:X\to Y\) inducing isomorphisms \(\pi_{*}(X)\otimes\mathbb{Q}\to\pi_{*}(Y)\otimes\mathbb{Q}\). If the fundamental groups of \(X\) and \(Y\) are simple enough (for example, trivial), this equivalence is the same as requiring that \(f^{*}:H^{*}(Y,\mathbb{Q})\to H^{*}(X,\mathbb{Q})\) be an isomorphism. Hence, we can use commutative differential graded algebras (CDGAs) as models of these spaces. Such CDGAs consist of polynomial differential forms, or simply differential forms if the space is a smooth manifold. The equivalence between CDGAs is likewise defined by quasi-isomorphisms, i.e., morphisms inducing isomorphisms on cohomology. A CDGA is called formal if it is equivalent to its cohomology, and a topological space is called formal if its model CDGA is formal. In short, the rational homotopy type of a formal space can be described just by its cohomology ring, a much simpler structure.
Several examples of formal spaces stand out, such as H-spaces, symmetric spaces, products of formal spaces, and \(k\)-connected compact manifolds whose total dimension does not exceed \(4k+2\)[11]. Additionally, compact Kahler manifolds are formal, which was first shown by Deligne, Griffiths, Morgan, and Sullivan [5]. In general, a compact complex manifold is formal if it satisfies the \(dd^{c}\)-lemma. However, the analogous version for symplectic manifolds that satisfy the hard Lefschetz property does not imply formality, even if the manifold is simply-connected [3].
For general spaces, Sullivan in [12] (see also [5]) used a special class of algebras to classify CDGAs. It is a free graded algebra, with a well-ordered set of generators. The differential of each generator belongs to the subalgebra generated by elements with smaller indices. Such an algebra is referred to as the Sullivan algebra. It is called minimal if the differential is reducible. Every connected CDGA is equivalent to a unique minimal Sullivan algebra up to quasi-isomorphism, which is called the Sullivan minimal model.
Adding an odd-degree generator to the Sullivan algebra of a manifold is equivalent to constructing an orientable bundle with odd-dimensional sphere fiber over it. So we are interested in investigating how this process affects formality, and hope that it contributes to the study of the formality of general fiber bundles. In this paper, manifolds are usually assumed to be smooth, connected and compact, sphere bundles are assumed to be orientable, and the cohomologies are assumed to be over a field \(\mathbb{K}\) of characteristic \(0\). Most proofs should also work for general topological manifolds and spherical fibrations, as they do not rely on the smooth or bundle structures.
The relationship of formalities between the base and total space was studied by Lupton [10] and Amann-Kapovitch [1]. For a fibration of simply connected spaces of finite type, if the fiber is elliptic, formal, and satisfies the Halperin conjecture, then the base is formal if and only if the total space is formal. This condition is satisfied by bundles with even-dimensional sphere fibers.
On the other hand, there are simple examples of a non-formal bundle over a formal manifold with an odd-dimensional sphere fiber, such as an orientable circle bundle over a torus with a non-trivial Euler class. However, adding one generator does not break formality dramatically. Biswas, Fernandes, Munoz and Tralle [2, Proposition 4.5] proved that the higher order (greater than \(3\)) Massey products of a Boothby-Wang fibration vanish, if the base is a formal symplectic manifold satisfying the hard Lefschetz property. In an earlier work [15], we have also shown that a sphere bundle over a formal manifold has an \(A_{\infty}\)-minimal model where the only non-trivial operations are \(m_{2}\) and \(m_{3}\).
An \(A_{\infty}\)-algebra is said to be formal if it has an \(A_{\infty}\)-minimal model with only \(m_{2}\) non-trivial. The information of \(m_{3}\) is encoded in the Bianchi-Massey tensor introduced by Crowley and Nordstrom [4], which is a linear map from a subspace of \((H^{*})^{\otimes 4}\) to \(H^{*}\). More precisely, a compact manifold whose Bianchi-Massey tensor vanishes has an \(A_{\infty}\)-minimal model with \(m_{3}=0\). Therefore, it is natural to conjecture that a sphere bundle over a formal manifold is formal if its Bianchi-Massey tensor vanishes.
Unlike the possibly different representatives of \(A_{\infty}\)-minimal models and Massey products, the Bianchi-Massey tensor is uniquely defined without ambiguity. Once we prove the above conjecture in Section 3, it therefore follows that the formality of a sphere bundle over a formal manifold can be determined by a finite calculation.
**Theorem 1.1**.: _Suppose \(M\) is a compact formal manifold, and \(\pi:X\to M\) is an orientable \(S^{k}\)-bundle. Then \(X\) is formal if and only if the Bianchi-Massey tensor of \(\Omega^{*}(X)\) vanishes. Moreover, when \(k\) is even, \(X\) is always formal._
Also, a trivial Euler class is sufficient for the formality of sphere bundles over formal manifolds. For non-trivial Euler classes, we can consider the special case that the Euler class is that of the volume form. This requires the manifold to be even-dimensional. We prove that such bundles are formal only when the rational cohomology rings of the base manifolds are generated by one element.
**Theorem 1.2**.: _Suppose that \(M\) is an even-dimensional compact orientable formal manifold. Let \(X\) be a sphere bundle such that the volume form of \(M\) is a representative of the Euler class. If \(X\) is formal, then \(H^{*}(M)=\mathbb{K}[x]/(x^{p})\) is a quotient of the polynomial ring with a single variable._
It immediately follows that if the unit tangent bundle of a formal manifold \(M\) is formal, then either the Euler characteristic \(\chi(M)=0\), or \(H^{*}(M)\) is generated by one element.
It is also interesting to consider the more general case when the base \(M\) is not necessarily formal. As \(\Omega^{*}(M)\) can no longer be represented by the cohomology, we consider its Sullivan minimal model \(\mathcal{M}\) instead. Depending on the reducibility of the representative of the Euler class in \(\mathcal{M}\), the Sullivan minimal model of the sphere bundles can have two types. In this paper we mainly consider the reducible case, and give an obstruction under certain conditions.
**Theorem 1.3**.: _Let \((M,\omega)\) be a connected symplectic manifold satisfying the hard Lefschetz property. Suppose \([\omega]\) is an integral and reducible cohomology class, i.e. there exists some \(x_{i},y_{i}\in H^{1}(M)\) such that \([\omega]=\sum[x_{i}]\wedge[y_{i}]\). Then the Boothby-Wang fibration of \(M\), i.e. the circle bundle with Euler class \([\omega]\), is non-formal._
This obstruction can be generalized to \(S^{4k+1}\)-bundles, as long as the Euler class \([\omega]\) can be written as a sum of products of \((2k+1)\)-cohomology classes, and the hard Lefschetz property can be weakened in the following way: Taking the product of \([\omega]\) is an isomorphism from a non-trivial space \(H^{s}(M)\) to \(H^{s+4k+2}(M)\) for some \(s\), and is injective from \(H^{s-2k-1}(M)\) to \(H^{s+2k+1}(M)\). The condition of the sphere dimension \(4k+1\) is difficult to relax. We will give a simple example of a formal \(S^{4k+3}\)-bundle satisfying other requirements.
This paper is organized as follows. In Section 2, we review the algebraic tools that play a central role in this paper, including the Bianchi-Massey tensor, \(A_{\infty}\)-algebra and the Sullivan minimal model. We give the proof of Theorem 1.1 and Theorem 1.2 in Section 3. And in Section 4, we discuss the formality of the general sphere bundles, and give the proof of Theorem 1.3.
**Acknowledgement.** The author thanks Ruizhi Huang, Si Li, Jianfeng Lin, Li-Sheng Tseng and Jie Wu for helpful discussions and valuable suggestions. The author would like to acknowledge the support of the National Key Research and Development Program of China No. 2020YFA0713000.
## 2 Preliminary
### Bianchi-Massey tensor
Let \(V\) be a graded vector space over a field \(\mathbb{K}\) of characteristic \(0\). We let \(\mathcal{G}^{k}V\) denote the \(k\)-th graded symmetric power of \(V\), i.e. the quotient space of \(V^{\otimes k}\) by the relations of graded commutativity. We will use \((x_{1}\cdot x_{2})\) to denote the graded symmetric product of \(x_{1}\) and \(x_{2}\). Hence, \((x_{1}\cdot x_{2})=(-1)^{|x_{1}||x_{2}|}(x_{2}\cdot x_{1})\), where \(|x_{1}|,|x_{2}|\) are the degrees of \(x_{1}\) and \(x_{2}\), respectively. We can define \((x_{1}\cdot x_{2}\cdot\ldots\cdot x_{k})\) similarly, and such elements generate \(\mathcal{G}^{k}V\) when all \(x_{i}\in V\).
**Remark 2.1**.: In our setting, \(\mathcal{G}^{k}V\) is isomorphic to the space of graded commutative \(k\)-tensors of \(V\), although this is not true if the characteristic of the ground field is nonzero or \(V\) is replaced by an abelian group.
Let \(K[\bullet]\) denote the kernel of a tensor space under full graded symmetrisation. For example, if \(V\) is a graded vector space, \(K[\mathcal{G}^{2}\mathcal{G}^{2}V]\) is the kernel of the following symmetrisation
\[\mathcal{G}^{2}\mathcal{G}^{2}V\rightarrow\mathcal{G}^{4}V,\quad\big{(}(x \cdot y)\cdot(z\cdot w)\big{)}\mapsto(x\cdot y\cdot z\cdot w).\]
Thus, \((x\cdot y)(z\cdot w)-(-1)^{|y||z|}(x\cdot z)(y\cdot w)\in K[\mathcal{G}^{2} \mathcal{G}^{2}V]\).
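As a quick check of this example (added here), its image under full graded symmetrisation is
\[(x\cdot y\cdot z\cdot w)-(-1)^{|y||z|}(x\cdot z\cdot y\cdot w)=0,\]
since swapping the middle factors \(z\) and \(y\) in \((x\cdot z\cdot y\cdot w)\) produces exactly the sign \((-1)^{|y||z|}\).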
Now suppose \(\mathcal{A}\) is a CDGA (commutative differential graded algebra). Let
\[c:\mathcal{G}^{2}H^{*}(\mathcal{A})\to H^{*}(\mathcal{A}),\quad(x \cdot y)\mapsto xy\]
denote the product map, and \(E^{*}(\mathcal{A})=\ker c\), which we will simply write as \(E^{*}\). Then we set
\[\mathcal{B}^{*}(H^{*}(\mathcal{A}))=\mathcal{G}^{2}E^{*}\cap K[\mathcal{G}^{2 }\mathcal{G}^{2}H^{*}(\mathcal{A})].\]
For simplicity, we will use \(\mathcal{B}^{*}(\mathcal{A})\) to denote \(\mathcal{B}^{*}(H^{*}(\mathcal{A}))\).
Let \(\mathcal{Z}^{*}\subset\mathcal{A}^{*}\) be the subspace of \(d\)-closed elements. Pick a right inverse \(\alpha:H^{*}(\mathcal{A})\rightarrow\mathcal{Z}^{*}\) for the projection to cohomology. Then the map
\[\alpha^{2}:\mathcal{G}^{2}H^{*}(\mathcal{A})\rightarrow\mathcal{A}^{*},\quad( x\cdot y)\mapsto\alpha(x)\alpha(y)\]
takes exact values on \(E^{*}\). So there exists a map \(\gamma:E^{*}\rightarrow\mathcal{A}^{*-1}\) satisfying \(d\gamma=\alpha^{2}\).
**Definition 2.2**.: The map
\[\mathcal{G}^{2}E^{*}\rightarrow\mathcal{A}^{*-1},\quad(e\cdot e^{\prime}) \mapsto\gamma(e)\alpha^{2}(e^{\prime})+(-1)^{|e||e^{\prime}|}\gamma(e^{\prime })\alpha^{2}(e)\]
takes closed values on \(\mathcal{B}^{*}(\mathcal{A})\). So it induces a map
\[\mathcal{F}:\mathcal{B}^{*}(\mathcal{A})\to H^{*-1}(\mathcal{A}),\]
which is called the **Bianchi-Massey tensor**. This map is independent of the choice of \(\alpha\) and \(\gamma\).
Another obstruction to formality is the uniform Massey triple product. It is defined as follows.
**Definition 2.3**.: Given a choice of \(\alpha\) and \(\gamma\) as before, the map
\[\gamma\alpha:E^{*}\otimes H^{*}(\mathcal{A})\rightarrow\mathcal{A}^{*-1},\quad e \otimes x\mapsto\gamma(e)\alpha(x)\]
takes closed values on \(K[E^{*}\otimes H^{*}(\mathcal{A})]\), which is the kernel of the full graded symmetrisation \(E^{*}\otimes H^{*}(\mathcal{A})\rightarrow\mathcal{G}^{3}H^{*}(\mathcal{A})\). So it induces a map
\[\mathcal{T}:K[E^{*}\otimes H^{*}(\mathcal{A})]\to H^{*-1}(\mathcal{A}),\]
which is called the **uniform massey triple product**.
Unlike the Bianchi-Massey tensor, the uniform Massey triple product does depend on the choice of \(\alpha\) and \(\gamma\).
### \(A_{\infty}\)-algebra
**Definition 2.4**.: Let \(\mathbb{K}\) be a field. An \(A_{\infty}\)**-algebra over \(\mathbb{K}\)** is a \(\mathbb{Z}\)-graded vector space \(\mathcal{A}=\bigoplus_{i\in\mathbb{Z}}\mathcal{A}^{i}\) endowed with graded \(\mathbb{K}\)-linear maps
\[m_{p}:\mathcal{A}^{\otimes p}\rightarrow\mathcal{A},\ p\geq 1\]
of degree \(2-p\) satisfying
\[\sum_{r+s+t=p}(-1)^{r+st}m_{r+t+1}(\mathbf{1}^{\otimes r}\otimes m_{s}\otimes \mathbf{1}^{\otimes t})=0. \tag{2.1}\]
Specially, when \(p=1\), we have
\[m_{1}m_{1}=0;\]
When \(p=2\), we have
\[m_{1}m_{2}=m_{2}(m_{1}\otimes\mathbf{1}+\mathbf{1}\otimes m_{1});\]
If \(m_{3}=0\), \(m_{2}\) is associative. Every CDGA is an \(A_{\infty}\)-algebra, where \(m_{1}\) is the differential \(d\), \(m_{2}\) is the multiplication, and \(m_{p}=0\) for all \(p\geq 3\).
**Definition 2.5**.: A **morphism of \(A_{\infty}\)-algebra**\(f:(\mathcal{A},m^{\mathcal{A}})\rightarrow(\mathcal{B},m^{\mathcal{B}})\) is a family of graded linear maps \(f_{p}:\mathcal{A}^{\otimes p}\rightarrow\mathcal{B}\) of degree \(1-p\) such that
\[\sum(-1)^{r+st}f_{r+t+1}(\mathbf{1}^{\otimes r}\otimes m_{s}^{\mathcal{A}}\otimes\mathbf{1}^{\otimes t})=\sum(-1)^{s}m_{r}^{\mathcal{B}}(f_{i_{1}}\otimes f_{i_{2}}\otimes\cdots\otimes f_{i_{r}}). \tag{2.2}\]
where the left hand side sum runs over all decompositions \(p=r+s+t\), and the right hand side sum runs over all \(1\leq r\leq p\) and all decompositions \(p=i_{1}+i_{2}+\cdots+i_{r}\). The sign on the right side is given by
\[s=(r-1)(i_{1}-1)+(r-2)(i_{2}-1)+\cdots+2(i_{r-2}-1)+(i_{r-1}-1).\]
Specially, when \(p=1\), we have
\[m_{1}f_{1}=f_{1}m_{1}.\]
\(f_{1}\) also induces a morphism \(f_{1}^{*}:H^{*}(\mathcal{A})\to H^{*}(\mathcal{B})\). The morphism \(f\) is called a **quasi-isomorphism** if \(f_{1}^{*}\) is an isomorphism.
Kadeishvili [8] proved that for every \(A_{\infty}\)-algebra \(\mathcal{A}\) there is a quasi-isomorphism from \(H^{*}(\mathcal{A})\), equipped with some \(A_{\infty}\)-algebra structure, to \(\mathcal{A}\). The \(m_{1}\) operation on \(H^{*}(\mathcal{A})\) is \(0\) and the \(m_{2}\) operation is induced from \(m_{2}\) on \(\mathcal{A}\). If this \(A_{\infty}\)-structure on \(H^{*}(\mathcal{A})\) can be chosen with \(m_{p}=0\) unless \(p=2\), then we say \(\mathcal{A}\) is **formal**.
A CDGA is formal as a CDGA if and only if it is formal as an \(A_{\infty}\)-algebra [14]; see also [9].
### Sullivan minimal model
**Definition 2.6**.: A **Sullivan minimal algebra** is a CDGA \(\mathcal{M}=\Lambda V^{*}\) which is free as a graded algebra. \(V^{*}\) has a homogeneous basis \(\{v_{\alpha}\}\) indexed by a well-ordered set, such that
1. \(dv_{\alpha}\in\Lambda V^{*}_{\beta<\alpha}\), where \(V^{*}_{\beta<\alpha}\) is spanned by \(v_{\beta}\) with \(\beta<\alpha\).
2. if \(|v_{\beta}|<|v_{\alpha}|\) then \(\beta<\alpha\).
This definition implies that all \(dv_{\alpha}\) are **reducible**, i.e. there exists \(x_{i},y_{i}\in\mathcal{M}^{+}\) such that \(dv_{\alpha}=\sum x_{i}y_{i}\). In this paper we mainly consider the case that \(\mathcal{M}\) is connected, i.e. \(H^{0}(\mathcal{M})=\mathbb{K}\). Then all \(|v_{\alpha}|\geq 1\) and \(\mathcal{M}^{+}\) is just the subspace of elements with degree at least 1. More generally, \(\mathcal{M}^{+}\) is the kernel of augmentation \(\epsilon:\mathcal{M}\to\mathbb{K}\) sending all \(v_{\alpha}\) to 0.
**Definition 2.7**.: If a CDGA \(\mathcal{A}\) is quasi-isomorphic to a Sullivan minimal algebra \(\mathcal{M}\), we call \(\mathcal{M}\) a **Sullivan minimal model** of \(\mathcal{A}\). Suppose \(M\) is a manifold, say \(\mathcal{M}\) is a Sullivan minimal model of \(M\) if it is a Sullivan minimal model of \(\Omega^{*}(M)\).
Every simply-connected manifold has a Sullivan minimal model generated by a finite dimensional vector space \(V^{*}\), and the degree of all elements in \(V^{*}\) is at least 2 [5]. More generally, every connected CDGA has a Sullivan minimal model generated by some \(V^{*}\), where the degree of all elements in \(V^{*}\) is at least 1 [7]. The Sullivan minimal model is unique up to isomorphism.
A Sullivan minimal algebra \(\mathcal{M}\) is formal if and only if the following holds: \(\mathcal{M}\) is generated by \(V^{*}=C^{*}\oplus N^{*}\), where \(C^{*}\) is the subspace of closed elements in \(V^{*}\) and \(N^{*}\) is a direct complement of \(C^{*}\), and all the closed elements in the ideal \(\mathbf{I}(N^{*})\) generated by \(N^{*}\) are exact [5].
## 3 Formality of Sphere Bundles over Formal Spaces
### The Bianchi-Massey tensor determines formality
Let \(\omega\in\mathcal{A}^{k}\) be a closed element. Suppose \(\theta\notin\mathcal{A}\) has degree \(k-1\) and satisfies \(\theta^{2}=\theta\theta=0\) (this holds trivially when \(k-1\) is odd). We can extend \(\mathcal{A}\) by \(\theta\):
\[\mathcal{A}_{\theta}=\mathcal{A}\otimes\Lambda\theta=\{x+\theta y\mid x,y\in \mathcal{A}\}\]
with \(d\theta=\omega\). Here \(\Lambda\theta=\langle 1,\theta\rangle\) is an exterior algebra generated by \(\theta\).
\(\mathcal{A}_{\theta}\) is also a CDGA. When \(\mathcal{A}\) is formal, there exists a zigzag of quasi-isomorphisms connecting \(\mathcal{A}\) and \(H^{*}(\mathcal{A})\). These quasi-isomorphisms extend naturally to quasi-isomorphisms connecting \(\mathcal{A}_{\theta}\) and \(H^{*}(\mathcal{A})\otimes\Lambda\theta\). For the latter CDGA we set \(d\theta=[\omega]\), the cohomology class of \(\omega\). Thus, the formality of \(\mathcal{A}_{\theta}\) is determined by \(H^{*}(\mathcal{A})\otimes\Lambda\theta\) as long as \(\mathcal{A}\) is formal.
Therefore, it suffices to consider a CDGA \(\mathcal{A}\) with trivial differential \(d=0\). In this case, the space of exact elements in \(\mathcal{A}_{\theta}\) is \(\operatorname{im}\omega\), the image of the map of left multiplication by \(\omega\)
\[\omega:\mathcal{A}^{i}\to\mathcal{A}^{i+k},\quad x\mapsto\omega x.\]
The space of closed forms in \(\mathcal{A}_{\theta}\) is \(\mathcal{A}\oplus\theta\ker\omega\). Thus,
\[H^{*}(\mathcal{A}_{\theta})\simeq\operatorname{coker}\omega\oplus\theta\ker\omega.\]
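For illustration (an example added here, not part of the original argument), take \(\mathcal{A}=\Lambda(a,b)\) with \(|a|=|b|=1\), trivial differential, and \(\omega=ab\), i.e. the cohomology of the torus \(T^{2}\) with \(\omega\) the volume class. Multiplication by \(\omega\) sends \(1\) to \(ab\) and annihilates \(a\), \(b\) and \(ab\), so
\[\operatorname{coker}\omega=\langle 1,a,b\rangle,\qquad\ker\omega=\langle a,b,ab\rangle,\qquad H^{*}(\mathcal{A}_{\theta})\simeq\langle 1\rangle\oplus\langle a,b\rangle\oplus\langle\theta a,\theta b\rangle\oplus\langle\theta ab\rangle,\]
recovering the Betti numbers \(1,2,2,1\) of the Heisenberg nilmanifold, the circle bundle over \(T^{2}\) with Euler class \([ab]\).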
**Definition 3.1**.: Let \(H^{*}\) be a finite dimensional graded commutative algebra over \(\mathbb{K}\). If there exists some \(\alpha_{H}\in(H^{n})^{\vee}\), the dual space of \(H^{n}\), such that the linear map
\[\alpha_{H}\frown:H^{i}\to(H^{n-i})^{\vee},\quad x\mapsto\big{(}y\mapsto\alpha _{H}(xy)\big{)}\]
is an isomorphism for all \(i\), then we say that \(H\) is \(n\)**-dimensional Poincare** and \(\alpha_{H}\) is a **Poincare class**.
A CDGA is called \(n\)**-dimensional Poincare** if its cohomology is.
For a Poincare CDGA, the Bianchi-Massey tensor \(\mathcal{F}\) and the uniform Massey product \(\mathcal{T}\) are equivalent [4, Lemma 2.8]. In particular, if \(\mathcal{F}\) vanishes, we can choose \(\alpha\) and \(\gamma\) such that \(\mathcal{T}=0\). Based on their proof we will show that, when \(\mathcal{A}_{\theta}\) is a Poincare CDGA, there exists such a choice also satisfying \(\operatorname{im}\gamma\subset\mathbf{I}(\theta)\). Here \(\mathbf{I}(\theta)\) is the ideal generated by \(\theta\).
**Lemma 3.2**.: _Let \(\mathcal{A}\) be a CDGA with trivial differential. Suppose \(\mathcal{A}_{\theta}\) is \(n\)-dimensional Poincare and the Bianchi-Massey tensor \(\mathcal{F}:\mathcal{B}^{n+1}(\mathcal{A}_{\theta})\to H^{n}(\mathcal{A}_{\theta})\) vanishes. There is a choice of \(\alpha\) and \(\gamma\) such that the uniform Massey product_
\[\mathcal{T}:K[E^{*}\otimes H^{*}(\mathcal{A}_{\theta})]\to H^{*-1}(\mathcal{A} _{\theta})\]
_vanishes and \(\operatorname{im}\gamma\subset\mathbf{I}(\theta)\)._
Proof.: Pick a right inverse of the left multiplication map \(\omega:{\mathcal{A}}\to\operatorname{im}\omega\) and extend it to an endomorphism of \({\mathcal{A}}\). We use \(\omega^{-1}\) to denote this map.
Choose an \(\alpha:H^{*}({\mathcal{A}}_{\theta})\to{\mathcal{A}}_{\theta}^{*}\) such that \(\alpha(\theta\ker\omega)\subset{\mathbf{I}}(\theta)\). We will use \(E^{*}\) to denote \(E^{*}({\mathcal{A}}_{\theta})\) in this section. As \(\alpha^{2}(E^{*})\) consists of exact forms, it is contained in \(\operatorname{im}\omega\subset{\mathcal{A}}\). So we can set
\[\gamma=\theta\circ\omega^{-1}\circ\alpha^{2}:E^{*}\to{\mathcal{A}}_{\theta}^{* -1}.\]
Then \(\operatorname{im}\gamma\subset{\mathbf{I}}(\theta)\). It follows that \(\operatorname{im}{\mathcal{T}}\subset\theta\ker\omega\). In the extreme case that \(\ker\omega=0\), we already have \({\mathcal{T}}=0\). Note that in this case \(H^{*}({\mathcal{A}})\) is infinite dimensional, unless \({\mathcal{A}}=0\).
In general cases, \(\theta\ker\omega\) is non-trivial. Then it must include the dual of the Poincare class \(\alpha_{H}\), because for non-trivial \(\theta x\in\theta\ker\omega\) there exists \(y\in H^{*}({\mathcal{A}}_{\theta})\) such that
\[\alpha_{H}(\theta xy)=(\alpha_{H}\frown\theta x)y\neq 0.\]
To make \({\mathcal{T}}=0\), we will change \(\gamma\) to \(\gamma^{\prime}=\gamma+\eta\) for some \(\eta:E^{*}\to{\mathcal{Z}}^{*-1}\) satisfying \(\operatorname{im}\eta\in{\mathbf{I}}(\theta)\).
Consider the map
\[\gamma\alpha^{2}:E^{*}\otimes H^{*}({\mathcal{A}}_{\theta})\otimes H^{*}({ \mathcal{A}}_{\theta})\to{\mathcal{A}}_{\theta}^{*-1},\quad e\otimes x\otimes y \mapsto\gamma(e)\alpha(x)\alpha(y).\]
It takes closed value on \(K[E^{*}\otimes H^{*}({\mathcal{A}}_{\theta})\otimes H^{*}({\mathcal{A}}_{ \theta})]\), and factors through the projection \(E^{*}\otimes H^{*}({\mathcal{A}}_{\theta})\otimes H^{*}({\mathcal{A}}_{ \theta})\to E^{*}\otimes{\mathcal{G}}^{2}H^{*}({\mathcal{A}}_{\theta})\). Hence, \(\gamma\alpha^{2}\) induces another map
\[\mu:K[E^{*}\otimes{\mathcal{G}}^{2}H^{*}({\mathcal{A}}_{\theta})]\to H^{*-1}({ \mathcal{A}}_{\theta}).\]
\(\mu\) depends on the choice of \(\gamma\). Our goal is to find some \(\gamma^{\prime}\) such that the corresponding \(\mu^{\prime}=0\) when acting on the degree \(n+1\) part of \(K[E^{*}\otimes{\mathcal{G}}^{2}H^{*}({\mathcal{A}}_{\theta})]\).
First consider \(\mu\) acting on \(K[E^{*}\otimes E^{*}]\). Note that here \(K[E^{*}\otimes E^{*}]\) means the kernel of full graded symmetrisation \(E^{*}\otimes E^{*}\to{\mathcal{G}}^{4}H^{*}({\mathcal{A}}_{\theta})\). For arbitrary \(e,e^{\prime}\in E^{*}\),
\[\gamma(e)\alpha^{2}(e^{\prime})-(-1)^{|e||e^{\prime}|}\gamma(e^{ \prime})\alpha^{2}(e) =\gamma(e)d\gamma(e^{\prime})-(-1)^{|e||e^{\prime}|+(|e^{\prime}| -1)|e|}d\gamma(e)\gamma(e^{\prime})\] \[=(-1)^{|e|-1}d\big{(}\gamma(e)\gamma(e^{\prime})\big{)}.\]
So \(\mu\) vanishes on graded anti-symmetric tensors, and it factors through the projection \(K[E^{*}\otimes E^{*}]\to{\mathcal{B}}^{*}({\mathcal{A}}_{\theta})\). Moreover, the induced map \({\mathcal{B}}^{*}({\mathcal{A}}_{\theta})\to H^{*-1}({\mathcal{A}}_{\theta})\) is exactly the Bianchi-Massey tensor, which is \(0\) when acting on \({\mathcal{B}}^{n+1}({\mathcal{A}}_{\theta})\) by assumption. Therefore, \(\mu\equiv 0\) on degree \(n+1\). As \(\eta(e)\alpha^{2}(e^{\prime})\) is exact for any \(e,e^{\prime}\in E^{*}\), \(\mu^{\prime}=0\) on the degree \(n+1\) part of \(K[E^{*}\otimes E^{*}]\) for any choice of \(\gamma^{\prime}\).
Let \(D^{*}\) be a direct complement of \(E^{*}\) in \({\mathcal{G}}^{2}H^{*}({\mathcal{A}}_{\theta})\). Restrict the corresponding projection \(E^{*}\otimes{\mathcal{G}}^{2}H^{*}({\mathcal{A}}_{\theta})\to E^{*}\otimes D^{*}\) on \(K[E^{*}\otimes{\mathcal{G}}^{2}H^{*}({\mathcal{A}}_{\theta})]\), and denote it by \(p\). Since \(\ker p\) is exactly \(K[E^{*}\otimes E^{*}]\), \(\mu\) induces a morphism \(\bar{\mu}:\operatorname{im}p\to H^{*}({\mathcal{A}}_{\theta})\) of degree \(-1\).
We extend \(\bar{\mu}\) to \(E^{*}\otimes D^{*}\) by making it vanish whenever every term has a tensor factor in \(\theta\ker\omega\). This is consistent with the definition on \(\operatorname{im}p\): For arbitrary \(e\in E^{*}\) and \(x,y\in H^{*}({\mathcal{A}}_{\theta})\), if \(e\) has a tensor factor in \(\theta\ker\omega\), then \(\alpha^{2}(e)\in{\bf I}(\theta)\) by our choice of \(\alpha\). But the only exact element in \({\bf I}(\theta)\) is \(0\), so \(\gamma(e)=0\). If \(x\) or \(y\) is in \(\theta\ker\omega\), then both \(\gamma(e)\) and \(\alpha(x)\alpha(y)\) are in \({\bf I}(\theta)\). In either case, \(\gamma(e)\alpha(x)\alpha(y)=0\). So the induced \(\mu\) and \(\bar{\mu}\) also vanish on such elements.
Since \(E^{*}\) is the kernel of the multiplication map, the morphism
\[D^{*}\to H^{*}({\cal A}_{\theta}),\quad a\mapsto[\alpha^{2}(a)]\]
is an isomorphism. Suppose that \(\alpha_{H}\) is the Poincare class of \(H^{*}({\cal A}_{\theta})\). For each \(e\in E^{i}\), there is a well-defined morphism
\[H^{n+1-i}({\cal A}_{\theta})\to{\mathbb{K}},\quad[\alpha^{2}(a)]\mapsto- \alpha_{H}\circ\bar{\mu}(e\otimes a)\]
where \(a\in D^{n+1-i}\). So we can set \(\eta(e)\) such that \(\alpha_{H}\frown[\eta(e)]\) is this morphism. Therefore, on \(\operatorname{im}p\) we have
\[\mu^{\prime}\left(\sum e_{j}\otimes a_{j}\right)=\bar{\mu}\left(\sum e_{j} \otimes a_{j}\right)+\sum[\eta(e_{j})\alpha^{2}(a_{j})]=0.\]
It remains to verify that \(\operatorname{im}\eta\in{\bf I}(\theta)\). Suppose that \([\eta(e)]\notin\theta\ker\omega\) for some \(e\in E^{*}\). By the discussion above, the dual of \(\alpha_{H}\) is in \(\theta\ker\omega\). Thus, there exists some \(\theta x\in\theta\ker\omega\) such that \([\eta(e)]\theta x\) is equal to this dual. On the other hand, there exists some \(a\in D^{*}\) such that \([\alpha^{2}(a)]=\theta x\). Then every term of \(a\) must have a tensor factor in \(\theta\ker\omega\). This implies that \([\eta(e)]\theta x=-\bar{\mu}(e\otimes a)=0\), which is a contradiction. Therefore, \([\eta(e)]\in\theta\ker\omega\) and we can choose a representative in \({\bf I}(\theta)\).
Finally, we prove that the uniform Massey product \({\cal T}^{\prime}\) defined by \(\gamma^{\prime}\) is identically \(0\). This follows from \(K[E\otimes H^{*}({\cal A}_{\theta})]\otimes H^{*}({\cal A}_{\theta})\subset K [E\otimes H^{*}({\cal A}_{\theta})\otimes H^{*}({\cal A}_{\theta})]\) and
\[{\cal T}^{\prime}\left(\sum e_{j}\otimes x_{j}\right)y=\left[\sum\gamma(e_{j} )\alpha(x_{j})\alpha(y)\right]=\mu^{\prime}\left(\sum e_{j}\otimes(x_{j}\cdot y )\right)=0\]
for \(e_{j}\in E^{*}\), \(x_{j},y\in H^{*}({\cal A}_{\theta})\) and \(|e_{j}|+|x_{j}|+|y|=n+1\). Thus, \(\alpha_{H}\frown{\cal T}^{\prime}\left(\sum e_{j}\otimes x_{j}\right)=0\) and then \({\cal T}^{\prime}\left(\sum e_{j}\otimes x_{j}\right)=0\).
**Lemma 3.3**.: _Let \({\cal A}\) be a CDGA with trivial differential. Suppose that there is a choice of \(\alpha:H^{*}({\cal A}_{\theta})\to{\cal A}_{\theta}^{*}\) and \(\gamma:E^{*}\to{\cal A}_{\theta}^{*-1}\) such that \(\operatorname{im}\gamma\subset{\bf I}(\theta)\), and \({\cal T}\) defined by \(\gamma\) vanishes, then \({\cal A}_{\theta}\) is formal._
Proof.: We will construct an \(A_{\infty}\)-quasi-isomorphism \(f\) from \(H^{*}({\cal A}_{\theta})\) to \({\cal A}_{\theta}\).
Set \(f_{1}=\alpha\), \(f_{2}(x,y)=\gamma\big{(}(1\cdot xy-x\cdot y)\big{)}\), and \(f_{p}=0\) for all \(p\geq 3\). We will show that (2.2) holds under this definition.
When \(p=1\), the equation \(f_{1}m_{1}=m_{1}f_{1}\) is just \(0=d\alpha\). Also, \(f_{1}=\alpha\) induces isomorphism on cohomologies.
For \(p=2\), (2.2) becomes
\[f_{1}m_{2}=m_{2}(f_{1}\otimes f_{1})+m_{1}f_{2}.\]
This equation follows from our definition of \(f_{2}\).
For \(p=3\), we need to verify that
\[f_{2}(m_{2}\otimes\mathbf{1}-\mathbf{1}\otimes m_{2})=m_{2}(f_{1}\otimes f_{2}-f_ {2}\otimes f_{1}).\]
When acting on \(x\otimes y\otimes z\), the left hand side becomes
\[\gamma(1\cdot xyz-xy\cdot z-1\cdot xyz+x\cdot yz)=\gamma\alpha\big{(}-(xy \cdot z)\otimes 1+(x\cdot yz)\otimes 1\big{)},\]
and the right hand side becomes
\[(-1)^{|x|}\alpha(x)\gamma(1\cdot yz-y\cdot z)-\gamma(1\cdot xy-x\cdot y)\alpha (z)=\gamma\alpha\big{(}(-1)^{|x|+|x|(|y|+|z|-1)}(1\cdot yz-y\cdot z)\otimes x -(1\cdot xy-x\cdot y)\otimes z\big{)}.\]
Denote their difference as \(\gamma\alpha(\kappa)\), where
\[\kappa=-(xy\cdot z)\otimes 1+(x\cdot yz)\otimes 1-(-1)^{|x|(|y|+|z|)}(1\cdot yz- y\cdot z)\otimes x-(1\cdot xy-x\cdot y)\otimes z.\]
Observe that \(\kappa\in K[E^{*}\otimes H^{*}(\mathcal{A}_{\theta})]\) as its full graded symmetrisation is
\[-(xy\cdot z\cdot 1)+(x\cdot yz\cdot 1)-(-1)^{|x|(|y|+|z|)}(1\cdot yz\cdot x)+ (-1)^{|x|(|y|+|z|)}(y\cdot z\cdot x)+(1\cdot xy\cdot z)-(x\cdot y\cdot z)=0.\]
Thus, \([\gamma\alpha(\kappa)]=\mathcal{T}(\kappa)=0\). So \(\gamma\alpha(\kappa)\) is exact. On the other hand, \(\operatorname{im}\gamma\in\mathbf{I}(\theta)\) implies that \(\gamma\alpha(\kappa)\in\mathbf{I}(\theta)\). Therefore, \(\gamma\alpha(\kappa)=0\) as \(0\) is the only exact element in \(\mathbf{I}(\theta)\).
For \(p=4\), (2.2) becomes
\[0=-m_{2}(f_{2}\otimes f_{2}).\]
By our construction \(\operatorname{im}f_{2}\subset\operatorname{im}\gamma\subset\mathbf{I}(\theta)\), so the product of its elements are \(0\).
For \(p\geq 5\), (2.2) trivially holds since every term contains either \(m_{k}\) or \(f_{k}\) with \(k\geq 3\), which is \(0\) by our definition.
Combine the two lemmas above together, we have the following theorem.
**Theorem 3.4**.: _Let \(\mathcal{A}\) be a formal CDGA, and \(\mathcal{A}_{\theta}=\mathcal{A}\otimes\Lambda\theta\) is a Poincare algebra where \(\theta^{2}=0\). Then \(\mathcal{A}_{\theta}\) is formal if and only if its Bianchi-Massey tensor vanishes._
Let \(\pi:X\to M\) be an orientable \(S^{k}\)-bundle. We have the following CDGA equivalence.
\[\Omega^{*}(X)\simeq\begin{cases}\Omega^{*}(M)\otimes\Lambda(\theta),&d\theta=e,&k\text{ is odd},\\ \Omega^{*}(M)\otimes\Lambda(\theta,\theta^{\prime}),&d\theta=0,d\theta^{\prime }=\theta^{2}+\frac{1}{4}p,&k\text{ is even}.\end{cases}\]
Here \(|\theta|=k\), and \(|\theta^{\prime}|=2k-1\). \([e]\in H^{k+1}(M)\) is the Euler class, and \([p]\in H^{2k}(M)\) is the \(2k\)-th Pontryagin class of the sphere bundle.
When \(M\) is simply connected, the proof can be found in [6, Example 4, Page 202]. For general manifolds, the case that \(k\) is odd is proved in [13, Appendix], and a similar proof works when \(k\) is even.
Moreover, as long as \(\pi_{1}(M)\) acts nilpotently on \(H^{*}(S^{k})\), this equivalence still holds when \(X\) is a spherical fibration [7, Theorem 20.3].
Therefore, when \(M\) is compact and formal, \(\Omega^{*}(X)\) is a Poincare algebra. Also, it is equivalent to either \(H^{*}(M)\otimes\Lambda(\theta)\) with \(d\theta=[e]\), or \(H^{*}(M)\otimes\Lambda(\theta,\theta^{\prime})\) with \(d\theta=0,d\theta^{\prime}=\theta^{2}+\frac{1}{4}[p]\), whose formality is determined by the Bianchi-Massey tensor.
Moreover, if \(k\) is even, the kernel of multiplication by \(\theta^{2}+\frac{1}{4}[p]\) in \(H^{*}(M)\otimes\Lambda(\theta)\) is \(0\). By the discussion in the proof of Lemma 3.2, \(H^{*}(M)\otimes\Lambda(\theta,\theta^{\prime})\) then has a vanishing uniform Massey product \(\mathcal{T}\), so it is formal by Lemma 3.3. Thus, we have the following statement.
**Theorem 3.5**.: _Suppose \(M\) is a compact formal manifold, and \(\pi:X\to M\) is an orientable \(S^{k}\)-bundle. Then \(X\) is formal if and only if the Bianchi-Massey tensor of \(\Omega^{*}(X)\) vanishes. Moreover, when \(k\) is even, \(X\) is always formal._
### A special case: unit tangent bundle
In this subsection, we consider a special type of sphere bundle. Let \(M\) be a compact formal manifold. Equip the tangent bundle \(TM\) with a metric; the vectors of norm \(1\) then form a sphere bundle \(UTM\), whose Euler class is \(\chi(M)[\omega]\). Here \(\chi(M)\) is the Euler characteristic and \(\omega\) is a volume form. \(UTM\) is called the unit tangent bundle of \(M\). We will explore when it is formal.
By the discussion in the previous subsection, we know that the sphere bundle is formal if the Euler class is \(0\). So the non-trivial case only occurs for even-dimensional manifolds.
**Lemma 3.6**.: _Suppose \(\mathcal{A}\) is a \(2n\)-dimensional Poincare CDGA with trivial differential, and \(\omega\in\mathcal{A}^{2n}\) be the dual of Poincare class. We also assume that \(\mathcal{A}^{i}=0\) for \(i<0\). If \(\mathcal{A}_{\theta}=\mathcal{A}\otimes\Lambda\theta\) with \(d\theta=\omega\) is formal, then \(\mathcal{A}^{2i+1}=0\) for all \(i\in\mathbb{Z}\)._
Proof.: Assume that there exists some nonzero \(x\in\mathcal{A}^{2i+1}\). Then we can find an \(x^{*}\in\mathcal{A}^{2n-2i-1}\) such that \(xx^{*}=\omega\). Also, \(0<2i+1,2n-2i-1<2n\). So we can set \(\alpha:H^{*}(\mathcal{A}_{\theta})\to\mathcal{A}_{\theta}^{*}\) such that
\[\alpha([x])=x,\quad\alpha([x^{*}])=x^{*},\]
and \(\gamma:E^{*}\to\mathcal{A}_{\theta}^{*-1}\) such that
\[\gamma([x]\cdot[x^{*}])=\theta.\]
As the degrees of \(x\) and \(x^{*}\) are odd, in \(\mathcal{G}^{4}H^{*}(\mathcal{A}_{\theta})\) we have
\[([x]\cdot[x^{*}]\cdot[x]\cdot[x^{*}])=-([x]\cdot[x^{*}]\cdot[x]\cdot[x^{*}])\]
by switching the two \([x^{*}]\). Thus, \(\big{(}([x]\cdot[x^{*}])\cdot([x]\cdot[x^{*}])\big{)}\in K[\mathcal{G}^{2} \mathcal{G}^{2}H^{*}(\mathcal{A}_{\theta})]\) and it can be acted by the Bianchi-Massey tensor \(\mathcal{F}\).
\[\mathcal{F}\big{(}([x]\cdot[x^{*}])\cdot([x]\cdot[x^{*}])\big{)}=[2\gamma([x] \cdot[x^{*}])\alpha([x])\alpha([x^{*}])]=2[\theta\omega],\]
which is nonzero in \(H^{*}(\mathcal{A}_{\theta})\). Therefore, \(\mathcal{A}_{\theta}\) is non-formal.
**Lemma 3.7**.: _Suppose \(\mathcal{A}\) is a \(2n\)-dimensional Poincare CDGA with trivial differential, and \(\omega\in\mathcal{A}^{2n}\) be the dual of Poincare class. We also assume that \(\mathcal{A}^{i}=0\) if either \(i<0\) or \(i\) is odd, and \(\mathcal{A}^{0}=\mathbb{K}\). If \(\mathcal{A}_{\theta}\) is formal, then the multiplication map \(\mathcal{A}^{i}\otimes\mathcal{A}^{j}\to\mathcal{A}^{i+j}\) is injective for all \(i,j\leq n\)._
Proof.: If \(i\) or \(j=0\), the multiplication map is an isomorphism. Assume that the multiplication map has a non-trivial element \(\sum_{i=1}^{k}x_{i}\otimes y_{i}\in\mathcal{A}^{i}\otimes\mathcal{A}^{j}\) in its kernel, where \(0<i,j\leq n\). The \(\{x_{i}\}\) can be chosen linearly independent with \(k\geq 1\). A similar procedure makes \(\{y_{i}\}\) linearly independent: If \(y_{i}=c_{1}y_{1}+\ldots+c_{i-1}y_{i-1}\), then \(x_{1}\otimes y_{1}+\ldots+x_{i}\otimes y_{i}\) can be rewritten as \((x_{1}+c_{1}x_{i})\otimes y_{1}+\ldots+(x_{i-1}+c_{i-1}x_{i})\otimes y_{i-1}\), and \(\{x_{1}+c_{1}x_{i},\ldots,x_{i-1}+c_{i-1}x_{i}\}\) is also linearly independent.
Take \(x_{1}^{*}\in\mathcal{A}^{2n-i},y_{1}^{*}\in\mathcal{A}^{2n-j}\) such that \(x_{i}x_{1}^{*}=y_{i}y_{1}^{*}=\delta_{1i}\omega\). Then \(n\leq 2n-i,2n-j<2n\). Hence, \(x_{1}^{*},y_{1}^{*}\) are non-exact in \(\mathcal{A}_{\theta}\), but the products \(x_{i}x_{1}^{*},y_{i}y_{1}^{*}\) are either \(0\) or \(\omega\), which are both exact in \(\mathcal{A}_{\theta}\). So we can set \(\alpha:H^{*}(\mathcal{A}_{\theta})\to\mathcal{A}_{\theta}\) such that
\[\alpha([x_{i}])=x_{i},\quad\alpha([y_{i}])=y_{i},\quad\alpha([x_{1}^{*}])=x_{1 }^{*},\quad\alpha([y_{1}^{*}])=y_{1}^{*},\]
and \(\gamma:E^{*}\to\mathcal{A}_{\theta}\) such that
\[\gamma\left(\sum_{i=1}^{k}[x_{i}]\cdot[y_{i}]\right)=0,\quad\gamma([x_{i}] \cdot[x_{1}^{*}])=\gamma([y_{i}]\cdot[y_{1}^{*}])=\delta_{1i}\theta.\]
Then \(\left((\sum_{i=1}^{k}[x_{i}]\cdot[y_{i}])\cdot([x_{1}^{*}]\cdot[y_{1}^{*}]) \right)-\sum_{i=1}^{k}\left(([x_{i}]\cdot[x_{1}^{*}])\cdot([y_{i}]\cdot[y_{1} ^{*}])\right)\) is in \(K[\mathcal{G}^{2}\mathcal{G}^{2}H^{*}(\mathcal{A}_{\theta})]\), and
\[\mathcal{F}\left(\left(\left(\sum_{i=1}^{k}[x_{i}]\cdot[y_{i}] \right)\cdot([x_{1}^{*}]\cdot[y_{1}^{*}])\right)-\sum_{i=1}^{k}\left(([x_{i}] \cdot[x_{1}^{*}])\cdot([y_{i}]\cdot[y_{1}^{*}])\right)\right)\] \[=-[\gamma([x_{1}]\cdot[x_{1}^{*}])y_{1}y_{1}^{*}]-[\gamma([y_{1}] \cdot[y_{1}^{*}])x_{1}x_{1}^{*}]\] \[=-2[\theta\omega].\]
Therefore, \(\mathcal{A}_{\theta}\) is non-formal.
**Lemma 3.8**.: _Suppose \(H^{*}\) is a \(2n\)-dimensional Poincare graded algebra, \(H^{0}=\mathbb{K}\), and \(H^{i}\) is nontrivial only when \(i\) is even and \(0\leq i\leq 2n\). If the multiplication map \(H^{i}\otimes H^{j}\to H^{i+j}\) is injective for all \(i,j\leq n\), then \(H^{*}=\mathbb{K}[x]/(x^{p})\) is a quotient of the polynomial ring with a single variable._
Proof.: First observe that the dimension of every \(H^{i}\) is at most \(1\). If \(x,y\in H^{i}\) are linearly independent for some \(0<i\leq n\), then \(x\otimes y-y\otimes x\) will be a non-trivial element in the kernel of the multiplication map \(H^{i}\otimes H^{i}\to H^{2i}\). As \(H^{*}\) is \(2n\)-dimensional Poincare, for \(n<i\leq 2n\) we have \(\dim H^{i}=\dim H^{2n-i}\leq 1\).
Let \(k\) be the smallest positive integer such that \(\dim H^{k}=1\), and \(x\) be a generator of \(H^{k}\). There exists some \(l\in\mathbb{Z}\) such that \(lk\leq n\) and \((l+1)k>n\), i.e. \(2lk\leq 2n<(2l+2)k\). Actually, \(l\) must be positive:
By \((l+1)k>n\) we have \(l\geq 0\). If \(l=0\) then \(k>n\). As \(H^{*}\) is Poincare, \(\dim H^{2n-k}=\dim H^{k}=1\). But \(2n-k<n<k\), which is a contradiction. By assumption we obtain that \(H^{ik}=\langle x^{i}\rangle\) for \(0\leq i\leq 2l\) inductively.
If \(2lk<2n<(2l+1)k\), then \(\dim H^{2n-2lk}=\dim H^{2lk}=1\) and \(2n-2lk<k\), which contradicts to that \(k\) is the smallest number such that \(\dim H^{k}=1\).
If \((2l+1)k<2n<(2l+2)k\), we also have \(\dim H^{2n-2lk}=1\). When \(l\geq 2\), \(2n-2lk<2k<n\). When \(l=1\), \(2k>n\) then \(2n-2lk<2n-n=n\). So in both cases the multiplication map \(H^{2n-2lk}\otimes H^{(l-1)k}\to H^{2n-(l+1)k}\) is injective. This implies \(\dim H^{2n-(l+1)k}=1\). As \(2n-(l+1)k<2n-n=n\), \(\dim H^{4n-(2l+2)k}\) is also \(1\). By the fact that \(H^{*}\) is Poincare, \(\dim H^{(2l+2)k-2n}=1\). However, \((2l+2)k-2n<k\), which is a contradiction.
When \(2n=(2l+1)k\), there exists some element in \(H^{2l}\) such that it multiplying \(x\) is the dual of Poincare class. This element must be proportional to \(x^{2l}\). Thus, \(H^{2n}=\langle x^{2l+1}\rangle\). Suppose that there is some other \(r\) not divisible by \(k\) such that \(\dim H^{r}=1\). As \(H^{*}\) is Poincare we can assume that \(r\leq n\). Then there exists some \(i\geq 0\) such that \(lk<r+ik<(l+1)k\). So \(ik<n\), and \(\dim H^{r+ik}=\dim H^{r}\dim H^{ik}=1\). At least one of \(r+ik\) and \(2n-(r+ik)\) is not greater than \(n\). Write this number as \(s\), then \(lk<s\leq n\) and \(\dim H^{s}=1\). It follows that \(\dim H^{2n-lk-s}=\dim H^{lk+s}=\dim H^{lk}\dim H^{s}=1\). But \(2n-lk-s<k\), which is a contradiction. Therefore, \(H^{k}=\mathbb{K}[x]/(x^{2l+2})\). A similar discussion shows that when \(2n=2lk\), \(H^{k}=\mathbb{K}[x]/(x^{2l+1})\).
Combining the lemmas above together, we have the following theorem.
**Theorem 3.9**.: _Suppose that \(\mathcal{A}\) is a \(2n\)-dimensional Poincare CDGA, which is formal and connected. Let \(\omega\in\mathcal{A}^{2n}\) such that \([\omega]\) is the dual of Poincare class. Set \(\mathcal{A}_{\theta}=\mathcal{A}\otimes\Lambda\theta\) with \(d\theta=\omega\). If \(\mathcal{A}_{\theta}\) is formal, then \(H^{*}(\mathcal{A})=\mathbb{K}[x]/(x^{p})\) is a quotient of the polynomial ring with a single variable._
**Corollary 3.10**.: _Let \(M\) be a compact orientable formal manifold. Its unit tangent bundle \(UTM\) is formal if and only if one of the following statement holds_
1. _The Euler characteristic_ \(\chi(M)=0\)_._
2. \(H^{*}(M)=\mathbb{K}[x]/(x^{p})\) _is a quotient of the polynomial ring with a single variable._
**Corollary 3.11**.: _Let \(M\) be a compact orientable formal manifold. If the Euler characteristic \(\chi(M)<0\), then \(UTM\) is non-formal._
**Example 3.12**.: As a simple example, the circle bundles over Riemann surfaces distinguish the different cases above.
\begin{tabular}{|c|c|c|} \hline genus & unit tangent bundle & Euler class is volume form \\ \hline
0 & formal & formal \\ \hline
1 & formal & non-formal \\ \hline \(\geq 2\) & non-formal & non-formal \\ \hline \end{tabular}
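To make the genus-one row explicit (a worked instance of Lemma 3.6, added here for concreteness), write \(H^{*}(T^{2})=\Lambda(a,b)\) and let \(X\) be the circle bundle with \(d\theta=ab\). Taking \(x=a\), \(x^{*}=b\), \(\alpha(a)=a\), \(\alpha(b)=b\) and \(\gamma([a]\cdot[b])=\theta\) in the proof of Lemma 3.6 gives
\[\mathcal{F}\big{(}([a]\cdot[b])\cdot([a]\cdot[b])\big{)}=2[\theta ab]\neq 0\quad\text{in }H^{3}(X),\]
so the total space (the Heisenberg nilmanifold) is non-formal, while \(UT(T^{2})=T^{3}\) is formal since \(\chi(T^{2})=0\) makes its Euler class vanish.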
## 4 An Obstruction of Formality for General Sphere Bundles
If the base manifold \(M\) is compact and formal, we have determined when the sphere bundle \(X\) is formal. One may ask about the case of a non-formal \(M\). We discuss it in this section and give an obstruction to the formality of \(X\).
Let \(\mathcal{M}\) be the Sullivan minimal model of a CDGA \(\mathcal{A}\). For a closed element \(\omega\in\mathcal{A}\), there exists a closed element in \(\mathcal{M}\) whose cohomology class corresponds to \([\omega]\). We will also write this element in \(\mathcal{M}\) as \(\omega\). Regardless of the choice of \(\omega\), we have the quasi-isomorphism
\[\mathcal{M}_{\theta}=\mathcal{M}\otimes\Lambda\theta\simeq\mathcal{A}\otimes \Lambda\theta=\mathcal{A}_{\theta},\]
where \(d\theta=\omega\). In this section we assume that \(|\omega|\) is even. So \(\theta^{2}\) is automatically \(0\).
The Sullivan minimal model of \(\mathcal{A}_{\theta}\) depends on whether \(\omega\in\mathcal{M}\) is reducible. Note that the choice of \(\omega\in\mathcal{M}\) does not affect its reducibility, since exact elements in \(\mathcal{M}\) are reducible by the definition of minimal Sullivan algebra in Section 2.3.
**Proposition 4.1**.: _When \(\omega\in\mathcal{M}\) is reducible, \(\mathcal{M}_{\theta}\) is a Sullivan minimal model of \(\mathcal{A}_{\theta}\)._
Proof.: Suppose \(\mathcal{M}=\Lambda V^{*}\) and \(V^{*}=\langle v_{\alpha}\rangle\). Write \(\|v_{\alpha}\|=\alpha\) for the index of \(v_{\alpha}\). Then set \(\|\theta\|>\alpha\) for all \(|v_{\alpha}|\leq|\theta|\), and \(\|\theta\|<\alpha\) for all \(|v_{\alpha}|>|\theta|\). So \(\{v_{\alpha}\}\cup\{\theta\}\) is a well-ordered set. It is straightforward to verify that \(\mathcal{M}_{\theta}=\Lambda(V^{*}\oplus\langle\theta\rangle)\) is a Sullivan minimal algebra.
**Theorem 4.2**.: _Suppose \(\omega\) is a closed element in a minimal Sullivan algebra \(\mathcal{M}\) satisfying the following conditions._
1. \(|\omega|=2r\) _for some odd integer_ \(r\)_, and_ \([\omega]\) _has a representative that can be written as_ \[\sum_{i=1}^{k}x_{i}y_{i},\]
_where \(x_{i},y_{i}\) are all closed in \(\mathcal{M}^{r}\)._
_2. There exists some \(s\geq 0\) such that \(H^{s}(\mathcal{M})\) is non-trivial. Moreover, the morphism \(\omega:H^{s}(\mathcal{M})\to H^{s+2r}(\mathcal{M})\) given by multiplication by \([\omega]\) is an isomorphism, and \(\omega:H^{s-r}(\mathcal{M})\to H^{s+r}(\mathcal{M})\) is injective._
_Then \(\mathcal{M}_{\theta}=\mathcal{M}\otimes\Lambda\theta\) is non-formal, where \(d\theta=\omega\)._
Proof.: Assume that \(\mathcal{M}_{\theta}\) is formal, then we can write \(\mathcal{M}_{\theta}=\Lambda V^{*}\) and \(V^{*}=C^{*}\oplus N^{*}\) such that all closed elements in \(\mathbf{I}(N^{*})\) are exact.
Without loss of generality, we can assume that \(\omega=\sum_{i=1}^{k}x_{i}y_{i}\) and \(\theta\in\mathbf{I}(N^{*})\). Precisely, let \(\bar{\omega}=\sum_{i=1}^{k}x_{i}y_{i}\) be the representative in \([\omega]\) satisfying Condition 1. Then \(\bar{\omega}=d\bar{\theta}\) for some \(\bar{\theta}\in\mathbf{I}(N^{*})\). Suppose that \(\bar{\theta}=\xi+\theta\eta\) for some \(\xi,\eta\in\mathcal{M}\), then
\[\bar{\omega}-\omega-d\xi=d(\theta(\eta-1))=(\eta-1)\omega-\theta d\eta\]
is exact in \({\cal M}\). Thus, \(d\eta=0\) and \([(\eta-1)\omega]=0\) in \(H^{*}({\cal M})\). Since \(\omega:H^{s}({\cal M})\to H^{s+2r}({\cal M})\) is an isomorphism between non-trivial spaces, \([\omega]\) cannot be trivial. So \(\eta\) must be 1. This means that \({\cal M}_{\theta}={\cal M}\otimes\Lambda\theta\) and \({\cal M}_{\bar{\theta}}={\cal M}\otimes\Lambda\bar{\theta}\) are isomorphic.
By assumption, there exists a closed but non-exact element \(a\in{\cal M}^{s}\). Then \([a\omega]\neq 0\) as \(\omega:H^{s}({\cal M})\to H^{s+2r}({\cal M})\) is an isomorphism. This implies that \([ax_{i}]\neq 0\) for some \(1\leq i\leq k\). Without loss of generality, we assume that \([ax_{1}]\neq 0\) and set \(a_{0,0}=a\).
We will prove the theorem by induction. Suppose that we have constructed some closed \(a_{i,2i}\in{\cal M}^{s}\) such that in \(H^{*}({\cal M})\)
\[[a_{i,2i}x_{j}]=0,\mbox{ for }j\leq 2i,\mbox{ and }[a_{i,2i}x_{2i+1}]\neq 0.\]
For \(j\leq 2i\), write
\[a_{i,2i}x_{j}=d(\xi_{i,j}+\theta\eta_{i,j})=d\xi_{i,j}+\omega\eta_{i,j}-\theta d \eta_{i,j},\]
where \(\xi_{i,j}\in{\cal M}^{s+r-1},\eta_{i,j}\in{\cal M}^{s-r}\) and \(\xi_{i,j}+\theta\eta_{i,j}\in{\bf I}(N^{*})\). Then \(d\eta_{i,j}=0\), and \(\omega\eta_{i,j}=a_{i,2i}x_{j}-d\xi_{i,j}\) is exact in \({\cal M}\). By the injectivity of \(\omega:H^{s-r}({\cal M})\to H^{s+r}({\cal M})\), \(\eta_{i,j}\) is exact in \({\cal M}\).
For each \(j>2i+1\), \([a_{i,2i}x_{2i+1}x_{j}]\in H^{s+2r}({\cal M})\), so there exists some closed \(a_{i+1,j}\in{\cal M}^{s}\) such that \([a_{i,2i}x_{2i+1}x_{j}]=[\omega a_{i+1,j}]\) in \(H^{s+2r}({\cal M})\). Hence, we can write
\[a_{i,2i}x_{2i+1}x_{j}=\omega a_{i+1,j}+d(\zeta_{i+1,j}+\theta\lambda_{i+1,j})\]
where \(\zeta_{i+1,j}\in{\cal M}^{s+2r-1},\lambda_{i+1,j}\in{\cal M}^{s}\) and \(\zeta_{i+1,j}+\theta\lambda_{i+1,j}\in{\bf I}(N^{*})\). As in the discussion of the previous paragraph, \(\lambda_{i+1,j}\) is exact in \({\cal M}\).
Thus,
\[a_{i,2i}x_{2i+1}\omega=-\sum_{j\leq 2i}d(\xi_{i,j}+\theta\eta_{i,j})x_{2i+1}y_ {j}+\sum_{j>2i+1}\big{(}\omega a_{i+1,j}y_{j}+d(\zeta_{i+1,j}+\theta\lambda_{i +1,j})y_{j}\big{)}.\]
Then we have
\[\theta\left(a_{i,2i}x_{2i+1}-\sum_{j>2i+1}a_{i+1,j}y_{j}\right)+\sum_{j\leq 2 i}(\xi_{i,j}+\theta\eta_{i,j})x_{2i+1}y_{j}-\sum_{j>2i+1}(\zeta_{i+1,j}+ \theta\lambda_{i+1,j})y_{j}\]
is closed in \({\cal M}_{\theta}\). Since it is also in \({\bf I}(N^{*})\), it must be exact. Therefore,
\[[a_{i,2i}x_{2i+1}]=\sum_{j>2i+1}[a_{i+1,j}y_{j}]\]
because the exactness of \(\xi+\theta\eta\) in \({\cal M}_{\theta}\) implies that \(\eta\) is exact in \({\cal M}\), and \(\eta_{i,j},\lambda_{i+1,j}\) are all exact.
As \([a_{i,2i}x_{2i+1}]\neq 0\) in \(H^{*}({\cal M})\), at least one \([a_{i+1,j}y_{j}]\neq 0\) for some \(j>2i+1\). Without loss of generality, we can assume that \([a_{i+1,2i+2}]\neq 0\) in \(H^{s}({\cal M})\). We will show that \([a_{i+1,2i+2}x_{j}]=0\) in \(H^{*}({\cal M})\) for \(j\leq 2i+2\).
By definition, when \(j\leq 2i\) we have
\[\begin{split}\omega a_{i+1,2i+2}x_{j}&=a_{i,2i}x_{2i+1 }x_{2i+2}x_{j}-d(\zeta_{i+1,2i+2}+\theta\lambda_{i+1,2i+2})x_{j}\\ &=d(\xi_{i,j}+\theta\eta_{i,j})x_{2i+1}x_{2i+2}-d(\zeta_{i+1,2i+ 2}+\theta\lambda_{i+1,2i+2})x_{j}.\end{split} \tag{4.1}\]
So
\[\theta a_{i+1,2i+2}x_{j}-(\xi_{i,j}+\theta\eta_{i,j})x_{2i+1}x_{2i+2}+(\zeta_{ i+1,2i+2}+\theta\lambda_{i+1,2i+2})x_{j}\]
is closed in \(\mathbf{I}(N^{*})\), thus exact. As in the discussion above, \(a_{i+1,2i+2}x_{j}\) is exact in \(\mathcal{M}\). The case for \(j=2i+1\) or \(2i+2\) is similar, as the first line of (4.1) still holds and the first term on the right hand side is \(0\).
Since \([a_{i+1,2i+2}]\neq 0\) in \(H^{s}(\mathcal{M})\), \([\omega a_{i+1,2i+2}]=\sum_{j>2i+2}[a_{i+1,2i+2}x_{j}y_{j}]\) is also non-zero. Then there exists some \([a_{i+1,2i+2}x_{j}y_{j}]\neq 0\) for \(j>2i+2\), and without loss of generality, we can assume that \([a_{i+1,2i+2}x_{2i+3}]\neq 0\).
Therefore, by induction we can find some \([a_{i,2i}]\neq 0\) such that either \([a_{i,2i}x_{j}]=0\) for all \(j\) (\(k=2i\)), or \([a_{i,2i}x_{j}]\neq 0\) only when \(j=k\) (\(k=2i+1\)). The previous case implies that \([\omega a_{i,2i}]=0\), which contradicts the injectivity of \(\omega:H^{s}(\mathcal{M})\to H^{s+2r}(\mathcal{M})\). For the latter case, we can add a term \(x_{k+1}y_{k+1}\) with \(x_{k+1}=y_{k+1}=0\). Alternatively, as
\[d(\theta a_{i,2i}x_{k})=\sum_{j<k}(a_{i,2i}x_{j})y_{j}x_{k}=\sum_{j<k}d(\xi_{i,j}+\theta\eta_{i,j})y_{j}x_{k},\]
\(\theta a_{i,2i}x_{k}-\sum_{j<k}(\xi_{i,j}+\theta\eta_{i,j})y_{j}x_{k}\) is closed in \(\mathbf{I}(N^{*})\). So it is exact. Then \(a_{i,2i}x_{k}\) is exact in \(\mathcal{M}\), which is a contradiction.
**Remark 4.3**.: The condition that \(|\omega|\equiv 2\,(\operatorname{mod}4)\) is necessary. For \(|\omega|=4\), let \(\mathcal{M}=\Lambda\langle x,\xi\rangle\) be the Sullivan minimal model of \(\mathbb{C}P^{2}\), where \(|x|=2\), \(|\xi|=5\), \(dx=0\), \(d\xi=x^{3}\). Then \(\omega=x^{2}\) induces an isomorphism \(H^{0}(\mathcal{M})\to H^{4}(\mathcal{M})\). However, \(\mathcal{M}_{\theta}\) is formal because we can set \(C^{*}=\langle x,\theta x-\xi\rangle\) and \(N^{*}=\langle\theta\rangle\).
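For the reader's convenience, here is the quick computation, using only the differentials just stated, of why this choice of \(C^{*}\) works:
\[d(\theta x-\xi)=(d\theta)x+(-1)^{|\theta|}\theta\,dx-d\xi=x^{2}\cdot x-0-x^{3}=0,\]
so \(\theta x-\xi\) is a closed generator that can be placed in \(C^{*}\), leaving only \(\theta\) in \(N^{*}\).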
On a symplectic manifold \((M,\omega)\), if \([\omega]\) is an integral cohomology class, there exists a circle bundle whose Euler class is \([\omega]\). This circle bundle is called the Boothby-Wang fibration.
**Corollary 4.4**.: _Let \((M,\omega)\) be a connected symplectic manifold satisfying the hard Lefschetz property. Suppose \([\omega]\) is an integral and reducible cohomology class, i.e. there exists some \(x_{i},y_{i}\in H^{1}(M)\) such that \([\omega]=\sum x_{i}y_{i}\), then the Boothby-Wang fibration of \(M\) is non-formal._
Proof.: Let \(\mathcal{M}\) be the Sullivan minimal model of \(M\); then there are \(\omega,x_{i},y_{i}\in\mathcal{M}\) whose cohomology classes are the same as the corresponding elements in \(H^{*}(M)\), and they satisfy \(\omega=\sum x_{i}y_{i}\).
Suppose \(\dim M=2n\); then \(\omega:H^{n-1}(\mathcal{M})\to H^{n+1}(\mathcal{M})\) is an isomorphism and \(\omega:H^{n-2}(\mathcal{M})\to H^{n}(\mathcal{M})\) is injective by the hard Lefschetz property. As \([\omega]^{n}\neq 0\), there exists some \([x_{i_{1}}y_{i_{1}}\dots x_{i_{n}}y_{i_{n}}]\neq 0\). Hence
\([x_{i_{1}}\dots x_{i_{n-1}}]\neq 0.\) Then we can apply Theorem 4.2 to prove that \(\mathcal{M}_{\theta}\) and the Boothby-Wang fibration of \(M\) are non-formal.
When the base manifold \(M\) is formal, the condition that \([\omega]\) is reducible in \(H^{*}(M)\) can be weakened to reducible in \(\mathcal{M}\), the Sullivan minimal model of \(M\). Precisely, this means that \([\omega]\) has a reducible representative \(\omega_{0}\in\mathcal{M}\). The reason is that \(\mathcal{M}\) can be generated by some \(C^{*}\oplus N^{*}\) and closed elements in \(\mathbf{I}(N^{*})\) are all exact. So the \(\Lambda C^{*}\) part of \(\omega_{0}\) is also a representative of \([\omega]\). However, the sufficiency of this weakened condition for general base manifolds remains unknown.
Besides, it is uncertain whether the above corollary still holds without the hard Lefschetz property.
Finally, when \(\omega\) is irreducible, the Sullivan minimal model of \(\mathcal{M}_{\theta}\) is slightly different. In this case, \(\omega\) can be chosen as a generator of \(\mathcal{M}\), i.e. \(\mathcal{M}=\Lambda V^{*},V^{*}=\langle v_{\alpha}\rangle\) and \(\omega=v_{\alpha}\) for some \(\alpha\). Let \(V^{*}/\omega\) be the subspace spanned by all \(v_{\alpha}\) except \(\omega\), where the order of the \(v_{\alpha}\) is preserved. Then let \(\mathcal{M}/\omega=\Lambda(V^{*}/\omega)\) and \(\Pi:\mathcal{M}\to\mathcal{M}/\omega\) be the natural projection. \(\Pi\circ d\) can be taken as the differential of \(\mathcal{M}/\omega\).
**Proposition 4.5**.: \(\mathcal{M}/\omega\) _is a Sullivan minimal model of \(\mathcal{M}_{\theta}\). The inclusion is a quasi-isomorphism._
Thus, the way of proving formality or finding obstructions for irreducible Euler classes may be quite different from the reducible case. In both cases, it is interesting to determine whether there exists a formal sphere bundle over a non-formal manifold.
|
2308.09899
|
Towards a High-Performance Object Detector: Insights from Drone
Detection Using ViT and CNN-based Deep Learning Models
|
Accurate drone detection is strongly desired in drone collision avoidance,
drone defense and autonomous Unmanned Aerial Vehicle (UAV) self-landing. With
the recent emergence of the Vision Transformer (ViT), this critical task is
reassessed in this paper using a UAV dataset composed of 1359 drone photos. We
construct various CNN and ViT-based models, demonstrating that for single-drone
detection, a basic ViT can achieve performance 4.6 times more robust than our
best CNN-based transfer learning models. By implementing the state-of-the-art
You Only Look Once (YOLO v7, 200 epochs) and the experimental ViT-based You
Only Look At One Sequence (YOLOS, 20 epochs) in multi-drone detection, we
attain impressive 98% and 96% mAP values, respectively. We find that ViT
outperforms CNN at the same epoch, but also requires more training data,
computational power, and sophisticated, performance-oriented designs to fully
surpass the capabilities of cutting-edge CNN detectors. We summarize the
distinct characteristics of ViT and CNN models to aid future researchers in
developing more efficient deep learning models.
|
Junyang Zhang
|
2023-08-19T03:57:52Z
|
http://arxiv.org/abs/2308.09899v2
|
Towards a High-Performance Object Detector: Insights from Drone Detection Using ViT and CNN-based Deep Learning Models
###### Abstract
_Abstract: Accurate drone detection is strongly desired in drone collision avoidance, drone defense and autonomous Unmanned Aerial Vehicle (UAV) self-landing. With the recent emergence of the Vision Transformer (ViT), this critical task is reassessed in this paper using a UAV dataset composed of 1359 drone photos. We construct various CNN and ViT-based models, demonstrating that for single-drone detection, a basic ViT can achieve performance 4.6 times more robust than our best CNN-based transfer learning models. By implementing the state-of-the-art You Only Look Once (YOLO v7, 200 epochs) and the experimental ViT-based You Only Look At One Sequence (YOLOS, 20 epochs) in multi-drone detection, we attain impressive 98% and 96% mAP values, respectively. We find that ViT outperforms CNN at the same epoch, but also requires more training data, computational power, and sophisticated, performance-oriented designs to fully surpass the capabilities of cutting-edge CNN detectors. We summarize the distinct characteristics of ViT and CNN models to aid future researchers in developing more efficient deep learning models._
_Keywords: Vision Transformer, Convolutional Neural Network, Drone Detection, Transfer Learning, You Only Look Once, You Only Look At One Sequence._
## I. Introduction
In the commercial drone market, computer vision is the most widely used technology in obstacle avoidance systems due to the physical limitations of infrared ray and ultrasonic wave technologies, as well as the high cost of laser-based obstacle avoidance systems. Both infrared and ultrasonic obstacle avoidance systems have strict requirements concerning the reflecting object and surrounding environment, making these technologies less reliable. For example, infrared light can be absorbed by black objects and penetrate transparent objects, and its receiver can be disturbed by other sources of infrared light. Similarly, ultrasonic waves can be absorbed by sponges and disturbed by propeller airflow. Moreover, drones, known for their speed, compact size, and the difficulty of locating and intercepting them, present unique challenges. During the war in Ukraine, soldiers were injured and killed daily by bomber drones and suicide drones. In current autonomous UAV self-landing technology, GPS is used to locate the drones. However, most GPS systems have around a 1-meter error range, which is not safe enough for a large number of drones to land autonomously. These scenarios highlight the need for a high-performance drone detector, and computer vision presents the most economical and generalizable solution.
Convolutional neural networks (CNNs) have made significant advancements in several areas within the field of computer vision. Typically, contemporary detectors utilize pure convolution networks to draw out features. Traditional image classification networks like VGG 16 [9] and ResNet 50 [5] are used as the foundational architecture for our single object detectors. In the case of the YOLO series of detectors, they utilize a unique residual network known as Darknet, which offers superior efficiency in feature extraction [8]. In this paper, we deploy YOLO v7 [11] on the multiple drone detection task. On the other hand, the Vision Transformer (ViT) [3] represents a novel application of the Transformer model, which has become a favored choice for various natural language processing (NLP) tasks including machine translation, question answering, text classification, document summarization, and more [13]. A crucial aspect of the Transformer's success lies in its capacity to comprehend intricate interdependencies among long input sequences through self-attention [10]. With the introduction of the Vision Transformer (ViT), it has been demonstrated for the first time that the transformer architecture can be applied directly to image-based tasks. This is accomplished by perceiving an image as a series of patches and feeding these patches into an encoder network based on multi-headed self-attention layers [3]. In our drone dataset, we only use a plain ViT-b16 model plus a few top layers to prove that it can achieve much higher accuracy than leading-edge convolutional networks VGG 16 and ResNet 50 at a cost of longer training time and larger model. To further compare the CNN and ViT based networks on more complex object detection challenge, we experiment with the only open-source ViT-based YOLO detector, You Only Look At One Sequence (YOLOS), which uses a ViT as backbone and a simplified detector head without any performance-oriented designs [4]. By doing so, we demonstrate that while ViT can more efficiently capture long-range dependencies between image patches via self-attention, it also requires more training data and a careful design to perform well. Finally, we analyze and summarize the distinct features of CNN and ViT-based models, and their unique advantages in different scenarios.
## II Dataset
The drone dataset [6] consists of 1359 images, 1297 of which contain only a single drone, while the remaining 62 images feature multiple drones. Along with these images, the dataset also provides the X and Y coordinates for the drone's center in each image, as well as the height and width necessary to draw bounding boxes for object detection (Figure 1). Currently, due to the limited number of multi-object training images and for the sake of simplicity, we are only using the images containing a single drone for training.
To analyze the filtered dataset's complexity, we look at the X and Y coordinates of the drone's center relative to the image for spatial complexity, and we define the area ratio in Equation 1 for perspective complexity. Using a 3D visualization and a bubble chart, we are able to better understand and visualize the drone dataset (Figures 2-3). From the graphs, there appears to be a Gaussian distribution across spatial and perspective complexity in the drone dataset, showing that the filtered drone dataset images are not heavily skewed or biased, consistent with the Central Limit Theorem. The visualization also shows that a large portion of the drones in the dataset are very small and therefore hard to locate accurately.
\(area\ ratio=\frac{\text{size of bounding box }(px)}{\text{size of image }(px)}\) _Equation (1)_
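To make Equation (1) concrete, the short helper below computes the area ratio and the relative center coordinates from a bounding-box annotation table. The column names and the fixed image size are illustrative assumptions, not the dataset's actual schema.

```python
import pandas as pd

def complexity_features(df, img_w=640, img_h=480):
    """Compute the spatial/perspective complexity features described above.

    Assumes one row per drone with bounding-box columns 'center_x', 'center_y',
    'width', 'height' in pixels; real column names and image size may differ.
    """
    out = pd.DataFrame()
    out["rel_x"] = df["center_x"] / img_w                                # spatial complexity (X)
    out["rel_y"] = df["center_y"] / img_h                                # spatial complexity (Y)
    out["area_ratio"] = (df["width"] * df["height"]) / (img_w * img_h)   # Equation (1)
    return out
```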
To test ViT's full capability of multi-object detection, we further augment the dataset by applying mosaic, noise, flip, rotation and blur, so that the training dataset size increases to 3021 and there are more images containing multiple drones (Figure 4). We have trained YOLO v7, YOLO v8, and YOLOS on the augmented dataset.
## III Methods and Implementation
### _Single-Drone Detection_
A 14-layer vanilla customized CNN model (Figure 5) and three popular transfer learning models are used as backbones to tackle the drone detection challenge: ResNet50 (Figure 6), VGG16 (Figure 7) and ViT-b16 (Figure 8). As shown in the following figures, we design the same top 5 layers for all of them for a fair comparison. Since this is a regression task, the MSE loss function and a linear activation function are used on the final output layer. The Adam optimizer is deployed as usual with a learning rate of 0.0001. Notably, all the images are reshaped to (256,256,3) before training to fit the vision transformer's patch size (16 by 16). Compared with the classical CNN, ResNet and VGG networks, the ViT-b16 transformer is more computationally intensive: our ViT-b16 model has 86 million trainable parameters (Figure 8), while our best-performing CNN model, VGG16, has only 19 million trainable parameters (Figure 7). A minimal sketch of this backbone-plus-regression-head recipe is given after the figure captions below.
Figure 4: Augmented drone dataset visualization
Figure 3: Bubble chart of drone bounding box size (area ratio) and coordinates (centerX and centerY)
Figure 2: 3D visualization of Spatial and Perspective Complexity on Drone Dataset
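As a concrete illustration of the setup just described, the following Keras sketch builds a VGG16 backbone with a small regression head. The exact five top layers used in the paper are not reproduced here, so the head sizes are assumptions; only the overall recipe (pretrained backbone, 256x256x3 inputs, linear output, MSE loss, Adam at a learning rate of 0.0001) follows the text.

```python
import tensorflow as tf

def build_vgg16_bbox_regressor(input_shape=(256, 256, 3)):
    """VGG16 backbone plus a small MLP head for single-drone bounding-box regression."""
    backbone = tf.keras.applications.VGG16(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    x = tf.keras.layers.Dense(256, activation="relu")(x)   # head sizes are placeholder choices
    x = tf.keras.layers.Dense(64, activation="relu")(x)
    # Four outputs (center x, center y, width, height) with a linear activation.
    outputs = tf.keras.layers.Dense(4, activation="linear")(x)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
    return model
```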
Before analyzing the vision transformer, we first need to understand the working principles of the ViT-b16 model. An image is split into fixed-size patches (Figure 9), and then linear embeddings and position embeddings (Figure 10) are added to each of them. The necessity of the positional embeddings comes from the fact that, without them, the set of output context vectors is invariant under permutations of the input patches. In a self-attention layer, each output vector is calculated as a weighted sum of all value vectors (Figure 11), and the weights (i.e., attention scores) are based solely on the content of the key and query vectors and not their order (Figure 12). As a result, although the order of the output context vectors is permuted in the same way as we permute the input sequence, the set of output context vectors remains the same; a small numerical sketch of this behavior is given below. The resulting sequence of vectors from the embedding layers is then fed to a standard Transformer encoder (Figure 9), which contains six transformer layers in our case. One transformer layer consists of one self-attention layer followed by a dense layer. Unlike a CNN layer, which only captures local context information by sliding a small kernel window, a self-attention layer treats all image patches equally, so it is inherently better at detecting long-range dependencies between patches and requires a larger dataset to learn these dependencies efficiently. In order to perform classification, we use the standard approach of adding an extra learnable "classification token" to the sequence [12]. Unlike other transformers, the vision transformer does not have a decoder network, which improves training efficiency significantly, since we only need to train the embedding layers, a transformer encoder network, and a SoftMax classifier. In our case, we use a 5-layer MLP with a bounding-box detector head instead of a SoftMax classifier.
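The sketch below is a didactic single-head self-attention layer in NumPy (not the ViT-b16 implementation used in the experiments). It also demonstrates the permutation property discussed above: shuffling the patch order only shuffles the outputs the same way, which is why positional embeddings are needed.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a patch sequence X of shape (num_patches, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # attention scores depend on content only
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over key positions
    return weights @ V                                 # each output is a weighted sum of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                            # 4 patch embeddings, no positions added
W = [rng.normal(size=(8, 8)) for _ in range(3)]
out = self_attention(X, *W)
perm = [2, 0, 3, 1]
out_perm = self_attention(X[perm], *W)
# Permuting the patches only permutes the outputs identically: the set of outputs is unchanged.
assert np.allclose(out[perm], out_perm)
```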
Figure 5: Vanilla CNN model summary
Figure 8: ViT-b16 model summary
Figure 6: ResNet 50 model summary
Figure 7: VGG16 model summary
Figure 9: Vision Transformer model architecture
### _Multi-Drone Detection_
YOLO v7, YOLO v8 and YOLOS are trained on the augmented multi-drone dataset. Unlike the YOLO models (Figure 13) which use CNNs as backbones and complex heads, YOLOS uses ViT as backbone (Figure 14) and only an MLP of 2 hidden layers to implement both classification and bounding box regression heads [4]. In essence, YOLOS is not optimized for better performance but to prove the transferability of ViT [4]. The transformation from a Vision Transformer (ViT) to a YOLOS detector is straightforward. First, YOLOS eliminates the [CLS] token used for image classification and instead adds a hundred learnable detection tokens ([DET] tokens), which are randomly initialized, to the input patch embeddings ([PATCH] tokens) for the purpose of object detection. Secondly, during the training process, YOLOS substitutes the image classification loss found in ViT with a bipartite matching loss, enabling it to carry out object detection in a set prediction manner, in line with the method proposed by DETR [2], the first attempt to successfully apply transformers to multi-object detection.
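The token bookkeeping described above can be illustrated at the shape level as follows. This is only a didactic sketch of the [PATCH]/[DET] layout, with random placeholder weights and the ViT encoder omitted; it is not the actual YOLOS implementation.

```python
import numpy as np

num_patches, num_det, d_model, num_classes = 196, 100, 768, 2  # illustrative sizes

patch_tokens = np.random.randn(num_patches, d_model)   # [PATCH] tokens from the image
det_tokens = np.random.randn(num_det, d_model)          # 100 learnable [DET] tokens (random init)
tokens = np.concatenate([patch_tokens, det_tokens], axis=0)

encoded = tokens                                         # placeholder: ViT encoder keeps the shape

det_out = encoded[-num_det:]                             # only the [DET] outputs feed the MLP heads
class_logits = det_out @ np.random.randn(d_model, num_classes + 1)  # classes plus "no object"
boxes = det_out @ np.random.randn(d_model, 4)            # one (cx, cy, w, h) box per [DET] token
```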
## IV Results and Analysis
### _Single-Drone dataset_
For the sake of simplicity, we present only the plots for ViT-b16 and our highest-performing CNN transfer learning model: VGG16. As illustrated in the plots below, we achieved 89% and 95.3% validation accuracy on our drone dataset using our VGG16 and ViT-b16 models, respectively, after just 50 epochs. When comparing the two models, the ViT-b16 transformer loss is 4.6 times lower than that of our best model, VGG16, with the VGG16 loss standing at around 0.0067 and the transformer loss at approximately 0.00145 (Figure 15). For reference, we also plot the IoU comparison (Figure 15). It is clear that the CNN model lacks the capacity to capture deeper data dependencies and begins to overfit the drone dataset early on. In contrast, our plain ViT model does not demonstrate an apparent overfitting pattern and manages to attain higher accuracy on the validation dataset. This highlights the power and potential of the vision transformer in object detection tasks through the self-attention mechanism (Figure 16).
Figure 10: Positional embedding visualization
Figure 11: Output context vector calculation in the self-attention layer
Figure 12: Attention scores calculation in the self-attention layer
Figure 13: YOLO v8 model summary
However, similar to ViT-b16, YOLOS suffers from the same problem of requiring longer training time and a larger model. Specifically, YOLOS-small contains 30.7 million trainable parameters, while YOLOv8-small only contains 11 million (Figure 13). Every YOLOS training epoch takes around 10-12 minutes, which is more than 15 times longer than the performance-oriented YOLO v8 and YOLO v7, so we only utilize the first 20 epochs for demonstration purpose.
Other research reveals another key advantage of vision transformers in object classification over CNNs. When the training dataset size is sufficiently large (more than 100 million), ViT outperforms CNN by larger margins [3] (Figure 21). This is intuitively understandable, given ViT's more complex model structure. It is also worthwhile to validate this in the object detection task, so we train our YOLOS model on the original dataset of only 1359 images without any data augmentation, with only 62 of them containing multiple drones in one image. We found that the mAP drops quickly from 96% to around 92% (Figure 22), performing much closer to YOLO models (89%). More importantly, the validation loss plots on the original dataset appear much worse and unstable (Figure 23), suggesting that the model might not even be learning at all. From this, we conclude that the dataset size is extremely important for ViT-based models in object detection tasks.
## V Conclusion and Discussion
In the comparative analysis conducted, we identified four main insights about the performance of Vision Transformer (ViT) and Convolutional Neural Network (CNN) based deep learning models in the realm of object detection. Primarily, ViT-based models consistently demonstrate superior performance over CNN-based models in similar epochs for object detection and classification tasks. This can be ascribed to the built-in self-attention mechanism in ViT that effectively captures long-range data dependencies. However, the use of ViT networks brings with it the demand for increased computational power and a lengthier training period, a stark contrast to the requirements of CNN networks. Furthermore, the extent of training data plays a critical role in ViT's commendable performance. A combination of large-scale training datasets and data augmentation methodologies ensures ViT's dominance over CNN. Finally, it should be noted that our study is centered solely around pure ViT models. For ViT to surpass the performance of leading-edge CNN models in more complex tasks, a more intricately designed structure would be necessary to elevate ViT's performance further.
## VI Acknowledgement
The author would like to thank Tyler Yu from University of California, Irvine Computer Science department for creating the 3D visualization and bubble chart for the drone dataset.
|
2310.01057
|
Advancements in Optimization: Adaptive Differential Evolution with
Diversification Strategy
|
This study presents a population-based evolutionary optimization algorithm
(Adaptive Differential Evolution with Diversification Strategies or ADEDS). The
algorithm developed using the sinusoidal objective function and subsequently
evaluated with a wide-ranging set of 22 benchmark functions, including
Rosenbrock, Rastrigin, Ackley, and DeVilliersGlasser02, among others. The study
employs single-objective optimization in a two-dimensional space and runs ADEDS
on each of the benchmark functions with multiple iterations. In terms of
convergence speed and solution quality, ADEDS consistently outperforms standard
DE for a variety of optimization challenges, including functions with numerous
local optima, plate-shaped, valley-shaped, stretched-shaped, and noisy
functions. This effectiveness holds great promise for optimizing supply chain
operations, driving cost reductions, and ultimately enhancing overall
performance. The findings imply the importance of effective optimization
strategy for improving supply chain efficiency, reducing costs, and enhancing
overall performance.
|
Sarit Maitra
|
2023-10-02T10:05:41Z
|
http://arxiv.org/abs/2310.01057v3
|
# Advancements in Optimization: Adaptive Differential Evolution with Diversification Strategy
###### Abstract
This study presents a population-based evolutionary optimization algorithm (Adaptive Differential Evolution with Diversification Strategies or ADEDS). The algorithm developed using the sinusoidal objective function and subsequently evaluated with a wide-ranging set of 22 benchmark functions, including Rosenbrock, Rastrigin, Ackley, and DeVilliersGlasser02, among others. The study employs single-objective optimization in a two-dimensional space and runs ADEDS on each of the benchmark functions with multiple iterations. In terms of convergence speed and solution quality, ADEDS consistently outperforms standard DE for a variety of optimization challenges, including functions with numerous local optima, plate-shaped, valley-shaped, stretched-shaped, and noisy functions. This effectiveness holds great promise for optimizing supply chain operations, driving cost reductions, and ultimately enhancing overall performance. The findings imply the importance of effective optimization strategy for improving supply chain efficiency, reducing costs, and enhancing overall performance.
Differential evolution; Evolutionary algorithm; Non-convex function; Optimization; Sinusoidal function;
## 1 Introduction
The goal of optimization is to maximize a system's desirable attributes while concurrently minimizing its unfavorable characteristics. While it is a well-known problem in applied machine learning and mathematical literature (Momin and Yang, 2013), it has a fair amount of share in different business domains, including supply chain analytics. Since its inception, the metaheuristic Differential Evolution (DE) algorithm (Storn and Price, 1995) has become a popular evolutionary algorithm to deal with optimization problems. Over the years, the applications of DE have been found in a wide spectrum of supply chain functions (e.g., Jauhar et al., 2017; Doolun et al., 2018; Yousefi and Tosarkani, 2022; Nimmy et al., 2022; Guo et al., 2023). The use of stochastic optimization techniques has increased over the past few years (e.g., Lan, 2020; Zakaria et al., 2020, etc.), and this has sparked significant interest within the research community regarding how optimization issues are handled. This motivated us to relook into the limitations encountered with the existing DE algorithms and the need for improved optimization techniques by introducing a new variant of DE.
The widespread application of DE can be found in diverse domains, including data mining, feature extraction, supply chain analytics, production scheduling, etc. A significant amount of research has already gone into improving this method and applying it to a range of practical issues (Opara and Arabas, 2019). In fact, since early 2010, researchers improved DE to improve its effectiveness and efficiency in handling various optimization challenges (Ahmad et al., 2022). This surge in popularity is attributed to DE's simple structure, rapid convergence, and robustness. However, DE's local search capabilities are often insufficient, making it prone to premature convergence and getting trapped in local optima (Wang et al., 2022; Liu et al., 2020). Additionally, when applied to high-dimensional optimization problems, DE's optimization accuracy tends to diminish (Cai et al., 2019; Liu et al., 2023). Maitra et al. (2023) have explored the scope of the existing DE approach and introduced a new algorithm to overcome the trap of local minima and enhance the performance of the algorithm. However, their algorithm was not validated with multiple benchmark test functions. Several variants of DE algorithms were introduced over the years to overcome the limitations of DE (e.g., Chakraborty, 2008; Mallipeddi, 2011; Draa et al., 2015; Deng et al., 2021; Xu et al., 2021). These variations have attempted to address problems like early convergence and robustness to various kinds of problems. Their advancement highlights the ongoing research efforts to enhance and broaden the capabilities of DE and associated
optimization methods. Even with the improvement, considering its popularity, there is a growing need for in-depth research addressing these limitations and seeking potential enhancements to further elevate DE's effectiveness as an optimization tool.
This study introduces the Adaptive Differential Evolution with Diversification Strategies (ADEDS) algorithm, which improves on the limitations of the classic DE algorithm. ADEDS overcomes limitations in parameter tuning, local search capabilities, and high-dimensional optimization. Its adaptive scheme controls the parameters, reducing the manual tuning burden. ADEDS's adaptability broadens its applicability to a wider range of optimization problems. It addresses premature convergence by using dynamic diversification measures, allowing individuals to explore various alternatives while avoiding early convergence. This feature enhances ADEDS's ability to manage complex landscapes with multiple local optima. Therefore, the primary contribution of this work is the introduction and validation of the ADEDS algorithm, which addresses limitations in traditional DE for optimization tasks. The connection to supply chain analytics is made in this work to underscore the real-world implications and significance of the algorithm's effectiveness in optimization tasks. It is no secret that today's supply chain managers use technology to coordinate activities throughout the supply chain and provide important insights into performance.
## 2 ADEDS - Adaptive Differential Evolution with Diversification Strategies
To address the issues of premature convergence and lack of diversity in Differential Evolution variants, we propose the ADEDS algorithm. It is an evolutionary optimization algorithm that falls under the category of population-based stochastic optimization methods. We compare ADEDS with traditional DE, which is valuable in supply chain analytics, to identify advanced optimization techniques that can outperform traditional methods, thereby enhancing supply chain efficiency.
The purpose of this analysis is to discover the best solution for the sinusoidal function as displayed in Fig. 1, which is difficult for typical Differential Evolution (DE) techniques to solve.
* Initialization: \(Population=[x_{1},x_{2},\ldots,x_{population\_size}]\), where each \(x_{i}\) is a vector in the search space with \(n\) dimensions. Table 1 displays the pseudo code of population initiation.
Figure 1: Non-Convex Sinusoidal Function Landscape
* Adaptive mutation rate: \(F=initial\_mutation\_rate*\left(1-\frac{generation}{max\_generation}\right)\). Table 2 displays the pseudo code of adaptive mutation.
* Adaptive crossover rate: \(CR=initial\_crossover\_rate*\frac{generation}{max\_generation}\). Table 3 displays the pseudo code of adaptive crossover.
* Mutation strategy: Calculate the trial solution \(v_{t}\) for each candidate solution \(x_{i}\) using mutation strategies. \[v_{t}=x_{r1}+F*\left(x_{r2}-x_{r3}\right)\] \(r1,r2,r3\) are distinct random indices representing different solutions in the population, F is the adaptive mutation factor.
* Crossover operation: Combine the trial solution \(v_{i}\) with the original solution \(x_{i}\) to create a new candidate solution \(u_{i}\). Table 4 displays pseudo code combining the mutation strategy and crossover operation. \[u_{i}[j]=\begin{cases}v_{i}[j]&\text{if }\operatorname{rand}_{j}(0,1)<CR\\ x_{i}[j]&\text{otherwise}\end{cases}\]
\begin{table}
\begin{tabular}{|c|} \hline Function initialize\_population(population\_size, bounds): \\ population = List () \\ For i = 1 to population\_size: \\ individual = [] \\ For j = 1 to length(bounds): \\ \# Generate a random value within the specified bounds for dimension ’i’ \\ random\_value = RandomUniform(bounds[j]. low, bounds[j]. high) \\ Add random\_value to individual \\ Add individual to population \\ Return population \# Return the populated list of individuals. \\ End Function \\ \hline \end{tabular}
\end{table}
Table 1: Pseudo code for population initiation
\begin{table}
\begin{tabular}{|c|} \hline Function adaptive\_mutation rate (generation, max\_generations, initial\_mutation\_rate): \\ \# adaptive mutation rate based on generation and max\_generations \\ mutation\_rate = initial\_mutation\_rate* (1.0 - generation / max\_generations) \\ Return mutation\_rate \\ End Function \\ \hline \end{tabular}
\end{table}
Table 2: Pseudo code for adaptive mutation
\begin{table}
\begin{tabular}{|c|} \hline Function adaptive\_crossover\_rate(generation, max\_generations, initial\_crossover\_rate): \\ \# adaptive crossover rate based on generation and max\_generations \\ crossover\_rate = initial\_crossover\_rate* (generation / max\_generations) \\ Return crossover\_rate \\ End Function \\ \hline \end{tabular}
\end{table}
Table 3: Pseudo code for adaptive crossover rate
* Local Search: After generating a trial solution \(v_{i}\), apply a local optimization algorithm to refine \(v_{i}\) and potentially replace \(x_{i}\) with a better solution. Table 5 displays the pseudo code of the local search. \[x_{i}=local\_optimization(v_{i})\]
* Select candidate solutions for the next generation based on their fitness values. Solutions with lower fitness values are favored. \[\mbox{\it Population}=select\_new\_population(population)\]
* Monitor the convergence of the optimization process. In this implementation, the algorithm stops when either of the following conditions is met: \(\circ\) It reaches the maximum number of generations specified by max_generations. \(\circ\) Fitness values have stagnated for stagnation_limit consecutive generations. \(\circ\) These criteria help prevent the algorithm from running indefinitely and provide a mechanism for early stopping if convergence is achieved.
Table 6 displays the pseudo code of early stopping criteria if convergence is achieved.
\begin{table}
\begin{tabular}{|l|} \hline For i from 0 to population\_size - 1: \\ individual = population[i] \# Get the current individual \\ \# adaptive mutation rate and crossover rate \\ F = adaptive\_mutation\_rate(generation, max\_generations) \\ CR = adaptive\_crossover\_rate(generation, max\_generations) \\ neighbors = list(range(population\_size)) \\ \# Randomly select two distinct neighbors \\ neighbor1 = population[select\_random\_element(neighbors)] \\ neighbor2 = population [select\_random\_element(neighbors, exclude=neighbor1)] \\ \# trial solution using DE mutation strategy \\ trial\_solution = individual + F * (neighbor1 - individual) + F * (neighbor2 - individual) \\ \# fitness of the trial solution \\ trial\_fitness = objective\_function(trial\_solution) \\ \# Update the population with the trial solution if it’s better \\ if trial\_fitness \(<\) fitness[i]: \\ population[i] = trial\_solution \\ fitness[i] = trial\_fitness \\ End For \\ \hline \end{tabular}
\end{table}
Table 4: Pseudo code Mutation strategy & crossover operation: F is a scaling factor for mutation. It is dynamically adjusted based on the generation progress using the adaptive_mutation_rate function.
\begin{table}
\begin{tabular}{|l|} \hline Function local\ search(solution): \\ result = minimize (objective\_function, solution, method=’L-BFGS-B’, bounds=bounds) \\ \# Return the optimized solution \\ Return result.x \\ End Function \\ \hline \end{tabular}
\end{table}
Table 5: Pseudo code for local search
* The optimal solution to the problem is generally regarded as being the best solution discovered during the optimization process.
Directly expressing the distribution as a simple analytical formula is often infeasible due to the complexity of real-world optimization landscapes. Instead, the invariant distribution of solutions is determined empirically through the optimization process itself. The algorithm explores the solution space by iteratively generating and evaluating candidate solutions, adapting its parameters, and gradually converging towards regions of interest in the search space.
Fig. 2 displays a visual representation of where the algorithm found solutions in the search space defined by the x and y axes. Areas with a higher frequency of solutions (elevated regions in the 3D plot) indicate that the algorithm frequently finds solutions in those areas. This suggests that those regions contain multiple local optima. The peak that stands out from the rest suggests the location of the global optimum. Multiple peaks in the plot indicate the presence of local optima, and the algorithm has found solutions in these local optima. Table 7 displays the pseudocode of the algorithm for non-convex sinusoidal function.
\begin{table}
\begin{tabular}{|l|} \hline Function has \_converged(best\_fitness\_history, stagnation\_limit): \\ \# Check if the length of best \_fitness\_history is less than the stagnation\_limit \\ If length(best \_fitness\_history) \textless stagnation\_limit Then \\ Return False \# Convergence not reached yet \\ End If \\ \# Check if the last ’stagnation\_limit’ elements in best\_fitness\_history are all the same \\ If all(best\_fitness\_history[-stagnation\_limit:] \(==\) best\_fitness\_history[-1]) Then \\ Return True \# Convergence has been reached \\ Else \\ Return False \# Convergence not reached yet \\ End If \\ End Function \\ \hline \end{tabular}
\end{table}
Table 6: Pseudo code for convergence threshold
\begin{table}
\begin{tabular}{|l|} \hline Function sinusoidal\_function(x): \\ \(\text{Return sin(x[0])}+\text{cos(x[1])}\) \\ End Function \\ \hline \end{tabular}
\end{table}
Table 7: Pseudo code Non-Convex Objective Function (Sinusoidal Function)
The ADEDS algorithm is configured with a population size of 50, a maximum of 100 generations, and a search space restricted to the range \([-10,10]\) in both the x and y dimensions. The algorithm tries to find the optimal solution within these constraints. A compact, runnable sketch of this configuration is given below.
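The following is a minimal sketch of the loop described above under these settings. The adaptive F/CR schedules, the DE/rand/1 mutation, the binomial crossover, and the L-BFGS-B local search follow the text; the initial rates (0.8 and 0.9) are assumptions, since the paper does not state them.

```python
import numpy as np
from scipy.optimize import minimize

def sinusoidal(x):
    # Non-convex objective from Table 7: sin(x0) + cos(x1); global minimum is -2.
    return np.sin(x[0]) + np.cos(x[1])

def adeds(objective, bounds, pop_size=50, max_generations=100, f0=0.8, cr0=0.9, seed=0):
    rng = np.random.default_rng(seed)
    low, high = np.array(bounds).T
    pop = rng.uniform(low, high, size=(pop_size, len(bounds)))
    fit = np.array([objective(p) for p in pop])
    for gen in range(max_generations):
        F = f0 * (1.0 - gen / max_generations)          # adaptive mutation rate
        CR = cr0 * (gen / max_generations)              # adaptive crossover rate
        for i in range(pop_size):
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])       # DE/rand/1 mutation
            mask = rng.random(len(bounds)) < CR         # binomial crossover
            mask[rng.integers(len(bounds))] = True
            u = np.clip(np.where(mask, v, pop[i]), low, high)
            u = minimize(objective, u, method="L-BFGS-B",
                         bounds=list(zip(low, high))).x  # local search refinement
            fu = objective(u)
            if fu < fit[i]:                              # keep the better of trial and parent
                pop[i], fit[i] = u, fu
    best = np.argmin(fit)
    return pop[best], fit[best]

best_x, best_f = adeds(sinusoidal, bounds=[(-10, 10), (-10, 10)])
print(best_x, best_f)   # best_f should approach the global minimum of -2
```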
Fig. 3 displays the convergence plot, where we see that the fitness value decreases as the number of generations increases. This indicates that the optimization algorithm is converging towards a better solution over time. The decreasing trend suggests that the algorithm is effective in finding improved solutions at each generation.
Figure 3: ADEDS Convergence plot: Sinusoidal objective function
Figure 2: Solution Distribution and Density
Fig. 4 displays the diversity and how the diversity metric evolves over generations in optimization. It starts with high diversity, indicating a wide range of solutions, decreasing as generations progress. Eventually, it reaches a stable level, indicating a limited set of solutions.
Fig. 5 displays the convergence rate, where we see the improvement in the algorithm, with negative values indicating decreasing fitness values and larger values indicating faster convergence. As the process progresses, significant improvements become less frequent.
These three plots collectively illustrate the optimization process's dynamics and efficiency. The convergence plot demonstrates that the algorithm is consistently finding better solutions over time. The Diversity Plot shows how the population's variety decreases as the algorithm converges to a narrower set of solutions. The convergence rate plot quantifies the speed of improvement, with rapid convergence at the early stages of optimization. Table 8 displays the optimal solution and convergence rate for the 10-D problem.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Best solution (10 dimensions)** & **Best fitness** & **Final convergence rate** \\ \hline
[36.14, -204.24, 104.18, -39.79, -18.74, 65.49, -35.66, -15.51, 61.75, -96.77] & -1.9992 & -0.0715 \\ \hline \end{tabular}
\end{table}
Table 8: Optimal solution and convergence rate
Figure 4: Diversity: ADEDS with Sinusoidal objective function
Figure 5: Convergence Rate
The algorithm finds a low fitness value (close to -2.0) for the best solution, indicating a near-optimal or optimal solution. The final convergence rate was close to zero, indicating a stabilized search. The best fitness value is close to the global optimum, indicating successful performance.
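As a quick sanity check on these numbers, the global optimum of the sinusoidal objective in Table 7 can be computed in closed form, since the function depends only on the first two coordinates:
\[\min_{x}\bigl(\sin x_{0}+\cos x_{1}\bigr)=(-1)+(-1)=-2,\qquad x_{0}\equiv\tfrac{3\pi}{2},\ x_{1}\equiv\pi\ (\operatorname{mod}2\pi).\]
Up to rounding, the best solution in Table 8 is consistent with this: \(36.14\approx 3\pi/2\ (\operatorname{mod}2\pi)\) and \(-204.24\approx\pi\ (\operatorname{mod}2\pi)\), giving a fitness of \(-1.9992\); the remaining eight coordinates do not affect the objective.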
The algorithm that converges to a lower fitness value faster is considered more efficient. Fig. 6 displays that the ADEDS line is lower than the DE line at a particular generation, which means that the ADEDS algorithm found a better solution (lower fitness) at that generation when applied to the convex objective function. However, ADEDS's aim is to find the global minimum efficiently in more complex and non-convex landscapes. Fig. 7 displays the convergence of the sinusoidal objective function.
Table 9 displays a comparative study between the two.
Fig. 6: Comparison with traditional DE (sphere function)
Fig. 7: Convergence behavior with sinusoidal objective function
\[f(x_{0}^{*}+x_{1}^{*})\leq\min_{global}f(x_{0}+x_{1})\]
Where \(\min_{global}f(x_{0}+x_{1})\) represents the true global minima. When we apply the ADEDS algorithm to this problem, it efficiently explores the search space, adaptively adjusting its parameters like mutation scale factor (F) and crossover rate (CR) to guide the search.
To assess the algorithm's performance and applicability for various optimization challenges, we tested it against a variety of benchmark functions. Appendix 1 displays the success rate for the various test functions with the number of iterations.
## 3 Benchmark functions
Benchmark functions are crucial for evaluating and developing optimization methods, providing a controlled environment for assessing algorithm performance. They cover various optimization challenges, such as convex and non-convex landscapes, single and multiple optima, and multimodality levels. Test functions are classified based on surface shape, such as bowl-shaped, plate-shaped, and multiple local minima, representing increasing difficulty in optimizing test functions.
### Many local optima
The gradient-based optimization technique struggles to find the minima of the functions with multiple local optima. This can be solved using heuristic search-based optimization methods. These algorithms begin with several starting points and end with solutions.
#### 3.1.1 Ackley function
Ackley function (Ackley, 1987) is a multimodal and non-convex function with multiple local minima and a global minimum at \(f(x)=0\), which occurs when \(x\) is the zero vector (Back, 1996).
Figure 8: DE Optimization Results for Convex and Non-Convex Problems
\[f_{ackley}(x)=-a\ exp\left(-b\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_{i}^{2}}\right)- exp\left(\frac{1}{n}\sum_{i=1}^{n}cos(2\pi x_{i})\right)+a+exp(1)\]
Where, n is the dimension of vector x, \(\sum_{i=1}^{n}x_{i}^{2}\) represents the sum of the squares of all components of the vector x, \(\sum_{i=1}^{n}cos(2\pi x_{i})\) represents the sum of the cosine values of each component of x after scaling by \(2\pi\). The recommended variable values are a = 20, b = 0.2 and c = 2\(\pi\). Here, \(a\ exp\left(-b\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_{i}^{2}}\right)\) depends on the Euclidean norm (root mean square) of x. It decreases exponentially as the norm increases. \(exp\left(\frac{1}{n}\sum_{i=1}^{n}cos(2\pi x_{i})\right)\) is the sum of the cosine values of each component. It oscillates between -1 and 1, contributing to the function's oscillatory behavior. The constants and exp (1) are added to shift the function to a minimum value of 0 at the global minimum. Here global minima \(f_{ackley}(0,0)=\ 0.00\).
#### 3.1.2 **Bukin function N.6**
The region around Bukin functions' minimal points resembles a fractal (with subtle seesaw edges). This characteristic makes them extremely challenging to optimize using any global (or local) optimization technique.
\[f_{bukin}(x)=100\sqrt{\left|x_{2}-0.01x_{1}^{2}\right|}+0.01\left|x_{1}+10\right|\]
The function is validated on the rectangle \(x_{1}\in[-15,-5],\ x_{2}\in[-3,3]\). Here the global minimum is \(f_{bukin}(x^{*})=0.00\).
#### 3.1.3 **Rastrigin function**
The Rastrigin function is a classic example of a multi-modal optimization problem (Rastrigin, 1974).
\begin{table}
\begin{tabular}{|c|} \hline function Ackley (x): \\ dimension = length (x) \\ sum\_sq = 0 \\ sum\_cos = 0 \\ for each element in x: \\ sum\_sq = sum\_sq + element ^ 2 \\ sum\_cos = sum\_cos + cos (2 * pi * element) \\ term1 = -20 * exp (-0.2 * sqrt (sum\_sq / dimension)) \\ term2 = -exp (sum\_cos / dimension) \\ result = term1 + term2 + 20 + exp (1) \\ return result \\ \hline \end{tabular}
\end{table}
Table 10: Pseudo code Ackley function
\begin{table}
\begin{tabular}{|c|} \hline function Bukin(x): \\ x, y = x \\ term1 = 100 * sqrt (abs (y - 0.01 * x ^ 2)) \\ term2 = 0.01 * abs (x + 10) \\ result = term1 + term2 \\ return result \\ \hline \end{tabular}
\end{table}
Table 11: Pseudo code Bukin Function
The function is validated on the hypercube \(x_{i}\in[-5.12,5.12]\), for all \(i=1,...,n\). Here the global minimum is \(f_{rastrigin}(0,\ldots,0)=0.00\).
#### 3.1.4 Cross-in-tray function
Cross-in-Tray function has multiple global minima (Jamil & Yang, 2013).
The function validated on:
\[min=\begin{cases}f_{crossintray}(1.35,-1.35)=-2.06262\\ f_{crossintray}(1.35,1.35)=-2.06262\\ f_{crossintray}(-1.35,1.35)=-2.06262\\ f_{crossintray}(-1.35,-1.35)=-2.06262\end{cases},\text{ with search domain }-10\leq x,y\leq 10\]
#### 3.1.5 Levy function
\[f_{levy}(x)=sin^{2}(\pi w_{1})+\sum_{l}^{n-1}(w_{l}-1)^{2}\left[1+10sin^{2}(\pi w _{l}+1)\right]+(w_{n}-1)^{2}[1+sin^{2}(2\pi w_{n})]\]
Where, \(w_{l}=1+\frac{x_{l}-1}{4}\), for all i = \(1,\ldots,\text{n}\).
\begin{table}
\begin{tabular}{|l|} \hline function \(\text{Rastrigin}(x)\): \\ A = 10 \\ sum of squares = 0 \\ for xi in x: \\ sum of squares \(+\)= xi ’\(\sim\)2 - A * cos (2 * pi * xi) \\ result = A\({}^{\frac{n}{2}}\) length(x) + sum of square \\ return result \\ \hline \end{tabular}
\end{table}
Table 12: Pseudo code for Rastrigin function
\begin{table}
\begin{tabular}{|l|} \hline function \(\text{CrossInTray}(x)\): \\ x, y = x \\ a = abs (100 - sqrt (x*2 + y*2) / pi) \\ b = abs(sin(x) * sin(y) * exp(a)) + 1 \\ result = - 0.0001 * (abs (sin (x) * sin (y) * exp (a)) + 1) \({}^{\wedge}\) 0.1 \\ return result \\ \hline \end{tabular}
\end{table}
Table 13: Pseudo code for Cross-in-Tray function
The function validated on the hypercube \(x_{i}\in[-10,10],\ for\ all\ i=1,...,n.\) Here global minima \(f_{levy}(1,1)=0.00.\)
#### 3.1.6 Egg-holder function
The topography of the non-convex Egg-Holder function is misleading, and it is an incredibly difficult function to optimize (Whitley et al., 1996).
\[f_{eggholder}(x)=-(x_{2}+47)\sin\left(\sqrt{\left|x_{2}+\frac{x_{1}}{2}+47\right|}\right)-x_{1}\sin\left(\sqrt{\left|x_{1}-(x_{2}+47)\right|}\right)\]
This function is validated on the square \(x_{i}\in[-512,512]\), for all \(i=1,2\). Here the global minimum is \(f_{eggholder}(512,404.2319)=-959.6407\).
#### 3.1.7 Schaffer function N. 2
\[f_{schaffer}(x)=0.5+\frac{\sin^{2}(x_{1}^{2}-x_{2}^{2})-0.5}{\left[1+0.001(x_{1}^{2}+x_{2}^{2})\right]^{2}}\]
This function validated on the square \(x_{i}\in[-100,100],\ for\ all\ i=1,2.\) Here global minima \(f_{schaffer}(x^{*})=0.00.\)
\begin{table}
\begin{tabular}{|l|} \hline function Schaffer(x): \\ x, y = x \\ numerator = sin (x*2 - y*2) - 0.5 \\ denominator = (1 + 0.001 * (x*2 + y*2)) *2 \\ result = 0.5 + (numerator / denominator) \\ return result \\ \hline \end{tabular}
\end{table}
Table 16: Pseudo code Schaffer function
\begin{table}
\begin{tabular}{|l|} \hline function Lévy(x): \\ x, y = x \\ term1 = sin (3 * pi * x) *2 \\ term2 = (x - 1) *2 * (1 + sin (3 * pi * y) *2) \\ term3 = (y - 1) *2 * (1 + sin (2 * pi * y) *2) \\ result = term1 + term2 + term3 \\ return result \\ \hline \end{tabular}
\end{table}
Table 14: Pseudo code for Levy function
\begin{table}
\begin{tabular}{|l|} \hline function EggHolder(x): \\ x, y = x \\ a = sqrt (fabs (y + x/2 + 47)) \\ b = sqrt (fabs (x - (y + 47))) \\ result = - (y + 47) * sin (a) - x * sin (b) \\ return result \\ \hline \end{tabular}
\end{table}
Table 15: Pseudo code for Egg-Holder function
#### 3.1.8 Schwefel function
The Schwefel function is complex, with many local minima (Schwefel, 1981).
\[f_{schwefel}(x,y)=418.9829\cdot 2-x\sin\left(\sqrt{|x|}\right)-y\sin\left(\sqrt{|y|}\right)\]
This function validated on the hypercube \(x_{i}\in[-500,500],for\ all\ i=1,...,n\). Here global minima \(f_{schwefel}(x^{*})=0.00\).
#### 3.1.9 Shubert function
This function has several local minima and many global minima.
\[f_{shubert}(x)=\left(\sum\nolimits_{i=1}^{5}i\ cos((i+1)x_{1}+i)\right)\left( \sum\nolimits_{i=1}^{5}i\ cos((i+1)x_{2}+i)\right)\]
This is validated on the square \(x_{i}\in[-10,10],for\ all\ i=1,2\). Here global minima \(f_{shubert}(x^{*})=-186.7309\).
#### 3.1.10 Drop-Wave function
This is multimodal and highly complex function.
\[f_{dropwave}(x)=-\frac{1+cos\left(12\sqrt{x_{1}^{2}+x_{2}^{2}}\right)}{0.5(x_{1 }^{2}+x_{2}^{2})+2}\]
It is validated on the square \(x_{i}\in[-5.12,5.12],for\ all\ i=1,2\). Here global minima \(f_{dropwave}(x^{*})=-1\).
\begin{table}
\begin{tabular}{|l|} \hline function Schwefel(x): \\ x, y = x [0], x [1] \\ result = 418.9829 * 2 - (x * sin (sqrt (abs (x)))) - (y * sin (sqrt (abs (y)))) \\ return result \\ \hline \end{tabular}
\end{table}
Table 17: Pseudo code Schwefel function
\begin{table}
\begin{tabular}{|l|} \hline function DropWave(x): \\ x, y = x \\ numerator = - (1 + cos (12 * sqrt (x*2 + y*2))) \\ denominator = 0.5 * (x*2 + y*2) + 2 \\ result = numerator / denominator \\ return result \\ \hline \end{tabular}
\end{table}
Table 18: Pseudo code for Drop-Wave function
\begin{table}
\begin{tabular}{|l|} \hline function Shubert(x): \\ x, y = x \\ sum1 = 0 \\ sum2 = 0 \\ for i in range (1, 6): \\ sum1 += i * cos ((i + 1) * x + i) \\ sum2 += i * cos ((i + 1) * y + i) \\ result = sum1 * sum2 \\ return result \\ \hline \end{tabular}
\end{table}
Table 19: Pseudo code for Shubert function
#### 3.1.11 Himmelblau's function
It is a multi-modal function (Himmelblau, 2018).
\[f_{himmelblau}(x,y)=(x^{2}+y-11)^{2}+(x+y^{2}-7)^{2}\]
\[min=\begin{cases}f_{himmelblau}(3.00,2.00)=0\\ f_{himmelblau}(-2.80,3.13)=0\\ f_{himmelblau}(-3.78,-3.28)=0\\ f_{himmelblau}(3.58,-1.84)=0\end{cases},\text{ with search domain }-5\leq x,y\leq 5\]
### ADEDS with many local optima
Many local optima in optimization problems make it difficult for algorithms to locate the global optimum, which is the best feasible solution to the problem. Table 21 reports ADEDS on the functions with many local optima. All the tests are performed using two dimensions, a population size of 50, and 10 iterations. We have run the optimization algorithms 10 times for each function (num_runs), which determines how many independent optimization runs are conducted to compare the performance of DE and ADEDS. Each run initializes the algorithm independently and runs it for 50 iterations (max_generations), which determines how many iterations each algorithm performs before stopping.
Significant differences in mean fitness values and low p-values suggest that ADEDS outperforms DE on several benchmark functions, except the Shubert and Himmelblau functions. In all the test functions, ADEDS achieved the global minima with just 10 runs, which is promising. Grid-based parameter setting can be employed here to optimize the performance of the algorithms for specific problems. A short sketch of the statistical comparison behind the t-statistics and p-values in Table 21 is given below.
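The t-statistics and p-values in Table 21 compare the per-run best fitness of DE and ADEDS over the 10 runs. A minimal sketch of such a comparison is shown below, with hypothetical run results standing in for the actual experiment; the paper does not state whether equal variances were assumed, so Welch's test is used here.

```python
import numpy as np
from scipy import stats

# Hypothetical per-run best-fitness values for one benchmark (10 independent runs each);
# the real values come from running DE and ADEDS as described in the text.
de_fitness = np.array([8.9, 4.1, 12.3, 6.7, 9.5, 3.2, 11.8, 7.4, 10.1, 9.9])
adeds_fitness = np.zeros(10)

# Welch's two-sample t-test (no equal-variance assumption) on the run results.
t_stat, p_value = stats.ttest_ind(de_fitness, adeds_fitness, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```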
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**SI** & \multicolumn{2}{c|}{**Benchmark function**} & \multicolumn{1}{c|}{**Mean fitness**} & \multicolumn{1}{c|}{**Std dev**} & \multicolumn{1}{c|}{**t-stats**} & \multicolumn{1}{c|}{**p-value**} \\ \hline
1 & Rastrigin & DE & 8.429 & 3.733 & -6.773 & 0.000\({}^{***}\) \\ \cline{2-6} & & ADEDS & 0.000 & 0.000 & & \\ \hline
2 & Ackley & DE & 11.460 & 3.913 & -8.784 & 0.000\({}^{***}\) \\ \cline{2-6} & & ADEDS & 0.000 & 0.000 & -8.784 & 0.000\({}^{***}\) \\ \hline
3 & Cross in tray & DE & -1.935 & 0.065 & -3.905 & 0.001\({}^{***}\) \\ \cline{2-6} & & ADEDS & -2.062 & 0.000 & & \\ \hline
4 & Egg holder & DE & -1446.82 & 372.62 & 3.922 & 0.000\({}^{***}\) \\ \cline{2-6} & & ADEDS & -959.640 & 0.000 & & \\ \hline
5 & Drop-wave & DE & -0.974 & 0.031 & -2.449 & 0.024\({}^{**}\) \\ \cline{2-6} & & ADEDS & -1.000 & 0.000 & & \\ \hline
6 & Levy & DE & 4.379 & 3.365 & -3.904 & 0.001\({}^{***}\) \\ \cline{2-6} & & ADEDS & 0.000 & 0.000 & & \\ \hline \end{tabular}
\end{table}
Table 21: ADEDS with many local optima benchmark test report
\begin{table}
\begin{tabular}{|c|c|c|} \hline function Himmelblau(x): & x, y = x \\ term1 = (x’2 + y - 11) \textasci{2} term2 = (x + y’2 - 7) \textasci{2} result = term1 + term2 \\ return result \\ \hline \end{tabular}
\end{table}
Table 20: Pseudo code Himmelblau
### Plate-Shaped
Shape optimization is a well-established part of the calculus of variations. It is an extension of optimal control theory, with the minimizing parameter being simply the domain in which the problem is described. There are frequently several equal or nearly identical solutions for plate-shaped functions. Since the objective function values of these solutions are so similar, it is challenging for optimization algorithms to discriminate between them and choose the optimal one.
#### 3.3.1 Booth Function
\[f_{booth}(x)=(x_{1}+2x_{2}-7)^{2}+(2x_{1}+x_{2}-5)^{2}\]
This is validated on the square \(x_{i}\in[-10,10]\) for all \(i=1,2\). The global minimum is \(f_{booth}(1,3)=0\).
#### 3.3.2 Matyas Function
This test function has no local minima.
\[f_{matyas}(x)=0.26(x_{1}^{2}+x_{2}^{2})-0.48x_{1}x_{2}\]
It is validated on the square \(x_{i}\in[-10,10]\) for all \(i=1,2\). The global minimum is \(f_{matyas}(0,0)=0\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{7} & \multirow{2}{*}{Schwefel} & DE & -0.0732 & 0.223 & \multirow{2}{*}{2.930} & \multirow{2}{*}{0.008\({}^{***}\)} \\ \cline{3-3} \cline{5-6} & & ADEDS & -0.995 & 0.000 & & \\ \hline \multirow{2}{*}{8} & \multirow{2}{*}{Schaffer} & DE & -0.073 & 0.223 & \multirow{2}{*}{-12.380} & \multirow{2}{*}{0.000\({}^{***}\)} \\ \cline{3-3} \cline{5-6} & & ADEDS & -0.995 & 0.000 & & \\ \hline \multirow{2}{*}{9} & \multirow{2}{*}{Bukin} & DE & 25.450 & 5.211 & \multirow{2}{*}{-14.651} & \multirow{2}{*}{0.000\({}^{***}\)} \\ \cline{3-3} \cline{5-6} & & ADEDS & 0.000 & 0.000 & & \\ \hline \multirow{2}{*}{10} & \multirow{2}{*}{Shubert} & DE & -3292.422 & 624.517 & \multirow{2}{*}{-3.044} & \multirow{2}{*}{0.010\({}^{***}\)} \\ \cline{3-3} \cline{5-6} & & ADEDS & -3840.000 & 0.000 & & \\ \hline \multirow{2}{*}{11} & \multirow{2}{*}{Himmelblau} & DE & 0.000 & 0.000 & \multirow{2}{*}{2.627} & \multirow{2}{*}{0.017\({}^{***}\)} \\ \cline{3-3} \cline{5-6} & & ADEDS & 0.000 & 0.000 & & \\ \hline \end{tabular}
\end{table}
Table 21 (continued): ADEDS with many local optima benchmark test report
\begin{table}
\begin{tabular}{|c|} \hline function Booth(x): \\ x, y = x \\ term1 = (x + 2 * y - 7) ** 2 \\ term2 = (2 * x + y - 5) ** 2 \\ result = term1 + term2 \\ return result \\ \hline \end{tabular}
\end{table}
Table 22: Pseudo code for Booth function
#### 3.3.3 McCormick Function
\[f_{mccormick}(x)=\sin(x_{1}+x_{2})+(x_{1}-x_{2})^{2}-1.5x_{1}+2.5x_{2}+1\]
This is validated on the rectangle \(x_{1}\in[-1.5,4]\), \(x_{2}\in[-3,4]\). The global minimum is \(f_{mccormick}(-0.54719,-1.54719)=-1.9133\).
### ADEDS with plate-shaped functions
Table 25 presents the performance report, which clearly shows the superiority of ADEDS over DE in all categories. Here too, all the tests are performed using the same configurations. We see that ADEDS achieves the global optimum in all three tests with just 10 runs.
### Valley-Shaped
Long, narrow valleys with several local optima are common in valley-shaped functions. Optimization algorithms may struggle to avoid these valleys, resulting in convergence to suboptimal solutions.
#### 3.3.1 Three-Hump Camel Function
\[f_{threehumpcamel}(x)=2x_{1}^{2}-1.05x_{1}^{4}+\frac{x_{1}^{6}}{6}+x_{1}x_{2}+x_{2}^{2}\]
\begin{table}
\begin{tabular}{|l|} \hline function McCormick(x): \\ x, y = x \\ term1 = sin (x + y) \\ term2 = (x - y) ** 2 \\ term3 = -1.5 * x \\ term4 = 2.5 * y \\ result = term1 + term2 + term3 + term4 + 1 \\ return result \\ \hline \end{tabular}
\end{table}
Table 24: Pseudo code McCormick function
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**SI** & \multicolumn{2}{c|}{**Benchmark function**} & \multicolumn{2}{c|}{**Mean fitness**} & \multicolumn{2}{c|}{**Std dev**} & \multicolumn{1}{c|}{**t-stats**} & \multicolumn{1}{c|}{**p-value**} \\ \hline
1 & McCormick & DE & -1.913 & 2.230 & \multirow{2}{*}{4.113} & \multirow{2}{*}{0.000\({}^{***}\)} \\ \cline{3-3} \cline{5-7} & & ADEDS & -4.972 & 0.000 & & \\ \hline
2 & Matyas & DE & 0.585 & 0.757 & \multirow{2}{*}{-2.318} & \multirow{2}{*}{0.004\({}^{***}\)} \\ \cline{3-3} \cline{5-7} & & ADEDS & 0.000 & 0.000 & & \\ \hline
3 & Booth & DE & 10.417 & 0.000 & \multirow{2}{*}{-6.309} & \multirow{2}{*}{0.000\({}^{***}\)} \\ \cline{3-3} \cline{5-7} & & ADEDS & 0.000 & 0.000 & & \\ \hline \end{tabular}
\end{table}
Table 25: ADEDS with plate shaped benchmark test report
This is validated on the square \(x_{i}\in[-5,5]\) for all \(i=1,2\). The global minimum is \(f_{threehumpcamel}(0,0)=0\).
#### Six-Hump Camel Function
\[f_{sixhumpcamel}(x)=\left(4-2.1x_{1}^{2}+\frac{x_{1}^{4}}{3}\right)x_{1}^{2}+x_{ 1}x_{2}+(-4+4x_{2}^{2})x_{2}^{2}\]
This is validated on the rectangle \(x_{1}\in[-3,3]\), \(x_{2}\in[-2,2]\). The global minimum is \(f_{sixhumpcamel}(x^{*})=-1.0316\), attained at \(x^{*}=(\pm 0.0898,\mp 0.7126)\).
#### Rosenbrock Function
The global minimum of the unimodal function is in a small, parabolic valley (Rosenbrock, 1960). Convergence to the minimum is challenging, even though this valley is simple to locate (Picheny et al., 2012).
\[f_{rosenbrock}(x)=\sum\nolimits_{i=1}^{n-1}[100(x_{i+1}-x_{i}^{2})^{2}+(x_{i}-1)^{2}]\]
It is validated on the hypercube \(x_{i}\in[-5,10]\) for all \(i=1,\ldots,n\), although it may be restricted to the hypercube \(x_{i}\in[-2.048,2.048]\) for all \(i=1,\ldots,n\). Here the global minimum is \(f_{rosenbrock}(x^{*})=0\).
\begin{table}
\begin{tabular}{|l|} \hline function ThreeHumpCamel(x): \\ x, y = x \\ term1 = 2 * x ** 2 \\ term2 = -1.05 * x ** 4 \\ term3 = (x ** 6) / 6 \\ term4 = x * y \\ term5 = y ** 2 \\ result = term1 + term2 + term3 + term4 + term5 \\ return result \\ \hline \end{tabular}
\end{table}
Table 26: Pseudo code for Three-Hump Camel function
\begin{table}
\begin{tabular}{|l|} \hline function SixHumpCamel(x): \\ x, y = x \\ term1 = 4 - 2.1 * x ** 2 + (x ** 4) / 3 \\ term2 = x ** 2 \\ term3 = x * y \\ term4 = -4 + 4 * y ** 2 \\ term5 = y ** 2 \\ result = term1 * term2 + term3 + term4 * term5 \\ return result \\ \hline \end{tabular}
\end{table}
Table 27: Pseudo code (Six-hump Camel)
\begin{table}
\begin{tabular}{|l|} \hline function Rosenbrock(x): \\ x, y = x \\ term1 = 100 * (y - x ** 2) ** 2 \\ term2 = (x - 1) ** 2 \\ result = term1 + term2 \\ return result \\ \hline \end{tabular}
\end{table}
Table 28: Pseudo code Rosenbrock function
\[min=\begin{cases}n=2\to f_{rosenbrock}(1,1)=0\\ n=3\to f_{rosenbrock}(1,1,1)=0\\ n>3\to f_{rosenbrock}(1,\ldots,1)=0\end{cases}\]
#### 3.3.4 Dixon-Price function
\[f_{dixonprice}(x)=(x_{1}-1)^{2}+\sum\nolimits_{i=2}^{n}[i(2x_{i}^{2}-x_{i-1})^{2}]\]
It is validated on the hypercube \(x_{i}\in[-10,10]\) for all \(i=1,\ldots,n\). Here the global minimum is \(f_{dixonprice}(x^{*})=0\).
Table 30 displays the report, with clear superiority of ADEDS over DE in all categories. The Rosenbrock function is unimodal; its minimum is tucked away in a flat-bottomed, banana-shaped valley. Here, the algorithm required more than 100 runs to succeed.
### Other
#### 3.4.1 Beale Function
This is multimodal, with sharp peaks at the corners of the input domain.
\[f_{beale}(x)=(1.5-x_{1}+x_{1}x_{2})^{2}+(2.25-x_{1}+x_{1}x_{2}^{2})^{2}+(2.625-x_{1}+x_{1}x_{2}^{3})^{2}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Sl** & \multicolumn{2}{|c|}{**Benchmark function**} & \multicolumn{2}{|c|}{**Mean fitness**} & \multicolumn{2}{|c|}{**Std dev**} & \multicolumn{1}{|c|}{**t-stats**} & \multicolumn{1}{|c|}{**p-value**} \\ \hline
1 & Rosenbrock & DE & 50.138 & 53.356 & \multirow{2}{*}{-2.819} & \multirow{2}{*}{0.011***} \\ \cline{3-3} \cline{5-6} & & ADEDS & 0.000 & 0.000 & & \\ \hline
2 & Three-hump camel & DE & 0.784 & 0.481 & \multirow{2}{*}{-4.883} & \multirow{2}{*}{0.000***} \\ \cline{3-3} \cline{5-6} & ADEDS & 0.000 & 0.001 & & \\ \hline
3 & Six-hump camel & DE & -0.759 & 0.167 & \multirow{2}{*}{-4.865} & \multirow{2}{*}{0.000} \\ \cline{3-3} \cline{5-6} & ADEDS & -1.031 & 0.000 & & \\ \hline
3 & Dixon price & DE & 65.246 & 50.986 & \multirow{2}{*}{-3.839} & \multirow{2}{*}{0.001***} \\ \cline{3-3} \cline{5-6} & ADEDS & 0.000 & 0.000 & & \\ \hline \end{tabular}
\end{table}
Table 30: ADEDS with valley-shaped benchmark test report
\begin{table}
\begin{tabular}{|c|} \hline function DixonPrice(x): \\ x, y = x \\ term1 = (x - 1) ** 2 \\ term2 = 2 * (2 * y ** 2 - x) ** 2 \\ return term1 + term2 \\ \hline \end{tabular}
\end{table}
Table 29: Pseudo code Dixon-Price function
It is validated on the square \(x_{i}\in[-4.5,4.5]\) for all \(i=1,2\). The global minimum is \(f_{beale}(3,0.5)=0\).
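As a complement to the formula above, a brief Python sketch (ours, mirroring the pseudo-code style of the other benchmark tables) evaluates the Beale function:

```
def beale(x):
    # Beale function as given above; global minimum 0 at (3, 0.5).
    x1, x2 = x
    term1 = (1.5 - x1 + x1 * x2) ** 2
    term2 = (2.25 - x1 + x1 * x2 ** 2) ** 2
    term3 = (2.625 - x1 + x1 * x2 ** 3) ** 2
    return term1 + term2 + term3

print(beale((3.0, 0.5)))  # ~0.0
```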
#### 3.4.2 Goldstein-Price Function
This function has several local minima.
\[f_{goldsteinprice}(x)=[1+(x_{1}+x_{2}+1)^{2}(19-14x_{1}+3x_{1}^{2}-14x_{2}+6x_{1}x_{2}+3x_{2}^{2})]\,[30+(2x_{1}-3x_{2})^{2}(18-32x_{1}+12x_{1}^{2}+48x_{2}-36x_{1}x_{2}+27x_{2}^{2})]\]
It is validated on the square \(x_{i}\in[-2,2]\) for all \(i=1,2\). The global minimum is \(f_{goldsteinprice}(0,-1)=3\).
#### 3.4.3 Forrester function
\[f_{forrester}(x)=(6x-2)^{2}sin(12x-4)\]
It is evaluated on \(x\ \in\ [0,1]\)(Forrester et al., 2008)
#### 3.4.4 DeVilliersGlasser02
Gavana (2016) and subsequently Layeb (2022) found that DeVilliersGlasser02 is harder to solve than other benchmark functions.
\[f_{devilliersglasser02}(x)=\sum\nolimits_{i=1}^{24}\left[x_{1}\,x_{2}^{t_{i}}\tanh\left(x_{3}t_{i}+\sin(x_{4}t_{i})\right)\cos\left(t_{i}e^{x_{5}}\right)-y_{i}\right]^{2}\]
Here, \(t_{i}=0.1(i-1)\) and \(y_{i}=53.81(1.27^{t_{i}})\tanh(3.012t_{i}+\sin(2.13t_{i}))\cos(e^{0.507}t_{i})\).
\begin{table}
\begin{tabular}{|l|} \hline function Goldstein-Price(x): \\ term1 = (1 + (x[0] + x[1] + 1) ** 2 * (19 - 14 * x[0] + 3 * x[0] ** 2 - 14 * x[1] + 6 * x[0] * x[1] + 3 * x[1] ** 2)) \\ term2 = (30 + (2 * x[0] - 3 * x[1]) ** 2 * (18 - 32 * x[0] + 12 * x[0] ** 2 + 48 * x[1] - 36 * x[0] * x[1] + 27 * x[1] ** 2)) \\ return term1 * term2 \\ \hline \end{tabular}
\end{table}
Table 31: Pseudo code for Goldstein-Price function
\begin{table}
\begin{tabular}{|l|} \hline function Forrester(x): \\ term1 = ((6 * x[0] - 2) ** 2) * sin (12 * x[0] - 4) \\ term2 = ((6 * x[1] - 2) ** 2) * sin (12 * x[1] - 4) \\ return term1 + term2 \\ \hline \end{tabular}
\end{table}
Table 33: Pseudo Code Forrester function
The function is evaluated on \(x_{i}\in[1,60]\) for \(i=1,\ldots,n\). Here the global optimum is \(f_{devilliersglasser02}(x^{*})=0\).
Table 35 presents the report. Here too, ADEDS outperforms DE in most of the categories, except for Beale, where the global minimum is achieved by both DE and ADEDS but the difference is not statistically significant. This suggests that the observed variations in performance between the two algorithms (in this case, ADEDS and DE) could be attributable to random variability or chance rather than to a systematic and meaningful difference.
### Practical implications
To this end, the all-inclusive benchmarking tests on a diverse set of benchmark functions showcase ADEDS's versatility and effectiveness. ADEDS consistently outperforms traditional DE in terms of convergence speed and solution quality across a range of optimization challenges, including functions with many local optima and plate-shaped, valley-shaped, stretched-shaped, and noisy functions. For most of the vital test problems, such as Ackley, Matyas, Booth, Goldstein-Price, Beale, Bukin, Levy, McCormick, Six-Hump Camel, and Three-Hump Camel, the algorithm performed exceedingly well. This gives us confidence that ADEDS is appropriate for supply chain analytics and capable of addressing the complexities and challenges of supply chain management. Its adaptability, performance, and ability to handle various optimization landscapes make it a relevant and promising approach for improving supply chain operations and decision-making. ADEDS can be used to address these challenges by finding solutions that optimize various supply chain parameters, such as inventory levels, production schedules, routing plans, and supplier selections.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**S1** & \multicolumn{2}{|c|}{**Benchmark function**} & \multicolumn{1}{c|}{**Mean fitness**} & \multicolumn{1}{c|}{**Std dev**} & \multicolumn{1}{c|}{**t-stats**} & \multicolumn{1}{c|}{**p-value**} \\ \hline
1 & Beale & DE & 0.000 & 0.000 & 1.000 & 0.330 \\ \cline{3-6} & & ADEDS & 0.000 & 0.000 & & \\ \hline
2 & Goldstein price & DE & 44.868 & 38.521 & -3.260 & 0.000\({}^{**}\) \\ \cline{3-6} & & ADEDS & 2.990 & 0.000 & & \\ \hline
3 & Forrester & DE & -44.183 & 17.378 & 5.548 & 0.017\({}^{***}\) \\ \cline{3-6} & & ADEDS & -12.041 & 0.000 & & \\ \hline
4 & DeVilliersGlasser02 & DE & 143.644 & 201.111 & -6.576 & 0.000\({}^{***}\) \\ \cline{3-6} & & ADEDS & 47.938 & 142.900 & -6.576 & 0.000\({}^{***}\) \\ \hline \end{tabular}
\end{table}
Table 35: ADEDS with other benchmark test report
\begin{table}
\begin{tabular}{|c|} \hline function DeVilliersGlasser02(x): \\ if length(x) != 5: \\ raise ValueError ("Input vector x must have 5 dimensions.") \\ t = [0.1 * (i - 1) for i in range (1, 25)] \\ y = [53.81 * (1.27 ** ti) * tanh (3.012 * ti + sin (2.13 * ti)) * cos (exp (0.507) * ti) for ti in t] \\ result = 0.0 \\ for i in range (24): \\ term = x[0] * (x[1] ** t[i]) * tanh (x[2] * t[i] + sin (x[3] * t[i])) * cos (t[i] * exp (x[4])) - y[i] \\ result += term ** 2 \\ return result \\ \hline \end{tabular}
\end{table}
Table 34: Pseudo code DevilliersGlasser02
While ADEDS was designed and evaluated in the context of single-objective optimization, it can potentially be adapted to handle multi-objective optimization problems by incorporating suitable mechanisms or modifications to address multiple conflicting objectives simultaneously.
## Conclusion
The Adaptive Differential Evolution with Diversification Strategies (ADEDS) algorithm is a population-based method for solving complex real-parameter single-objective optimization problems. ADEDS is shown to excel in handling non-convex optimization problems, where traditional DE struggles to escape local optima. Visualizations and analysis demonstrated its capability to explore and adapt to complex landscapes. ADEDS dynamically modifies critical parameters based on the optimization process conditions and includes diversification measures to avoid early convergence and encourage solution space exploration. The study analyzed ADEDS performance across various benchmark functions, where the algorithm achieved good success rates in various settings, making it a promising tool for real-world optimization challenges. This study contributes to the growing body of information on optimization algorithms and their application in real-world problem solving.
|
2302.06529
|
Unleashing the Power of Electrocardiograms: A novel approach for Patient
Identification in Healthcare Systems with ECG Signals
|
Over the course of the past two decades, a substantial body of research has
substantiated the viability of utilising cardiac signals as a biometric
modality. This paper presents a novel approach for patient identification in
healthcare systems using electrocardiogram signals. A convolutional neural
network is used to classify users based on images extracted from ECG signals.
The proposed identification system is evaluated in multiple databases,
providing a comprehensive understanding of its potential in real-world
scenarios. The impact of Cardiovascular Diseases on generic user identification
has been largely overlooked in previous studies. The presented method takes
into account the cardiovascular condition of the patients, ensuring that the
results obtained are not biased or limited. Furthermore, the results obtained
are consistent and reliable, with lower error rates and higher accuracy
metrics, as demonstrated through extensive experimentation. All these features
make the proposed method a valuable contribution to the field of patient
identification in healthcare systems, and make it a strong contender for
practical applications.
|
Caterina Fuster-Barceló, Carmen Cámara, Pedro Peris-López
|
2023-02-13T17:14:55Z
|
http://arxiv.org/abs/2302.06529v2
|
# Unleashing the Power of Electrocardiograms: A novel approach for Patient Identification in Healthcare Systems with ECG Signals
###### Abstract
Over the course of the past two decades, a substantial body of research has substantiated the viability of utilising cardiac signals as a biometric modality. This paper presents a novel approach for patient identification in healthcare systems using electrocardiogram signals. A convolutional neural network is used to classify users based on images extracted from ECG signals. The proposed identification system is evaluated in multiple databases, providing a comprehensive understanding of its potential in real-world scenarios. The impact of Cardiovascular Diseases on generic user identification has been largely overlooked in previous studies. The presented method takes into account the cardiovascular condition of the patients, ensuring that the results obtained are not biased or limited. Furthermore, the results obtained are consistent and reliable, with lower error rates and higher accuracy metrics, as demonstrated through extensive experimentation. All these features make the proposed method a valuable contribution to the field of patient identification in healthcare systems, and make it a strong contender for practical applications.
Identification, patient, electrocardiogram, health, artificial intelligence.
## I Introduction
Identifying patients is a crucial aspect of providing quality healthcare. In the event of critically ill, elderly, or disabled patients who require frequent medical treatments, rapid and easy identification is vital [1]. Patient misidentification is one of the leading causes of medical errors and medical malpractice in hospitals and has been recognised as a serious risk to patient safety [2]. Therefore, it can be a significant challenge in hospitals, healthcare systems, and heritage sites. Patient identification problems, such as the use of multiple names and identities or the lack of identification documents when patients are non-residents, have been reported [3, 4]. This can result in errors such as the administration of the wrong medication to the wrong patient, incorrect diagnosis, inappropriate treatment, delays, and cancellation of operations, among others. For example, a 2016 study classified 7,600 out of 10,915 events as wrong-patient events, involving issues related to patient identification, such as patient misidentification and duplicate records [5]. In a survey performed by Patient Now Organisation in 2022, it is stated that, on average, organisations report spending 109.6 hours per week resolving patient identity issues and spend $1.3M annually on patient resolution [6]. In addition, patient identification can cause harm to patients, as well as economic costs to the health system, with up to 10-20% errors from patient misidentification resulting in harm to patients [7]. There are concerns related to data privacy and security, as patient information must be protected against unauthorised access and breaches [8]. It has been observed that during the global COVID-19 pandemic, there has been an increase in the incidence of ransomware attacks on healthcare facilities. A study conducted by ProofPrint in 2021 surveyed more than 600 healthcare facilities and revealed that following a ransomware attack, there was an increase in mortality rates at a quarter of these facilities. A notable example of this phenomenon can be seen in the case of a hospital in Dusseldorf, Germany, which was forced to close its emergency department due to a ransomware attack in 2020, resulting in the death of a patient who had to be rerouted to another hospital. The impact of ransomware attacks on healthcare informatic systems has been shown to result in an increase in complications from medical procedures, delays in procedures, and an increased mortality rate [9, 10].
Wristbands, a type of identification band worn on the wrist, are utilised in healthcare facilities to identify patients and ensure that proper treatment and care are received. Typically, wristbands include the patient's name, identification number, and other pertinent information, such as allergies, medical conditions, and medications. A major issue with wristbands is that the information contained in each wristband is not standardised and is often entered manually, which can lead to misidentification of patients [11, 12]. The loss of wristbands is also a concern. In a study conducted in 712 hospitals in the USA, 2,463,727 wristbands were examined, and 2.7% (67,289) of them had errors, with 49.5% of these errors due to the absence of wristbands [13]. A similar study was also conducted in 204 small hospitals, where 451,436 identification wristbands were examined and 28,800 (5.7%) had errors, with the majority (64.4%) of these errors being related to the absence of wristbands [14]. As stated above, losing a wristband can lead to misidentification of the patient and incorrect prescriptions for medications. Despite these limitations, wristbands are still used in the healthcare system.
To avoid these problems, some hospitals and heritage sites may have protocols in place to ensure that wristbands are properly secured and replaced if lost. They may also use other forms of identification, such as biometric tokens, Radio Frequency Identification (RFID) tags or smart cards, to supplement or replace wristbands [2]. User identification tech
nologies are advantageous because, despite other identification systems, biometric data is more difficult to "steal, exchange or forget" [4].
Biometric methods such as fingerprint recognition [15], facial recognition [16], palm recognition [17], [18], Electrocardiogram (ECG) key generation [19], iris recognition [20] and Electroencephalogram (EEG) identification [21] can be used to supplement or replace wristbands for patient identification in healthcare [4]. These methods use unique physical characteristics of the patient to identify them and can be integrated into electronic medical record systems to improve patient identification accuracy and reduce the risk of errors in medical treatment. Additionally, smart cards with chips embedded with patient's personal information can also be used to replace wristbands to reduce the risk of lost or stolen wristbands [22]. However, the implementation of these methods also raises concerns around data privacy and security.
There exist some limitations to the methods currently employed for patient identification in healthcare systems and facilities. As shown in Table I, a comparison is provided between these existing biometric methods applied to patient identification, with respect to various aspects of each method. Regarding fingerprint recognition, which has been extensively researched for patient identification, as demonstrated in [23], [24], there are numerous limitations related to its inclusion. Adermatoglyphia, which is defined as the congenital or acquired loss of the epidermal ridges of the fingertips, commonly known as fingerprints, is one such limitation. There are various causes of adermatoglyphia, such as Kindler syndrome, chronic hand eczema, peeling skin syndrome, and others [25]. Additionally, research conducted in [26] presented a case report showing that capecitabine, a fluoropyrimidine used as a treatment for tumours, has occasionally been reported to cause adermatoglyphia as a secondary effect upon its use. Specifically, it describes the case of a woman who, after suffering from left breast cancer, metastasis, and chemotherapy, had to take capecitabine. After two years of capecitabine intake, she reported adermatoglyphia. Therefore, it is concluded that the fingerprint is not inclusive enough to be used as an identification method for patients in healthcare systems and facilities.
Similar limitations exist when voice recognition is used as a biometric trait for patient identification. It is not as inclusive as other biometric traits, such as DNA or a facial scan, as there are individuals who are non-verbal or have a communication disorder, which can impede the effectiveness of voice recognition. Additionally, the individual must be awake and aware for the voice recognition to be successful, whereas, with other biometric traits, such as a facial scan, identification can be achieved even when the individual is unconscious. For example, in the case of a retina scan, the patient doesn't need to be awake, but it may not be as user-friendly as other biometric tests, as the eye must be manually opened to perform the test. Additionally, even if the patient is awake, the test may be uncomfortable to perform.
Regarding the reliability of biometric systems, facial recognition issues may arise when applied to patients in a healthcare facility. Factors such as illumination, pose variation, expression changes, wearing a face mask, or even facial paralysis following a stroke may impede the ability to conduct a facial scan [27], [28].
Additional challenges are present with other biometric systems. For example, using DNA testing, which is one of the most inclusive biological traits, for user identification in a patient, requires a minimum of 24 hours in a hospital if laboratories are not busy. Therefore, it may not be feasible to identify the patient immediately.
In an effort to overcome the limitations of current methods for patient identification in healthcare systems, this work proposes a novel approach using ECG signals. The authors propose refining and implementing the method outlined in [29] for use in healthcare systems and facilities. This innovative approach is applied to various databases, comprising a diverse range of users, including those with various Cardiovascular Disease (CVD) and those participating in various activities. The identification method involves segmenting the ECG signal into windows of peaks (heartbeats), aligning each peak (beat) to construct a matrix, and transforming the matrix into a heatmap (resulting in an image called Elektrokardiomatrix (EKM) seen in Fig. 1), which serves as the image input for the patient identification system. A convolutional neural network is then utilised to classify patients based on these images constructed from ECG signals. The conversion of the ECG signal to a heatmap, referred to as the EKM, is introduced for the purpose of patient identification for the first time. The application of this methodology for the identification, classification, or diagnosis of CVD has been previously documented in the literature (cited in references [30, 31, 32, 33, 34]). The proposed identification system is extensively tested on multiple databases, providing a comprehensive understanding of its potential in real-world scenarios. This study also demonstrates the potential for this method to be implemented in actual healthcare settings, providing a reliable and efficient solution for patient identification.
#### Contributions
1. Presenting a new method for identifying patients within healthcare systems utilizing Electrocardiogram (ECG) signals.
2. Refining the approach presented in [29] to facilitate its integration into healthcare systems and facilities.
Fig. 1: Elektrokardiomatrix (EKM), image used to perform patient identification
3. Extensive testing of the proposed patient identification system on multiple databases, providing a comprehensive understanding of its potential in real-world scenarios (Normal Sinus Rhythm Database (NSRDB), MIT-BIH Arrhythmia Database (MIT-BIHDB), Physikalisch-Technische Bundesanstalt (PTBDB), and Glasgow University Database (GUDB)).
4. Exemplifying the feasibility of implementing this approach in real-world healthcare settings, thus offering a reliable and efficient solution for patient identification.
## II Materials and Methods
### _Data_
The experiment described in this article was conducted using four different databases. As outlined in Table II, the databases were selected based on the attributes of their users. The purpose of the experimentation was to focus on different aspects of different subjects, to study the behaviour of the model from various perspectives. To this end, a database was chosen as it featured control users without significant CVD (NSRDB), and another database was chosen as it includes patients with various CVD (PTBDB). In addition, two databases were chosen as they comprise healthy and non-healthy patients (PTBDB and MIT-BIHDB), as well as a database of healthy users in various scenarios (GUDB). Through this approach, an evaluation of the model's performance was conducted under conditions that simulate an application for healthcare systems and facilities.
The first database used in this study is NSRDB, which is publicly available on Physionet [35]. The database comprises 18 healthy subjects (without significant arrhythmias), and records were obtained in the Arrhythmia Laboratory at Beth Israel Hospital in Boston. One record per user is present.
The second database selected is widely used MIT-BIHDB, which is also publicly available on Physionet [36]. This database consists of 48 recordings from 47 different subjects. From the pool of 47 subjects, there are 23 control users and 24 patients who are considered to have significant arrhythmias. In particular, this database combines healthy individuals and patients with CVD, mimicking real-world situations.
The PTBDB is another publicly available database from Physionet, which comprises 549 records from 290 subjects [37]. This database is notable for its inclusion of both patients with various cardiac conditions and healthy subjects. Specifically, the original database contains 52 healthy control users and the remaining individuals with different CVD. This database is an unbalanced dataset, with some users having more than one Elektrokardiogramm (EKG) recorded. Therefore, only one EKG per user was considered in all experiments. The first experiment with this database will examine the entire dataset, while the second experiment will focus on specific CVD without including healthy subjects to demonstrate the performance of the presented approach on users with these cardiac conditions. The CVD being studied in these experiments are bundle branch block, cardiomyopathy, dysrhythmia, myocardial infarction, myocarditis, and valve heart disease. As such, two approaches will be taken concerning this database: processing the entire dataset and processing only the subjects with the aforementioned CVD.
Inclusion of the GUDB database among those studied was deemed necessary. This database, which is available through a request to Glasgow University [38], possesses a unique attribute in that all 25 users were recorded in five different scenarios: sitting, completing a maths test on a tablet, walking on a treadmill, running on a treadmill, and using a hand bicycle. As a result, each user will have five ECG recordings. All participants in this database are considered healthy subjects, with no significant CVD or health issues. Through this database, an investigation of the identification of patients with varying heartbeat rhythms is proposed.
### _Patient identification_
The methodology developed in [29] (sketched in Figure 3) has been implemented for patient identification. This methodology aligns the R-peaks of the ECG to form a matrix called an EKM, which is then plotted as a heatmap. A Convolutional Neural Network (CNN) architecture, which is both simple and effective, is utilised for patient identification and has been found to offer high accuracy and low error rates.
#### Ii-B1 Creation of the EKM
The conversion of an EKG into a heatmap involves several steps, as outlined in Figure 3, further described in [29] and available on Github at [https://github.com/cfusterbarcelo/ELEKTRA-approach](https://github.com/cfusterbarcelo/ELEKTRA-approach).
The process of creating the EKM dataset is presented in Algorithm 1. The initial step in performing patient identification or classification with EKGs is to obtain the EKG recordings from the selected databases. This is achieved by reading the .hea and .dat files in which the ECG is contained.
Once the EKG has been read, certain parameters must be initialized. The sampling frequency (sf) is specific to each database and is provided by the database. For example, the sf for the NSRDB is 128 and for the MIT-BIHDB it is 360.
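As an illustration of this step, the sketch below (ours, not the released ELEKTRA code) loads a PhysioNet record with the wfdb package and detects candidate R peaks; the record name is a placeholder, and scipy.signal.find_peaks is used here only as a lightweight stand-in for the Pan-Tompkins detector employed in the paper.

```
import numpy as np
import wfdb
from scipy.signal import find_peaks

# Placeholder record name; the .hea/.dat pair is read together by wfdb.
record = wfdb.rdrecord("nsrdb/16265")
ecg = record.p_signal[:, 0]          # first channel of the ECG
sf = record.fs                       # sampling frequency of this database

# Crude R-peak detection: peaks at least ~0.4 s apart and above a simple
# amplitude threshold (a stand-in for Pan-Tompkins, which the paper uses).
r_peaks, _ = find_peaks(ecg, distance=int(0.4 * sf),
                        height=np.mean(ecg) + 2 * np.std(ecg))
print(f"sf = {sf} Hz, detected {len(r_peaks)} R peaks")
```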
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Biometric System & **User Friendly** & **Reliability** & **Inclusivity** & **Availability** & **No Awareness** \\ \hline
**Fingerprint R.** & ✓ & ✗ & ✗ & ✗ & ✓ \\
**DNA Test** & ✗ & ✓ & ✓ & ✓ & ✓ \\
**Facial R.** & ✓ & ✗ & ✓ & ✗ & ✓ \\
**Retina Scan** & ✗ & ✓ & ✓ & ✗ & ✗ \\
**Voice R.** & ✓ & ✗ & ✗ & ✗ & ✗ \\
**Electrocardiograms** & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table} TABLE I: Biometric systems for patient identification in healthcare systems and facilities: limitations and advantages.
The initial window is also initialized at this stage and will be updated after each EKM is obtained. This is necessary in order to move the beginning of the window by the number of _bpf_ that have been chosen as a hyperparameter (\(bpf=(3,5,7)\)). The next step is to define these hyperparameters, including the _bpf_. Resulting images depending on the beats per frame (bpf) are shown in Figure 2.
```
Define path to .hea and .dat files
Initialise: Initial window = 0, sf
Define hyperparameters: bpf, alpha_i, alpha_e, train percentage, #EKM total
R peaks list, filtered EKG <- PanTompkins(unfiltered EKG, sf)
detrend EKG <- Detrend(filtered EKG)
norm EKG <- Normalise(detrend EKG)
mu = mean peak distance(R peaks list)
while (train and test EKMs != #EKM total) do
    Create EKM matrix(mu, R peaks list, norm EKG, Initial window, alpha_i, alpha_e)
    Standardise EKM to (-1, 1)
    Generate EKM image
    Save EKM in train or test
    Initial window += bpf
end while
```
**Algorithm 1** Creation of an EKM database
The values for \(\alpha_{i}\) and \(\alpha_{e}\) are the percentages of the signal that are taken before and after each peak of a window to construct each segment of the EKM. These values can be defined by the user, given certain thresholds (\(\alpha_{i},\alpha_{e}<100\%\))1. When dealing with an extensive database such as the NSRDB, the total number of EKMs must be specified, and the percentages of the training and testing subsets are also determined.
Footnote 1: For this approach, \(\alpha_{i}=0.2\) and \(\alpha_{e}=0.3\)
Two lists are necessary for constructing each EKM: i) a clean and filtered EKG signal and ii) a list of the R peaks of this EKG signal. These lists are obtained through the Pan and Tompkins algorithm [39]. Thus, the unfiltered EKG signal and the sampling frequency are provided to this algorithm, with the signal then being detrended. The mean distance between R peaks for each signal must also be calculated by using the list of R peaks obtained earlier.
With this information, the creation of all EKM from the chosen database can proceed. The process for creating a single EKM is detailed in Algorithm 2. This involves creating a segment, based on the values of \(\alpha\) and \(\mu\), for each peak within the window, concatenating and aligning these segments, and repeating this process for the entire window.
```
Require: mu, R peaks list, norm EKG, Initial window, alpha_i, alpha_e
for each peak (p_x) of the current window do
    segment = norm EKG[p_x - alpha_i*mu : p_x + alpha_e*mu]
    All segments <- Append(segment)
    EKM = Concatenate(All segments, 1)
end for
return EKM
```
**Algorithm 2** Creation of **one** EKM
Proceeding to Algorithm 1 following the creation of the EKM, standardization of the EKM is undertaken. Subsequently, an image of the EKM is generated by plotting it in the form of a heatmap and being saved. The process is repeated for subsequent windows until all the signal or the desired number of EKMs are obtained. The process of creating EKMs is repeated for each user in the databases discussed in Section II-A. For example, for the NSRDB dataset, a specific number of images per user, such as 3000, are obtained by repeating the process. However, for the GUDB, PTBDB, and MIT-BIHDB datasets, since the EKGs for each user are shorter, the entire signal is used to extract as many EKMs as possible. Therefore, the number of EKMs may vary among datasets, users, and bpfs.
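A minimal NumPy sketch of this construction is given below; the function name build_ekm, the beat-per-row stacking, and the colormap are our assumptions for illustration rather than the authors' implementation.

```
import numpy as np
import matplotlib.pyplot as plt

def build_ekm(norm_ekg, r_peaks, start, bpf=3, alpha_i=0.2, alpha_e=0.3):
    """Build one EKM from `bpf` consecutive beats, following Algorithms 1-2."""
    mu = int(np.mean(np.diff(r_peaks)))          # mean R-R distance (samples)
    window = r_peaks[start:start + bpf]          # the bpf peaks of this frame
    segments = [norm_ekg[p - int(alpha_i * mu): p + int(alpha_e * mu)]
                for p in window]
    ekm = np.vstack(segments)                    # one aligned beat per row
    # Standardise to [-1, 1] before plotting, as in Algorithm 1.
    ekm = 2 * (ekm - ekm.min()) / (ekm.max() - ekm.min()) - 1
    return ekm

# norm_ekg and r_peaks come from the Pan-Tompkins / detrending steps above.
# plt.imsave("ekm_0000.png", build_ekm(norm_ekg, r_peaks, start=0), cmap="jet")
```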
Lastly, the division of each constructed dataset into training and testing sets is performed. Based on the number of images obtained from each database, a division of 80% for training and 20% for testing (as is the case for the NSRDB ) or 90% for training and 10% for testing (as is the case for the other datasets) is established. Additionally, the training set is divided into a training and validation set, as is common practice, in order to cross-validate all parameters of the network.
#### Ii-B2 Convolutional Neural Network for patient identification
The identification process is then carried out using the obtained EKM datasets. The architecture of the proposed CNN is detailed in Figure 4 and the layers of the networks with the
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \cline{2-9} \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{**Subject Information**} & \multicolumn{1}{c|}{**Number of**} & \multicolumn{1}{c|}{**Sampling**} \\ \cline{2-9} \multicolumn{1}{c|}{} & **Number** & **Male** & **Women** & **Ages** & **Pathologies** & **records** & **frequency (Hz)** \\ \hline
**NSRDB** & 18 & 5 & 13 & 20-50 & Healthy Subjects & 18 & 1000 \\ \hline
**MITDB** & 47 & 26 & 22 & 23-89 &
\begin{tabular}{c} 23 random subjects \\ and 24 patients with \\ significant arrhythmias \\ \end{tabular} & 48 & 360 \\ \hline
**PTBDB** & 290 & 209 & 81 & 17-87 &
\begin{tabular}{c} Different Cardiac \\ Conditions \\ \end{tabular} & 549 & 1000 \\ \hline
**GUDB** & 25 & 9 & 15 & 18-X & Healthy Subjects & 100 & 250 \\ \hline \end{tabular}
\end{table} TABLE II: Database information and characteristics.
number of parameters are detailed in Table III. A preprocessing step is performed as the first step by the network, where input images are cropped to reduce the number of parameters and eliminate irrelevant information. The cropped images are fed into a Convolutional Neural Network (CNN), and the second layer of the CNN is structured with a 2D Convolution, Rectified Linear Unit (Relu) activation, Max Pooling, and Dropout operations. The third layer, the fully connected layer (FCL), is the classification layer, which aims to group the features into the number of classes to be identified. The FCL consists of Flatten, Dense, and Softmax activation operations. The Adam optimizer is used for optimization, and the network is trained in batches with different numbers of epochs and steps per epoch based on the chosen database, with some experiments requiring up to 300 epochs.
The CNN trains the model by optimising a Categorical Cross-Entropy cost function. The number of epochs for the training process may vary for each experiment, depending on the model's response to the training. The batch size used for all experiments may also be subject to change depending on the size of the dataset.
It is noteworthy that the primary goal of this study is to refine, enhance, execute, and assess the methodology presented in [29] for practical application, specifically in the area of patient identification. Using the databases incorporated in the experiments, it is demonstrated that patient identification can be achieved among a diverse range of patients and situations using a straightforward yet efficacious CNN. The study highlights the ability to use a basic architecture to classify and identify patients with exceptional performance. The ultimate objective is to investigate the impact of various conditions and scenarios on patient identification and demonstrate the proposed approach's robustness.
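A hedged Keras sketch of the network summarised in Table III is given below; the input image size, cropping amounts, dropout rate, and hidden-layer activation are our assumptions, chosen only so that the layer output shapes match the table, and are not taken from the authors' code.

```
from tensorflow.keras import layers, models

def build_cnn(num_users, input_shape=(33, 43, 3)):
    model = models.Sequential([
        # Crop the EKM image borders down to (21, 33, 3), as in Table III.
        layers.Cropping2D(cropping=((6, 6), (5, 5)), input_shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),   # -> (19, 31, 32)
        layers.MaxPooling2D(pool_size=(2, 2)),          # -> (9, 15, 32)
        layers.Dropout(0.25),
        layers.Flatten(),                               # -> 4320 features
        layers.Dense(256, activation="relu"),
        layers.Dense(num_users, activation="softmax"),  # one class per patient
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn(num_users=18)   # e.g. the 18 subjects of the NSRDB
model.summary()
```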
Fig. 4: CNN Architecture
Fig. 3: Pipeline of the methodology followed in [29] from ECG acquisition to User Classification
\begin{table}
\begin{tabular}{c c c} \hline \hline Layer (type) & Output Shape & Param \# \\ \hline Cropping2D & (None, 21, 33, 3) & 0 \\ Conv2D & (None, 19, 31, 32) & 896 \\ MaxPooling2D & (None, 9, 15, 32) & 0 \\ Dropout & (None, 9, 15, 32) & 0 \\ Flatten & (None, 4320) & 0 \\ Dense & (None, 256) & 1,106,176 \\ Dense & (None, C) & 59,624 \\ \hline \hline \end{tabular} \({}^{\dagger}\)Notation: \(C\) number of users.
\end{table} TABLE III: Model Summary of the CNN
Fig. 2: EKMs fed into the CNN depending on the bpf.
## III Experiments
### _Healthy Subjects_
In this study, the database known as NSRDB was used, as it comprises healthy individuals, as previously specified in Section II-A. This database was chosen as a baseline due to two factors: first, it is the largest database in the possession of the researchers, which allows the extraction of 3000 images per user, resulting in a total of 54000 images per experiment regardless of the number of beats per frame (bpf). Secondly, using healthy participants in this database provides an initial understanding of the model's performance in an ideal scenario.
As can be observed in Table IV, the results of testing the model trained with NSRDB are presented for different epochs. To demonstrate its performance, the model has been trained and evaluated for 3, 5, and 7 bpf. It is noted that, when examining the results by epochs for each of the bpf experiments, there is a trend in which the 150 epochs are the optimal choice. This is due to the fact that training the model with 100 epochs tends to result in underfitting, whereas training with 200 epochs results in overfitting. For instance, when considering the experiment's results with 5 bpf, 99.78% accuracy is obtained with 100 epochs. Then, with slightly more training, an even better result of 99.82% accuracy is achieved with 150 epochs. However, further increasing the number of epochs to 200 results in a slight decrease in accuracy to 99.69%, indicating that the network cannot continue learning features and suffers from overfitting.
It can be observed that the highest result is obtained using 7bpf and 150 epochs, resulting in an accuracy of 99.84%. Furthermore, it is noteworthy that all values for False Acceptance Rate (FAR) and False Rejection Rate (FRR) are low, indicating a high level of performance for the proposed method utilising the ELEKTRokardiomatrix Application to biometric identification with Convolutional Neural Networks (ELEKTRA) approach. It can be noted that the results obtained from this database are highly satisfactory, with near-perfect accuracy (approximately 100%) and low error rates (approximately 0%). In conclusion, it can be stated that the presented method is a viable model capable of achieving low error rates and high accuracy in identifying healthy users, as demonstrated by using the NSRDB database.
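The following tables report accuracy together with FAR and FRR; a common way to derive average per-user FAR and FRR from a closed-set confusion matrix is sketched below. This is an assumed convention for illustration, not necessarily the authors' exact computation.

```
import numpy as np

def far_frr(confusion):
    """Average FAR/FRR over classes from a (true x predicted) confusion matrix."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    fars, frrs = [], []
    for c in range(confusion.shape[0]):
        genuine = confusion[c].sum()               # samples truly from user c
        impostor = total - genuine                 # samples from everyone else
        false_reject = genuine - confusion[c, c]   # user c classified as others
        false_accept = confusion[:, c].sum() - confusion[c, c]
        frrs.append(false_reject / genuine)
        fars.append(false_accept / impostor)
    return 100 * np.mean(fars), 100 * np.mean(frrs)   # in percent

# Toy example with 3 users and 10 test EKMs each:
cm = np.array([[9, 1, 0],
               [0, 10, 0],
               [1, 0, 9]])
print(far_frr(cm))
```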
### _Patients with Arrhythmia and healthy subjects_
The MIT-BIHDB has been selected for analysis to study the proposed method's behaviour when applied to a diverse population of individuals, including those with and without significant arrhythmia. Hence, one CVD is analysed together with healthy subjects in this experiment. This experiment aims to determine whether cardiovascular disease, such as arrhythmia, can negatively impact the identification results when both affected and healthy individuals are included. This examination provides a closer approximation to the real-world scenario in a healthcare facility.
It is noted that the number of images or EKMs available for the MIT-BIHDB is less than that for NSRDB. As the number of bpf increases, the number of images decreases as shown in Table V. For example, the number of images for 3bpf is 35949, while for 5bpf and 7bpf it is 21149 and 15119 respectively, resulting in approximately half the number of images for 7bpf compared to 3bpf.
It should be noted that despite the decrease in the number of images compared to the NSRDB, promising results have been obtained when utilising the MIT-BIHDB. As previously mentioned, it has been observed that there is a slight overfitting of the network occurs when training with 200 epochs for some experiments utilising different numbers of bpf. However, it is also observed that for the 7 bpf case, 200 epochs of training yields the best result for the entire database. Generally, the best results are obtained when using 5 bpf, with the highest accuracy of 97.89% being achieved when using 7 bpf and 200 epochs of training.
It can be inferred from the results obtained from the MIT-BIHDB, where healthy users are mixed with patients with various cardiovascular diseases, that the proposed approach can satisfactorily classify and identify patients regardless of their cardiac health condition. The accuracy obtained, which exceeds 95% and reaches 97.89%, suggests that the method is highly effective. Furthermore, the low error rates represented by the FAR and FRR values indicate that the proposed method can be successfully employed for patient identification in healthcare facilities regardless of the cardiac health condition of each patient.
methodology on a dataset comprising solely of individuals with CVD and comparison with other databases with diverse subjects to continue with the presented application for a healthcare facility.
Two separate experiments were conducted on this database: one including all users and another one including only those with certain types of cardiac disease and excluding healthy subjects. The results of these experiments can be observed in Table VI. The first experiment includes a variety of cardiac diseases, such as Myocarditis and Dysrhythmia, and healthy subjects, providing a mixed population similar to the experiments conducted on the MIT-BIHDB. However, due to the limited size of the EKG recordings in this database, the number of EKMs obtained is smaller, although two hundred and thirty-two users are included in this database.
It can be inferred from the results obtained in the experiment conducted on the PTBDB that, despite the drastic reduction in the number of EKMs and the increase in the number of users (4.8 times more users than the MIT-BIHDB), the proposed method is capable of identifying users with high performance. The accuracy achieved, up to 93.96% for 3bpf, is noteworthy considering the limitations in the number of images and the increase in the number of users. This demonstrates the feasibility and robustness of the proposed method.
Based on the results obtained from the entire processing of PTBDB, it can be concluded that the proposed method ELEKTRA is capable of effectively identifying patients, even in situations where a wide range of cardiovascular disorders are present among the user population. Despite using a database with a smaller number of images and an increased number of users, the method demonstrates promising performance in terms of accuracy. Furthermore, while the FAR values are low, indicating a high performance in correctly identifying legitimate users, the FRR values are not as favourable, indicating that the method may sometimes reject legitimate users. However, considering the primary objective of patient identification in a hospital setting and the need to prevent impersonation, the results can still be considered satisfactory.
The results of the experiments carried out on the segmented PTBDB database, which consists only of users with CVD, are presented in Table VII. It is observed that better results are obtained with this segmentation of the database than with the whole database. This may be due to the increased dissimilarity of the ECG recordings between users and CVD, as previously demonstrated in other studies [40, 41, 42, 43, 44, 45]), making identification easier in this segmented population. It is worth noting that with a relatively small database of 162 users, the proposed approach demonstrates its potential for accurately identifying patients with CVD.
Based on the experiments conducted on the PTBDB, it can be concluded that the proposed approach for patient identification can accurately identify a wide range of individuals with and without various cardiac conditions. Furthermore, the results obtained when only individuals with cardiac conditions were considered to demonstrate the model's enhanced ability to identify this population, potentially due to the inter-subject variability present among individuals with cardiac conditions. These findings showcase the feasibility and robustness of the proposed model when applied to real-world scenarios in healthcare facilities.
### _Subjects performing activities_
In this experiment, the chosen database was used to understand the feasibility and robustness of the proposed approach for patient identification in different scenarios where cardiovascular activity can affect the identification process. In a practical scenario, it may occur that patients exhibit varying heartbeat rates, which does not necessarily impact their identification. For instance, a patient who has recently been involved in an accident or who is walking through a hospital corridor is expected to exhibit a higher heart rate than when they are resting in bed. As such, a study of the impact of cardiovascular activity and differing heartbeat rates on patient identification is deemed necessary. For this purpose, the GUDB is studied, as it encompasses users who are engaging in different activities with varying heartbeat rates.
It is worth noting that the number of EKMs obtained to perform patient identification in this database is relatively low, with 25 healthy users in five different scenarios as described in Section II-A. ECG recordings of each participant were taken while they participated in five different activities. It is recognised that the heart rate and behaviour of the individual would differ in each of these activities. For example, sitting would be considered an activity with a low heart rate, whereas running would be considered a cardiovascular activity with a higher heart rate. Therefore, the main objective of using
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**bpf** & **EKMs** & **Epochs** & **Loss** & \begin{tabular}{c} **Accuracy** \\ (\%) \\ \end{tabular} & \begin{tabular}{c} **FAR** \\ (\%) \\ \end{tabular} &
\begin{tabular}{c} **FRR** \\ (\%) \\ \end{tabular} \\ \hline \multirow{3}{*}{3} & \multirow{3}{*}{9854} & 150 & 0.4454 & 88.55 & 0.05 & 11.45 \\ & & 200 & 0.4214 & 89.80 & 0.05 & 10.20 \\ & & 250 & **0.2831** & **93.96** & **0.03** & **6.04** \\ \hline \multirow{3}{*}{5} & \multirow{3}{*}{5891} & 150 & 0.7463 & 82.85 & 0.08 & 17.15 \\ & & 200 & 0.512 & 87.27 & 0.06 & 12.73 \\ & & 250 & **0.4594** & **87.44** & **0.06** & **12.56** \\ \hline \multirow{3}{*}{7} & \multirow{3}{*}{4180} & 150 & 1.2036 & 71.98 & 0.15 & 28.02 \\ & & 200 & 0.9239 & 79.12 & 0.11 & 20.88 \\ \cline{1-1} & & 250 & **0.5128** & **87.64** & **0.07** & **12.36** \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Results obtained during the experiments carried out with **PTBDB** with 3, 5 and 7bpf.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline
**bpf** & **EKMs** & **Epochs** & **Loss** & \begin{tabular}{c} **Accuracy** \\ (\%) \\ \end{tabular} & \begin{tabular}{c} **FAR** \\ (\%) \\ \end{tabular} &
\begin{tabular}{c} **FRR** \\ (\%) \\ \end{tabular} \\ \hline \multirow{3}{*}{3} & \multirow{3}{*}{7266} & 150 & 0.4445 & 89.61 & 0.07 & 10.39 \\ & & 200 & 0.2703 & 93.91 & 0.04 & 6.09 \\ & & 250 & 0.2302 & 96.40 & 0.02 & 3.60 \\ & & 300 & **0.1920** & **97.09** & **0.02** & **2.91** \\ \hline \multirow{3}{*}{5} & \multirow{3}{*}{4350} & 150 & 0.6338 & 84.13 & 0.10 & 15.87 \\ & & 200 & 0.4225 & 88.91 & 0.01 & 11.09 \\ & & 250 & 0.3125 & 93.04 & 0.05 & 6.96 \\ & & 300 & **0.2124** & **95.00** & **0.03** & **5.00** \\ \hline \multirow{3}{*}{7} & \multirow{3}{*}{3066} & 150 & 1.4065 & 65.10 & 0.25 & 34.9 \\ & & 200 & 0.8560 & 81.21 & 0.13 & 18.79 \\ & & 250 & **0.5645** & **88.26** & **0.09** & **11.74** \\ \cline{1-1} & & 300 & 0.5114 & 86.58 & 0.10 & 13.42 \\ \hline \hline \end{tabular}
\end{table} TABLE VII: Results obtained when testing over the **PTBDB** for patients with CVD.
this database was to analyse the performance of the proposed approach for patient identification in scenarios where the individual's heart rate may vary as in a real application for healthcare systems and facilities.
The experiments carried out in this section have been grouped into three main categories: those in which all activities are separated and images containing 3 bpf are used, those in which the same approach is taken but with 5 bpf, and those in which all activities are combined to train and test the network to simulate better real-life scenarios where a patient may have varying heart rate during the identification process.
Based on the experimentation conducted on the GUDB with 3bpf, as presented in Table VIII, it can be inferred that the proposed approach for patient identification is capable of achieving a high level of accuracy, even with a low number of images. Furthermore, the results obtained per scenario suggest that the identification of users with lower heart rates may be more successful than those with higher heart rates. This observation supports the potential utility of the proposed approach in real-world scenarios where patients may have varying heart rates during identification.
Following the experiments performed on the GUDB with different numbers of bpf, it can be observed that the bpf is a critical parameter that must be carefully considered when dealing with different heart rates. The results obtained in tables VIII and IX suggest that it may be easier to identify users at rest with lower heart rates than when their heart beats faster. This could also explain the results obtained for the other activities studied in the tables. However, it is important to note that the number of images may also play a role in the performance of the network, as less than a thousand images may not be sufficient to properly train and test the network. As a result, the experiments for the GUDB with 7 bpf were not included due to the limited number of images available.
The results obtained in the experimentation of the GUDB when all activities are merged, as presented in Table X, demonstrate the feasibility of the proposed approach in identifying patients with different heartbeat rates. The highest accuracy of 91.32% was achieved in the experiments with 3bpf, 250 epochs, and 8099 images. A comparison with the results obtained in the experiment where the scenarios were separated, as presented in Table VIII, reveals that this new result is better than those obtained in scenarios involving cardiac activity (such as jogging with an 85.82%), but not as high as resting scenarios (such as sitting with a 98.51%).
The results obtained with 5bpf show lower accuracy and error rates, which may be attributed to the reduced number of images used compared to the experiment with 3bpf. This highlights the importance of enroling patients in different situations and cardiac conditions to improve the performance of the proposed approach in identifying patients regardless of their heartbeat rhythm at the time of identification.
The robustness of the presented approach for patient identification in different scenarios and with varying heart rates is demonstrated through experimentation on the GUDB. The results obtained in this experiment indicate the feasibility of identifying patients in different situations, such as those that may occur in a healthcare facility where patients may arrive with elevated heart rates due to accidents or other reasons. Besides, it is crucial to consider the possibility of alterations in the heart rhythm patterns of patients during their hospitalisation as a result of their daily activities. This experimentation brings the proposed approach closer to real-life applications in healthcare systems and facilities.
databases, providing a comprehensive understanding of its potential in real-world scenarios. The study demonstrates the feasibility and robustness of this approach and its ability to overcome the limitations of current patient identification methods in healthcare 2. The results of the experiments indicate that the proposed system has the potential to be a reliable and effective method for use in healthcare facilities, and it is inclusive even when patients have health conditions or impairments. The results of the experiment carried out in this study provide evidence of the effectiveness and robustness of the proposed approach for the identification of patients using ECGs. As outlined in [46], the presented method meets several essential requirements for a patient identification system. The experimentation conducted in Section III-D, where the approach was tested on subjects performing different activities, demonstrates the potential applicability of the proposed method in various scenarios, including emergencies. Furthermore, the proposed solution is highly maintainable, as a fine-tuning of the network would be sufficient to adapt to new situations, cardiac diseases or activities. Additionally, the use of ECGs as a signal for patient identification is already well-established in healthcare systems and facilities, potentially increasing acceptance among both patients and medical professionals, as the signal is already a commonly used diagnostic tool. This makes the approach easy to learn and operate, making it a valuable addition to healthcare systems and facilities. The reliability and effectiveness of the proposed approach for patient identification using ECG signals has been demonstrated through experimentation conducted on both individuals with and without cardiovascular conditions, as evidenced in the examination of the MIT-BIHDB and PTBDB datasets (presented in Sections III-B and III-C). The use of this methodology for the study and analysis of various cardiovascular diseases has been established through previous works (such as those presented in [30, 34, 43]), thereby indicating the possibility of both diagnosing cardiovascular conditions and identifying patients through the use of this approach. Other studies over user or patient identification with ECG signals do not test or study their methodology over users with different CVD. Thus, including people with cardiovascular diseases in the identification process has been shown to be possible in the presented research, making the method inclusive for a diverse range of patients as everyone has a beating heart and CVD does not affect the patient identification. Additionally, the proposed approach does not require the individual's conscious participation, making it suitable for identifying unconscious patients or those with varying heart rates or cardiovascular conditions. Finally, it is noted that biometric systems utilising electrocardiogram (ECG) signals for patient identification possess a significant advantage over other biometric characteristics, as the diagnosis of the patient's cardiovascular health can also be obtained during the identification process.
Footnote 2: A detailed comparison between our proposal and ECG-based identification systems has been included in the Appendix section for thoroughness.
In summary, the results of the experiments carried out in this study indicate that the proposed approach to patient identification using ECGs has the potential to be a reliable and effective method for healthcare facilities and systems. The ECG is a well-established signal in clinical practice, and the proposed approach based on ECG signals is suitable for different scenarios and possesses a high degree of maintainability. These findings suggest that further research and development may be merited to bring the method closer to real-world application in healthcare systems and facilities.
## Data availability
All utilised datasets are either accessible online ([35, 36, 37]) or can be obtained upon request ([38]) to guarantee the replicability of all the experiments.
## Credit authorship contribution statement
Caterina Fuster-Barcelo: Conceptualisation, Methodology, Experimentation, Supervision, Writing - original draft, Writing - review & editing, Validation. Pedro Peris-Lopez: Conceptualisation, Methodology, Supervision, Funding, Writing - review & editing, Validation. Carmen Camara: Conceptualisation, Methodology, Supervision, Writing - review & editing, Validation.
## Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this article.
## Acknowledgement
This work was supported by the Spanish Ministry of Science, Innovation and Universities grants TED2021-131681B-I00 (CIOMET); and by the Comunidad de Madrid (Spain) under the project PUCFA (PUCFA-CM-UC3M).
The authors gratefully acknowledge the computer resources at Artemisa, funded by the European Union ERDF and Comunitat Valenciana as well as the technical support provided by the Instituto de Fisica Corpuscular, IFIC (CSIC-UV).
## Appendix
This section provides a comparative analysis of existing state-of-the-art methods for user identification using ECG signals in Table XI. Notably, none of the solutions in the table has been evaluated for patient identification: their focus is on user classification or identification without consideration of real-life deployment, so patient identification is an objective of the present study alone. The works included in this comparison were selected because they use similar datasets for user identification, which makes the comparison more meaningful.
Our proposal aims to refine and enhance the methodology previously presented in [29] by using a straightforward approach that can be easily applied to patient identification in healthcare settings. The simplicity of the method is of utmost importance. Upon examination of some of the works listed in
Table XI, it can be observed that some of the research attained slightly better results than the present study. However, this is achieved at the cost of complex architectures or methodologies for user classification and feature extraction, as exemplified in [47, 48, 49].
In the work presented in [50], a non-fiducial feature extraction technique based on an AutoRegressive Model (ARM) was employed to perform user identification through ECG. The authors conducted two tests on the same database, the PTBDB, classifying 50 healthy users and 50 users with CVD with accuracies of 98% and 100%, respectively, using one data vector per user. However, error values are not reported, and the computational cost may be affected by the need to perform feature extraction. In contrast, the method proposed in this study offers better generalisation, as demonstrated by testing over hundreds of images for each user, while still achieving similar results on the PTBDB.
Compared to other works, the presented study achieves superior user identification performance, even though those methods may employ more complex architectures or methodologies; the results reported in [51, 52, 53, 54, 55, 48] fall short of the outcomes achieved in this study.
This research also examines several aspects specific to patient identification. One of them is the impact of CVD on the identification of individuals, which is particularly relevant because the proposed method is intended for patient identification in healthcare systems. Most of the existing literature does not consider the cardiovascular health status of users as a factor that may affect the accuracy of the results. This research seeks to fill this gap and provide valuable insight into the influence of cardiovascular diseases on the identification process.
All in all, the distinctive aspect of the proposed approach is its focus on the impact of CVD on user identification, which has been largely overlooked in previous studies. The method considers the cardiovascular condition (CVDs and different heartbeat rates) of the users, ensuring that the results obtained are not biased or limited. This highlights the importance of considering such physiological factors in developing user identification methods and emphasises the superiority of the proposed approach. Furthermore, the results obtained are consistent and reliable, with lower error rates and higher accuracy metrics, as demonstrated through extensive experimentation. All these features make the proposed method a valuable contribution to the field of user identification in healthcare systems and make it a strong contender for practical applications.
|
2310.06738
|
Polarization study of single color centers in aluminum nitride
|
Color centers in wide-bandgap semiconductors are a promising class of
solid-state quantum light source, many of which operate at room temperature. We
examine a family of color centers in aluminum nitride, which emits close to 620
nm. We present a technique to rapidly map an ensemble of these single photon
emitters, identifying all emitters, not just those with absorption dipole
parallel to the laser polarization. We demonstrate a fast technique to
determine their absorption polarization orientation in the c-plane, finding
they are uniformly distributed in orientation, in contrast to many other
emitters in crystalline materials.
|
J. K. Cannon, S. G. Bishop, J. P. Hadden, H. B. Yagci, A. J. Bennett
|
2023-10-10T16:08:10Z
|
http://arxiv.org/abs/2310.06738v1
|
# Polarization Study of Single Color Centers in Aluminum Nitride
###### Abstract
Color centers in wide-bandgap semiconductors are a promising class of solid state quantum light source, many of which operate at room temperature. We examine a family of color centers in aluminum nitride which emit close to 620 nm. We present a technique to rapidly map an ensemble of these single photon emitters, identifying all emitters, not just those with absorption dipole parallel to the laser polarization. We demonstrate a fast technique to determine their absorption polarization orientation in the c-plane, finding they are uniformly distributed in orientation, in contrast to many other emitters in crystalline materials.
Color centers (CCs) are emissive point structures in a crystal consisting of combinations of impurities, vacancies and lattice defects which at high-density can give a crystal color. When a single center is isolated, the photon statistics often demonstrate anti-bunching. CCs with internal energy levels embedded deep within the bandgap of their semiconductor host can emit at room-temperature. The most investigated example is the negatively charged nitrogen vacancy (NV\({}^{-}\)) complex in diamond [1] which shows great promise as a room temperature light source for quantum communications [2], biocompatible quantum-sensor [3], long-lived spin memory [4] and as a platform for tests of fundamental quantum physics [5]. In recent years, CCs have been found in a large range of other wide-band-gap materials including silicon carbide [6] and hexagonal boron nitride [7, 8].
Isolated CCs in the group III-nitrides have been less well investigated, with a small number of works reporting their presence in gallium nitride (GaN) [9, 10, 11] and aluminum nitride (AlN) [12, 13, 14] emitting in the visible and near infrared. The prevalence of these semiconductors in high power electronics and solid state lighting makes this an especially promising platform to investigate, with an established route to cost-effective and commercial-scale fabrication, and epitaxial material available at low cost. AlN is a wurtzite semiconductor with a direct band-gap of 6.2 eV and a high refractive index of 2.14 at 600 nm [15]. Its wide transparency window and second- and third-order optical non-linearities [16] have motivated the development of waveguide-coupled devices [17, 18, 19] including cavity structures [20]. This processing technology could be adapted to integrate CCs in photonic integrated circuits for applications in quantum technologies.
In previous publications, density functional theory has been used to predict emission from several CCs in AlN, namely the anti-site nitrogen vacancy complex (N\({}_{\mathrm{Al}}\)V\({}_{\mathrm{N}}\)) at 712 nm [21], the divacancy (V\({}_{\mathrm{Al}}\)V\({}_{\mathrm{N}}\)) at 867 nm [21] and negative vacancy (V\({}_{\mathrm{N}}^{-}\)) at 443 \(-\) 517 nm [22]. The conditions to form these CCs are not well understood, but one report has shown CCs in GaN are correlated with the density of threading dislocations but are not correlated spatially [23], which may point to a common cause. Other works have pointed to the polarity of the crystal playing a role in the epitaxy of material containing CCs in GaN [24]. However, the small number of experimental reports on isolated CCs in AlN means there is little evidence CCs in GaN and AlN are related, even if they share some optical characteristics and exist in similar semiconductors.
Here we report our studies on isolated CCs in a commercially-sourced AlN-on-sapphire sample that has a convenient density of CCs with zero-phonon lines clustered around 620 nm. To learn more about the physical origin of the AlN CCs we investigate the ensemble distribution of absorption dipoles. We find that, despite the crystalline nature of the film, there is no preferred absorption dipole orientation for the CCs. Detailed study of the temporal, spectral and photo-physical properties of a small number of CCs provides an insight into their internal energy levels and physical structure.
The sample measured in this work is a single crystal, 1 um thick epi-layer of AlN grown via metal-organic chemical vapour deposition on a [0001] plane sapphire substrate. In confocal scan maps, CCs appeared as diffraction limited spots, Figure 1(a). A confocal microscope was used to excite and collect light from the CCs. The sample was excited with a wavelength of \(\lambda_{exc}=532\) nm using a DPSS laser. The excitation polarization was purified with an additional linear polarizer. A 0.9 numerical aperture objective focused the excitation laser onto the sample and collected the resultant fluorescence. The fluorescence between 550 and 650 nm was coupled into a SMF28 fibre and subsequently measured with an SPCM-AQRH silicon Avalanche Photo Diode (APD) from Excelitas. The beam was translated laterally on the sample's surface by a mirror galvanometer and 4f imaging system, whilst the focal depth was controlled by a piezoelectric actuator.
The density of CCs is sufficiently low that they can be individually addressed, as shown in Fig. 1(a). The CC in the middle of the scan has a typical emission spectrum consistent with a zero-phonon line around 620 nm at room temperature (not visible in this example) and a broad phonon side-band extending to 800 nm, shown in Fig. 1(c). The absence of an obvious zero-phonon line is common amongst other CCs in AlN, indicative of a low Debye-Waller factor [12, 13]. We observe the usual indicators of quantised point-like emission, namely resolution limited spot size, saturation of intensity at increasing laser power, and few nanosecond radiative decays in all investigated CCs, as previously reported [12] (data not shown). Firm
proof of quantised electronic states comes in the form of photon statistics displaying anti-bunching under continuous wave excitation with \(g^{(2)}(\tau)<0.5\) at low powers. At low excitation power \(g^{(2)}(0)=0.29\pm 0.04\). The data is fit with the equation:
\[g^{(2)}(\tau)=1-a_{1}e^{-|\tau|/\tau_{1}}+a_{2}e^{-|\tau|/\tau_{2}} \tag{1}\]
with \(\tau_{1}\) and \(\tau_{2}\) representing the anti-bunching and bunching lifetimes respectively. At \(44.3\,\mathrm{\SIUnitSymbolMicro W}\) excitation \(\tau_{1}=8.3\pm 0.8\,\mathrm{ns}\) and \(\tau_{2}=3.7\pm 1.1\,\mathrm{\SIUnitSymbolMicro s}\). At \(1.40\,\mathrm{mW}\) excitation \(\tau_{1}=3.5\pm 0.2\) ns and \(\tau_{2}=198\pm 7\) ns. Under stronger optical excitation, increasingly prominent bunching is observed in Fig. 1(b), suggestive of one or more long-lived shelving states that block photon emission [1].
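As an illustrative sketch (not part of the original analysis), the model of Eq. (1) can be fit to a normalized coincidence histogram with standard least-squares tools; the synthetic data and all numerical values below are placeholders standing in for a measured \(g^{(2)}(\tau)\) trace.

```python
import numpy as np
from scipy.optimize import curve_fit

def g2_model(tau, a1, tau1, a2, tau2):
    # Eq. (1): anti-bunching dip (tau1) plus a long-lived bunching shoulder (tau2)
    return 1.0 - a1 * np.exp(-np.abs(tau) / tau1) + a2 * np.exp(-np.abs(tau) / tau2)

# Synthetic stand-in for a measured, normalized coincidence histogram (delays in ns)
tau = np.linspace(-200.0, 200.0, 801)
rng = np.random.default_rng(0)
data = g2_model(tau, 0.8, 8.0, 0.15, 120.0) + rng.normal(0.0, 0.02, tau.size)

popt, pcov = curve_fit(g2_model, tau, data, p0=[1.0, 5.0, 0.1, 100.0])
perr = np.sqrt(np.diag(pcov))            # 1-sigma parameter uncertainties
print("g2(0) =", g2_model(0.0, *popt))   # < 0.5 is the single-emitter criterion
```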
Previous reports of CCs in c-plane AlN have all reported linear absorption dipoles. Motivated to determine whether these absorption dipoles are aligned to the crystal directions, we have developed an automated procedure to measure the polarizations of a large number of CCs. Conventionally, the in-plane orientation angle at which maximum absorption occurs, \(\theta_{\mathrm{abs}}\), for a given CC is found by continuous rotation of the laser polarization \(\theta\) whilst monitoring the emission intensity. This results in a detected intensity variation \(I(\theta)=a+b\cos^{2}(\theta-\theta_{\mathrm{abs}})\) for a given CC located in some initial scan area, where \(\eta\) denotes the visibility of the variation in \(I(\theta)\). High visibility is consistent with a single linear absorption dipole. Fig. 2(a) illustrates one CC with \(\eta=0.92\). Measurements of 47 CCs are presented in Fig. 2(b), showing that the majority of CCs surveyed have a high degree of polarization, indicating absorption via a single dipole. However, applying this technique to an ensemble of CCs identified in a scan area naturally pre-selects CCs aligned to the laser polarization of the initial scan. A different approach is required to uniformly sample the ensemble.
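A minimal sketch of this conventional polarization scan analysis is given below (illustrative only): it fits \(I(\theta)=a+b\cos^{2}(\theta-\theta_{\mathrm{abs}})\) to synthetic count data and assumes the common visibility definition \(\eta=(I_{\mathrm{max}}-I_{\mathrm{min}})/(I_{\mathrm{max}}+I_{\mathrm{min}})=b/(2a+b)\), which is not stated explicitly in the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def malus(theta_deg, a, b, theta_abs_deg):
    # I(theta) = a + b * cos^2(theta - theta_abs)
    return a + b * np.cos(np.radians(theta_deg - theta_abs_deg)) ** 2

theta = np.arange(0.0, 360.0, 10.0)                 # laser polarization angles (deg)
rng = np.random.default_rng(1)
counts = malus(theta, 500.0, 9000.0, 37.0) + rng.normal(0.0, 150.0, theta.size)

(a, b, theta_abs), _ = curve_fit(malus, theta, counts,
                                 p0=[counts.min(), np.ptp(counts), 0.0])
visibility = b / (2.0 * a + b)                      # (Imax - Imin) / (Imax + Imin)
print(f"theta_abs = {theta_abs % 180:.1f} deg, visibility = {visibility:.2f}")
```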
Fig. 3(a-c) illustrates three scan maps measured over the same area at laser excitation angles \(\theta\) = 0, 60 and 120\({}^{\circ}\), without polarization filtering in the collection path. It is evident that some CCs are preferentially excited at one laser polarization, and entirely suppressed in the other scans. To obtain the full picture of the CC locations, the three scan maps in (a)-(c) are combined with the positive-root of their quadrature sum, shown in (d). The CC density is one CC per \(11\,\mathrm{\SIUnitSymbolMicro m}^{2}\).
It is also possible to build a polarization map for every pixel using the three area maps in Fig. 3(a-c). This is achieved by defining a vector aligned to the direction of the laser polarization for each angle, \(\vec{I_{0}}\), \(\vec{I_{60}}\) and \(\vec{I_{120}}\), with a magnitude equal to the intensity of each pixel in each map. It can be shown that the vector sum of these maps is aligned to the dipole absorption angle at each pixel.
Figure 1: Identifying color centers in aluminum nitride. a) Confocal scan map of an area of the sample, highlighting a representative single color center in the white dashed circle. Scale bar is \(2\,\mathrm{\SIUnitSymbolMicro m}\). b) Autocorrelation measurement performed at pump powers of \(44.3\,\mathrm{\SIUnitSymbolMicro W}\) and \(1.40\,\mathrm{mW}\). c) Room temperature spectrum of the color center under \(532\,\mathrm{nm}\) excitation. The orange region denotes the observation window for photo-physical measurements. The uncorrected spectrum of the CC is given in gray with the spectrum used for the background in black.
Figure 2: a) Polarization resolved photon counting measurement of a CC performed in absorption. b) Histogram of CC visibilities performed in absorption.
\[\vec{D_{\text{abs}}}=\sum\left(\vec{I_{0^{\circ}}}+\vec{I_{60^{\circ}}}+\vec{I_{12 0^{\circ}}}\right) \tag{2}\]
\(\vec{D_{\text{abs}}}\) is parameterised by its magnitude \(r=|\vec{D_{\text{abs}}}|\) and its angle \(\theta_{\text{abs}}\).
We have verified this method is accurate in measuring the polarization in regions of high intensity, such as where CCs are located, by comparison to subsequent laser polarization scans on individual CCs. However, in the area between CCs the low intensity background contains uncorrelated noise in the \(\vec{I_{0^{\circ}}}\), \(\vec{I_{60^{\circ}}}\) and \(\vec{I_{120^{\circ}}}\) maps resulting from counting noise in each map. We therefore weight each point in the angle map, Fig. 3(e), using the corresponding point in the intensity map, Fig. 3(d). The weighting function, \(w(x,y)\), takes the form \(w(x,y)=(I(x,y)-I_{\text{Min}})/(I_{\text{Max}}-I_{\text{Min}})\) where \(I(x,y)\) is the intensity at each location, \(I_{\text{Max}}\) the maximum and \(I_{\text{Min}}\) the minimum on the intensity map in (d). The resulting angle map in (e) shows polarization as a color, with intensity at each pixel weighted by the intensity in (d). White indicates a low intensity. This procedure provides an efficient and automated method of assessing every CC within an ensemble.
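The sketch below (illustrative, not the authors' code) reproduces this pipeline numerically: the quadrature-sum intensity map, a per-pixel absorption angle from the three scans, and the intensity weighting \(w(x,y)\). Because dipole orientations are only defined modulo \(180^{\circ}\), the vector sum of Eq. (2) is realised here with doubled angles, a standard construction that is assumed rather than taken from the text.

```python
import numpy as np

def angle_map(I0, I60, I120):
    """Combine three scans (laser polarization 0, 60, 120 deg) into intensity,
    absorption-angle and weight maps. Dipole angles are 180-deg periodic, so the
    per-pixel vectors are summed at the doubled angles 0, 120 and 240 deg."""
    intensity = np.sqrt(I0**2 + I60**2 + I120**2)          # quadrature-sum map, Fig. 3(d)
    angles2 = np.radians([0.0, 120.0, 240.0])               # doubled polarization angles
    stack = np.stack([I0, I60, I120])
    x = np.tensordot(np.cos(angles2), stack, axes=1)        # resultant vector components
    y = np.tensordot(np.sin(angles2), stack, axes=1)
    theta_abs = 0.5 * np.degrees(np.arctan2(y, x)) % 180.0  # per-pixel absorption angle
    w = (intensity - intensity.min()) / (intensity.max() - intensity.min())
    return intensity, theta_abs, w

# Random maps standing in for the three confocal scans of Fig. 3(a-c)
rng = np.random.default_rng(2)
I0, I60, I120 = (rng.random((64, 64)) for _ in range(3))
intensity, theta_abs, w = angle_map(I0, I60, I120)
# A histogram of theta_abs.ravel() weighted by w.ravel() gives Fig. 3(f)-style statistics.
```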
In our optical system, the birefringence of a dichroic beamsplitter results in a small reduction in the purity of the laser polarization at some angles. The maximum visibility achievable by the microscope is 99.8 %. When the effect of the birefringence was greatest, the visibility was 94.8 %, which has no impact on the conclusions of our study.
With this information we can look at the statistics of CC absorption angles over the 2025 \(\upmu\)m\({}^{2}\) area. A histogram of weighted pixel angle, shown in Fig. 3(f), indicates that the absorption dipole angles of CCs are uniformly distributed. This is in contrast to many other solid state emitters: for instance, the neutral exciton in InAs quantum dots usually has two dipoles along crystal axes in the plane of the sample [25], and the NV\({}^{-}\) in diamond has four possible orientations which can be readily observed in [111] oriented samples [26]. We anticipate that scanning samples featuring CCs with high \(\eta\) and a limited number of dipole orientations would yield distinct peaks at angles corresponding to the crystal axes of the sample.
The lack of any preferred direction in the in-plane component of absorption dipoles in AlN CCs points to a more complex origin than the diamond NV\({}^{-}\). In future, sampling the ensemble distribution of emission dipole orientations could yield further insight into the physical origin of these CCs, or could be applied to other material systems. Identification of all CCs in these samples, not just those aligned to the laser in a single scan, could be useful in correlative microscopy techniques where transmission electron microscopy and cathodoluminescence is compared to PL maps to investigate the link between threading dislocations in AlN and CCs.
Figure 3: a-c) Room temperature confocal scan maps of the sample at excitation polarization angles of 0\({}^{\circ}\), 60\({}^{\circ}\) and 120\({}^{\circ}\) using an excitation wavelength of 532 nm. Scale bar is 5 \(\upmu\)m. d) Quadrature sum intensity map composed of each pixel in Figures 3a-3c. e) Angle map of the area, weighted by the intensity of the pixels. f) Histogram of pixel orientations as a function of angle, relative to the crystal. Noise arising from the low intensity areas between pixels is removed by applying an intensity threshold.
###### Acknowledgements.
We acknowledge financial support provided by EPSRC via Grant No. EP/T017813/1 and the European Union's H2020 Marie Curie ITN project LasIonDef (GA No. 956387). Device processing was carried out in the cleanroom of the ERDF-funded Institute for Compound Semiconductors (ICS) at Cardiff University.
## Data Availability
Data supporting the findings of this study are available in the Cardiff University Research Portal at [http://doi.org/10.17035/d.2023.0248563719](http://doi.org/10.17035/d.2023.0248563719).
|
2307.05826
|
Detection, Instance Segmentation, and Classification for Astronomical
Surveys with Deep Learning (DeepDISC): Detectron2 Implementation and
Demonstration with Hyper Suprime-Cam Data
|
The next generation of wide-field deep astronomical surveys will deliver
unprecedented amounts of images through the 2020s and beyond. As both the
sensitivity and depth of observations increase, more blended sources will be
detected. This reality can lead to measurement biases that contaminate key
astronomical inferences. We implement new deep learning models available
through Facebook AI Research's Detectron2 repository to perform the
simultaneous tasks of object identification, deblending, and classification on
large multi-band coadds from the Hyper Suprime-Cam (HSC). We use existing
detection/deblending codes and classification methods to train a suite of deep
neural networks, including state-of-the-art transformers. Once trained, we find
that transformers outperform traditional convolutional neural networks and are
more robust to different contrast scalings. Transformers are able to detect and
deblend objects closely matching the ground truth, achieving a median bounding
box Intersection over Union of 0.99. Using high quality class labels from the
Hubble Space Telescope, we find that the best-performing networks can classify
galaxies with near 100\% completeness and purity across the whole test sample
and classify stars above 60\% completeness and 80\% purity out to HSC i-band
magnitudes of 25 mag. This framework can be extended to other upcoming deep
surveys such as the Legacy Survey of Space and Time and those with the Roman
Space Telescope to enable fast source detection and measurement. Our code,
\textsc{DeepDISC} is publicly available at
\url{https://github.com/grantmerz/deepdisc}.
|
G. M. Merz, Y. Liu, C. J. Burke, P. D. Aleo, X. Liu, M. C. Kind, V. Kindratenko, Y. Liu
|
2023-07-11T22:16:59Z
|
http://arxiv.org/abs/2307.05826v1
|
Detection, Instance Segmentation, and Classification for Astronomical Surveys with Deep Learning (DeepDISC): Detectron2 Implementation and Demonstration with Hyper Suprime-Cam Data
###### Abstract
The next generation of wide-field deep astronomical surveys will deliver unprecedented amounts of images through the 2020s and beyond. As both the sensitivity and depth of observations increase, more blended sources will be detected. This reality can lead to measurement biases that contaminate key astronomical inferences. We implement new deep learning models available through Facebook AI Research's Detectron2 repository to perform the simultaneous tasks of object identification, deblending, and classification on large multi-band coadds from the Hyper Suprime-Cam (HSC). We use existing detection/deblending codes and classification methods to train a suite of deep neural networks, including state-of-the-art transformers. Once trained, we find that transformers outperform traditional convolutional neural networks and are more robust to different contrast scalings. Transformers are able to detect and deblend objects closely matching the ground truth, achieving a median bounding box Intersection over Union of 0.99. Using high quality class labels from the Hubble Space Telescope, we find that the best-performing networks can classify galaxies with near 100% completeness and purity across the whole test sample and classify stars above 60% completeness and 80% purity out to HSC i-band magnitudes of 25 mag. This framework can be extended to other upcoming deep surveys such as the Legacy Survey of Space and Time and those with the Roman Space Telescope to enable fast source detection and measurement. Our code, DeepDISC, is publicly available at [https://github.com/grantmerz/deepdisc](https://github.com/grantmerz/deepdisc).
keywords: techniques: image processing - methods: data analysis - galaxies: general - Sky Surveys
## 1 Introduction
The rise of machine learning/artificial intelligence has allowed for rapid advancement in many image analysis tasks to the benefit of researchers who wish to work with large sets of imaging data. This active field of study, known as computer vision, has led to developments in many disciplines including medical imaging (Zhou et al., 2021), urban planning (Ibrahim et al., 2020), autonomous systems (Pavek et al., 2022) and more.
Tasks such as image compression, inpainting, object classification and detection, and many others have been extensively studied. Astronomy is no exception, and many methods that utilize deep learning have been applied to simulations and real survey data for tasks such as object detection, star/galaxy classification, photometric redshift estimation, image generation, deblending and more (see Huertas-Company and Lanusse, 2023 for a comprehensive review). Machine learning methods are already becoming instrumental in handling the large volume of data processed every day in survey pipelines (e.g., Bosch et al., 2018; Russeil et al., 2022; Tachibana and Miller, 2018; Malanchev et al., 2021; Mahabal et al., 2019)
The next generation of astronomical surveys such as the upcoming Legacy Survey of Space and Time (LSST; Ivezic et al., 2019) at the Vera C. Rubin Observatory, the Wide-Field Imaging Survey at the Nancy Grace Roman Space Telescope (_Roman_; Spergel et al., 2013), and _Euclid_(Amiaux et al., 2012) will produce unprecedented amounts of imaging data throughout the 2020s and beyond. LSST will provide incredibly deep ground-based observations of the sky, revealing a map of the universe including objects as faint as \(\sim\)25-27 mag at a 5\(\sigma\) detection for 10 year observing runs. Ground-based surveys such as the Hyper Suprime-Cam Subaru Strategic Program (HSC SSP; Aihara et al., 2018) and the Dark Energy Survey (DES; Dark Energy Survey Collaboration et al., 2016) have already mapped large swaths of the sky and produced catalogs of tens of millions of objects, with HSC depths being comparable to LSST. The astronomical research community is now in an era that demands robust and efficient techniques to detect and analyze sources in images.
Current surveys such as HSC already report large fractions of blended (overlapping) objects. For instance, 58% of objects in the shallowest field (Wide) of the HSC survey are blended, i.e., detected in a region of sky above the 5\(\sigma\) threshold (26.2 mag) containing multiple significant peaks in surface brightness. As depths increase, line-of-sight projections and physical mergers cause the overall number of blends to increase. This fraction rises to 66% for the Deep and 74% for the UltraDeep layers, which are comparable to LSST depths (Bosch et al., 2018). If blends are not identified, they will bias results from pipelines that assume object isolation. For example, Boucaud et al. (2020) show that the traditional detection/deblending methods can lead to a photometric error of >0.75 mag for \(\sim\)12% of their sample of artificially blended galaxies from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy survey (CANDELS Grogin et al., 2011; Koekemoer et al., 2011). Unrecognized blends can cause an increase in the noise of galaxy shear measurements by \(\sim\)14% for deep observations (Dawson et al., 2016). Deblending, or source separation, has been recognized as a high priority in survey science, especially as LSST begins preparations for first light.
Despite rigorous efforts to deblend objects, the problem of deblending remains, and in some sense will always remain in astronomical studies. Deblending involves separating a mixture of signals in order to independently measure properties of each individual object. This is an imaging problem analogous to the "cocktail party problem", in which an attempt is made to isolate individual voices from a mixture of conversations. However, since it is impossible to trace a photon back to an individual source, astronomical deblending is characterized as an under-constrained inverse problem. Deblending methods must rely on assumptions about source properties and models of signal mixing (Melchior et al., 2021).
A first step in deblending is object detection. Many codes have been developed for source detection and classification, including FOCAS (Jarvis and Tyson, 1981), NEXT (Andreon et al., 2000) and SExtractor (Bertin and Arnouts, 1996). SExtractor is widely used in survey pipelines including HSC (Bosch et al., 2018) and DES (Morganson et al., 2018), but can be sensitive to configuration parameters. While SExtractor also deblends by segmenting, or identifying pixels belonging to unique sources, modern deblenders have been developed such as Morpheus (Hausen and Robertson, 2020) and Scarlet (Melchior et al., 2018), with the latter implemented in HSC and LSST pipelines. With hopes for real-time object detection and deblending algorithms in surveys such as LSST, machine learning applications to crowded fields offer a promising avenue. The use of deep neural networks, or deep learning, has seen particular success in image processing. In addition to efficiency and flexibility, neural networks may be able to overcome limitations of traditional peak-finding algorithms due to their fundamentally different detection mechanism.
There is a growing body of deep learning deblending methods in astronomy. Reiman and Gohre (2019) use a Generative Adversarial Network (GAN) to deblend small cutouts of Sloan Digital Sky Survey (SDSS Alam et al., 2015) galaxies from Galaxy Zoo (Lintott et al., 2011). Arcelin et al. (2021) use a variational autoencoder to deblend small cutouts of simulated LSST galaxies. Hemmati et al. (2022) use GANs to deblend images with HSC resolution and recover Hubble Space Telescope resolution. On larger scales, Bretonniere et al. (2021) use a probabilistic U-net model to deblend large simulated scenes of galaxies.
In addition to blending, another pressing issue with increased depth is the presence of many unresolved galaxies in the deep samples of smaller and fainter objects. This will prove difficult for star-galaxy classification schemes that rely on morphological features to distinguish between a point source star or a point source galaxy, although machine learning methods have been employed to combat this problem (Tachibana and Miller, 2018; Miller and Hall, 2021). Muyskens et al. (2022) use a Gaussian process classifier to perform star/galaxy classification on HSC images. This is an important area of study, as misclassifications can introduce biases in studies that require careful measurement of galaxy properties. For instance, it has been shown that stellar contamination can be a significant source of bias in galaxy clustering measurements (Ross et al., 2011). Precise constraints of cosmological models require a correction of this systematic bias in measurements of clustering at high photometric redshifts.
The broader field of computer vision has seen a large growth in object detection, classification, and semantic segmentation models. Object detection and classification consist of identifying the presence of an object in an image and categorizing it from a list of possible classes. Semantic segmentation involves identifying the portion of an image which belongs to a specific class, i.e. deblending. Put together, these tasks amount to _instance segmentation_. This pixel-level masking can be used to deblend objects by selecting the pixels associated with each individual object by class. The benchmark leader in deep learning instance segmentation models has been the Mask-RCNN framework (He et al., 2017).
The Mask R-CNN architecture was implemented in Burke et al. (2019) to detect, deblend, and classify large scenes of simulated stars and galaxies. Other architectures have been tested in astronomical contexts, including You Only Look Once (YOLO Bochkovskiy et al., 2020). He et al. (2021) use a combination of the instance segmentation model YOLOv4 and a separate classification network to perform source detection and classification on SDSS images, and Gonzalez et al. (2018) use a YOLO model to detect and morphologically classify SDSS galaxies. However, these models do not perform segmentation.
The rapid pace of research has led to many new variations and methods that can outperform benchmark architectures. To the benefit of computer vision researchers, Facebook AI Research (FAIR) has compiled a library of next-gen object detection and segmentation models under the framework titled Detectron2 (Wu et al., 2019). This modular, fast, and well-documented library makes a fertile testing ground for astronomical survey data. In addition to a variety of architectures, pre-trained models are also provided. By leveraging _transfer learning_, i.e., the transfer of a neural network's knowledge from one domain to another, we can cut back on training time and costs with these pre-trained models. It is also possible to interface new models with Detectron2, e.g., Li et al. (2022); Cheng et al. (2022), taking advantage of its modular nature and flexibility1.
Footnote 1: See [https://github.com/facebookresearch/detectron2/tree/main/projects](https://github.com/facebookresearch/detectron2/tree/main/projects) for a comprehensive list of projects.
In this work, we leverage the resources of the Detectron2 library by testing state-of-the-art instance segmentation models on large scenes, each containing hundreds of objects. We perform object detection, segmentation, and classification simultaneously on large multi-band HSC coadds. Many deep learning applications have been tested on simulated images, but methods applied to real data are often limited by a lack of ground truth. Here, we construct a methodology for using instance segmentation models on real astronomical data, and demonstrate the potential and challenges of this framework when applied to deep images. The HSC data is ideal for testing this framework, as it represents the state-of-the-art among wide/deep surveys, and is closest in quality to upcoming LSST data. By interfacing with Detectron2, we are able to test new models as the repository is updated. We compare models with different performance metrics,
and test how robust they are to contrast scalings that alter the dynamic range of the data, which will be important to consider for application to other datasets.
The major contributions of this work can be summarized as 1) Using instance segmentation models to deblend and classify objects in real images from HSC. This demonstrates the feasibility for future integration with wide/deep survey pipelines. We will show that the models can learn inherent features in the data that lead to classification performance gains above traditional morphological methods. 2) Comparing the performances of different models when the input data undergoes different contrast scalings. There is no standard method for scaling image data in astronomical studies that use deep neural networks, so we apply a variety of pre-processing scalings to the data for each model. Dynamic ranges can vary significantly across datasets, and raw data may not be ideal for feature extraction. We test sensitivity to contrast scalings to identify models that will be more easily adapted to different datasets. 3) Interfacing our pipeline with the detectron2 framework to test state-of-the-art models. Of particular note are our tests using transformer-based architectures, an emerging framework in computer vision studies. We will show that these architectures are more robust and accurate than traditional convolutional neural networks in both deblending and classifying objects in large scenes.
This paper is organized as follows. In §2, we present an overview of detectron2 in which we highlight the flexibility of its modular nature and describe the portion of the available deep learning models we implemented. In §3, we describe the curation of our datasets, production of ground truth labels, data preparation and our training procedure. In §4 we present the results of training our suite of models and assess performance with different metrics. In §5, we discuss the differences in model capabilities, compare the performance of our pipeline to existing results, and discuss the benefits and drawbacks of our method. In §6, we contextualize our findings and conclude.
## 2 Detectron2 Framework
We leverage the modular power of detectron2 by implementing models with varying architectures. The pre-trained models we test in Detectron2's Model Zoo have a structure that follows the GeneralizedRCNN meta-architecture provided by the codebase. This architecture is a flexible overarching structure that allows for a variety of changes, provided they support the following components: (1) a per-image feature extraction backbone, (2) region-proposal generation, (3) per-region feature extraction/prediction. The schematic of this meta-architecture is shown in Figure 1.
The feature extraction backbone takes an input image and outputs "feature maps" by running the input through a neural network, often composed of convolutional layers. In our tests, we use ResNet backbones and transformer-based backbones. ResNets are convolutional neural networks that utilize _skip connections_ that allow for deep architectures with many layers without suffering from the degrading accuracy problem known to plague deep neural networks (He et al., 2016). In this paper we explore a few different ResNet backbones: ResNet50, ResNet101 and ResNeXt. A ResNet50 network consists of 50 total layers, with two at the head or "stem" of the network and then four stages consisting of 3, 4, 6 and 3 convolutional layers, respectively. Each stage includes a skip connection. A ResNet101 network is similar to a ResNet50 setup, but with each stage consisting of 3, 4, 23 and 3 convolutional layers, respectively. Subsequent layers undergo a pooling operation that reduces the input resolution. We refer the reader to He et al. (2016) for details regarding these layers. ResNeXt layers work similar to ResNet layers, but include grouped convolutions which add an extra parallel set of transforms (Xie et al., 2017). We also test a network with deformable convolutions, in which the regularly spaced convolutional kernel is deformed by a pixel-wise offset that is learned by the network (Dai et al., 2017).
The stages of a ResNet backbone produce feature maps, representing higher level image aspects such as edges and corners. While one can simply take the feature map outputted by the last layer of the backbone, this can pose a challenge in detecting objects of different scales. This motivates the extraction of features at different backbones stages (and thus scale sizes). A hierarchical feature extractor known as a _feature pyramid network_ (FPN Lin et al., 2017) has seen great success in object detection benchmarks. The FPN allows each feature map extracted by a ResNet stage to share information with other feature maps of different scales before ultimately passing on to the Region Proposal Network (RPN).
After the image features have been extracted, the next stage of Generalized-RCNN networks involves region proposal. This stage involves placing bounding boxes at points in the feature maps and sampling from the proposed boxes to curate a selection of possible objects. After this sampling has been done, bounding boxes are once again proposed and sent to the Region of Interest (ROI) heads, where they are compared to the ground truth annotations. The annotations consist of bounding box coordinates, segmentation masks, and other information such as class labels. Ultimately, many tasks can be done on the objects inside these regions of interest, including classification, and with the advent of Mask-RCNN frameworks, semantic segmentation. We do not include the details of the RPN and ROI heads, as these structures largely remain the same in our tests. We do test architectures with a cascade structure (Cai and Vasconcelos, 2018) which involves iterating the RPN at successively higher detection thresholds to produce better guesses for object locations. For specifics, we refer the reader to Girshick (2015), He et al. (2017) and the detectron2 codebase.
We train a suite of networks to allow for several comparisons. We use a shorthand to denote network configurations as follows.
* R101c4: A ResNet101 backbone that uses features from the last residual stage
* R101fpn: A ResNet101 backbone that uses a FPN
* R101dc5: A ResNet101 backbone that uses a FPN with the stride of the last block layer reduced by a factor of two and the dilation increased by a factor of two
* R50def: A ResNet50 backbone that uses a FPN and deformable convolutions
Figure 1: Generalized RCNN meta-architecture. A multi-channel image along with ground truth object annotations is fed to the backbone feature extractor. These features are passed to the RPN and ROI heads to predict object locations and annotations.
* R50cas: A ResNet50 backbone that uses a cascaded FPN
* X101fpn: A ResNeXt101 backbone that uses a FPN
In addition to these ResNet based models, we also test transformer based architectures. A transformer is an encoder-decoder model that employs _self-attention_. Briefly, self-attention consists of applying linear operations to an encoded sequence to produce intermediate "query, key and value" tensors. A further series of linear operations and scalings are done to these intermediate tensors to produce an output sequence, and then a final linear operation is performed on the entire output sequence. Transformer models have exploded in popularity in the domain of natural language processing due to their scalability and generalizability on sequences, which translates well to language structure. Recently, transformers have been used in computer vision tasks such as image classification and object detection. These models have been shown to be competitive with the dominant convolutional neural networks, and are seeing rapid advances in performance measures (Dosovitskiy et al., 2020; Caron et al., 2021; Oquab et al., 2023; Liu et al., 2021; Li et al., 2022). For example, MViT2 utilizes multi-head pooling attention (MHPA Fan et al., 2021) to apply self-attention at different image scales, allowing for the detection of features of varying sizes. To obtain the input encoded sequences, an image is first divided into patches which are flattened and sent through a linear layer. MHPA is applied to the sequences to produce the image features. In an object detection context, these features are input to an FPN in the same way as features obtained from a ResNet in RCNN models. Another modern transformer model, the Swin Transformer (Liu et al., 2021), also applies multi-head attention to image patches, but rather than a pooling operation it uses patch merging to combine features of different image patches. Swin models also use shifted window attention to allow for efficient computation and information propagation across the image. We test both MViT2 and Swin backbones in our implementation.
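For illustration only, a single-head version of this operation can be written in a few lines; the projection matrices and toy dimensions below are placeholders, and the actual MViTv2 and Swin backbones add multiple heads, pooling or windowed attention, and output projections.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence of patch embeddings.
    x: (n_tokens, d_model); Wq, Wk, Wv: (d_model, d_head) projection matrices."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv                       # query, key, value tensors
    scores = q @ k.T / np.sqrt(k.shape[-1])                # scaled pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over keys
    return weights @ v                                     # each token mixes information from all others

# 16 image-patch embeddings of dimension 32, projected to a 16-dimensional head
rng = np.random.default_rng(0)
x = rng.normal(size=(16, 32))
Wq, Wk, Wv = (0.1 * rng.normal(size=(32, 16)) for _ in range(3))
attended = self_attention(x, Wq, Wk, Wv)                   # shape (16, 16)
```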
## 3 Implementation
### HSC coadds
In this work, the data we use consist of multi-band image coadds of roughly 4000 pixels\({}^{2}\) from the Deep and Ultra-Deep fields of the Hyper Suprime Cam (HSC) Subaru Strategic Program (SSP; Aihara et al., 2018) Data Release 3 (Aihara et al., 2022). The HSC SSP is a three-tiered image survey using the wide-field imaging camera HSC. The HSC instrument (Miyazaki et al., 2017) consists of a 1.77 deg\({}^{2}\) camera with a pixel scale of 0.168", attached to the prime focus of the Subaru 8.2 m telescope in Mauna Kea. The Deep+UltraDeep component of the HSC survey covers \(\sim\)36 deg\({}^{2}\) of the sky in five broad optical bands (\(grizy\); Kawanomoto et al., 2018) up to a full 5\(\sigma\) depth of \(\sim\)27 mag (depending on the filter). Despite limitations (e.g., sky subtraction and crowded field issues), the HSC DR3 data provides the closest match among all currently available deep-wide surveys to the expected data quality of LSST wide fields. The Deep/Ultra-Deep field properties are listed in Table 1. We use the \(g\), \(r\) and \(i\) bands.
Given the large depth of the survey, a significant portion of objects are blended in comparison to other ground-based surveys such as the Dark Energy Survey (Dark Energy Survey Collaboration et al., 2016). For reference, 58% of objects in the shallowest field (Wide) of the HSC survey are blended. While a significant challenge, this makes the HSC fields an excellent set of data for testing deblending algorithms, particularly those suited for crowded fields. The pipeline to produce the image coadds is described in detail in Bosch et al. (2018). There are two sets of sky-subtracted coadds. The first set consists of global sky-subtracted coadds. The second set also uses the global sky-subtracted images, but an additional local sky subtraction algorithm is applied. This is to remove the wings of bright objects, artifacts that can cause problems in object detection algorithms. However, this process creates a trade-off with removing flux from extended objects, and Aihara et al. (2018) empirically find a local sky subtraction scale of 21.5 arcseconds to be a good balance. Ultimately, we use these local sky-subtracted images, as bright wings and artifacts can introduce problems of over-deblending or "shredding" and we want our "ground truth" detections to be as clean and accurate as possible. To further ensure a clean training set, we apply a few quality cuts to the sample. Some images suffer from missing data in one or more bands, especially at the edge of the imaging fields. We use the bitmasks provided in the coadd FITS files to exclude images with \(>\)30% of the pixels assigned a NO_DATA flag. Given that the neural network takes multi-band images, if one of the \(g\), \(r\) or \(i\) band images is flagged in this way, we exclude the other bands as well. There remain some imaging artifacts and issues, such as saturated regions around bright stars, and we discuss how these affect network performance in Section 4.2.
### Ground Truth Generation
We must provide ground-truth object locations and masks to the network to perform pixel-level segmentation. We utilize the multi-band deblending code scarlet (Melchior et al., 2018) to produce a model for each individual source from which we create an object mask. scarlet utilizes constrained matrix factorization to produce a spectral decomposition of an object. It is a non-parametric model that has been demonstrated to work well on individual galaxies and blended scenes. Before we run scarlet, we extract an object catalog using sep, the python wrapper for SExtractor. Then, each identified source is modelled and the "blend" or composition of sources is fit to the coadd image data. Once the final blend model is computed, the mask is determined by running sep on each individual model source and setting a mask threshold of 5\(\sigma\) above the background. Both the scarlet modelling and mask thresholding are done on the detection image, i.e., the sum over all bands. The run time of this process increases with the number of objects in an image. In order to reduce run-time, we divide the 4k stitched coadd images into 16 images of \(\sim\)1000\(\times\)1000 pixels\({}^{2}\). While scarlet on its own is a powerful deblender, the fits can take up to \(\sim\)30 minutes depending on the number of objects in the image, which motivates the use of efficient neural networks. After this process is complete, we compile a training set of 1000 1k\(\times\)1k pixels\({}^{2}\) images. The distribution of the number of sources per image is shown in Figure 2.
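A simplified sketch of the detection and masking step is shown below (illustrative only): it background-subtracts a detection image with sep and extracts a 5\(\sigma\) catalog together with a segmentation map. The scarlet modelling and the per-source mask thresholding are omitted, and the synthetic image is a placeholder.

```python
import numpy as np
import sep

def detect_and_segment(detection_image, thresh_sigma=5.0):
    """Background-subtract a (summed-band) detection image and return a source
    catalog plus a segmentation map at thresh_sigma above the background."""
    data = np.ascontiguousarray(detection_image, dtype=np.float32)
    bkg = sep.Background(data)                  # spatially varying background model
    data_sub = data - bkg.back()
    objects, segmap = sep.extract(
        data_sub, thresh=thresh_sigma, err=bkg.globalrms, segmentation_map=True
    )
    return objects, segmap                      # segmap > 0 labels pixels by source number

# Toy detection image with one bright synthetic source
rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (128, 128)).astype(np.float32)
img[60:68, 60:68] += 50.0
objects, segmap = detect_and_segment(img)
print(len(objects), "sources detected")
```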
The trade-off in using real over simulated data is that in supervised tasks, there is a lack of predetermined labels. For the classification task, we produce object labels with a catalog match to the HSC DR3 catalogs. We convert each detected source center to RA and DEC coordinates and then run the match_to_catalog_sky algorithm in astropy to find objects in the HSC catalog within 1 arcsecond. Then, we compare the \(i\)-band magnitude of the deblended source to the "cmodel" magnitude of the catalog objects and pick the object with the smallest magnitude difference. If no objects are within 1 arcsec
\begin{table}
\begin{tabular}{l c c c} \hline \hline filter & median exposure (min) & seeing (\({}^{\prime\prime}\)) & depth (mag) \\ \hline g & 70 & 0.83 & 27.4 \\ r & 66 & 0.77 & 27.1 \\ i & 98 & 0.66 & 26.9 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Properties of the HSC Deep/UltraDeep images
or no objects have a magnitude difference smaller than 1, we discard the object from our labelled set. Once an object is matched, we use the HSC catalog "extendedness value" to determine classes, which is based on differences in PSF magnitudes and extended model magnitudes. While yielding high accuracy at bright magnitudes, this metric becomes unreliable for star classification around a limiting magnitude of 24 mag in the i band (Bosch et al., 2018). We additionally discard objects with NaN values in the DR3 catalog, as the class is indeterminate. We show an example image and the results of our labelling methodology in Figure 3, with color-coded classes.
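The following sketch (illustrative, with placeholder inputs) shows how such a cross-match might be done with astropy; for brevity it matches each detection to its nearest catalog neighbour rather than selecting, among all neighbours within 1 arcsec, the one with the smallest magnitude difference as described above.

```python
import numpy as np
from astropy.coordinates import SkyCoord
import astropy.units as u

def label_sources(det_ra, det_dec, det_imag, cat_ra, cat_dec, cat_imag, cat_extended):
    """Match detections to an HSC-like catalog (nearest neighbour), keep pairs within
    1 arcsec and with an i-band magnitude difference below 1 mag, and assign classes
    from the catalog extendedness value (1 -> galaxy, 0 -> star)."""
    det = SkyCoord(ra=det_ra * u.deg, dec=det_dec * u.deg)
    cat = SkyCoord(ra=cat_ra * u.deg, dec=cat_dec * u.deg)
    idx, sep2d, _ = det.match_to_catalog_sky(cat)
    keep = (sep2d < 1.0 * u.arcsec) & (np.abs(det_imag - cat_imag[idx]) < 1.0) \
           & np.isfinite(cat_extended[idx])
    labels = np.where(cat_extended[idx] > 0.5, "galaxy", "star")
    return idx, keep, labels

# Toy usage with two detections and a three-row catalog (all values are placeholders)
idx, keep, labels = label_sources(
    det_ra=np.array([150.001, 150.010]), det_dec=np.array([2.200, 2.205]),
    det_imag=np.array([22.3, 24.1]),
    cat_ra=np.array([150.0011, 150.0100, 150.0200]),
    cat_dec=np.array([2.2001, 2.2051, 2.2100]),
    cat_imag=np.array([22.4, 24.0, 20.0]),
    cat_extended=np.array([1.0, 0.0, 1.0]),
)
print(labels[keep])
```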
### Data Preparation
We employ three common methods for scaling the raw data from the coadd FITS files to RGB values. These are: a z-scale, a Lupton scale, and a high-contrast Lupton scale. The z-scale transformations are commonly employed in computer vision tasks and are given by
\[\begin{split} R&=A(i-\tilde{I})/\sigma_{I}\\ G&=A(r-\tilde{I})/\sigma_{I}\\ B&=A(g-\tilde{I})/\sigma_{I}\\ \end{split} \tag{1}\]
where \(I=(i+r+g)/3\) with a mean \(\tilde{I}\) and standard deviation \(\sigma_{I}\), and \(R\) is the pixel value in the red channel (and similarly for the green \(G\) and blue \(B\) channels using the \(r\) and \(g\) bands respectively). We set \(A=10^{3}\) for the training and cast the images to 16-bit integers. In addition to z-scaling, we also apply a Lupton scaling from Lupton et al. (2004). This is an asinh scaling with
\[\begin{split} R&=i\,\text{asinh}(Q(I-\text{minimum})/\text{stretch})/Q\\ G&=r\,\text{asinh}(Q(I-\text{minimum})/\text{stretch})/Q\\ B&=g\,\text{asinh}(Q(I-\text{minimum})/\text{stretch})/Q.\\ \end{split} \tag{2}\]
We use a stretch of 0.5 and \(Q=10\), set the minimum to zero, and cast the images to unsigned 8-bit integers. Lupton scaling brings out the fainter extended parts of galaxies while avoiding saturation in the bright central regions. These scalings preserve the color information of objects to aid in classification. Lastly, we also use a high-contrast Lupton scaling, in which image brightness and contrast are doubled after applying the Lupton scaling. We test all of these scalings for each network architecture. In Figure 4, we show an example image and a histogram of pixel values in \(i\), \(r\) and \(g\) bands (corresponding to RGB colors).
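A sketch of these scalings is given below (illustrative only). The z-scale follows a direct reading of Eq. (1), while the Lupton mapping of Eq. (2) is delegated to astropy's make_lupton_rgb, which implements the Lupton et al. (2004) asinh scheme with the same stretch and \(Q\) parameters (plus its own saturation handling); the synthetic cutouts are placeholders.

```python
import numpy as np
from astropy.visualization import make_lupton_rgb

def zscale_rgb(i_band, r_band, g_band, A=1000.0):
    # Eq. (1): channels offset by the mean and scaled by the std of the band-averaged image
    I = (i_band + r_band + g_band) / 3.0
    scale = lambda band: A * (band - I.mean()) / I.std()
    return np.stack([scale(i_band), scale(r_band), scale(g_band)], axis=-1).astype(np.int16)

def lupton_rgb(i_band, r_band, g_band, stretch=0.5, Q=10):
    # Eq. (2) via astropy's implementation of the Lupton et al. (2004) asinh mapping (uint8 output)
    return make_lupton_rgb(i_band, r_band, g_band, minimum=0, stretch=stretch, Q=Q)

# Synthetic cutouts standing in for the g, r, i coadd images
rng = np.random.default_rng(0)
g, r, i = (np.abs(rng.normal(0.5, 0.2, (64, 64))) for _ in range(3))
rgb_z, rgb_lupton = zscale_rgb(i, r, g), lupton_rgb(i, r, g)
```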
We apply data augmentation to the training and test sets. Data augmentation has become a staple of many deep learning methods. It allows the network to "see" more information without needing to store extra images in memory. We employ spatial augmentations of random flips and 90\({}^{\circ}\) rotations. We do not employ blurring or noise addition, as the real data we train on is already convolved with a PSF and contains noise. For future generalizations of this framework to different datasets, blur/noise augmentations may be useful, but
Figure 3: The ground truth masks and bounding boxes on an example image in the test set of our HSC Deep/UltraDeep field data. As this set is class-agnostic, we use white markings for every object. The image without overlaid masks/boxes is shown below for clarity. A Lupton contrast scaling is used in this visualization. Galaxies are colored green, and stars are colored red.
Figure 2: Histogram of the number of objects detected at \(>\)5\(\sigma\) above the background for HSC images in the training set. The images are taken from both the Deep and UltraDeep fields.
for inference purposes on test data taken under the same conditions as the training data, spatial augmentations are sufficient. We also employ a random 50% crop on each image during training so that the data can fit into GPU memory. We considered applying all contrast scalings as a data augmentation, but did not find a significant improvement in network performance. However, this could be used in future work to reduce the training costs, as results were on par with networks trained with only one contrast scaling.
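For concreteness, the spatial augmentations described above could be expressed with Detectron2's transform API roughly as follows; this is an illustrative sketch rather than the authors' configuration, and the flip probabilities and crop type are assumptions. The list would typically be passed to the training dataset mapper/loader.

```python
from detectron2.data import transforms as T

# Random flips, 90-degree rotations, and a 50% relative crop so the
# 1k x 1k training images fit in GPU memory.
train_augmentations = [
    T.RandomFlip(prob=0.5, horizontal=True, vertical=False),
    T.RandomFlip(prob=0.5, horizontal=False, vertical=True),
    T.RandomRotation(angle=[0, 90, 180, 270], sample_style="choice"),
    T.RandomCrop(crop_type="relative", crop_size=(0.5, 0.5)),
]
```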
### Training
Training is done using stochastic gradient descent to update the network weights by minimizing a loss function. The loss functions of these Mask-RCNN models is
\[L=L_{\rm cls}+L_{\rm box}+L_{\rm mask} \tag{3}\]
where the classification loss \(L_{\rm cls}\) is \(-\log p_{u}\), the negative log of the estimated probability of an object belonging to its true class \(u\). Discrete probability distributions are calculated per class (plus a background class) for each ROI. \(L_{\rm box}\) is a smoothed L1 loss calculated over the predicted and true bounding box coordinates as given in Girshick (2015). Finally, the mask loss \(L_{\rm mask}\) is the per-pixel average binary cross-entropy loss between the ground truth and predicted masks.
All networks are pre-trained on either the MS-COCO (Lin et al., 2014) or ImageNet-1k (Deng et al., 2009) datasets of terrestrial images, and so we use transfer learning to apply these models to our astronomical datasets. Transfer learning is a technique in deep learning where networks can generalize knowledge of one task to complete a different but related task (see Tan et al., 2018 for an overview of deep transfer learning). It is often used when applying a pre-trained deep learning model to a different domain than the one seen during training. By using pre-trained weights as initial conditions, training is likely to converge faster and be less prone to over-fitting. We use weights provided by Detectron2 as the starting point for our training procedure. We then train the networks for 50 total epochs, i.e. the entire training set is seen 50 times by the network. In order to facilitate the transfer of knowledge, we first freeze the feature extraction backbones of the models and only train the head layers in the ROI and RPN networks for 15 epochs. We use a learning rate of 0.001 for this step. Then, we unfreeze the feature extraction backbone and train the entire network for 35 epochs. We begin this step with a learning rate of 0.0001 and decrease by a factor of 10 every 10 epochs.
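A minimal sketch of how such a two-stage schedule might be expressed with Detectron2's configuration system is shown below; the config file, class count and solver values are illustrative placeholders rather than the exact settings used in this work.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")   # COCO pre-trained weights
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2        # star / galaxy

# Stage 1: freeze the backbone and train only the RPN/ROI heads at a higher learning rate
cfg.MODEL.BACKBONE.FREEZE_AT = 5
cfg.SOLVER.BASE_LR = 1e-3

# Stage 2 (after the head-only epochs): unfreeze everything, lower the learning rate,
# and decay it by a factor of 10 on a fixed schedule
# cfg.MODEL.BACKBONE.FREEZE_AT = 0
# cfg.SOLVER.BASE_LR = 1e-4
# cfg.SOLVER.GAMMA = 0.1
```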
We use two NVIDIA Tesla V100 GPUs in the HAL system (Kindratenko et al., 2020) to train on 1,000 images of size 500 pixels\({}^{2}\) paired with object annotations. When trained in parallel on each GPU, our models take roughly 3 hours to complete. Transformer architectures tend to use more memory, and thus are trained on 4 GPUs for roughly 4 hours.
Figure 4: Top row: RGB images in the HSC DR3 dataset with different contrast scalings. The scalings are, from left to right: Lupton, Lupton high contrast, and z-scale. Bottom: Histograms of pixel values to the corresponding image in the top row. Red, green, and blue represent values in the \(i\), \(r\), and \(g\) filters, respectively.
## 4 HSC Results
After training, we evaluate network performance on the test set of HSC images. The test set is taken from the patches in the UltraDeep COSMOS (Scoville et al., 2007) field and consists of 95 images of 1000 pixels\({}^{2}\). No test set images were seen during training. A benefit of the instance segmentation models used in this work is their ability to infer on images of variable size. Thus, despite the need to crop images during training, we are still able to utilize the full size of the images in the test set.
We evaluate classification performance with precision and recall, given by
\[p=\frac{\text{TP}}{\text{TP}+\text{FP}}, \tag{4}\]
\[r=\frac{\text{TP}}{\text{TP}+\text{FN}}. \tag{5}\]
True positives (TP) are detections with a network confidence score above a certain threshold that can additionally be matched to a ground truth object with an Intersection over Union (IOU) above another threshold. False negatives (FN) are ground truth objects that do not have a corresponding detection. False positives (FP) are detections with a high confidence score that do not have a matching ground truth. The IOU is defined as
\[\text{IOU}=\frac{area(\text{box}_{\text{predicted}}\cap\text{box}_{\text{ truth}})}{area(\text{box}_{\text{predicted}}\cup\text{box}_{\text{truth}})}. \tag{6}\]
or the area of the intersection over the area of the union of the predicted and ground truth bounding boxes. Precision and recall are often broken down by class, or combined into one value, the AP score,
\[\text{AP}=\frac{1}{51}\sum_{r\in\{0,0.02,...,1.0\}}p(r) \tag{7}\]
where \(p(r)\) is the maximum precision in a recall bin of width \(\Delta r\). AP scores are computed for IOU thresholds of {0.5, 0.55,..., 0.95} and averaged.
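For concreteness, the sketch below shows one way these quantities could be computed with numpy. The `[x0, y0, x1, y1]` box format and the use of the interpolated (maximum) precision at each of the 51 recall levels are simplifying assumptions rather than the exact evaluation code.

```python
import numpy as np

def box_iou(a, b):
    """IOU of two boxes given as [x0, y0, x1, y1] (eq. 6)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def average_precision(precisions, recalls):
    """AP as the mean of the (interpolated) precision over 51 recall levels (eq. 7)."""
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 51):
        above = precisions[recalls >= r]
        ap += above.max() if above.size else 0.0
    return ap / 51.0
```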
AP scores on the HSC COSMOS test set are reported for all network configurations in Table 2. We report the per-class AP score for stars and galaxies separately, as well as the Small, Medium, and Large AP scores, defined by the object bounding box size of 0-32 pixels\({}^{2}\), 32-96 pixels\({}^{2}\) and \(>\)96 pixels\({}^{2}\), respectively. For galaxies and stars, AP score can vary significantly across network configurations. For ResNet-based architectures, AP for galaxies is consistently higher than stars, which may be due to the higher sample size of galaxies and morphological features that make galaxies easier to distinguish than compact stars. Among ResNet-based networks, a Lupton high-contrast scaling generally gives the highest galaxy AP score, while a z-scaling always gives the highest star AP score. It appears that these networks are very sensitive to the contrast scaling used, which is not desirable for application to other datasets with different dynamic ranges. However, transformer-based architectures perform more robustly with varying contrast scalings, and outperform ResNet architectures in almost all cases. For these networks, galaxy AP scores all lie within \(\sim\)50-52, showing a gain of about 5 over the highest performing ResNet configuration. Stellar AP scores for Lupton and z-scalings lie within \(\sim\)33-35, with high-contrast Lupton scalings performing worse by an AP of \(\sim\)8. Among the Small, Medium, and Large AP metrics, transformer-based networks also outperform ResNet-based networks, in some cases seeing massive gains in AP score. The networks generally perform better on Small and Large object categories over Medium objects, again likely due to sample size.
Many studies of instance segmentation models use the MS-COCO or ImageNet-1k datasets as a benchmark to judge performance through the AP score. These data consist of terrestrial images with many object classes, so they cannot necessarily be used as a comparison for our AP scores calculated on astronomical survey images with only 2 classes. However, to give the reader a sense of the range of typical values, the AP scores for models trained on terrestrial data typically range from \(\sim\)35-45 for convolutional backbones and reach \(\sim\)55 for transformer backbones (see the DETECTRON2 repo for results). For a fairer comparison, we look to Burke et al. (2019) in which instance segmentation models were tested on the simulated observations from the Dark Energy Camera (DECam Flaugher et al., 2015). The authors report an AP score for galaxies of 49.6 and a score of 48.6 for stars, averaged to a combined score of 49.0. We also train our suite of models on the DECam dataset and report the results in Appendix A. More recently, He et al. (2021) use a combination of the instance segmentation model YOLOv4 (Bochkovskiy et al., 2020) and a separate classification network to perform source detection and classification on SDSS images. They report an AP score of 52.81 for their single-class detection network.
### Incorrect Label Bias Mitigation
There is an inherent bias in our measure of AP scores due to incorrect object class labels. In measurements described above, we test the network abilities to infer classes based on labels generated from HSC catalogs. However, these labels are known to become unreliable, especially for stars, around \(i\)-band magnitudes of \(\sim\)24 mag (Bosch et al., 2018). We use HSC coadds in the COSMOS field for our test dataset, and attempt to mitigate this mislabelling bias by exploiting the overlap of this field with space-based observations using the Advanced Camera for Surveys (ACS) on the Hubble Space Telescope (HST). Because of the lack of atmospheric seeing, morphological classification of stars/galaxies using the HST COSMOS catalog data is much more precise for faint objects, and can be used as ground truth instead of HSC labels. This will test how much poor classification behaviour is due to label generation as opposed to limitations of the models. We generate HST labels by cross-matching detected sources to the catalog of Leuthaud et al. (2007) within 1 arcsecond. If there is no object within 1 arcsecond, we discard the object. There is not necessarily a one-to-one match of HSC versus HST labels, as we are cross-matching to different catalogs, but the number of objects per image remains roughly the same for either labelling scheme. We will refer to this as the HST COSMOS test set.
This small set is not sufficient to train a network, so instead of training on HST-labelled data, we take the models trained on HSC-labelled data and test their evaluation performance on the HST COSMOS test set. To highlight the differences in class label generation, in Figure 5 we show the number of stars and galaxies as a function of HSC \(i\)-band magnitude for the COSMOS set for both HSC and HST class labels. The unreliable quality of HSC labels at faint magnitudes is reflected in the increased counts of stars, especially the bump in stellar counts beginning at \(i\sim\)25 mag. Also of note is the smaller number of star counts in the HSC COSMOS set at bright magnitudes. This is likely due to our HSC label generating procedure of discarding objects with NaN values in the HSC catalog. Bright stars are likely to have saturated pixels in their centers, causing these error flags to appear. With HST labels, we can test against a more astrophysically accurate baseline.
Using this new test set, we present AP scores in Table 3. The results for galaxy/star AP scores are in line with the previous results on the HSC COSMOS test set. In all cases, transformer architectures outperform ResNet architectures and are more robust to different contrast scalings. AP scores for Small bounding box objects improve for all network configurations, Medium bounding box AP scores remain roughly the same, and Large bounding box AP scores worsen. The decrease in Large bounding-box AP scores is likely due to the initial label generation step with sep that over-deblends or "shreds" large extended galaxies and saturated regions around stars. With our HSC label generation, we exclude many of the shredded regions by enforcing the i-band \(\Delta 1\) mag criterion and discarding labels matched to saturated catalog objects with NaN values. However, our HST label generation is solely based on a distance matching criterion, and so some of these shredded regions are included in the ground truth labels in the HST COSMOS test set. These spurious extra labels can lead to lower AP scores if the networks avoid shredding these regions at inference. In the next section, we examine metrics other than AP score that are less susceptible to this effect.
### Missing and Extra Label Bias Mitigation
Since we have done the labelling ourselves using sep, scarlet and catalog matching to produce ground truth detections, masks and classes, traditional metrics of network performance may not be the best choice in characterizing efficacy. Consider the precision/recall and AP metric. An implicit assumption in these metrics is the completeness and purity of the ground truth labels. This assumption holds for large annotated sets of terrestrial images such as the MS-COCO set (Lin et al., 2014) commonly used as a benchmark in object detection/segmentation studies. It also holds for simulated datasets of astronomical images (Burke et al., 2019) as the ground truth object locations, masks, and classes are all known _a priori_ when constructing the training and test set labels. However, real data of large astronomical scenes presents a challenge. Given that we must generate labels without a known underlying truth, any comparisons to this "ground truth" are really comparisons to the methods used to generate these labels. Issues in the label generating procedures will propagate to the performance metrics.
First, the ground truth detections are produced from running sep using a detection threshold of 5\(\sigma\) above the background. This makes the labels incomplete, as some objects are missed. We could lower this threshold, but then run the risk of further over-deblending extended/saturated objects. This leads to the second issue: some level of shredding will remain and cause spurious extra objects to appear in the ground truth set, i.e., a lack of pure labels. If the networks do not shred extended/saturated objects as much as sep (which is a desirable feature of the networks), then the AP metric will be _lower_ because the networks produce fewer spurious detections than the ground truth contains. Finally, the object detection mechanisms of the neural networks used in this work are fundamentally different from the peak-finding detection used in sep.
These issues lead to cases in which the neural networks detect objects that are not labelled in our ground truth catalog, despite being actual objects, or cases in which the networks do not detect unphysical objects that are in the ground truth. Any metric that considers true/false detections is subject to this effect. We do not wish to count these cases of fake true/false positives, as this would lead to a reduction in performance metrics that does not reflect network classification/detection accuracy, but rather the limitations of our label generation. Therefore, we construct a set of metrics similar to the canonical precision and recall, but slightly alter our definitions of positive and negative detections. We use equations 4 and 5, but we limit our metrics to the set of objects D that are matched to a ground truth detection. The set of matched detections D is determined by selecting the inferred bounding box with the highest IOU to a ground truth bounding box, above a threshold of 0.5. Then for a given class C, true positives are the objects in D that are correctly classified, false positives are objects that are incorrectly assigned class C, and false negatives are matched objects with a ground truth class C that the network assigns to a different class. With these metrics, precision and recall measure purely the classification power of the network, without bias from missing labels or extra false labels. If we assume that the network's ability to classify remains consistent for objects outside of the matched set, we can generalize these metrics to overall classification performance.
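A minimal sketch of this matched-only bookkeeping is given below (it also returns the F1 score defined in the next paragraph); the list-of-pairs record format is an assumption made only for illustration.

```python
def matched_classification_metrics(matches, cls):
    """Precision, recall, and F1 restricted to detections matched to ground truth.

    `matches` is an illustrative list of (true_class, predicted_class) pairs,
    one per detection whose bounding box has IOU > 0.5 with a ground-truth box.
    """
    tp = sum(1 for t, p in matches if t == cls and p == cls)
    fp = sum(1 for t, p in matches if t != cls and p == cls)
    fn = sum(1 for t, p in matches if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example with the two classes used in this work:
print(matched_classification_metrics(
    [("galaxy", "galaxy"), ("star", "galaxy"), ("star", "star")], "star"))
```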
We combine precision and recall into one metric to judge classification power, the F1 score, which is given by the harmonic mean
\begin{table}
\begin{tabular}{c c c c c c c c|c c} \hline \hline & & \multicolumn{6}{c}{ResNets} & \multicolumn{3}{c}{Transformers} \\ & & R101C4 & R101dc5 & R101fpn & R50cas & R50def & X101fpn & MViTv2 & Swin \\ \hline Galaxies & Lupton & 23.7 & 24.6 & 40.9 & 46.3 & 41.7 & 41.4 & **51.7** & 50.8 \\ & LuptonHC & 26.1 & 28.0 & 43.6 & 46.0 & 43.2 & 43.1 & **50.9** & 50.3 \\ & zscale & 22.9 & 30.7 & 40.2 & 39.6 & 21.8 & 34.1 & **52.7** & 52.5 \\ \hline Stars & Lupton & 10.3 & 9.6 & 7.3 & 7.4 & 4.3 & 2.5 & **34.1** & 33.9 \\ & LuptonHC & 2.4 & 5.1 & 6.1 & 8.1 & 5.5 & 8.3 & **28.0** & 25.0 \\ & zscale & 15.6 & 10.5 & 17.9 & 25.5 & 12.7 & 17.2 & **35.8** & 33.9 \\ \hline Small & Lupton & 17.6 & 18.0 & 26.1 & 28.0 & 24.6 & 23.7 & **43.7** & 43.1 \\ & LuptonHC & 14.8 & 17.2 & 25.9 & 27.7 & 25.4 & 26.9 & **40.1** & 38.4 \\ & zscale & 19.7 & 21.5 & 30.2 & 33.2 & 18.1 & 26.8 & **44.8** & 43.8 \\ \hline Medium & Lupton & 8.7 & 11.9 & 14.4 & 11.5 & 13.7 & 11.7 & **17.4** & 16.1 \\ & LuptonHC & 7.8 & 11.1 & 13.4 & 12.7 & 10.3 & 12.6 & **16.3** & 15.5 \\ & zscale & 3.8 & 9.0 & 7.2 & 7.3 & 1.6 & 3.6 & **15.1** & 14.9 \\ \hline Large & Lupton & 16.4 & 30.9 & 18.9 & 14.3 & 19.6 & 9.3 & **43.1** & 41.5 \\ & LuptonHC & 15.3 & 22.8 & 14.9 & 15.0 & 11.6 & 13.0 & 38.6 & **39.7** \\ & zscale & 0.7 & 3.6 & 3.8 & 5.2 & 0.1 & 0.9 & **37.8** & 37.0 \\ \hline \end{tabular}
\end{table}
Table 2: AP scores on the COSMOS HSC set for all network configurations (larger is better). Galaxy and Star AP scores are calculated separately, whereas Small (0-32 pixels\({}^{2}\)), Medium (32-96 pixels\({}^{2}\)) and Large (\(>\)96 pixels\({}^{2}\)) object AP scores are averaged across both classes. The best result for each row is emphasized in bold. The MViTv2 backbone gives the best results in all cases except for one.
between precision and recall,
\[\text{F1}=2\times\frac{p\cdot r}{p+r}. \tag{8}\]
The F1 score balances the trade-off between precision and recall, with a value close to unity being desirable. We report the F1 scores for the networks on the HST COSMOS test set in Table 4. The best performing configuration among ResNet architectures is the R50cas network with a z-scale scaling. A Swin network with a Lupton scaling achieves the highest overall galaxy and star F1 scores, although the MViTv2 architecture remains competitive. Nearly all transformer network configurations perform better on star/galaxy classification than ResNet-based networks. The classification power of transformer-based networks is again more robust to contrast scalings than that of ResNet-based networks.
To examine network performance on faint objects, we show precision and recall as a function of \(i\) band magnitude for the HST COSMOS test set in Figure 6. Galaxy recall maintains a value close to one for all objects regardless of magnitude, with some fluctuations of a few percent for some models. Galaxy precision dips for some models at bright magnitudes, which may be due to compact galaxies with bright cores resembling stars. However, these dips are more likely due to inherent limitations of the models rather than label generation, as transformer architectures produce high galaxy precision and recall across magnitude bins compared to ResNet architectures. Most ResNet architectures struggle with stellar recall, with many showing poor performance even at bright magnitudes. Stellar precision reaches near unity at bright magnitudes for all architectures, but many network configurations begin to drop in performance around \(i\) band magnitudes of 21 mag. The best performing networks maintain a stellar precision above 0.8 out to \(\sim\)25 mag in the \(i\) band. The transformer models we trained are able to achieve a 99.6 percent galaxy recall, 99.2 percent galaxy precision, 85.4 percent stellar recall and 91.5 percent stellar precision on our HST COSMOS test set, averaged over the whole magnitude range. For comparison, He et al. (2021) perform deep neural network object detection and classification of stars, galaxies, and quasars in large SDSS images. With their sample of objects that covers an \(r\) band magnitude range of 14-25 mag, they report a galaxy recall of 95.1 percent, galaxy precision of 95.8 percent, stellar recall of 84.6 percent and stellar precision of 94.5 percent.
### Deblending
In order to quantify the deblending performance of the networks, we compute IOU scores for matched objects. The process is similar to the matching done in computing classification precision/recall. We first set a detection confidence threshold of 0.5 and then compute the bounding box IOUs for all detected and ground truth objects. For each ground truth object, we take the corresponding detected object with the highest IOU above a threshold of 0.5. We employ this threshold to avoid the biases discussed in Section 4.2. An IOU of one indicates a perfect match between the ground truth box and the inferred box. In addition to bounding box IOU, we also compute the segmentation mask IOU, which follows from Equation 6, but uses the area of the true and predicted segmentation masks. We report the median IOU for all matched objects in Table 5, and show the distributions in Figure 7. Transformer-based networks generally produce a higher bounding box IOU than ResNet-based networks, although
\begin{table}
\begin{tabular}{c c c c c c c c|c c} \hline \hline & & \multicolumn{8}{c}{ResNets} & \multicolumn{4}{c}{Transformers} \\ & & R101C4 & R101dc5 & R101fpn & R50cas & R50def & X101fpn & MViTv2 & Swin \\ \hline Galaxies & Lupton & 0.96 & 0.98 & 0.98 & 0.98 & 0.98 & 0.98 & **0.99** & **0.99** \\ & LuptonHC & 0.97 & 0.98 & 0.98 & 0.98 & 0.98 & 0.98 & **0.99** & **0.99** \\ & zscale & 0.98 & 0.98 & 0.98 & 0.99 & 0.97 & 0.98 & **0.99** & **0.99** \\ \hline Stars & Lupton & 0.46 & 0.47 & 0.33 & 0.33 & 0.21 & 0.15 & **0.88** & **0.88** \\ & LuptonHC & 0.23 & 0.33 & 0.32 & 0.40 & 0.29 & 0.37 & **0.80** & 0.75 \\ & zscale & 0.69 & 0.57 & 0.61 & 0.76 & 0.60 & 0.64 & **0.87** & **0.87** \\ \hline \hline \end{tabular}
\end{table}
Table 4: F1 scores for star and galaxy classes in the HST COSMOS test set, computed for all network configurations. Transformer networks outperform convolutional networks in all cases, especially for stars.
\begin{table}
\begin{tabular}{c c c c c c c c|c c} \hline \hline & & \multicolumn{4}{c}{ResNets} & \multicolumn{4}{c}{Transformers} \\ & & R101C4 & R101dc5 & R101fpn & R50cas & R50def & X101fpn & MViTv2 & Swin \\ \hline Galaxies & Lupton & 25.9 & 26.8 & 42.9 & 49.4 & 43.5 & 42.8 & 51.8 & **52.4** \\ & LuptonHC & 27.4 & 30.0 & 46.2 & 50.2 & 46.7 & 44.3 & 51.5 & **51.6** \\ & zscale & 25.5 & 32.5 & 42.7 & 41.5 & 23.0 & 35.6 & 52.2 & **52.9** \\ \hline Stars & Lupton & 16.2 & 15.0 & 10.9 & 10.9 & 7.1 & 3.8 & 52.9 & **53.7** \\ & LuptonHC & 4.2 & 7.9 & 11.2 & 14.2 & 9.4 & 13.9 & **42.1** & 37.7 \\ & zscale & 28.3 & 19.1 & 29.3 & 41.6 & 23.8 & 29.0 & **53.9** & 52.6 \\ \hline Small & Lupton & 22.0 & 22.1 & 29.3 & 31.4 & 27.0 & 25.2 & 54.0 & **54.7** \\ & LuptonHC & 16.4 & 19.9 & 30.0 & 33.3 & 29.4 & 30.7 & **48.2** & 46.0 \\ & zscale & 28.0 & 27.1 & 37.8 & 42.9 & 24.8 & 34.1 & **54.7** & 54.3 \\ \hline Medium & Lupton & 8.3 & 11.7 & 13.8 & 11.0 & 13.1 & 11.1 & **16.3** & 15.2 \\ & LuptonHC & 7.5 & 10.8 & 12.7 & 12.2 & 9.9 & 12.0 & **15.4** & 14.6 \\ & zscale & 3.7 & 8.5 & 7.3 & 7.4 & 1.7 & 3.6 & **14.1** & **14.1** \\ \hline Large & Lupton & 6.2 & 11.1 & 7.2 & 5.9 & 7.2 & 3.6 & **15.1** & 15.0 \\ & LuptonHC & 5.4 & 7.9 & 5.3 & 4.8 & 4.4 & 4.8 & 13.7 & **14.0** \\ & zscale & 0.3 & 1.2 & 1.3 & 1.9 & 0.1 & 0.2 & **13.6** & 13.5 \\ \hline \end{tabular}
\end{table}
Table 3: Same as Table 2, but with the COSMOS HST test set.
the R50cas, R101fpn and X101fpn networks remain competitive. Segmentation mask IOUs are lower than bounding box IOUs in all cases. This indicates that while the networks are able to identify overall object sizes quite well, the finer details of object shapes within the bounding boxes are not as well inferred.
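For reference, the segmentation mask IOU is the same ratio as eq. 6 evaluated on boolean pixel masks; a minimal numpy sketch:

```python
import numpy as np

def mask_iou(pred_mask, true_mask):
    """IOU between two boolean segmentation masks of the same shape."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return inter / union if union else 0.0
```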
The median IOUs measure the ability of the network to detect and segment objects, but they do not fully capture the deblending power of the networks. We examine the cases of a few close blends to get a sense of the ability of the networks to distinguish large overlapping objects. We demonstrate the deblending capabilities of the different networks in Figure 8. In very crowded scenes, the networks are able to distinguish the individual sources, and even pick up objects that are not present in the labelled set, which may present an advantage for studies of low surface-brightness galaxies. As discussed in Section 4.2, this is likely due to the difference in object detection abilities of the Region Proposal Networks compared to peak-finding methods, and highlights that the models are not limited by the training data, but are able to extrapolate beyond it. It is also possible to alter inference hyperparameters such as IOU or detection confidence thresholds, which could allow for more or fewer detections, or more overlap between detections. In Figure 9 we demonstrate the effect of lowering the confidence threshold hyperparameter, allowing for more low-confidence detections. While not equivalent, this is similar to lowering the detection threshold in peak-finding algorithms. There are cases in which deblending is poor, and these are typically very large galaxies with one or more very large and very close companions. In such instances, it may be better to use a different contrast scaling. In Figure 10, a Lupton contrast scaling prevents the network from deblending multiple large sources. With the same IOU/confidence score thresholds, a z-scaling works to better isolate the two sources. This is likely due to the much larger dynamic range of our z-scaling, which allows for less smearing of the sources and more distinguishing power in this case. Overall, there does not seem to be a one-size-fits-all network configuration for the cases of very large and very close blends. Training on more data would likely improve the ability to detect and segment these objects.
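In Detectron2 these inference-time thresholds are exposed through the config; a sketch of how they might be lowered at evaluation time is shown below, where the config and weight paths are placeholders.

```python
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file("config.yaml")           # placeholder: trained model config
cfg.MODEL.WEIGHTS = "model_final.pth"        # placeholder: trained weights
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.3  # allow lower-confidence detections
cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.5    # overlap allowed between detections
predictor = DefaultPredictor(cfg)
# outputs = predictor(image)  # image: H x W x 3 numpy array (BGR by default)
```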
## 5 Discussion
The effectiveness of instance segmentation models has been proven in many domains, boosted by the ability of networks to work "out-of-the-box" and without much fine-tuning. It has been shown that an object detection model based on the Mask R-CNN framework performs well in the classification and detection/segmentation of simulated astronomical survey images (Burke et al., 2019). In this work, we have trained and tested a broad range of state-of-the-art instance segmentation models on real data taken from the HSC SSP Data Release 3 to push the direction of deep learning based galaxy detection, classification, and deblending towards real applications. Network training and evaluation performance is limited by the efficacy of our label generation methodology, a task not easily formulated when the ground truth is not completely known. This limitation also affects the choices of metrics we use to measure network performance. Often, classification and detection power are combined into the AP score, used throughout the instance segmentation literature. However, this may not be the best choice of metric for comparisons, as it implicitly assumes the completeness and correctness of the ground truth labels. To attempt to mitigate the effects of incorrect labels on performance metrics, we construct a test set of objects with class labels determined from more accurate space-based HST observations. However, since the AP metric artificially suffers from the detections of "false positives" that are true objects simply missing from the labelled set and/or the presence of spurious ground truth detections, we further attempt to mitigate this bias by constraining performance metrics to detected objects that have a matched ground truth label.
We find that all networks perform well at classifying galaxies, even out to the faintest in the sample. Despite the wide variety of colors, sizes, and morphologies in the real imaging data, our models can identify these objects. Stellar classification is worse, likely due to the smaller sample size in the training and test set. Transformer based networks generally outperform ResNet based networks in classification power of both stars and galaxies. They also appear to be more robust classifiers as magnitudes become fainter. Transformer based models maintain near 100% completeness (recall) and purity (precision) of galaxy selection across the whole sample and above 60% completeness and 80% purity of stars out to i-band magnitudes of 25 mag. These models are able to outperform the extendedness classifier used in the HSC catalogs, which depending on cuts yields near 100% galaxy purity, roughly 90% galaxy completeness, stellar completeness slightly above 50% and stellar purity slightly above
Figure 5: Galaxy and star counts for our COSMOS set, with labels generated from HSC and HST catalogs. The extra counts of HSC stars at faint magnitudes is due to galaxy contamination when classification is based on the extendedness metric. The low sample of bright HSC stars follows from our catalog matching procedure of excluding objects with NaN values.
40% at i-band magnitudes of 25 mag (Bosch et al., 2018). The performance increase of our models is especially noteworthy because they are able to surpass the HSC class labelling despite being trained with it. Transformer models are also more robust to different contrast scalings than traditional convolutional neural networks, indicating that they may be more applicable to a wide range of images across surveys with different dynamic ranges.
The detection/deblending capabilities are measured by the median bounding box IOUs of the networks. Again, transformer based networks generally outperform convolutional ResNet based networks. The improved performance of transformer networks over convolutional based ones may be attributable to the ability of different attention heads to encode information at different image scale sizes (Dosovitskiy et al., 2020), allowing for more overall global information propagation than CNNs. While a convolutional neural network is able to learn spatial features through sliding a kernel across an image, a transformer learns features over the entire input at once, removing any limitations due to kernel sizes. It is possible that the transformer backbones are implicitly utilizing large scale features in the images such as the spatial clustering of objects, background noise or seeing and using these bulk properties to inform the network.
We examine a few cases of close blends to qualitatively see how the networks distinguish objects. There are cases in which the networks do not detect close objects, but these can sometimes be mitigated by altering the confidence and NMS IOU threshold hyperparameters (which can be done after training). In other cases, using a different contrast scaling helps to isolate closely blended objects.
There is room to improve both classification and segmentation of these models in future work. One possibility is constructing a larger training set with more accurate labels. With better and larger samples of stars/galaxies, networks may perform better on classification. The more close blends of large galaxies seen during training, the more likely the networks will be able to distinguish these scenes. There could be more fine-tuning of hyperparameters done to the architectures before training, rather than running them out-of-the-box. Additionally, the use of more photometric information could help
Figure 6: Top: Galaxy precision/recall metrics as a function of object magnitude in the HST i-band. Line styles correspond to the individual backbone architectures following the legend, and colors indicate which contrast scaling was used (red for Lupton, blue for LuptonHC and black for z-scale). The black vertical line indicates the Deep/UltraDeep i-band 5\(\sigma\) magnitude of 26.9 mag. The y-axis is truncated to better show the differences across the models. Bottom: Stellar precision/recall metrics as a function of object magnitude in the HST i-band.
\begin{table}
\begin{tabular}{l c c c c c c c|c c} \hline \hline & \multicolumn{6}{c}{ResNets} & \multicolumn{4}{c}{Transformers} \\ & R101C4 & R101dc5 & R101fpn & R50cas & R50def & X101fpn & MViTv2 & Swin \\ \hline Lup & 0.75 (0.61) & 0.78 (0.57) & 0.93 (0.63) & **0.94** (0.62) & 0.93 (0.64) & 0.93 (0.64) & **0.94** (0.64) & **0.94** (0.64) \\ LupHC & 0.76 (0.61) & 0.79 (0.58) & 0.93 (0.64) & **0.94** (0.64) & 0.93 (0.64) & 0.93 (0.64) & **0.94** (0.64) & **0.94** (0.64) \\ Zscale & 0.78 (0.61) & 0.81 (0.59) & 0.92 (0.62) & 0.93 (0.63) & 0.82 (0.65) & 0.91 (0.64) & **0.94** (0.65) & **0.94** (0.65) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Median bounding box IOUs for matched objects in the COSMOS HST test set. The best bounding box IOU for each row is emphasized in bold. Also shown in parentheses are the median segmentation mask IOUs. An IOU above 0.5 is considered to be a good match, and a score of 1.0 is a perfect overlap of ground truth and inference.
in all tasks. We use the \(i\), \(r\) and \(g\) bands on the HSC instrument in this work, corresponding to RGB color images, but could further investigate the performance if we include the \(z\) and \(y\) bands.
It is possible that these networks need to be trained longer, or that the fundamentally different properties of astronomical images compared to terrestrial ones limit the ability of these architectures to extract useful features for classification. Despite our attempts to mitigate measurement biases arising from label generation, classification remains a challenge for these models at faint magnitudes. A machine learning model has already been used to classify HSC data using photometry information with better accuracy than morphological methods, but it relies on the upstream task of detection (Bosch et al., 2018). The instance segmentation models presented in this work are able to identify and assign classes after training using only an image as input.
## 6 Conclusions
In the current epoch of astronomical research, it is already necessary for machine learning algorithms to parse through massive sets of images. A first step in catalog construction is detecting objects in the imaging data. Advancements in the broader computer vision community have given rise to a large ecosystem of models that perform many necessary tasks at once, including detection, segmentation, and classification. While tried and tested on terrestrial data and shown to work on simulated astronomical data, their application to real survey images remains a work in progress. Many methods rely on the object detection stage to produce measurements of individual objects. In this work, we employ a variety of instance segmentation models available through Detectron2 to perform the detection task as well as deblending and object classification simultaneously on images taken from the HSC-SSP Data Release 3. We carefully construct ground truth labels with existing frameworks and catalog matching, and caution that real data gives no straightforward way of producing labels. We find that the best networks perform well at classifying the faintest galaxies in the sample, and perform better than traditional methods at classifying stars up to \(i\)-band magnitudes of \(\sim\)25 mag. We find that even if trained on less accurate class labels, the neural networks still pick up on useful features that allow inference of the true underlying class. We expect more data with accurate labels to improve performance. The best performing models are able to detect and deblend objects, closely matching ground truth object locations and bounding boxes. Transformer networks appear to be a promising avenue of exploration in further studies.
There are many other areas for future study. While we tested a variety of models, there are many within Detectron2 that we did not implement. Some architectures are quite large and require significant resources to train. For example, we attempted to implement ViT backbones (Dosovitskiy et al., 2020) among our set of transformer-based architectures, but were limited by the available GPU memory.
Figure 7: Bounding box IOUs of each detected object that is matched to a ground truth object. Rows show the results for different backbones. Top: results for ResNet backbones. Bottom: results for transformer backbones. The left column represents Lupton scaling, the middle Lupton high-contrast and the right z-scaling.
Many models, especially transformers, are trained with state-of-the-art computing resources at FAIR or other organizations, and subsequently retraining them demands significant resources. Tests could be done on other sets of real data, with other downstream tasks in mind. For example, Gonzalez et al. (2018) investigate the application of instance segmentation models on SDSS data to classify galaxy morphologies. It would be straightforward to add additional classes, or implement a redshift estimation network using the modular nature of detectron2. In future work we plan to add a photo-z estimator branch to the Mask R-CNN/transformer networks and interface with the LSST software RAIL (Redshift Assessment Infrastructure Layers)2. The availability of realistic LSST-like simulations (LSST Dark Energy Science Collaboration (LSST DESC) et al., 2021) for training will allow us to avoid biases from label generation. The efficiency of neural networks and the ability to perform multiple tasks at once is now a necessity with the amount of survey data pouring into pipelines.
Footnote 2: [https://github.com/LSSTDESC/RAIL](https://github.com/LSSTDESC/RAIL)
As surveys push deeper into the sky, they will produce unprecedented numbers of objects that will need to be processed. LSST will provide the deepest ground-based observations ever, surveying terabytes of data every night, highlighting a need for accurate and precise object detection and classification, potentially in real time. Correctly classifying and deblending sources will be necessary for a wide range of studies, and deep instance segmentation models will be a valuable tool in handling these tasks.
Figure 8: Inference on a close blend. The ground truth is shown on the left. RGB images are created with a Lupton contrast scaling. Other panels show model inference of segmentation maps and classes. Top row, left to right: R101C4, R101dc5, R101fpn. Bottom row, left to right: R50cas, R50def, X101fpn. The colors indicate classes, green for galaxy and red for star. Differences in detections are solely due to the different backbones. While the networks do not pick up every ground truth object, they are also able to detect real objects that were missed by our ground truth labelling.
Figure 9: Inference on the same close blend as Figure 8, but only with a Swin architecture. The ground truth is shown in the leftmost panel, and the effect of lowering the detection confidence threshold to 0.5, 0.4, and 0.3 is shown from left to right. As the threshold is lowered, objects within a larger footprint are detected.
## Acknowledgements
We thank Dr. S. Luo and Dr. D. Mu at the National Center for Supercomputing Applications (NCSA) for their assistance with the GPU cluster used in this work. We thank Y. Shen for helpful discussion on the HST observations of the COSMOS field. G.M., Y.L., Y.L. and X.L. acknowledge support from the NCSA Faculty Fellowship and the NCSA SPIN programs.
This work utilizes resources supported by the National Science Foundation's Major Research Instrumentation program, grant #1725729, as well as the University of Illinois at Urbana-Champaign. We acknowledge use of Matplotlib (Hunter, 2007), a community-developed Python library for plotting. This research made use of Astropy,3 a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013; Price-Whelan et al., 2018). This research has made use of NASA's Astrophysics Data System.
Footnote 3: [http://www.astropy.org](http://www.astropy.org)
The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University.
This paper makes use of software developed for the Large Synoptic Survey Telescope. We thank the LSST Project for making their code available as free software at [http://dm.lsst.org](http://dm.lsst.org)
The Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE) and the Los Alamos National Laboratory.
Based [in part] on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by Subaru Telescope and Astronomy Data Center at National Astronomical Observatory of Japan.
This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology.
|
2301.08054
|
Self-doped graphite nanobelts
|
We report that mechanical deformation of graphite with cavity shock waves
introduces a new group of charge carriers, with both effective mass and native
concentration one order of magnitude above those found in the pristine
material. Their nature, however, remains quasi-2D. Our results show that
defects introduced during mechanical exfoliation have the potential to unlock
oscillatory behavior above 50 T in graphite, thus providing a new probe for
field-induced electronic phase transitions in the material.
|
Bruno Cury Camargo, Banan El-Kerdi, Andrei Alaferdov, Walter Escoffier
|
2023-01-19T13:00:13Z
|
http://arxiv.org/abs/2301.08054v1
|
# Self-doped graphite nanobelts
###### Abstract
We report that mechanical deformation of graphite with cavity shock waves introduces a new group of charge carriers, with both effective mass and native concentration one order of magnitude above those found in the pristine material. Their nature, however, remains quasi-2D. Our results show that defects introduced during mechanical exfoliation have the potential to unlock oscillatory behavior above 50 T in graphite, thus providing a new probe for field-induced electronic phase transitions in the material.
+
Footnote †: journal: Carbon
## 1 Introduction
Although recent years have seen a steady progress in graphene manufacturing [1; 2], scalable applications relying on its electronic properties are still scarce. Among other reasons, this happens because substrates severely limit the electronic properties of 2D crystals, and because large-scale graphene manufacturing processes often yield multigraphenes or poor-quality material [2; 3]. Workarounds for these issues often involve expensive and labour-intensive top-bottom encapsulation and exfoliation procedures which, albeit yielding impressive results, are restricted to small-scale laboratory use [4; 5; 6].
A possible alternative to such an issue is to employ multilayer graphenes, or thin graphite flakes in lieu of graphene. Graphite is a quasi-compensated
semimetal composed of stacked sheets of graphene. Although single crystals of this material are yet to be achieved and characterized, it is generally accepted that its charge carriers exhibit both low concentrations (\(\approx 10^{10}\) cm\({}^{-2}\)) and low effective masses (\(m\approx 0.05\)\(m_{e}\)) [7].
The sub-nanometer screening length in this material warrants better tolerance against impurities and irregularities arising from substrates, while keeping some characteristics of graphene almost intact (such as high electronic mobility, quasi-linear dispersion and electron-hole compensation) [7; 8]. This approach, however, comes at the cost of in-situ control of the material's electronic properties, which is a more arduous task in bulk systems. Although feasible in mesoscopic devices properly encapsulated between insulating materials, the necessity of top and bottom gate electrodes makes this process as labour-intensive and non-scalable as in graphene [9].
A perhaps naive solution to this dilemma is to directly control the charge carrier concentration in the specimens by modulating graphite's native charge carrier concentration. Due to the high temperatures necessary to synthesize graphite, however, this is not a simple task [10]. Dopants usually interfere with the graphitization process, and end up either being eliminated from the final product or generating graphite of inferior quality [11; 12]. Luckily, the introduction of defects in graphite also has the potential to introduce charge carriers in the material. It has long been demonstrated that vacancies induced by neutron radiation, for example, produce a small charge imbalance in graphite towards holes [13]. The self-doping imposed by defects in graphite has some advantages over doping achieved with foreign elements. Besides impeding migration (as in the case of mobile interstitial ions [14]), the lack of foreign elements guarantees that no contamination would seep out of graphite in various environments [14]. It also ensures a better overall chemical compatibility between graphite and its surroundings.
Here, we take this approach and demonstrate that mechanically-treated graphite flakes obtained through liquid-phase exfoliation [15] exhibit a group of quasi-2D charge carriers with a concentration much above that of pristine oriented
graphite. Quantum oscillations in this material persist to unusually high magnetic fields, in excess of 50 T. In this field range, they seem to overlap with electronic phase transitions associated with many-body effects in pristine graphite. The latter, denoted by the presence of a magnetic-field-induced high resistance state (HRS), is currently described as a Fermi surface instability triggered exclusively above the quantum limit. As we shall see, however, the Landau quantization regime in our system surprisingly coexists with the HRS, making this material an ideal test subject for models aiming at describing the high magnetic field behavior of graphite.
## 2 Results and discussion
Samples used here were composed of previously-synthesized narrow graphite belts, a few micrometers wide and several micrometers long [15]. In short, they were obtained by chemically-assisted, liquid-phase exfoliation of natural graphite flakes, followed by a brief annealing treatment at 2950 \({}^{o}C\) for 10 sec. This resulted in small graphite belts with thickness varying between 10 nm and 100 nm, which were then deposited atop a 300 nm SiO\({}_{2}\)-coated Si substrate, and individually contacted with thermally-evaporated Pd/Au electrodes in a standard 4-probe configuration. A sample picture is shown on Fig. 1. After contacting, the samples were subjected to resistance vs. temperature and magnetic field measurements (R(T) and R(B), respectively) in the temperature range 300 mK \(\leq T\leq\) 300 K, for \(B\) up to 60 T. In total, three samples were measured. They presented the same qualitative results, the most representative of which is shown here.
The samples exhibited insulating-like R(T) curves (\(dR/dT<0\)), with saturation below 40 K. Such a behavior is typical of disordered graphites [10], with the insulating-like dependency being attributed to a reduction of the carrier mobility with temperature in disordered systems [16]. At intermediate temperatures (between 40 K and 200 K), the resistance followed an \(R(T)\approx log(1/T)\) dependence. This functional form suggests a granular material in which the
charge carrier concentration increases with T [17]. An in-depth analysis of such a behavior in this kind of samples will be published elsewhere [18].
The sample magnetoresistance was also typical of various types of graphite [19; 20; 21]. A superlinear \(R\propto B^{1.2}\) behavior was observed at low magnetic fields, followed by saturation and a region of negative magnetoresistance above 20 T. Between 35 T and 51 T, a high resistance state (HRS) was observed. The HRS is a characteristic feature of graphite and is believed to be associated with a c-axis density-wave transition in the material [21; 22]. Superimposed on this non-monotonic MR background, however, an unexpected oscillation could be resolved, which persisted above the temperatures necessary to suppress the HRS. It started around 15 T and remained up to the highest measured magnetic field. This effect is shown in figs. 2 and 3, depending on temperature and relative sample orientation to the magnetic field. In order to separate this oscillating component of the magnetoresistance (\(\Delta R\)) from the rest of the data,
Figure 1: Resistance vs. temperature measurements for the sample presented here. The red line is a guide to the eye. The bottom left inset contains a picture of the device, superimposed to a false color map showing the ratio between Raman’s D and G peaks in the region between the voltage electrodes. The dashed line indicates the sample boundary, and the resolution of the Raman map is \(0.5\times 0.5\)\(\mu\)m\({}^{2}\). The top right inset shows a typical Raman spectrum of the sample. In it, the G, D and 2D peaks of graphite are indicated. Additionally, a Raman peak of the Si substrate at 500 cm\({}^{-1}\) is also observed, demonstrating that the measurement samples the entire sample cross section at the LASER spot.
two different approaches were taken, as illustrated in fig. 3. In one of them, a smooth decaying background was considered above 25 T (background 1), whereas in the other a non-monotonic decay was assumed, following the mean of the oscillatory behavior (background 2). The latter accounted for the removal of both the region of negative magnetoresistance and the HRS feature, at the cost of the accuracy of the quantum oscillations' amplitude for \(B>30\) T. Regardless of the method employed, \(\Delta R\) was found to be periodic in the inverse magnetic field (see fig. 2), akin to the Shubnikov-de-Haas effect (SdH), and apparently exhibited two frequencies: 86 T and its double 172 T.
This quantum oscillatory behavior scaled with the quantizing magnetic field in tilted magnetoresistance measurements, as shown in fig. 3. A doubling of the oscillating peak was observed at \(B\cos(\theta)=25\) T (\(1/B\approx 0.04\)) by increasing \(\theta\), which was defined as the angle between the applied field and the sample's c-axis. Such a result suggests that the 172 T component of the quantum oscillations was associated with the Zeeman splitting of the 86 T component. The scaling of the oscillatory behavior with \(B\cos(\theta)\) further indicates that the associated group of charge carriers is confined in-plane, and/or forms a highly anisotropic pocket in graphite's Fermi surface. Considering graphite's band structure [7], an 86 T oscillation frequency corresponds to an in-plane carrier concentration of \(2.3\times 10^{12}\) cm\({}^{-2}\).
Their source, however, is unlikely to be associated with interstitial and substitutional dopants, as extensive x-ray diffractometry and x-ray photoemission spectroscopy (XPS) did not reveal any sizable quantity of foreign elements (see the SI and ref. [15]). Back-gating with \(V_{G}=\pm 50\) V had no effect on the quantum oscillations either, thus suggesting that the phenomenon is associated with bulk charge carriers rather than with quasi-2D charge carriers located at the interface between graphite and the substrate. Raman mapping of the sample, which probed its entire volume (as evidenced by the presence of a Raman line of the underlying Si during measurements on graphite's surface, see fig. 1), also revealed a homogeneous intensity for graphite's D peak (\(\approx 1350\) cm\({}^{-1}\)) throughout the device. The ratio between
the D and G peaks was nearly constant, below 1/10. These results indicate some evenly-distributed disorder in the sp\({}^{2}\) bonding across the sample, with no "hot" region of defects [23].
Curiously, the charge carrier concentration estimated from the 86 T component of the quantum oscillations approximately matched the average density of stacking defects in the material, of \(\approx 2\times 10^{19}\) cm\({}^{-3}\). These defects, composed of interstitial non-bonded graphene edges and interconnecting planes [15], were directly imaged through TEM measurements (see ref. [15] and the SI). An estimation of their density in our samples was performed by sampling their numbers in cross-sectional TEM images of different nanobelts of the same batch.
The Lifshitz-Kosevich (L-K) model for quantum oscillations in solids describes reasonably well the oscillatory component of the magnetoresistance, down to inverse magnetic fields of \(B^{-1}\approx 0.035\;\mathrm{T}^{-1}\). Taking the quantum oscillations frequency at 86 T, this magnetic field corresponds to the emptying of the \(n=3\) LL in the material (see fig. 4). Below this value of \(B^{-1}\), a clear frequency doubling was observed, accompanied by a quantum oscillation ampli
Figure 2: Magnetoresistance curves measured at different temperatures. The inset shows their oscillating component \(\Delta R\) vs. the inverse magnetic field. The dashed lines are evenly spaced and are a guide to the eye. A background subtraction attempting to remove the contribution of the HRS was employed (“background 2” - see fig. 3 and the main text). All curves have been displaced vertically for clarity.
tude halving. Such a behavior can be attributed to a Zeeman splitting \(\Delta_{s}=g\mu_{B}B\) at a half-integer ratio to the cyclotron energy [24] in the material (\(g\) the gyromagnetic factor and \(\mu_{B}\) the Bohr magneton), thus corroborating our angle-dependent magnetoresistance measurements (fig. 3). Indeed, the quantum oscillations in this field range were better described by a modified L-K model accounting for a large relative spin splitting [25]:
\[\begin{split}\Delta R\propto\sum_{s=1}^{\infty}(-1)^{s}\exp{ \left[-2\left(\frac{\pi s}{\omega_{c}\tau_{Q}}\right)\right]}\frac{2s\pi^{2} k_{B}T/\hbar\omega_{c}}{\sinh 2s\pi^{2}k_{B}T/\hbar\omega_{c}}\\ \times\cos{\left(\frac{2s\pi B_{0}}{B}\right)}\cos{\left(\frac{ \pi s\Delta_{s}}{\hbar\omega_{c}}\right)},\end{split} \tag{1}\]
where \(\tau_{Q}\) is the quantum lifetime of carriers, \(\omega_{c}=eB/m^{*}\) the cyclotron
Figure 3: Oscillating component of magnetoresistance \(\Delta R\) as a function of the reciprocal quantizing magnetic field \(1/B\cos(\theta)\). The angle \(\theta\) corresponds to the angle between the sample's c-axis and the applied magnetic field. Curves have been displaced vertically for clarity. The contribution due to the magnetoresistance anomaly was not removed ("background 1" in the inset). Its onset is indicated by black arrows. Measurements were performed at T = 1.6 K. The dashed lines are guides to the eye and indicate regions where a splitting of the maxima is observed as the sample tilt increases. The inset shows a raw R(B) measurement, obtained at T = 1.6 K and \(\theta=0\) deg. Superimposed on the experimental data are lines used to subtract the non-monotonic MR background while ignoring the HRS (red (darker) line, background 1) and while attempting to remove it (cyan (lighter) line, background 2).
frequency of the carriers with effective mass \(m^{*}\), \(k_{B}\) the Boltzmann constant, and \(B_{0}\) the quantum oscillation frequency. Unfortunately, due to its proximity with the HRS, the background subtraction for magnetic fields \(B^{-1}<0.035\) T\({}^{-1}\) is not as accurate as for the remainder of the experimental window (see fig. 3), and the amplitude of the quantum oscillations in this region could not be quantitatively evaluated.
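For illustration, eq. 1 can be evaluated numerically with the parameters quoted in the text and in the caption of fig. 4; the following sketch uses SI constants, truncates the harmonic sum, and leaves the overall prefactor arbitrary.

```python
import numpy as np

# Physical constants (SI).
e, m_e, hbar, k_B, mu_B = 1.602e-19, 9.109e-31, 1.055e-34, 1.381e-23, 9.274e-24

# Parameters quoted in the text and in fig. 4.
m_eff, B0, tau_Q, g, T = 0.59 * m_e, 86.0, 1.35e-13, 2.1, 1.6

def delta_R(B, n_harmonics=5):
    """Modified Lifshitz-Kosevich oscillation of eq. 1, up to a prefactor."""
    wc = e * B / m_eff                                    # cyclotron frequency
    total = 0.0
    for s in range(1, n_harmonics + 1):
        x = 2 * s * np.pi**2 * k_B * T / (hbar * wc)      # thermal damping argument
        dingle = np.exp(-2 * np.pi * s / (wc * tau_Q))    # field-dependent damping
        zeeman = np.cos(np.pi * s * g * mu_B * B / (hbar * wc))
        total += ((-1)**s * dingle * x / np.sinh(x)
                  * np.cos(2 * np.pi * s * B0 / B) * zeeman)
    return total

B = np.linspace(15, 60, 500)
osc = delta_R(B)  # oscillating MR component vs. field, arbitrary units
```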
Yet, the main oscillatory component of MR in this region allowed the estimation of the spin-splitting parameter \(S\equiv(1/2)g(m^{*}/m_{e})=[B_{0}\Delta(1/B)]^{-1}\). Here, \(\Delta(1/B)\) is the distance between the split peaks of the same Landau level (main sub-peaks, labelled \(3+\) and \(3-\) in fig. 4). The electronic Lande g-factor obtained using this relation had a value of \(g\approx 2.1\pm 0.2\), which is - within accuracy - the same value obtained through spin resonance measurements in pristine graphite [26]. We note, however, that the qualitative behavior in this field range is clearly more convoluted, and not fully captured by our simplified approach. For example, each of the two main peaks is apparently composed of two maxima. These fine structures can have different origins, which are beyond the scope of this study.
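As a quick numerical consistency check (the measured peak splitting is not quoted explicitly, so the value of \(\Delta(1/B)\) below is an assumption chosen for illustration):

```python
# Illustrative only: delta_inv_B is an assumed splitting of the n = 3 sub-peaks.
B0 = 86.0             # quantum oscillation frequency (tesla)
m_ratio = 0.6         # m*/m_e from the temperature damping of the oscillations
delta_inv_B = 0.0185  # assumed distance between split peaks (1/tesla)
S = 1.0 / (B0 * delta_inv_B)  # S = (1/2) g (m*/m_e)
g = 2.0 * S / m_ratio
print(round(g, 2))    # ~2.1, of the order of the value quoted in the text
```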
Outside this field range, the effective electronic masses were extracted from the variation of the amplitude of the quantum oscillations (QO) with temperature (see the SI). They yielded an effective value of ca. \(m^{*}=0.6\pm 0.1\)\(m_{e}\), where \(m_{e}\) is the bare electron mass. This value is about one order of magnitude above the effective mass of carriers commonly found in pristine graphite [7]. Meanwhile, the quantum scattering times - extracted from the decay of \(\Delta R\) with \(1/B\) at low temperature - yielded values of \((1.35\pm 0.4)\times 10^{-13}\) s, within the vicinity of the \(\approx 1.7\times 10^{-13}\) s previously reported for pristine graphite [29].
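A sketch of how the effective mass can be extracted from the temperature dependence of the oscillation amplitude is given below; the amplitude values are synthetic placeholders (the measured ones are in the SI), and the fit uses only the thermal damping factor of eq. 1.

```python
import numpy as np
from scipy.optimize import curve_fit

e, m_e, hbar, k_B = 1.602e-19, 9.109e-31, 1.055e-34, 1.381e-23
B_fixed = 25.0  # field (tesla) at which the oscillation amplitude is tracked

def lk_thermal(T, m_ratio, A0):
    # Thermal damping factor X/sinh(X), with X = 2*pi^2*k_B*T*m*/(hbar*e*B).
    x = 2 * np.pi**2 * k_B * T * m_ratio * m_e / (hbar * e * B_fixed)
    return A0 * x / np.sinh(x)

# Placeholder amplitudes; in practice these come from the measured Delta R(T).
T = np.array([1.6, 4.2, 10.0, 20.0])
A = lk_thermal(T, 0.6, 1.0)
(m_fit, A0_fit), _ = curve_fit(lk_thermal, T, A, p0=(0.4, 1.0))
print(m_fit)  # recovers the m*/m_e used to generate the placeholder data
```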
Combined, these results suggest that the new group of charge carriers constitutes a separate band in the graphite samples. These carriers have a larger mass, a smaller electronic mobility, and one order of magnitude higher charge carrier concentration than those found in the pristine material. Their presence is not usually resolved or discussed in measurements performed on bulk or mesoscopic samples [20, 29, 30, 22, 31, 32], being a particularity of our mechanically-treated
devices.
An interesting consequence of the results shown here is that the HRS might occur outside the quantum limit in graphite. While this is certainly a possibility (the HRS is triggered concomitantly with the crossing of the \(n=3+\) Landau subband, therefore occurring outside the quantum limit), we are not able to discard the possibility that small parts of the sample retain their pristine graphite behavior. Therefore, although the triggering of the HRS coincides with the depopulation of the 3+ Landau subband for the carriers discussed here, it is possible that both phenomena are independent. Regardless, the presence of quantum oscillations in this magnetic field range in our samples provides an excellent playground to study subtleties of the magnetic-field-triggered electronic instabilities in graphite.
Figure 4: Oscillating component of magnetoresistance (red points) at T = 1.6 K, obtained after the removal of a non-monotonic background accounting for the HRS (background 2, see fig. 3). The blue line is a fit using the L-K model in eq. 1, with parameters \(m=0.59~{}m_{e}\), \(B_{0}=86\) T, and \(\tau=1.35\times 10^{-13}\) s. The numbers are tentative indices for the Landau levels resolved from the data. The black line is an envelope function of the type \(exp(-1/\omega_{c}\tau_{Q})\). The shaded area corresponds to the region where the HRS is observed, outside of which the L-K fit is satisfactory. The inset shows the same data, with a different background correction (background 1, see fig. 3).
The new group of charge carriers reported here is not usually seen. On some occasions (e.g. in refs [33; 34]) and upon close inspection (e.g. refs [31; 32; 35] - see the SI), however, it is possible to resolve sample-dependent oscillating features above 20 T in several devices, both periodic in B and 1/B. When periodic in 1/B, their frequencies vary between 30 T and above 100 T [31; 32; 35] (see the SI). Such a behavior, which is seldom addressed, might reflect different degrees of disorder in the material, not always captured by its mosaicity - the preferred parameter to infer sample quality [36]. A general lack of reports on the high frequency oscillating components in the MR of graphite (or their feeble contribution) might stem, therefore, from the fact that most works regarding the high-magnetic field properties of this material attempt to assess the properties of the HRS, and hence utilize the best-quality available bulk samples. Conversely, here, we utilized shockwave-treated micro-graphite which, albeit possessing high crystallinity, had an inherently higher disorder in comparison with pristine natural flakes [15].
## 3 Conclusions
In short, in this report, we demonstrated the presence of a new group of charge carriers in shockwave-treated graphite, with density and effective mass ten times above those found in the pristine material. Our results suggest that the introduced carriers maintain graphite's quantum scattering rates and highly anisotropic band structure, whilst possessing a much higher effective mass. Such an approach opens routes towards the study of the electronic correlations occurring in the high-magnetic-field phase of graphite, as well as for the fabrication of multilayer-graphene-based devices with hard-tuned charge carrier concentration - a desirable feature in carbon-based electronics and analog processing circuits.
## Acknowledgments
We would like to thank Y. Kopelevich, C. Precker for fruitful discussions and J. Binder for assistance with Raman measurements. This work was supported in part by the National Science Center, Poland, research project no. UMO-2016/23/P/ST3/03514. We acknowledge the support of LNCMI-CNRS, a member of the European Magnetic Field Laboratory (EMFL), under the proposal number TSC09-119.
|
2306.04811
|
Generative Text-Guided 3D Vision-Language Pretraining for Unified
Medical Image Segmentation
|
Vision-Language Pretraining (VLP) has demonstrated remarkable capabilities in
learning visual representations from textual descriptions of images without
annotations. Yet, effective VLP demands large-scale image-text pairs, a
resource that suffers scarcity in the medical domain. Moreover, conventional
VLP is limited to 2D images while medical images encompass diverse modalities,
often in 3D, making the learning process more challenging. To address these
challenges, we present Generative Text-Guided 3D Vision-Language Pretraining
for Unified Medical Image Segmentation (GTGM), a framework that extends VLP
to 3D medical images without relying on paired textual descriptions.
Specifically, GTGM utilizes large language models (LLM) to generate
medical-style text from 3D medical images. This synthetic text is then used to
supervise 3D visual representation learning. Furthermore, a negative-free
contrastive learning objective strategy is introduced to cultivate consistent
visual representations between augmented 3D medical image patches, which
effectively mitigates the biases associated with strict positive-negative
sample pairings. We evaluate GTGM on three imaging modalities - Computed
Tomography (CT), Magnetic Resonance Imaging (MRI), and electron microscopy (EM)
over 13 datasets. GTGM's superior performance across various medical image
segmentation tasks underscores its effectiveness and versatility, by enabling
VLP extension into 3D medical imagery while bypassing the need for paired text.
|
Yinda Chen, Che Liu, Wei Huang, Sibo Cheng, Rossella Arcucci, Zhiwei Xiong
|
2023-06-07T22:20:51Z
|
http://arxiv.org/abs/2306.04811v1
|
# Generative Text-Guided 3D Vision-Language Pretraining for Unified Medical Image Segmentation
###### Abstract
Vision-Language Pretraining (VLP) has demonstrated remarkable capabilities in learning visual representations from textual descriptions of images without annotations. Yet, effective VLP demands large-scale image-text pairs, a resource that suffers scarcity in the medical domain. Moreover, conventional VLP is limited to 2D images while medical images encompass diverse modalities, often in 3D, making the learning process more challenging. To address these challenges, we present **G**enerative **T**ext-**G**uided 3D Vision-Language Pretraining for Unified **M**edical Image Segmentation (GTGM), a framework that extends VLP to 3D medical images without relying on paired textual descriptions. Specifically, GTGM utilizes large language models (LLM) to generate medical-style text from 3D medical images. This synthetic text is then used to supervise 3D visual representation learning. Furthermore, a negative-free contrastive learning objective strategy is introduced to cultivate consistent visual representations between augmented 3D medical image patches, which effectively mitigates the biases associated with strict positive-negative sample pairings. We evaluate GTGM on three imaging modalities - Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and electron microscopy (EM) - over 13 datasets. GTGM's superior performance across various medical image segmentation tasks underscores its effectiveness and versatility, by enabling VLP extension into 3D medical imagery while bypassing the need for paired text.
## 1 Introduction
Vision-Language Pretraining (VLP) has achieved significant progress Radford et al. (2021); Li et al. (2022); Xue et al. (2022); Alayrac et al. (2022), owing to its capabilities in learning visual representations from textual descriptions of images without annotations. While VLP has been introduced to 2D medical image analysis recently, existing medical VLP works rely heavily on textual descriptions written by experienced experts, and the domain of 3D medical VLP remains largely unexplored Zhang et al. (2020); Huang et al. (2021); Boecking et al. (2022); Wang et al. (2022); Zhou et al. (2023). Despite the fact that 3D medical images, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and electron microscopy (EM), typically contain more valuable and clinically relevant information compared to 2D images, their utilization is hindered primarily due to the lack of associated 3D medical image-text datasets. Additionally, certain modalities of medical imaging, like EM, often do not have corresponding textual descriptions in real-world applications.
To address the above challenges, we propose a framework called **G**enerative **T**ext-**G**uided 3D Vision-Language Pretraining for Unified **M**edical Image Segmentation (GTGM). GTGM leverages the power of VLP in 3D medical image analysis by employing large language models (LLM) to generate medically relevant textual descriptions for 3D medical images. The main goal of GTGM is to learn general and robust 3D representations from these synthetic textual descriptions rather than specific organs and modalities. The differences between GTGM and existing image self-supervised algorithms can be summarized in Figure 1. GTGM integrates two learning objectives: acquiring visual-textual invariants from 3D medical images and synthetic text and extracting visual invariants from augmented 3D medical images. To learn general 3D visual representations, we introduce a negative-free contrastive learning strategy. This strategy aims to disentangle the variables in the latent space, rather than following traditional contrastive learning, which may carry biases due to the stringent assumption of one-to-one positive sample pairings.
We evaluate GTGM across various medical image segmentation tasks, covering commonly used modalities like CT and MRI over 10 datasets. We also extend our evaluation to the challenging modality of EM, especially the neuron segmentation task, over three datasets and multiple species. It is noteworthy that the inherent challenges of EM tasks, such as complex neuronal structures, inconsistent image quality, dense packing, and structural heterogeneity, make them significantly more difficult than other modalities Liu et al. (2022); Huang et al. (2022). Impressively, our GTGM attains state-of-the-art (SOTA) results on all EM tasks and nearly all CT and MRI tasks, despite not utilizing real textual descriptions during VLP. This accomplishment underscores the efficacy of synthetic text in guiding 3D medical VLP, which indicates the adaptability and potential of GTGM across a broad spectrum of 3D medical VLP applications. The contributions of this paper are as follows:
* Our GTGM framework is the first to showcase the effectiveness of 3D medical VLP, capable of learning 3D visual representations independent of specific organs or modalities. This fills the void in 3D medical visual learning within the scope of VLP. Furthermore, GTGM's ability to pretrain without the need for expert-generated real text significantly alleviates one of the major challenges in medical VLP: the lack of large-scale image-text pairs.
* GTGM demonstrates superior performance and versatility across various medical image segmentation tasks, supporting different modalities like MRI, CT, and EM, and is adaptable to different species such as human and _Drosophila_. Moreover, GTGM excels in segmenting extremely small and densely packed structures in EM neuron images, expanding its applicability beyond organ and lesion segmentation in CT and MRI images.
* GTGM's performance across diverse modalities, organs, and species, as well as its ability to handle varying densities and sizes of segmented objects, indicates its proficiency in learning a comprehensive and robust 3D medical image representation. In other words, GTGM provides an opportunity to generalize novel tasks through text-driven zero-shot medical image segmentation.
## 2 Related Work
**Image Self-supervised Learning.** Self-supervised learning (SSL) has made significant advancements in computer vision by leveraging pretraining tasks without the need for annotations, as demonstrated by various pretext tasks Doersch et al. (2015); Gidaris et al. (2018); Noroozi and Favaro (2016); Zhang et al. (2016). Recently, contrastive learning has emerged as the standard method in SSL Grill et al. [2020], Zbontar et al. [2021], Misra and Maaten [2020], Chen and He [2021], Bardes et al. [2022]. To address the limitations of traditional contrastive learning, such as the requirement for large batch sizes and strong augmentations He et al. [2020], Chen et al. [2020], BYOL and BarlowTwins Grill et al. [2020], Zbontar et al. [2021] employ a dual-branch structure to align the embeddings of two augmented images, eliminating the need for negative samples in contrastive learning. SimSiam Chen and He [2021] demonstrates the importance of the stop-gradient mechanism on the dual encoder, introducing a model without negative samples. In the context of SSL tailored for medical images, PCRLv2 Zhou et al. [2023b] combines contrastive learning with reconstruction pretext tasks. However, PCRLv2 has limitations in generalization across modalities, particularly in the case of extremely dense and small structures in EM images.
Figure 1: Comparison between our proposed approach and mainstream self-supervised learning (SSL) methods, where \(X\) and \(T\) represent images and text, respectively. (a) Image-only SSL with augmented views. (b) Pretraining with paired images and corresponding text. (c) Our GTGM framework pretrained with synthetic text-guided VLP and augmented-guided SSL.
**Medical Vision-Language Pretraining.** Medical VLP Zhang et al. [2020] has been introduced to integrate textual information into medical image SSL. However, the exploration of medical VLP is primarily limited to 2D images, mainly due to the intricacy of medical reports and the scarcity of large-scale medical image-text datasets. Nonetheless, in the medical domain, 3D medical images (such as MRI, CT, and EM) assume a vital role and offer richer and more valuable information compared to their 2D counterparts. Studies such as Zhang et al. [2020], Huang et al. [2021], Wang et al. [2022], Tiu et al. [2022] concentrate on the chest X-ray (CXR) domain; however, their applicability to other medical image modalities, including MRI, CT, and various 3D medical images, is yet to be established. In their work Liu et al. [2023], the authors develop a CT segmentation method that incorporates manually generated text describing the organs present in the image, based on corresponding annotations. However, their approach is limited by full supervision and the scale of annotations. Furthermore, their method can only process CT images. In recent works Butoi et al. [2023], Ye et al. [2023], methods are proposed that can process 3D medical segmentation tasks with different modalities. However, these approaches require large-scale well-annotated 3D medical images for supervised pretraining. Moreover, UniSeg Ye et al. [2023] lacks the ability to learn rich textual information as their manually designed prompts only indicate the type of task without describing the images. Despite its importance, the generalizability of VLP to a wider range of medical applications is limited by the absence of publicly available datasets containing 3D medical image-text pairs, as well as the inability of experienced experts to describe certain modalities such as EM images.
## 3 Method
### Overview
Our GTGM model is designed to learn general representations of unannotated 3D medical images from synthetic textual descriptions. Like other VLP models, GTGM incorporates both a visual encoder \(f_{I}(\cdot)\) and a text encoder \(f_{T}(\cdot)\) to extract representations from images and text respectively. However, GTGM uniquely leverages synthetic textual descriptions rather than the real paired text of 3D medical images, given the lack of public 3D medical image-text datasets. The framework is depicted in Figure 2.
### Generating Textual Descriptions
In the process of generating textual descriptions for medical images, we designate a generator \(G(\Theta)\), initialized with BLIP Li et al. [2022a] pretrained weights. This generator is subsequently finetuned on the MedICAT Subramanian et al. [2020] dataset, endowing the synthetic textual descriptions with a biomedical style. It is crucial to underscore that MedICAT Subramanian et al. [2020] solely comprises 2D images drawn from biomedical literature, with the associated captions serving as textual descriptions rather than real descriptions written by clinical experts. Consequently, the generation phase is not tied to any real radiology dataset. During the finetuning phase, we consider a 2D medical image \(I\) and its corresponding textual description \(T=\{t_{1},t_{2},...,t_{n}\}\) from MedICAT Subramanian et al. [2020], where \(n\) denotes the length of the textual description. The primary objective here is to maximize the conditional probability of the text given an image \(I\):
\[P(T\mid I)=\prod_{i=1}^{n}P\left(t_{i}\mid I,t_{<i}\right), \tag{1}\]
where \(t_{<i}=\{t_{1},t_{2},\ldots,t_{i-1}\}\) denotes the generated text tokens. The conditional probability of each token \(t_{i}\) can be computed as:
\[P\left(t_{i}\mid I,t_{<i}\right)=softmax\left(W_{o}h_{i}+b_{o}\right), \tag{2}\]
where \(W_{o}\) and \(b_{o}\) represent the weights and biases of the output layer of \(G(\Theta)\), respectively, and \(h_{i}\) is the hidden state of the \(G(\Theta)\) at the \(i^{th}\) time step, which incorporates information from both the image \(I\) and the generated text tokens \(t_{<i}\).
The learning objective in the finetuning generator stage is:
\[\mathcal{L}_{Cap}=-\sum_{i=1}^{n}\log P\left(t_{i}\mid I,t_{<i}\right). \tag{3}\]
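As a minimal sketch (not the authors' implementation), the objective in Eq. 3 is the standard teacher-forced cross-entropy over caption tokens; the tensor shapes and padding index below are illustrative assumptions.

```
# Sketch of the caption finetuning objective L_Cap (Eq. 3): teacher-forced
# next-token cross-entropy, i.e. the negative log-likelihood of each token
# t_i given the image and the preceding tokens. Shapes are illustrative.
import torch
import torch.nn.functional as F

def caption_loss(logits, token_ids, pad_id=0):
    """logits: (B, n, vocab) predictions for tokens 1..n given image and t_<i.
    token_ids: (B, n) ground-truth caption tokens."""
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),  # (B*n, vocab)
        token_ids.reshape(-1),                # (B*n,)
        ignore_index=pad_id,                  # do not penalize padding positions
    )
```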
In the generation phase, we take a set of \(N\) 3D medical images, \(X=\{x_{1},x_{2},...,x_{N}\}\). For each 3D volume \(x_{i}\), we randomly sample a 2D slice as an input for the finetuned generator \(G(\Theta)\), which then generates the textual description \(T_{i}\) of 3D volume \(x_{i}\). This can be mathematically formulated as \(T_{i}=G(x_{i}|\Theta)\). After the generation phase, we filter out duplicate descriptions and remove certain fixed vocabulary that lacks information using regular expressions. To enhance the accuracy and distinctiveness of the textual descriptions, we prepend the name of the dataset to the beginning of each description. Each 3D medical image is then paired with a synthesized textual description, forming the image-text pairs \(D=\{(x_{1},T_{1}),(x_{2},T_{2}),...,(x_{N},T_{N})\}\) for subsequent representation learning. Examples of the generated text can be found in the appendix.
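The generation phase described above can be sketched as follows; `generator.generate_caption` is a hypothetical interface standing in for the finetuned BLIP captioner, and the regular expression is only an illustrative filter for uninformative phrases.

```
# Minimal sketch of the generation phase: sample a random 2D slice from each
# 3D volume, caption it with the finetuned generator, clean the text, and
# prepend the dataset name. The captioner call and regex are illustrative.
import re
import random
import numpy as np

def synthesize_caption(volume: np.ndarray, dataset_name: str, generator) -> str:
    """volume: 3D array (D, H, W); returns one synthetic description."""
    z = random.randrange(volume.shape[0])           # random slice index
    slice_2d = volume[z]                            # 2D slice fed to the captioner
    caption = generator.generate_caption(slice_2d)  # hypothetical call
    # remove uninformative boilerplate phrases with regular expressions
    caption = re.sub(r"(figure \d+[:.]?|image shows)", "", caption, flags=re.I).strip()
    return f"{dataset_name}: {caption}"

# pairing each volume with its synthetic text (illustrative usage)
# pairs = [(x, synthesize_caption(x, "MSD-Liver", generator)) for x in volumes]
```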
### 3D Visual-Textual Representation Learning
Given the dataset \(\mathcal{D}\) of 3D medical images paired with synthetic text, we aim to learn the visual-textual representation via the image encoder \(f_{I}(\cdot)\) and the text encoder \(f_{T}(\cdot)\). The text encoder \(f_{T}(\cdot)\), initialized with the weights from BioBERT Lee et al. (2020), is frozen during pretraining to maximize computational efficiency during the extraction of text embeddings.
Figure 2: (a) The pipeline of GTGM, where we perform random cropping on medical images and utilize a pretrained generator to generate corresponding text. GTGM learns visual invariants (feature-wise) and visual-textual invariants (instance-wise) in parallel. (b) The finetuning process, where we employ the image encoder weights obtained during pretraining, along with a limited amount of labeled data, to perform downstream medical image segmentation tasks, including CT, MRI, and EM modalities.
For a sample batch of image-text pairs \((X_{i},T_{i})\), we first compute their respective feature representations: \(v_{e,i}=f_{I}(X_{i})\) and \(t_{e,i}=f_{T}(T_{i})\).
We employ a contrastive learning objective to predict the matched pair \((v_{e,i},t_{e,i})\) among \(N\times N\) potential image-text pairs, while concurrently ensuring that \(N^{2}-N\) negative pairs are distinctly separated. Concretely, we utilize two non-linear visual and text projectors, \(\mathcal{F}_{I}\) and \(\mathcal{F}_{T}\), to transform \(\mathbf{v}_{e,i}\) and \(\mathbf{t}_{e,i}\) into the same dimensional space \(d\), where \(\hat{\mathbf{v}}_{e,i}=\mathcal{F}_{I}\left(\mathbf{v}_{e,i}\right)\), \(\hat{\mathbf{t}}_{e,i}=\mathcal{F}_{T}\left(\mathbf{t}_{e,i}\right)\), and \(\{\hat{\mathbf{v}}_{e,i},\hat{\mathbf{t}}_{e,i}\}\in\mathbb{R}^{d}\). Subsequently, we generate image vectors \(\left[\hat{\mathbf{V}}_{e,i}\right]_{i=1}^{N}\) and text vectors \(\left[\hat{\mathbf{T}}_{e,i}\right]_{i=1}^{N}\) within a training batch to compute cosine similarities:
\[\mathcal{L}_{v}^{v2t}=-\log\frac{\exp\left(s_{i,i}^{v2t}/\sigma_{1}\right)}{ \sum_{j=1}^{K}\exp\left(s_{i,j}^{v2t}/\sigma_{1}\right)},\ \mathcal{L}_{t}^{t2v}=-\log\frac{\exp\left(s_{i,i}^{t2v}/\sigma_{1}\right)}{ \sum_{j=1}^{K}\exp\left(s_{i,j}^{t2v}/\sigma_{1}\right)}, \tag{4}\]
where \(\mathcal{L}_{v}^{v2t}\) and \(\mathcal{L}_{t}^{t2v}\) are image-text and text-image InfoNCE Oord et al. (2018) contrastive loss, respectively. \(s_{i,i}^{v2t}=\hat{\mathbf{v}}_{e,i}^{\top}\hat{\mathbf{t}}_{e,i}\) and \(s_{i,i}^{t2v}=\hat{\mathbf{t}}_{e,i}^{\top}\hat{\mathbf{v}}_{e,i}\) represent image-text and text-image similarities. \(K\) is the batch size of each step. \(\sigma_{1}\) is the temperature hyper-parameter set to 0.07 in our experiments.
The loss function can be articulated as:
\[\mathcal{L}_{VLP}=\frac{1}{2N}\sum_{i=1}^{N}\left(\mathcal{L}_{v}^{v2t}+ \mathcal{L}_{t}^{t2v}\right). \tag{5}\]
Through the overall loss \(\mathcal{L}_{\mathrm{VLP}}\), the model learns maximal mutual information between the matched multi-modal pairs, which contain cross-view attributes, within a batch.
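A minimal sketch of the symmetric image-text objective in Eqs. 4-5 is given below, assuming the projected embeddings \(\hat{\mathbf{v}}_{e,i}\) and \(\hat{\mathbf{t}}_{e,i}\) are already L2-normalized; this is an illustration, not the authors' code.

```
# Sketch of the symmetric image-text InfoNCE loss (Eqs. 4-5).
# img_emb, txt_emb: (K, d) projected, L2-normalized embeddings of a batch.
import torch
import torch.nn.functional as F

def vlp_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, sigma1: float = 0.07):
    logits = img_emb @ txt_emb.t() / sigma1          # (K, K) similarities / temperature
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    loss_v2t = F.cross_entropy(logits, targets)      # image -> text direction
    loss_t2v = F.cross_entropy(logits.t(), targets)  # text -> image direction
    return 0.5 * (loss_v2t + loss_t2v)
```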
### 3D Visual Representation Learning
Image contrastive learning, commonly deployed to learn visual invariants, typically involves defining one positive sample (such as an augmented view) and treating the remainder of the batch's samples as negatives. Nevertheless, this rigid 1-to-n positive-negative pairing tends to introduce substantial bias when learning 3D visual representation, particularly because in 3D medical imaging, each sample represents a patch of the original volume. Consequently, as shown in Figure 3, two slices could both represent normal organ semantics, even if their source volumes contain abnormal organs. Moreover, 1-to-n contrastive learning requires a large batch size Grill et al. (2020); Zbontar et al. (2021), which is not feasible in 3D visual learning tasks Liu et al. (2021).
To address these challenges, we introduce a negative-free learning objective instead of the rigid positive-negative based loss. This objective aims to disentangle the latent space feature-wisely and maximize the information in each feature dimension Zbontar et al. (2021).
Figure 3: Graphical explanation of the bias introduced by instance-wise 3D image SSL and the effectiveness of our novel feature-wise 3D image SSL in mitigating bias arising from positive and negative sample pairs.
We first generate two distinct views \(X^{1}\) and \(X^{2}\) of the medical volume \(X\) through random data augmentation. We then normalize the augmented embedding pairs \(\{\mathbf{V}_{e}^{1},\mathbf{V}_{e}^{2}\}\in\mathbb{R}^{K\times d}\) along the batch dimension, where \(K\) is the batch size. This normalization ensures that each feature dimension has a zero-mean and \(1/\sqrt{K}\) standard deviation distribution, resulting in \(\tilde{\mathbf{V}}_{e}\). Subsequently, we compute their cross-correlation \(\tilde{\mathbf{V}}_{e}^{\mathrm{corr}}=\tilde{\mathbf{V}}_{e}^{1\top}\tilde{\mathbf{V}}_{e}^{2}\). Our objective is to minimize the off-diagonal elements of the cross-correlation matrix \(\tilde{\mathbf{V}}_{e}^{\mathrm{corr}}\) and maximize its diagonal elements. The loss function can be formulated as:
\[\mathcal{L}_{VR}=\frac{1}{D_{i}}\left\{\underbrace{\underbrace{ \sum_{j}^{D^{\prime}}\left(1-\sum_{i}^{K}\tilde{\mathbf{V}}_{e,i}^{1,j} \tilde{\mathbf{T}}\tilde{\mathbf{V}}_{e,i}^{2,j}\right)^{2}}_{\text{cross-view invariants}}+ \underbrace{\lambda_{1}\sum_{j}^{D^{\prime}}\sum_{i\neq j}^{K}\tilde{\mathbf{V }}_{e,i}^{1,j}\tilde{\mathbf{T}}\tilde{\mathbf{V}}_{e,i}^{2,j}}_{\text{cross-view superfluity reduction}}\right\},\quad\tilde{\mathbf{V}}_{e}=\frac{\mathbf{V}_{\mathbf{e}}-\mu_{K}(\mathbf{V}_{e})}{ \sigma(\mathbf{V}_{\mathbf{e}})\sqrt{K}}. \tag{6}\]
Here, \(\lambda_{1}\) is a non-negative hyperparameter used to adjust the trade-off between learning invariants and reducing superfluity in Equation 6. We set the value of \(\lambda_{1}\) according to the default setting used in Zbontar et al. (2021). The first term is crafted to learn a visual-invariant representation by optimizing the diagonal elements of the cross-correlation matrix \(\tilde{\mathbf{V}}_{e}^{\mathrm{corr}}\) to be close to one. The second term is designed to lessen the correlation between distinct latent variables, thereby encouraging maximal information in each latent dimension by minimizing the off-diagonal elements in \(\tilde{\mathbf{V}}_{e}^{\mathrm{corr}}\). Finally, the loss is normalized along the feature dimension \(d\).
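A Barlow Twins-style sketch of this negative-free objective is given below; the exact normalization constants and the value of \(\lambda_{1}\) (here the Zbontar et al. (2021) default of 5e-3) are assumptions for illustration, not the paper's exact implementation.

```
# Sketch of the negative-free, feature-decorrelation objective (Eq. 6):
# normalize each view's embeddings along the batch, build the d x d
# cross-correlation matrix, pull its diagonal toward 1 and push
# off-diagonal entries toward 0.
import torch

def vr_loss(z1: torch.Tensor, z2: torch.Tensor, lambda1: float = 5e-3):
    """z1, z2: (K, d) embeddings of two augmented views of the same volumes."""
    K, d = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)    # zero mean, unit variance per feature
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    corr = (z1.t() @ z2) / K                       # (d, d) cross-correlation matrix
    on_diag = (torch.diagonal(corr) - 1).pow(2).sum()                   # cross-view invariance
    off_diag = (corr - torch.diag(torch.diagonal(corr))).pow(2).sum()   # redundancy reduction
    return (on_diag + lambda1 * off_diag) / d
```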
The overall loss function can be articulated as:
\[\mathcal{L}=\lambda_{1}\mathcal{L}_{VLP}+\lambda_{2}\mathcal{L}_{VR}, \tag{7}\]
The coefficients \(\lambda_{1}\) and \(\lambda_{2}\) are used to control the weights of the two terms, and we set \(\lambda_{1}=1\) and \(\lambda_{2}=0.01\).
## 4 Experiments
We conduct extensive experiments on a large number of cross-modal, cross-species, and cross-organ medical images. Generally speaking, image pretraining often encounters information bottlenecks, and the performance of downstream tasks does not improve with increasing amounts of data. However, due to the introduction of generated textual information, GTGM achieves good performance across a wide range of downstream tasks. In this section, we provide a detailed description of the pretraining and downstream task datasets, parameter settings, and report our experimental results.
### Dataset and Metrics
**Dataset.** Our dataset encompasses imaging data from three modalities: CT, MRI, and EM. The primary sources of CT and MRI data are the Medical Segmentation Decathlon (MSD) Antonelli et al. (2022) competition, which includes 3D or 4D imaging of 10 different organs. During pretraining, we utilize all ImageTr and ImageTs, intentionally excluding labels. For the downstream segmentation tasks, we divide the ImageTr and corresponding ImageTs data into an 80% training set and a 20% test set. We conduct experiments using 1%, 10%, and 100% of the training set data (excluding the 1% setting when the data is insufficient to form a complete file). This allows us to evaluate the algorithm's performance under conditions of both label scarcity and abundance.
EM data primarily originate from large-scale EM datasets, namely the Full Adult Fly Brain (FAFB) Schlegel et al. (2021), MitoEM Wei et al. (2020), FIB-25 Takemura et al. (2017), and Kasthuri15 Kasthuri et al. (2015). These datasets contain images from diverse organisms, including _Drosophila_, mice, rats, and humans. For the downstream segmentation tasks, we evaluate the algorithm's performance using three datasets from the CREMI competition Funke et al. (2016). The CREMI dataset consists of three subsets: A, B, and C, each containing 125 images. We select the last 50 images from each subset for testing, while training is conducted using either the first 75 images or the first 10 images from each subset.
**Metrics.** Our primary evaluation of the algorithm's performance is conducted through downstream segmentation tasks. Among these, tasks involving CT and MRI scans fall into the category of semantic segmentation. Given the relatively small proportion of the entire volume occupied by the organs in these cases, we employ the Dice coefficient as a performance metric. In contrast, EM tasks are considered instance segmentation tasks. Here, there is no background in the volume and the neurons to be segmented are densely packed. Consequently, we use two metrics, Variation of Information (VOI) Nunez-Iglesias et al. (2013) and Adjusted Rand Index (ARAND) Arganda-Carreras et al. (2015), to evaluate the segmentation performance.
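As an illustration of these metrics, the sketch below computes a binary Dice coefficient directly and uses scikit-image's `variation_of_information` (available in recent releases; treat the import as an assumption) as a stand-in for the VOI evaluation; it is not the evaluation script used in the paper.

```
# Sketch of the evaluation metrics: Dice for semantic (CT/MRI) tasks and a
# VOI stand-in for EM instance segmentation.
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Binary Dice coefficient between two {0, 1} masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def voi(seg: np.ndarray, gt: np.ndarray) -> float:
    """Split + merge variation of information between two label volumes."""
    from skimage.metrics import variation_of_information  # assumed available
    under, over = variation_of_information(gt, seg)
    return under + over
```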
### Implementation Details
Our pretraining is conducted on 8 NVIDIA A100 GPUs, including both text-image and image-only pretraining. The batch size is set to 16 per GPU, with an initial learning rate of 2e-5 and a learning rate decay of 5e-2. All pretraining tasks are iterated for 100k iterations. For all downstream tasks, we use 2 NVIDIA RTX 3090 GPUs or 1 NVIDIA A100 GPU. For CT and MRI tasks, we train for 40k iterations, while for EM tasks we train for 100k iterations. We utilize the AdamW optimizer with beta coefficients set to 0.9 and 0.999 for all tasks.
### Experiment Results
We conduct extensive experiments on downstream applications involving organ-wise, modality-wise, and species-wise segmentation tasks. During the pretraining phase, we train the vision encoder using the same dataset as described earlier. In the finetuning phase, we concurrently update the parameters of the pretrained vision encoder and a randomly initialized decoder, using various label ratios. Our framework is compared with state-of-the-art self-supervised algorithms, including BYOL Grill et al. (2020), BarlowTwins Zbontar et al. (2021), and SimSiam Chen and He (2021) for natural images, as well as the latest SOTA algorithm specifically designed for medical images, PCRLV2 Zhou et al. (2023b). Due to the fact that these baselines either do not provide a 3D vision encoder Grill et al. (2020), Zbontar et al. (2021), Chen and He (2021) or are only pretrained on limited 3D datasets Zhou et al. (2023b), we replicate these algorithms on our pretraining dataset and evaluate their performance on downstream tasks using the same experimental setup. To ensure compatibility with 3D medical images and enable a fair comparison, we adopt the 3D ResNet-50 He et al. (2016) as the vision encoder in both the pretraining and fine-tuning stages for all experiments. For pixel-level instance segmentation in downstream tasks, we employ a U-Net-style decoder. The results obtained using the Swin-Transformer-base vision encoder Liu et al. (2021) can be found in the appendix.
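For illustration, the finetuning setup just described can be sketched as follows; `ResNet50Encoder3D`, `UNetDecoder3D`, and the checkpoint filename are hypothetical stand-ins, not the actual implementation.

```
# Hedged sketch of the finetuning stage: restore the pretrained 3D encoder,
# attach a randomly initialized U-Net-style decoder, and update both jointly.
# ResNet50Encoder3D / UNetDecoder3D and the checkpoint path are hypothetical.
import torch
import torch.nn as nn

class SegModel(nn.Module):
    def __init__(self, encoder: nn.Module, decoder: nn.Module):
        super().__init__()
        self.encoder = encoder   # pretrained with GTGM
        self.decoder = decoder   # randomly initialized

    def forward(self, x):
        return self.decoder(self.encoder(x))

encoder = ResNet50Encoder3D()  # hypothetical 3D ResNet-50 backbone
encoder.load_state_dict(torch.load("gtgm_encoder.pth"), strict=False)
model = SegModel(encoder, UNetDecoder3D(num_classes=2))
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, betas=(0.9, 0.999))
```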
**Experimental Results for CT Segmentation.** CT imaging plays a crucial role in the medical field, particularly in lesion segmentation. However, the limited contrast differences between different tissues in CT images can lead to blurry boundaries between lesions and surrounding normal tissues. The segmentation results of six datasets of CT images are presented in Table 1. We achieve state-of-the-art (SOTA) performance on all datasets except for Hepatic Vessel. This discrepancy may be attributed to the fact that in the pretraining dataset, the Hepatic Vessel often coexists with the liver, and textual descriptions tend to focus more on the larger scale of the liver itself, resulting in a decline in the effectiveness of pretraining.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{Liver} & \multicolumn{3}{c}{Pancreas} & \multicolumn{3}{c}{Lung} \\ & 1 \% & 10\% & 100\% & 1\% & 10\% & 100\% & 10\% & 100\% \\ \hline \hline Random & 45.21 & 51.24 & 61.34 & 37.07 & 56.21 & 63.96 & 57.36 & 73.49 \\ \hline BYOL Grill et al. (2020) & 45.11 & 52.33 & 61.67 & 39.83 & 56.8 & 64.51 & 59.84 & 76.41 \\ SimSiam Chen and He (2021) & 48.22 & 51.29 & 62.39 & 40.03 & 54.82 & 64.69 & 59.71 & 79.43 \\ BarlowTwins Zbontar et al. (2021) & 50.13 & 55.85 & 64.93 & 39.67 & 57.01 & 63.59 & 55.22 & 71.37 \\ PCRLV2 Zhou et al. (2023b) & 51.69 & 56.63 & 65.19 & 39.80 & 56.05 & 63.38 & 55.30 & 74.19 \\ \hline \hline GTGM & 52.46 & 58.67 & 65.61 & 40.55 & 59.96 & 65.61 & 61.3 & 80.19 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean Dice scores (%) of CT image segmentation results. Red and blue entries denote the best and second-best results, respectively.
**Experimental Results for MRI Segmentation.** In comparison to CT imaging, MRI imaging typically involves four dimensions and exhibits lower imaging resolution, as well as more artifacts and noise in the images. Consequently, lesion segmentation in MRI images presents greater challenges. In our pretraining approach, tailored to 3D imaging, we extract the last three dimensions from the MRI images as input to the network. Despite these challenges, our approach has delivered promising experimental results, as depicted in Table 2. Consistently, our approach achieves either optimal or near-optimal solutions on the MRI dataset. We observe that numerous image-based self-supervised approaches yield degraded outcomes (with Dice scores lower than random initialization) due to variations in image dimensions within the MRI dataset. However, our approach exhibits robustness and effectively mitigates the degradation of the pretrained network by incorporating text as guidance.
**Experimental Results for EM Neuron Segmentation.** Electron Microscopy (EM) is an imaging technique with a resolution approximately a thousand times greater than CT and MRI, permitting the examination of structures at the nano and sub-nanometer levels. The ultra-high resolution makes neuron segmentation tasks in EM particularly challenging due to the densely packed structures. The typical methodology for EM neuron segmentation involves neural network-based affinity prediction, followed by post-processing with methods like WaterZ Funke et al. (2018) for instance segmentation. Due to the complexity of neurons, commonly used Vision-Language Pretraining (VLP) methods are not applicable. However, our proposed GTGM overcomes this limitation and demonstrates the effectiveness of generated text as a form of self-supervision training guidance. GTGM achieves state-of-the-art results on three neuron datasets in two settings. Please refer to Table 3 for specific numerical results.
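The affinity-to-instance post-processing step can be sketched generically as below; the paper uses WaterZ, whereas this illustration substitutes a seeded watershed built from scipy/scikit-image and should not be read as the actual pipeline.

```
# Hedged sketch: turning predicted affinity maps into an instance segmentation
# via a generic seeded watershed (stand-in for WaterZ-style agglomeration).
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def affinities_to_instances(affs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """affs: (3, D, H, W) affinity maps in [0, 1]; returns a label volume."""
    boundary = 1.0 - affs.mean(axis=0)                 # high where affinities are low
    seeds, _ = ndi.label(boundary < (1 - threshold))   # high-affinity cores as seeds
    return watershed(boundary, markers=seeds)
```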
**Visual Results.** GTGM demonstrates significant improvements in medical instance segmentation, particularly in terms of the connectivity of segmented surfaces, as shown in Figure 4. Our approach effectively segments liver tumors and the surface geometric structures of the left atrium, and exhibits the strongest integrity in neuronal segmentation.
## 5 Further Analysis
### Ablation Study of Component Design
Table 4 presents the results of the ablation study for our proposed three components. We utilize 3D ResNet-50 He et al. (2016) as the vision backbone and conduct instance segmentation tests on three representative datasets (Liver, Prostate, CREMI C). As shown in Table 4, the impact of utilizing the synthetic text-guided VLP is clearly evident in the significant improvements observed in downstream
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{HepaticVessel} & \multicolumn{2}{c}{Colon} & \multicolumn{2}{c}{Spleen} \\ & 1 \% & 10\% & 100\% & 1\% & 10\% & 100\% & 10\% & 100\% \\ \hline \hline Random & 49.84 & 51.53 & 64.56 & 50.29 & 50.8 & 50.6 & 73.85 & 84.92 \\ \hline BYOL Grill et al. (2020) & 49.67 & 58.85 & 65.57 & 50.1 & 50.29 & 50.22 & 77.64 & 85.98 \\ SimSiam Chen and He (2021) & 50.07 & 52.34 & 63.78 & 50.27 & 51.18 & 53.8 & 81.93 & 83.49 \\ BarlowTwins Zbontar et al. (2021) & 51.08 & 59.21 & 64.77 & 50.62 & 51.46 & 51.61 & 86.43 & 87.91 \\ PCRLv2 Zhou et al. (2023b) & 50.12 & 58.82 & 64.97 & 50.08 & 51.43 & 53.13 & 84.32 & 85.12 \\ \hline \hline GTGM & 50.39 & 59.74 & 65.13 & 51.12 & 51.74 & 53.88 & 86.95 & 89.64 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean Dice scores (%) of MRI image segmentation results. Red and blue entries denote the best and second-best results, respectively.
tasks. The combination of all components of GTGM achieves the highest performance, clearly demonstrating the efficacy of GTGM. Learning only visual invariants or visual-textual invariants exhibits varying performance across different datasets. Notably, only learning visual invariants during pretraining proves to be more effective for large-scale and dense instance segmentation, while visual-textual invariants excel in guiding small and sparse segmentation. This difference in performance can be attributed to the fact that the former captures more intricate structural information, which is challenging to concisely describe through text alone.
\begin{table}
\begin{tabular}{c c c|c c c c|c c c} \hline \hline \multicolumn{3}{c}{Training tasks} & \multicolumn{3}{c}{Liver (Dice \(\uparrow\))} & \multicolumn{3}{c}{Prostate (Dice \(\uparrow\))} & \multicolumn{3}{c}{CREMI C (VOI \(\downarrow\))} \\ \(\mathcal{L}_{Cap}\) & \(\mathcal{L}_{VLP}\) & \(\mathcal{L}_{VR}\) & 1 \% & 10\% & 100\% & 10\% & 100\% & 100\% & 10 & 75 \\ \hline \hline & & ✓ & 50.11 & 55.91 & 64.87 & 33.37 & 40.15 & 1.483 & 1.303 \\ & ✓ & & 46.84 & 51.95 & 61.21 & 33.67 & 39.21 & 1.871 & 1.413 \\ ✓ & ✓ & & 51.89 & 57.63 & 64.39 & 38.91 & 41.39 & 1.497 & 1.333 \\ ✓ & ✓ & ✓ & & 52.46 & 58.67 & 65.61 & 40.93 & 44.24 & 1.422 & 1.280 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study of our framework is conducted on CT, MRI, and EM datasets, reporting Dice scores for CT and MRI, and VOI results for EM. Red and blue entries indicate the best and second-best results, respectively.
Figure 4: Visualization Results of 3D Instance Segmentation. ‘GT’ indicates the ground truth (Neurons are distinguished by their geometric shapes rather than color labels for the same instance).
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & CREMI A 10 & CREMI A 75 & CREMI B 10 & CREMI B 75 & CREMI C 10 & CREMI C 75 \\ & VOI & Arand & VOI & Arand & VOI & Arand & VOI & Arand & VOI & Arand & VOI & Arand \\ \hline \hline Random & 1.051 & 0.184 & 0.744 & 0.104 & 2.181 & 0.234 & 1.560 & 0.261 & 1.987 & 0.145 & 1.424 & 0.140 \\ \hline BYOL Grill et al. (2020) & 0.961 & 0.206 & 0.764 & 0.119 & 1.581 & 0.155 & 1.441 & 0.142 & 1.672 & 0.196 & 1.326 & 0.124 \\ SimSiam Chen and He (2021) & 0.985 & 0.171 & 0.770 & 0.100 & 1.511 & 0.125 & 1.332 & 0.150 & 1.578 & 0.178 & 1.364 & 0.130 \\ BarlowTwins Zbontar et al. (2021) & 0.987 & 0.200 & 0.743 & 0.101 & 1.584 & 0.185 & 1.291 & 0.187 & 1.483 & 0.147 & 1.303 & 0.129 \\ PCRL\(\nu\)2 Zhou et al. (2023b) & 0.921 & 0.189 & 0.738 & 0.100 & 1.568 & 0.158 & 1.374 & 0.157 & 1.596 & 0.178 & 1.326 & 0.127 \\ \hline GTGM & 0.902 & 0.166 & 0.728 & 0.092 & 1.525 & 0.117 & 1.279 & 0.106 & 1.422 & 0.137 & 1.280 & 0.118 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Neuron segmentation results of three EM datasets. Red and blue entries denote the best and second-best results, respectively (The performance is better when the values of VOI and Arand are smaller).
### Analysis of the Trade-off between Pretrain and Downstream tasks
The difference in objective functions between the pretraining and finetuning phases can lead to suboptimal performance in downstream tasks, despite the convergence of the loss function during pretraining. This highlights the existence of a trade-off between these two stages. To showcase this trade-off, we conduct experiments on the representative Pancreas dataset, and the results are presented in Table 5. Notably, the bold values in the table indicate the optimal segmentation results obtained under the current settings. These results signify that the number of iterations in our pretraining phase closely aligns with the performance achieved in the downstream task. This observation highlights the strong alignment between our objective function design and the requirements of the downstream task.
### Analysis of Error Bars
Table 6 presents the error bars of our segmentation results on three modalities. We conduct three runs for each task and compute the mean and standard deviation of results. As observed from Table 6, our results demonstrate relatively minor variations, indicating the stability of GTGM's performance in downstream tasks.
## 6 Conclusion
This work presents GTGM, a generative text-guided 3D vision-language pretraining framework. It accomplishes both instance-level visual-textual alignment and feature-level visual representation alignment using only 3D medical image inputs. GTGM delivers outstanding performance on 13 diverse medical datasets, tackling a variety of segmentation tasks with different data ratios. This demonstrates the efficiency and effectiveness of GTGM. Our work not only achieves the best performance but also opens up new opportunities to apply VLP to 3D medical images without relying on paired text. The broader impact and limitations are shown in the Appendix.
\begin{table}
\begin{tabular}{c c c} \hline \hline Dataset & 10 \% & 100\% \\ \hline \hline Heart & 86.33 \(\pm\) 0.41 & 94.71 \(\pm\) 0.39 \\ \multicolumn{2}{c}{Spleen} & 86.95 \(\pm\) 0.29 & 89.64 \(\pm\) 0.43 \\ \hline \hline Metrics & CREMI C 10 & CREMI C 75 \\ \hline \hline VOI & 1.422 \(\pm\) 0.031 & 1.28 \(\pm\) 0.025 \\ \multicolumn{2}{c}{Arand} & 0.137 \(\pm\) 0.011 & 0.118 \(\pm\) 0.008 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Error bars of our methods across three modalities.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{Iters} & \multicolumn{3}{c}{Pancreas} \\ & 1 \% & 10\% & 100\% \\ \hline \hline
3.5 k & 38.93 & 56.7 & 64.73 \\
7 k & 39.26 & 56.09 & 64.97 \\
15 k & 39.67 & 56.83 & 64.95 \\
50 k & 40.13 & **56.96** & **65.61** \\ Last & **40.55** & 56.87 & 65.47 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Analysis of the trade-off between pretraining and finetuning.
## Appendix A Overview
In the supplementary material, we provide detailed explanations of our models and discuss the technical aspects involved. We have also included additional experimental results showcasing the performance of the Transformer backbone. Moreover, we have included numerous visualizations to enhance the understanding of our approach. To facilitate readability, we have also provided pseudo code for the core procedures.
## Appendix B Latent Representation of Pretrained Model
We implement various pretraining approaches Zhou et al. (2023); Zbontar et al. (2021); Grill et al. (2020); Chen and He (2021) and GTGM on diverse 3D medical image datasets with annotation. Then we utilize the pretrained visual encoder to extract the latent representation from three distinct medical image modalities, namely CT, MRI, and EM. Subsequently, we subject the representation to dimensionality reduction using the t-SNE algorithm Van der Maaten and Hinton (2008). The resulting 2D representations obtained from different models are depicted in Figure 5.
Based on the visualization results obtained from t-SNE in Figure 5, it is evident that in scenarios with multiple modalities, the representations extracted through pretraining tend to exhibit confusion. Specifically, the CT and MRI modalities, compared to EM, display significantly lower resolution by several orders of magnitude. Although most pretraining models can effectively discriminate EM from the other two modalities, distinguishing between CT and MRI becomes challenging. This observation provides an explanation for the occurrence of model collapse phenomena in our experiments, wherein pretrained weights sometimes perform worse than random initialization.
Among all the models evaluated, only BarlowTwins and GTGM demonstrate effective differentiation among the three modalities. To further investigate this, we compute the density of the 2D representation, represented as the reciprocal of the k-nearest neighbor distances (higher values indicating tighter clustering). Interestingly, our model exhibits superior class separability compared to BarlowTwins, indicating that our text-guided approach captures informative features more effectively. In contrast, BarlowTwins' representations tend to be sparser, leading to potential overlap among modalities. This finding demonstrates the efficacy of our generative text-guided framework in capturing discriminative aspects among the three data modalities.
The additional analysis on density further supports the superiority of our approach in terms of class separability. The density measurements confirm that our model achieves tighter clustering, which is beneficial for class discrimination and avoiding confounding among the modalities. Overall, these findings highlight the effectiveness of our approach in capturing informative features and facilitating discrimination among the three different data types, surpassing the performance of the BarlowTwins method.
Figure 5: 2D visualization of volume representations. All visualizations are rendered using t-SNE.
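The density measure described above (the reciprocal of the k-nearest-neighbor distances in the 2D t-SNE embedding) can be computed as in the following sketch; the choice of k is illustrative.

```
# Sketch: per-point density of a 2D t-SNE embedding as the reciprocal of the
# mean distance to the k nearest neighbors (larger = tighter clustering).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_density(embedding_2d: np.ndarray, k: int = 10) -> np.ndarray:
    """embedding_2d: (n_samples, 2) t-SNE coordinates."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embedding_2d)
    dist, _ = nn.kneighbors(embedding_2d)      # dist[:, 0] is the point itself
    return 1.0 / (dist[:, 1:].mean(axis=1) + 1e-12)
```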
## Appendix C Discussion
### Limitation
In the context of ablation experiments discussed in the main text, it was observed that for large-scale or higher-dimensional datasets, utilizing text generated through slicing for text-to-image pretraining yielded marginal information gain. Conversely, pretraining based on multi-view features derived from images proved more effective. In the future, it may be worthwhile to consider augmenting textual data appropriately or generating additional alternative texts through multiple slicing techniques, thereby harnessing the information contained within the text more comprehensively.
### Broader Impact
The potential of vision-language multimodal pretraining has been widely recognized. Our proposed generation approach offers the possibility of joint training for datasets lacking textual descriptions. In addition to medical datasets, this approach can be extended to train on challenging datasets such as videos, point clouds, and light fields that are difficult to describe. Furthermore, directly fusing multimodal features at the downstream stage has been shown to significantly improve model performance. For instance, Liu et al. (2023) achieved remarkable performance gains by introducing simple text prompts at the downstream stage, although the use of generated text to directly guide downstream tasks has yet to be validated.
Moreover, our approach provides a means to leverage large language models (LLMs) effectively in computer vision tasks, leveraging existing pretrained weights. In the future, exploring the synchronization of LLM models with techniques like Stable Diffusion can be pursued to achieve zero-cost acquisition of high-quality data through text-guided image generation.
### Future Work
Investigating these methodologies on varied medical data types, such as electrocardiograms linked with clinical monitoring records and multilingual reports, presents a fascinating future trajectory Li et al. (2023), Wan et al. (2023). Moreover, the alignment of heterogeneous modality data can be perceived as a data fusion task, an issue frequently tackled in the field of physics Cheng et al. (2023, 2022), Liu et al. (2022) or recommendation system Wan et al. (2022).
## Appendix D Transformer-based Results
To validate the robustness and stability of our approach, we conducted experiments using a Transformer-based backbone. Taking inspiration from the network architecture of SwinUNETR Tang et al. (2022), we fine-tuned our model on an electron microscopy dataset. The experimental results, as shown in Table 7, demonstrate the effectiveness of our approach.
Based on our experimental results, it can be observed that the performance of using Swin Transformer as a backbone is slightly inferior to that of using ResNet50 as a backbone. Additionally, the segmentation results of Swin Transformer are worse when dealing with a small amount of data. However, in comparison to the ResNet backbone, the gains obtained from pretraining for downstream tasks are more significant. Therefore, when larger datasets are available, Swin Transformer as a backbone holds tremendous potential.
## Appendix E Visualization
### Visualization of Segmentation Results
We present the visualization results of the segmentation task on the MSD dataset Antonelli et al. (2022), as shown in Figure 6, 7. Our approach demonstrates superior capability in capturing detailed
anatomical structures (tumors) compared to other pretrained methods. Specifically, our method exhibits noticeable qualitative improvement, particularly for challenging instance segmentation tasks involving intricate structures such as Hepatic Vessels. The enhanced visual results highlight the efficacy of our approach in accurately delineating the morphological characteristics of organs (tumors) within the medical imaging context.
### Caption Generation
We present the generated medical text descriptions in Figure 8. The descriptions generated by the untuned large-scale language models (e.g., BLIP Li et al. (2022)) exhibit limited information, often resembling natural image descriptions, and contain numerous errors. For example, despite correctly identifying the CT image, there are instances where lung slices are incorrectly labeled as brain slices, introducing misleading information.
However, with our proposed fine-tuning approach, the generated text contains a substantial amount of useful information. Furthermore, utilizing our introduced filter, redundant and repetitive information in the text is eliminated (as indicated by the red text in the figure), thereby enhancing the information density and descriptive accuracy of the generated text.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & CREMI A 10 & CREMI A 75 & CREMI B 10 & CREMI B 75 & CREMI C 10 & CREMI C 75 \\ & VOI & Arand & VOI & Arand & VOI & Arand & VOI & Arand & VOI & Arand & VOI & Arand \\ \hline Random & 3.720 & 0.898 & 0.967 & 0.244 & 4.495 & 0.646 & 1.988 & 0.289 & 5.082 & 0.770 & 1.537 & 0.175 \\ \hline BYOL Grill et al. (2020) & 1.943 & 0.559 & 0.894 & 0.220 & 4.400 & 0.670 & 1.902 & 0.256 & 2.570 & 0.448 & 1.539 & 0.210 \\ SimSiam Chen and He (2021) & 1.832 & 0.531 & 0.927 & 0.192 & 3.381 & 0.442 & 1.871 & 0.282 & 2.419 & 0.428 & 1.442 & 0.166 \\ BarlowTwins Zbontar et al. (2021) & 2.104 & 0.615 & 0.956 & 0.194 & 3.292 & 0.590 & 1.851 & 0.201 & 2.419 & 0.434 & 1.437 & 0.162 \\ PCRLv2 Zhou et al. (2023) & 2.094 & 0.640 & 0.881 & 0.184 & 3.174 & 0.442 & 1.594 & 0.241 & 2.219 & 0.298 & 1.474 & 0.164 \\ SwinUNETR Tang et al. (2022) & 1.913 & 0.579 & 0.855 & 0.177 & 3.813 & 0.617 & 1.859 & 0.208 & 2.419 & 0.434 & 1.423 & 0.160 \\ \hline \hline GTGM & 1.693 & 0.529 & 0.832 & 0.180 & 2.949 & 0.419 & 1.423 & 0.160 & 2.188 & 0.304 & 1.393 & 0.151 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Experimental Results of Swin Transformer as a Backbone for Electron Microscope Neuron Segmentation. Red and blue entries denote the best and second-best results, respectively. Among these models, SwinUNETR refers to the implementation of the original pretraining approach described in the respective paper, with results reproduced on our pretraining dataset.
## Appendix F Pseudo Code
The core code for our pretraining process is outlined in Algorithm 1.
```
# img1, img2: two data augmentations of a volume
# caption: the generated description
while iter < self.max_iterations:
    epoch_loss = 0
    epoch_loss_BT = 0
    epoch_loss_clip_diag = 0
    # get raw volume and generated descriptions
    img1, img2, caption = train_provider.next()
    # move images to the device
    img1 = img1.to(torch.float32).to(self.device).contiguous()
    img2 = img2.to(torch.float32).to(self.device).contiguous()
    self.optimizer.zero_grad()
    # amp style (might decrease precision)
    with autocast():
        # tokenize the synthetic captions
        imp_tokenize_output = self.model.module_tokenize(caption)
        input_ids = imp_tokenize_output.input_ids.to(self.device).contiguous()
        attention_mask = imp_tokenize_output.attention_mask.to(self.device).contiguous()
        output_dict = self.model(img1, img2, input_ids, attention_mask)
        img_emb1, img_emb2 = output_dict['img_emb1'], output_dict['img_emb2']
        proj_img_emb1, proj_img_emb2 = output_dict['proj_img_emb1'], output_dict['proj_img_emb2']
        proj_text_emb = output_dict['proj_text_emb']
        # image-text contrastive loss for both augmented views (L_VLP)
        loss_clip_diag1, acc1_1 = self.clip_loss(x=proj_img_emb1, y=proj_text_emb)
        loss_clip_diag2, acc1_2 = self.clip_loss(x=proj_img_emb2, y=proj_text_emb)
        acc1 = (acc1_1 + acc1_2) / 2
        # negative-free cross-correlation loss between the two views (L_VR, weight 0.01)
        cov_loss = self.covar_loss(img_emb1, img_emb2) * 0.01
        loss = loss_clip_diag1 + loss_clip_diag2 + cov_loss
    # accumulate loss for logging
    epoch_loss += loss.item()
    epoch_loss_clip_diag += loss_clip_diag1.item() + loss_clip_diag2.item()
    epoch_loss_BT += cov_loss.item()
    scaler.scale(loss).backward()
    scaler.step(self.optimizer)
    scaler.update()
```
**Algorithm 1** PyTorch pseudo code of GTGM
Figure 6: Visualization results of the first 5 tasks of MSD. ‘GT’ indicates ground truth.
Figure 7: Visualization results of the last 5 tasks of MSD
**Figure 8**: Generated medical description
|
2302.07637
|
A Survey on Process Variants Meta-modelling Approaches
|
This paper introduces the concept of process variants in process-aware
information systems (PAIS) during the design-time phase, where multiple
variants of a single process must be specified. Today's organizations have to
manage multiple variants of a given process, such as multiple order processes
or payment processes for a specific product or service they offer. Traditional
business process management tools lack in adequately capture and represent
explicitly these variants. Hence, for more than a decade an array of approaches
have been proposed to tackle this gap. A reference or customizable process
model has been introduced to model these variants collections in a way that
each variant could be derived by inserting/removing an activity according to a
process context. This survey reviews current literature by providing an
overview of meta-modelling approaches that have been extended in order to
capture the variations of business processes. Moreover, we give a comparative
analysis of these approaches based on different criteria we identified from the
inventory activity, providing insights into their strengths and limitations.
This paper concludes that current approaches to process variants meta-modelling
provide a comprehensive view of the conceptual level of process variants and
the control-flow process perspective. While some approaches go a step further
by capturing variability in resources or specialization among
activities/processes.
|
Lisana Berberi
|
2023-02-15T13:12:36Z
|
http://arxiv.org/abs/2302.07637v1
|
# A Survey on Process Variants Meta-modelling Approaches
###### Abstract
This paper introduces the concept of process variants in process-aware information systems (PAIS) during the design-time phase, where multiple variants of a single process must be specified. Today's organizations have to manage multiple variants of a given process, such as multiple order processes or payment processes for a specific product or service they offer. Traditional business process management tools lack the ability to adequately capture and explicitly represent these variants. Hence, for more than a decade an array of approaches has been proposed to tackle this gap. A reference or customizable process model has been introduced to model these variant collections in a way that each variant can be derived by inserting/removing an activity according to a process context. This survey reviews current literature by providing an overview of meta-modelling approaches that have been extended in order to capture the variations of business processes. Moreover, we give a comparative analysis of these approaches based on different criteria we identified from the inventory activity, providing insights into their strengths and limitations. This paper concludes that current approaches to process variants meta-modelling provide a comprehensive view of the conceptual level of process variants and the control-flow process perspective, while some approaches go a step further by capturing variability in resources or specialization among activities/processes.
Keywords: Process variant meta-model reference or customizable process model.
## 1 Introduction
In many process-aware information systems (PAIS), many variants of the same process often have to be specified during the design-time phase. Here, we introduce the basic notion of a process variant as follows. A process model variant, or process variant for short, is an adjustment of a particular process to specific requirements that build the process context [14]. A process context is directly related to all the elements that comprise a business process. This includes several contextual properties, such as process domain properties, control flows, specified goals, assigned resources, associated organizational units, etc. Depending on the process context type, different variants of our process are required, whereas
the context is described by country-specific, order-specific, invoice-specific, and payment-specific variables. There exist two general approaches that allow the modelling of these generated variants: _multi-model_ and _single model_. In the former (e.g., the variants in our motivating example), as classified by the authors in [14], the variants are designed and kept separately, resulting in data redundancy, as model variants are often similar or identical for the most part. Furthermore, it is far from trivial to combine existing variants into a new one (semi-)automatically. This solution is feasible only if few variants exist or if they differ significantly from each other.
In the latter, these variants might be expressed in a single process definition with the excessive use of XOR-splits. The resulting processes are large, overloaded, and difficult to understand and communicate, and new process definitions still comprise all the past process definitions they should replace [7]. Moreover, it is not possible to distinguish between normal and variant-specific branchings (e.g., our _PayInvoice_ process includes a decision to pay by bank transfer, i.e., perform activity _Fill in the settlement info_ if the bank transfer choice is selected and if activity _Request payment by bank transfer_ is either performed or skipped, whereas on the model side this is ambiguous and mixed with the "normal" process logic), unless these variant-specific conditions are marked and represented explicitly using special conventions [14].
To address these shortcomings, significant research efforts have been triggered and thus an array of approaches has been published. Hence, it is crucial to provide a list of these approaches and a comparison among them. The remaining sections of this paper are organized as follows: section 2 gives a motivating example we use throughout this paper, section 3 gives the state of the art of process variants meta-modelling approaches, section 4 provides a comparative analysis of them, and section 5 draws conclusions.
## 2 Motivating example
Let us assume we have an illustrative example of a core business process, e.g., _Processing Customer Invoice Payments_ of a financial administration agency, which is modeled as a collaboration between two processes named _ReceiveInvoice_ and _PayInvoice_. The _ReceiveInvoice_ process consists of a set of activities that send an invoice (either an e-invoice or a hard copy) to a customer (buyer), with or without requesting a payment a priori, for ordering its goods or services. The _PayInvoice_ process, in turn, consists of a set of other activities that submit or complete the payment (either by cash, bank transfer, credit card, or PayPal) after a customer invoice is received. We express these variants using BPMN notation 1, which is now standardized by [16] to bridge the gap between business process design and process implementation. We use this process, together with the set (family) of its variants, as a running example throughout the paper. Usually, dozens to thousands of variants of the same business process may exist, depending on different factors.
For example, in our running example variability is caused by the order customer choice (e. g., either via online shops or call centers) and/or method of invoice payment (e. g., either cash or credit-card) or invoice type (e. g., either hard-copy or e-invoice) of the designed process models in different branches of different cities. Therefore, we represent these variants as shown in Figure 1, Figure 2 and Figure 3. These five variants share some similarities highlighted with light gray, but they show differences, too. A detail description about these variants is as follows. All variants start with activity _Place order_ by a customer for ordering his/her goods and services.
In variants 1-4 the order is received (i.e., activity _Receive order_, depicted with a rounded rectangle) via an online shop, whereas in variant 5 it is received via a call
Figure 1: Process Variants (1)
center. In the first three variants, the processing of customer invoice payments is shown for different branches of the same city, Villach, whereas variants four and five show how these processes are modelled in an agency located in a different city, e.g., Klagenfurt. After the order is received, either a payment request follows (as in variant 1) or the customer directly receives an e-invoice or a hard-copy invoice. Then, the _Update profile_ activity is followed by a decision point (depicted with an X diamond) where the subprocess _Make billing inquiry_ is executed if the customer is not ready to pay; otherwise activity _Manage payment_ (which is common to all variants) is performed and the flow is shifted to the second process, _PayInvoice_, via a message start event (depicted with an envelope inside a circle). This subprocess deals with two types of inquiry by the customer: _self-service_ or via a _place call_ for further investigation related to the invoice. If no billing adjustments are needed, the invoice can be paid by executing the intermediate event that shifts control to process _PayInvoice_; otherwise activity _Make billing adjustment_ is performed by a billing specialist of the online shop to adjust the billing items, and afterwards the altered invoice is sent back to the customer. An expanded view of this subprocess is modeled as a separate business process diagram, depicted at the bottom of variant 3 in Figure 2. In process _PayInvoice_, the customer has different options to pay the invoice, either by credit card (the card is charged if the customer has enough credit on the account), cash, bank transfer or PayPal, and the process finishes with the _Verify successful payment_ activity. The order is successfully completed if the verification of the payment was successful; otherwise it is cancelled (e.g., due to an insufficient credit amount), in which case the customer receives a notification about the insufficient credit amount, followed by the cancellation of the invoice order (depicted with an X circle). After the customer order is received, activity _Request payment by bank transfer_ is performed. Afterwards, a hard-copy invoice is sent to the customer (_Receive hard-copy invoice_), generating a data object _Hardcopy invoice_ which serves as input for activity _Review invoice_. Then, a payment sheet named _Payment details sheet_ is generated as an output data object of activity _Create payment_, performed by an employee of the online shop. In contrast to variant 1, the payment must be made by bank transfer, as requested in the process _ReceiveInvoice_. Accordingly, activity _Fill in the settlement info_ is executed, followed by activity _Verify successful payment_; otherwise the invoice is cancelled if the bank transfer is not settled with the right information. Based on the chosen invoice, either payment by credit card or payment by bank transfer is possible, as expressed via the decision point named _Pay method?_. Again, after the verification of the payment, the process completes with the event _Invoice paid_ for successful payments or the event _Cancel invoice_ for unsuccessful ones.
In variant 4 a pre-request payment is not required by the online shop (whereas a call center is used in variant 5); instead the decision is left to the customer after receiving an e-invoice or hard-copy invoice. Here, the possible payment methods are credit card, bank transfer, or a third party such as PayPal.
Figure 2: Process Variants (2)
Figure 3: Process Variants (3)
## 3 State-of-the-art of process variants meta-modelling
This section provides an overview of state-of-the-art meta-modelling approaches for capturing the variability of business process models, i.e., for modelling and/or managing configurable processes. Instead of visualizing all of the proposed meta-models, we show how our running example is represented in each solution or through the corresponding case studies. We therefore exclude from our analysis approaches that do not propose a meta-model for process variants. In the following subsections, Sections 3.1 to 3.4, we classify the proposed approaches by the different \(\ll\)_variability mechanisms\(\gg\)_ through which they are realized. A variability mechanism is a technique for deriving process model variants from existing process models. We identified four different variability mechanisms: inheritance and parameterization, adaptation pattern-based, template method pattern-based, and node configuration.
### Inheritance and parameterization variability mechanism
As discussed by the authors in [19], these two variability mechanisms are: _inheritance_, which allows a model element, e.g., an activity, to be replaced by or extended with a specialized one; and _parameterization_, which allows the behaviour of a single execution step in a process to be controlled by configuring the process with corresponding parameter values. To introduce variability and configuration modelling to processes in PESOA domains, [4] proposed a conceptual model with variation points, where fixed activities are marked with stereotypes applied in both UML ADs (Activity Diagrams) and BPMN. Their approach is called variant-rich process modelling (see Figure 4). The stereotype \(\ll\)_VarPoint\(\gg\)_ is assigned to activities of a process model in which variability can occur. An _abstract_ activity is represented by a variation point, such as "Customer info", that is specialized by one or more concrete variants (variants are inclusive). For example, "Review order info" and "Capture customer info", assigned the stereotype \(\ll\)_Variant\(\gg\)_, are specializations of "Customer info". The stereotype \(\ll\)_Abstract\(\gg\)_ also marks the abstract activities "Invoice type" and "Payment method" in our example, where the variation points are resolved by selecting only one of the concrete variants.
Figure 4 shows the process model for processing invoice payments in _PESOA-BPMN_, where activities have been marked as variation points together with their variants; e.g., "Identify or verify credit-card info" is marked as the default activity of "Payment method", being the most common choice in this process. In this case, a variation point with the stereotype \(\ll\)_Default\(\gg\)_ represents the default variant. The activity "Processing order using" is assigned the stereotype \(\ll\)_NULL\(\gg\)_ to indicate its optional behaviour with respect to its specialized activities, which are annotated with the \(\ll\)_Optional\(\gg\)_ stereotype. During customization, the variability at this point can be resolved by selecting one of its specialized variants, such as "Request payment by credit-card" or "Request payment by bank transfer", or the point may be dropped from the process model entirely. Accordingly, Figure 4 (b)
shows an excerpt of the configurable BPMN process model for the derived process variant of ordering an e-invoice via online shops with a pre-requested payment by credit card.
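To make the resolution of such stereotypes concrete, the following is a minimal Python sketch of how \(\ll\)_VarPoint\(\gg\)_, \(\ll\)_Variant\(\gg\)_, \(\ll\)_Default\(\gg\)_ and the \(\ll\)_NULL\(\gg\)_/\(\ll\)_Optional\(\gg\)_ annotations could be resolved during customization; the class and function names are our own illustration (with the \(\ll\)_NULL\(\gg\)_/\(\ll\)_Optional\(\gg\)_ pair collapsed into a single flag) and are not part of the PESOA tooling.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VariationPoint:
    """An abstract activity marked <<VarPoint>> with its candidate variants."""
    name: str
    variants: List[str]
    default: Optional[str] = None   # <<Default>> variant, if any
    optional: bool = False          # <<NULL>>: the whole point may be dropped

def resolve(vp: VariationPoint, choice: Optional[str]) -> Optional[str]:
    """Replace a variation point by a concrete variant (or drop it if optional)."""
    if choice is None:
        if vp.optional:
            return None                 # point is removed from the variant model
        return vp.default               # fall back to the <<Default>> variant
    if choice not in vp.variants:
        raise ValueError(f"{choice!r} is not a variant of {vp.name!r}")
    return choice

# Variation points taken from the invoice-payment example in Figure 4.
payment_method = VariationPoint(
    "Payment method",
    ["Identify or verify credit-card info", "Pay cash",
     "Paypal account sign-in", "Fill in the settlement info"],
    default="Identify or verify credit-card info")
pre_request = VariationPoint(
    "Processing order using",
    ["Request payment by credit-card", "Request payment by bank transfer"],
    optional=True)

print(resolve(payment_method, None))                             # default variant
print(resolve(pre_request, "Request payment by credit-card"))    # selected variant
print(resolve(pre_request, None))                                # dropped -> None
```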
Another approach classified in this group is BPFM (Business Process Family Model), proposed by the authors of [18] as a two-level approach. They capture customizable process models using an extended version of UML ADs. They claim
Figure 4: (a) Processing invoice payment example in PESOA-BPMN; (b) A customized model
to have systematically conducted the realization of variability in processes at different abstraction levels, in comparison with the PESOA research project. They represent variability using not only variation points and variants but also variation point types, boundaries, and cardinalities (see Figure 5). At the first level, an activity can be defined as _common_ if it cannot be customized, or _optional_ if it can be omitted during customization.
Figure 5 (b) shows a customized model in which the first variation point has been customized to a decision between the variants "Request payment by credit-card" and "Request payment by bank transfer", combined with a parallel execution of the other two remaining variants.
Figure 5: (a) Processing invoice payment example in BPFM; (b) A customized model
The second and the last variation points have been customized to one of their specialized activities, i.e., "Receive e-Invoice" and "Identify or verify credit-card info".
In Figure 5 (a), the first activity "Process order" indicates an open variation point of type flow, with a decision pattern between the activities "Request payment by credit-card", "Request payment by bank transfer", "Review order info", and "Capture customer info" in the associated variants region (depicted with a double rectangle). At the second level, one of the specialized variants, i.e., a concrete activity, is selected for the variation point, i.e., the abstract activity. Variation points can be assigned only to activities. The authors in [18] identified three types of variation points (_vpType_): _Boolean_, where exactly one variant is selected from the specializations; _Selection_, where at least one variant is selected from a number of variants denoted with a cardinality (e.g., 1..2); and _Flow_, a set of activities (expressed in a variant region) without a specified flow relation. The second activity, "Invoice type", indicates a Boolean variation point, where only one of the variants can be selected. Finally, activity "Payment method" is also of type Boolean: four variants are assigned to it, i.e., "Identify or verify credit-card info", "Pay cash", "Paypal account sign-in", and "Fill in the settlement info", and only one of them can be selected during customization. In our example, we do not indicate an activity of _Selection_ type, which would express the fact that at least one or more variants may be selected, as in the case of an OR-decision.
To abstract from the configurable process model and its variation points during configuration, the authors in [24] propose to use feature models (not presented here due to page limits), in contrast to PESOA, where the authors in [4] stated that the abstraction and transformation needed to derive process variants from a configurable process are out of their scope. A feature model is represented graphically by one or more feature diagrams. A feature can be mandatory or optional (i.e., it can be deselected), and it can be bound to other features via constraints (i.e., propositional logic expressions over the values of features).
For example, the subfeature "Request payment by bank transfer" of "Pre-request payment" must be deselected if the subfeature "e-invoice" of "Invoice type" of "Order" is not selected (see Figure 6). The relation between features and subfeatures is modeled using XOR (exactly one subfeature must be selected), AND (all subfeatures must be selected), and OR (one or more may be selected). Even if a feature is modeled as mandatory (arcs depicted with oval arrows), it can still be excluded if it has an XOR/OR relation with its sibling features. This is the case for the subfeatures of "e-invoice", which is mandatory and is nevertheless excluded in the configured feature diagram when the subfeature "Pre-request by bank transfer" is selected. However, guidance on how to select a suitable set of features is missing.
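As an illustration only, the following Python sketch checks a candidate feature configuration against XOR group constraints and the cross-tree constraint mentioned above; the dictionary-based encoding is our own simplification and not a notation defined in [24].

```python
# XOR groups and a cross-tree "requires" constraint from the Figure 6 example.
xor_groups = {
    "Invoice type": ["e-invoice", "Hardcopy invoice"],
    "Order": ["Online shop", "Call center"],
}
cross_tree = [("Request payment by bank transfer", "e-invoice")]  # (feature, requires)

def valid(selected: set) -> bool:
    # every XOR group must contribute exactly one selected subfeature
    for parent, subs in xor_groups.items():
        if sum(s in selected for s in subs) != 1:
            return False
    # cross-tree constraints: a selected feature requires its partner feature
    return all(req in selected for feat, req in cross_tree if feat in selected)

print(valid({"Online shop", "e-invoice", "Request payment by bank transfer"}))        # True
print(valid({"Online shop", "Hardcopy invoice", "Request payment by bank transfer"}))  # False
```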
An association marked with the stereotype \(\ll\)_Parameterization\(\gg\)_ is used instead if a misinterpretation could arise between the attribute and its corresponding element [19]. Associations are also used to link the data objects that contain the possible parameters to the grouping box that surrounds the attribute, see Figure 7. This figure shows the parameterization of two different attributes, where the lower one offers an alternative for the _Datetime_ attribute of the intermediate timer event. The alternative behaviour triggers the event at the end of each month, whereas the default behaviour triggers it at the end of each quarter. In the upper part of the figure, the _ConditionExpression_ attribute of a sequence
Figure 6: (a) A feature diagram for invoice payments (b) A possible feature configuration for invoice payments feature diagram
flow is parameterized. The default parameterized attribute _Amount_ of an invoice order serves as a sentinel that activates the sequence flow if the order amount is greater than a given value (in this case, greater than €150). Accordingly, the sequence flow is activated and a bonus is calculated for the customer. An alternative parameterization changes this attribute to activate the sequence flow if the order amount is greater than €500.
The authors in [5] proposed specialization and generalization notions between activities and so-called generic activities among process variants. We define generic activities (depicted with a bold-line rounded rectangle) as typical places where variation occurs among process variants. We have recently published how we semi-automatically derive the process variant hierarchy among activities and processes in [6].
The author annotates each of these connections between generic activities and specialized activities (either elementary activities or subprocesses) with the _variant_specialization_ stereotype, to distinguish it from the 'normal' sequence connector in process modelling (see Figure 8). For each realization of a generic activity to one of the specified activities, every occurrence of the generic activity is substituted with the occurrence of the respective activity from a concrete process.
In this figure we give an illustration of the customizable invoice payment process as a reference (so-called generic) process model with generic activities. For example, activity "Receive e-invoice" is renamed with the preceding letter 'G' as G: Receive e-invoice, where G can be bound to one of the activities G1, G3, G4, G5 of the respective process variant it belongs to (in variant two there is no such activity). A generalization hierarchy can be generated from a meta-model of business process models, which introduces the notion of generic activities that generalize a set of activities (e.g., pay by credit card, by check, or by a third party
Figure 7: Parameterization of two different attributes in BPMN
(PayPal) could all be generalized to an activity "payment"). Based on these given hierarchies of activities, we can define a generalization hierarchy of processes for the "process" dimension of a process warehouse. This hierarchy can then be used to roll up or drill down when analysing the logs of the executions of the various process variants, and it makes it much easier to compare key performance indicators between different variants at different levels of genericity.
Figure 8: Reference invoice payment process model with generic activities
### Adaptation variability mechanism
Adapter design patterns are based on information hiding and inheritance, like the Strategy design pattern [19]. These patterns are used to represent processes using a combination of encapsulation and inheritance between process variants. The authors in [10] proposed the vBPMN (variant BPMN) approach to define the modelling of workflow variants by pattern- and rule-based adaptation in BPMN [8]. Their approach consists of, first, marking adaptive segments (variants) in a reference process; second, a BPMN2 adaptation pattern catalogue for realizing behaviour deviations; and last, rules formulated in an event-condition-action (ECA) format that specify which adaptation patterns are applied to which adaptive segments and in what data context.
Figure 9: An adapted process model in vBPMN
To indicate the start and end of an _"adaptive segment"_ within a BPMN process definition, two new nodes are introduced in vBPMN. An adaptive segment is structured as a single-entry, single-exit block to facilitate the use of adaptation patterns. Figure 9 shows our example of processing invoice payments with three adaptive segments, each of them marked between an opening and a closing square bracket (depicted as intermediate events), indicating that this segment may be modified using an adaptation pattern. Another annotation proposed by the authors in [10] assigns a black diamond to the upper left corner of a single task. Each pattern has an implicit parameter \(<\)_AdaptationSegment\(>\)_ relating it to the process segment it is applied to, and the workflow engine is notified whenever an adaptive segment is entered or left by explicitly annotating tasks.
To construct new variants the connection between the values of data-context variables and process tailoring operations needs to be established. This is achieved by formulating adaptation rules in an event-condition-action (ECA) format [10]. Each time a token enters an adaptive segment, the context variables are evaluated and the segment potentially becomes subject to immediate adaptations before continuing through the segment.
Another variant may be constructed if we annotate activity "Verify successful payment" as an "adaptive segment" to send an extra message notification of the verified payment to "special" customers (i.e., those with a high status) who order their goods/services via call centers. This is achieved by adding a timed-message pattern to the adaptive segment. A further adaptation can add an additional task in parallel with sending the message. Parameterized patterns are applied to the adaptive segment by wrapping them around it as extensions, as shown in Figure 10:
Rule #1 realizes the send-message contextual facet for special customers placing their orders via call centres, as shown in Figure 10 (a).
Rule #2 inserts an additional task, for example "Send advertisement", for this type of customer, as shown in Figure 10 (b).
**RULE #1**: ON verifyPayment_entry IF orderVia="CallCenter" AND customerStatus="High" THEN APPLY timedMessage(segment="Verification_entry", HandlerTask=
"VerifyPayment", time=10 min)
**RULE #2**: ON verifyPayment_entry IF orderVia="CallCenter" THEN APPLY insert_parallel(segment= "Verification_entry", task="Send Advertisement")
Each adaptation rule has only one context factor, which uniquely assigns the rule to a distinct process variant. However, no systematic way is explained for marking these adaptive segments to capture variability in process models.
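The following Python sketch illustrates, under our own simplified encoding, how such ECA rules could be evaluated against the data context when a token enters an adaptive segment; it is not a reimplementation of the vBPMN engine, and the rule dictionaries merely mirror RULE #1 and RULE #2 above.

```python
rules = [
    {   # RULE #1: timed message for high-status call-centre customers
        "on": "verifyPayment_entry",
        "if": lambda ctx: ctx["orderVia"] == "CallCenter" and ctx["customerStatus"] == "High",
        "apply": ("timedMessage", {"segment": "Verification_entry",
                                   "HandlerTask": "VerifyPayment", "time": "10 min"}),
    },
    {   # RULE #2: insert a parallel advertisement task for call-centre orders
        "on": "verifyPayment_entry",
        "if": lambda ctx: ctx["orderVia"] == "CallCenter",
        "apply": ("insert_parallel", {"segment": "Verification_entry",
                                      "task": "Send Advertisement"}),
    },
]

def adaptations(event: str, ctx: dict) -> list:
    """Return the adaptation patterns to apply for this event and data context."""
    return [r["apply"] for r in rules if r["on"] == event and r["if"](ctx)]

ctx = {"orderVia": "CallCenter", "customerStatus": "High"}
for pattern, params in adaptations("verifyPayment_entry", ctx):
    print(pattern, params)   # both rules fire for this context
```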
Another approach, namely **Provop** (PROcess Variants by OPtions), is proposed by the authors in [14] for managing large collections of process variants in a single process. A set of change operations (i.e., insert, delete, modify and move) is used to describe the difference between the _basic process model_ (i.e., the most frequently executed variant of a process family, or a process without a specific use case) and its respective variant models. They identified requirements related to
Figure 10: (a) Time message-pattern adaptation of a process in vBPMN; (b) Insert parallel-pattern adaptation of a process in vBPMN
the modelling of process variants, linking them to the process context, executing them in a WfMS, and continuously optimizing them to deal with evolving needs.
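As a rough illustration of the Provop idea, the following Python sketch applies a list of change operations to a basic process model represented as a simple activity sequence; the list-based representation, the helper names, and the example option are ours, and gateways and data flow are ignored.

```python
basic_process = ["Place order", "Receive order", "Receive e-invoice",
                 "Update profile", "Manage payment"]

def apply_ops(process, ops):
    """Derive a variant by applying insert/delete/modify/move operations."""
    p = list(process)
    for op, *args in ops:
        if op == "delete":
            p.remove(args[0])
        elif op == "insert":
            activity, after = args
            p.insert(p.index(after) + 1, activity)
        elif op == "modify":
            old, new = args
            p[p.index(old)] = new
        elif op == "move":
            activity, after = args
            p.remove(activity)
            p.insert(p.index(after) + 1, activity)
    return p

# An illustrative option describing a hard-copy variant with a pre-requested payment.
option = [("modify", "Receive e-invoice", "Receive hard-copy invoice"),
          ("insert", "Request payment by bank transfer", "Receive order")]
print(apply_ops(basic_process, option))
```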
Although different adaptation patterns, such as insert/delete, replace/move/swap a process fragment, or embedding the latter in loops, parallel or XOR branches, have been applied along the entire process lifecycle [15, 29], they are not yet sufficient to cope with the complexity of process families [1]. To this purpose, the authors in [3] argued for addressing the variability-specific needs of process families through change patterns that complement these adaptation patterns. Their approach, namely **CP4PF** (Change Patterns for Process Families), comprises ten derived change patterns implemented in C-EPC for facilitating variability management in process families. These change patterns have been grounded empirically and validated in a real scenario through a case study of a check-in process in the airline industry, with the aim of considerably reducing variability management effort.
### Template method pattern-based variability mechanism
Design patterns like 'Template Method' allow for controlling the behaviour of certain steps, called 'placeholders', deferred to process runtime [19]. The template approach has been proposed for configuring a reference process based on a set of related business process models with an a priori known variability [17], as well as on superimposed variants [9]. An essential BPMN meta-model is introduced by the authors in [17] to capture the fixed behaviour (i.e., process structure) defined as a set of activities and events. A template consists of a control-flow definition engaging a fixed set of activities and events, e.g., template1 in Figure 11. A process structure is specified by multiple TKVs (i.e., sets of activity types). A TKV is a tuple \(<\)P, C\(>\), where P is a set of abstract activities, i.e., placeholders, and C is a set of concrete activities. From the TKV definition each placeholder is derived, which can be either an activity or an event. The variants assigned to a placeholder (i.e., parts) are modeled explicitly using maps (i.e., sets of mappings describing the fitment of a part at a placeholder) [17]. Figure 11 shows an excerpt of the invoice payments process to illustrate variability and configurability. The process is defined as a control flow over the set of activities A = {Place order, Receive order, Pre-request payment, Invoice received, Perform tasks, Manage payment, ...}, as depicted in Figure 11.
A configurable process \(P_{InvPayment}\) = {\(<\)E, A, template1, D, tkv1 \(>\)} of a process family PF ={\(<\) E, A, {template1}, D, TKV \(>\)} where:
A = {Place order, Receive order, Pre-request payment, Invoice type, Perform tasks, Manage payment,..},
E = {},
template1= instance of essential BPMN meta model, and
TKV = {tkv1, tkv2} where tkv1 = {P= {Orders using}, C= {A \(-\) {Orders using}}},
tkv2 = {P={Orders using, Invoice type}, C={A \(-\) {Orders using, Invoice type}}} and
D=data objects.
Here, five _activity maps_ are defined; e.g., aMap1, aMap2 and aMap3 are specializations of the process structure "ProcessInvoice" if the context _Orders using_ is selected, whereas aMap4 and aMap5 are specializations of "ProcessInvoice" if the context _Invoice type_ is selected. Some different behavioural variants obtained through different configurations are as follows:
1. configuration1 = \(<P_{InvPayment}\), {Pre-requestPayment1}, {aMap1}\(>\) where
Figure 11: Process family of processing invoice payments
Pre-requestPayment1={<{}, {Request payment by credit-card }, template2, D, \(\phi>\)}
2. configuration2 =\(<P_{InvPayment}\), {Pre-requestPayment2}, {aMap2}\(>\) where Pre-requestPayment2={<{}, {Request payment by bank transfer }, template3, D, \(\phi>\)}
3. configuration3 =\(<P_{InvPayment}\), {CustomerInfo}, {aMap3}\(>\) where CustomerInfo={<{}, {Review order info, Capture customer info }, template4, D, \(\phi>\)}
4. configuration4 =\(<P_{InvPayment}\), {CustomerInfo}, {aMap3}\(>\) where CustomerInfo={<{}, {Capture customer info, Review order info }, template5, D, \(\phi>\)}
The same logic applies to the next context, "Invoice type", as a specialization of the process structure "ProcessInvoice". A configuration structure describes the entire configuration context in terms of the parts that can be fitted at placeholders. It contains different process structures, in this case six, as depicted in Figure 11. A configurable process with placeholders is therefore a specialization of a template. The behaviour of the configurable business process _ProcessInvoice_ can differ, as different parts can be fitted at the defined placeholders, i.e., the abstract activities _Pre-request payment_ and _Invoice type_.
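The following Python sketch illustrates, under our own simplified data structures, how a configuration fits parts at the placeholders of a template; it is only meant to convey the template/placeholder/map idea from [17], not to reproduce its meta-model.

```python
# Template with a fixed control flow; two steps are placeholders (abstract activities).
template1 = ["Place order", "Receive order", "Pre-request payment",
             "Invoice type", "Perform tasks", "Manage payment"]
placeholders = {"Pre-request payment", "Invoice type"}

# Activity maps: which concrete part is fitted at which placeholder.
aMap1 = {"Pre-request payment": ["Request payment by credit-card"]}
aMap2 = {"Pre-request payment": ["Request payment by bank transfer"]}
aMap4 = {"Invoice type": ["Receive e-invoice"]}

def configure(template, *maps):
    """Derive a concrete process by replacing each placeholder with its fitted part."""
    fitting = {}
    for m in maps:
        fitting.update(m)
    out = []
    for step in template:
        if step in placeholders:
            out.extend(fitting.get(step, []))   # unresolved placeholders are dropped
        else:
            out.append(step)
    return out

print(configure(template1, aMap1, aMap4))   # one concrete configuration
```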
### Node configuration variability mechanism
A node of a customizable process model, called a configurable node, is a variation point assigned different customization options. Two main approaches fall into this group: Configurable Integrated Event-driven Process Chains (C-iEPC) and Configurable Workflows. The authors in [13, 20, 22, 28] extended the EPC language for configuring a reference process model to capture multiple process variants in a consolidated manner. Reference process models should be distinguished from so-called customizable process models: a customizable process model is a concrete process model intended for a certain context, whereas a reference process model is intended to capture the common behaviour or best practices of a family of process variants [12, 23]. In the configurable workflows approach, the authors in [13, 27] presented C-YAWL (Configurable YAWL), an extension of the executable process modelling language YAWL2 in which variation points in a process are configured using so-called _ports_. Logic connectors (AND, XOR and OR) are integrated into each task in the form of a split (for the outgoing arcs) and a join (for the incoming arcs). In C-YAWL, as in C-EPC, each feasible port variation is represented by a process fact. A C-EPC is an EPC in which functions and connectors can be marked as "configurable"; a modeler can derive an individualized EPC from a C-EPC by selecting a possible variant for each configurable element [20]. As this approach does not present a meta-modelling solution for variants, it is outside the scope of this literature review section and is not discussed further. In the C-iEPC approach, configurable nodes may be activities, gateways and events, as well as objects and resources, presented with a meta-model.
Customization is achieved by selecting one customization option for each configurable node. Configurable roles and configurable objects have two dimensions: optionality and specialization. If a configurable role (object) is "optional" (OPT), it can be restricted to "mandatory" (MND) or switched "OFF" to be removed from the process; if it is "mandatory", it can only be switched "OFF" [20]. There are some options for every configurable node, such as the _off_ option, which means the node(s) do not appear in the customized model, or _on_, meaning the node(s) are kept in the customized model. Configurable nodes therefore indicate the differences between process variants. In the extended notation, namely C-iEPC, these variations are captured in the way roles and objects are assigned to activities. Keeping the control-flow, resource and object perspectives synchronized is essential to prove the correctness of the individualized process model.
Figure 12: The C-iEPC model representing all invoice payment variants
The number of outgoing flows of a configurable OR (if it is a split gateway) or the number of its incoming flows (if it is a join) can be restricted to any combination (e.g., two flows out of three), including being restricted to a single flow, in which case the gateway disappears [20].
Our case study of processing customer invoice payments is shown in Figure 12, which captures all five variants modeled as separate process models (see Section 2) in one single process model. Here, activities and gateways (i.e., variation points) are marked as configurable with a thicker border. Configurable gateways can be customized to an equal or more restrictive gateway: a configurable OR can be restricted to an XOR or to an AND gateway, or left as a regular OR (no restriction). For example, we can capture the choice of processing orders via online shops or call centers by customizing the first XOR-split in Figure 12, or we can postpone the decision until runtime. If the choice is "online shops", we restrict this gateway to the outgoing flow leading to the event "Invoice reviewed"; as a result, the branch starting with the sequence flow "Call centers" is removed, and vice versa. Configurable activities can be kept on or switched off. If switched off, the activity is simply hidden in the customized model. In addition, they can be customized to optional, in which case the choice of whether to keep the activity or not is deferred until runtime. For example, the functions "Request payment by credit-card" and "Request payment by bank transfer" are configurable nodes in Figure 12; thus, we can switch them off for those orders received via online shops for which a pre-request payment is not required. Configurable elements may also be resources (called roles in C-iEPCs) and objects. The authors in [20] propose to use logical gateways, so-called range connectors (i.e., XOR, OR and AND), that allow any combination of the resources and objects connected to activities, modeled by a pair of natural numbers, e.g., a lower bound (2) and an upper bound (5), meaning at least 2 and at most 5 resources. For simplicity, Figure 12 depicts only three resources marked as configurable nodes with a thicker border, meaning that during customization they can be configured to one of the specialized resources. However, it is out of our scope to demonstrate how the specialization of resources assigned to activities is achieved.
A formalized algorithm with a proven theory is presented to guarantee the correctness of the individualized process model (i.e., an iEPC with non-relevant options removed) derived from a configurable process model with respect to a valid configuration. Accordingly, the functions "Capture customer info", "Review order info", "Request payment by credit-card", "Request payment by cash", and "Review invoice" have been switched _OFF_ and thus replaced by an arc. The resulting model is shown in Figure 13 (a), whereas (b) shows the result after applying the last step of the individualization algorithm, where SESE control-flow connectors are replaced with arcs (this may result in consecutive events and consecutive functions that have to be removed). C-iEPCs do not provide any execution support; they are formally defined in [20]. An iEPC model is derived from a C-iEPC using an individualization algorithm. In the customized model, all nodes that are no longer connected to the initial and
final events via a path are removed, and the remaining nodes are reconnected to preserve (structural and behavioural) model correctness. To capture domain properties and their values, a questionnaire is linked to the configurable nodes of the C-iEPC, supported by the Synergia3 and Apromore4 toolsets. The resulting customized models have been validated via a case study in the film industry.
Footnote 3: www.processconfiguration.com
Footnote 4: www.apromore.org
Figure 13: The application of the individualization algorithm to a fragment of processing customer e-invoice process model of Figure 12
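The following Python sketch conveys the two central steps of the individualization described above, switching configurable functions OFF (replacing them by an arc) and removing nodes that are no longer on a path from the start; it is a simplification of the formal algorithm in [20], assuming the networkx library is available, and the small example graph is our own.

```python
import networkx as nx   # assumed available; any directed-graph structure would do

def individualize(g: nx.DiGraph, switched_off, start):
    h = g.copy()
    for node in switched_off:
        if node not in h:
            continue
        preds, succs = list(h.predecessors(node)), list(h.successors(node))
        h.remove_node(node)                                      # OFF function is hidden...
        h.add_edges_from((p, s) for p in preds for s in succs)   # ...and replaced by an arc
    keep = nx.descendants(h, start) | {start}
    h.remove_nodes_from(set(h.nodes) - keep)                     # drop nodes no longer on a path
    return h

g = nx.DiGraph([("Order placed", "Receive order"),
                ("Receive order", "Request payment by credit-card"),
                ("Receive order", "Request payment by bank transfer"),
                ("Request payment by credit-card", "Receive e-invoice"),
                ("Request payment by bank transfer", "Receive hard-copy invoice")])
h = individualize(g, {"Request payment by credit-card"}, start="Order placed")
print(list(h.nodes))
```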
Instead, we design the questionnaire model based on their proposal to fit our example. It captures the properties of processing invoice payments, as shown in Figure 14, and comprises a set of features called domain facts organized into questions. All questions and facts are assigned a unique identifier. A domain fact has a default value, which is the most common choice for that fact; e.g., _f5: e-invoice_ is the default since most invoice payments are e-invoices, so we can assign a false value to the other fact, _f6: hardcopy invoice_. Moreover, if a domain fact needs to be explicitly set when answering a question, it is marked as mandatory; otherwise, if a fact is left unset for the corresponding question, its default value can be used to answer the question or the fact is skipped. In a questionnaire model, an order is established for posing questions to users, in contrast with the feature model. This is achieved via order dependencies, of which there are two types: _full_ and _partial_. For example, _q2_ can be posed after _q1_ is answered; this is expressed via the partial dependency between _q1_ and _q2_, depicted with a dashed line in Figure 14. A full dependency, e.g., _q4_ is posed after both _q1_ AND _q3_ are answered, captures a mandatory precedence in order to give priority to the most important questions.
Figure 14: An extract of the questionnaire model for configuring e-invoice type process model
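A minimal Python sketch of answering such a questionnaire model in a valid order is given below; treating both partial and full dependencies simply as prerequisites is our own simplification, and the question identifiers and default facts follow Figure 14.

```python
questions = ["q1", "q2", "q3", "q4"]
full_deps = {"q4": {"q1", "q3"}}        # q4 only after q1 AND q3 are answered
partial_deps = {"q2": {"q1"}}           # q2 after q1
defaults = {"f5": True, "f6": False}    # e-invoice is the default invoice type

def askable(q, answered):
    """A question can be posed once all of its (simplified) dependencies are answered."""
    need = full_deps.get(q, set()) | partial_deps.get(q, set())
    return q not in answered and need <= answered

answered, facts = set(), dict(defaults)
for _ in questions:
    q = next(q for q in questions if askable(q, answered))
    answered.add(q)                     # in a real dialogue the user would set facts here
print(answered, facts)                  # all questions answered; unset facts keep defaults
```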
Having given an overview of current process meta-modelling approaches, we now analyse and compare the existing approaches using different criteria.
## 4 Comparative analysis of current approaches
Recently, some comparative studies have been reported in the business process variability domain. [21] conducted a systematic inventory of approaches to customizable process modelling. The authors identify and classify the major approaches and provide a comparative evaluation with the objective of answering three research questions (e.g., what are the common and distinct features of customizable process modelling approaches, and which research gaps exist in the current literature review (LR)). The authors in [2] conducted a systematic literature review to evaluate existing variability support across all phases of the business process life cycle. They considered and categorized primary studies based on eight research questions (such as the underlying business process modelling language used, the tools available for enabling process variability, and the validation of the proposed methods). They developed a framework, called VIVACE, to enable process engineers to evaluate existing process variability approaches, and evaluated it against three main approaches from the LR: C-EPCs, Provop, and PESOA. Our survey differs from these, as we restrict our search to only those approaches (five out of twenty-eight papers from digital libraries) that introduce a meta-model for capturing variants of a business process. Other work by the authors in [26] focuses on identifying not only the characteristics of business process variability but also the challenges in this field through a literature mapping study; however, they did not compare or analyse the surveyed approaches. The authors in [25] give a comparison of their assessed approaches with the aim of making the produced process model artefacts more understandable to business analysts, and the author in [11] compares two approaches (C-YAWL and vBPMN) on the basis of a reference process model but using different types of configuration and adaptation mechanisms. In contrast, we extend the approach from [21]: we first describe each main approach in detail (see Section 3), then apply an example to it, and finally draw a comparative analysis based on the evaluation of each criterion derived from our LR.
Table 1 summarizes the evaluation results for the process variants meta-modelling approaches. Each column indicates to what extent the approach in question covers each evaluation criterion, defined as follows. We use a "+" sign to indicate a criterion that is fulfilled, a "–" sign to indicate a criterion that is not fulfilled, and a "+/–" sign to indicate partial fulfilment. The first column lists the six main approaches, including our approach. The following columns indicate the coverage of each criterion. The last column indicates the modelling language(s) underlying each approach.
**RQ1:** Which process types and process perspectives are covered by process variability meta-models? From the results of the LR with respect to this research question, we can conclude that two types of processes exist: design-time (i.e., variation is considered only during the process modelling phase) and runtime (i.e., variation is considered only during process enactment, for example to handle exceptions). Process perspectives categorize the surveyed approaches mainly into the functional perspective (which activities are captured) and the behavioural perspective (the control-flow sequence), even though some approaches also deal with aspects of the organizational perspective (resources to be consumed) and the informational perspective (consumption of data). The criteria derived from the **process types** are therefore:
* **Conceptual**: If an approach is designed to support conceptual modelling only, variability is captured during process definition and the variant models will not be executed on top of a BPMS. In this case we say that the approach meets this criterion.
* **Executable**: An approach meets this criterion if variability is considered for process models that are meant to be executed by a typical BPMS. Moreover, during their enactment no inconsistencies are reported in the associations between the different elements of a process model.
Accordingly, the criteria derived for the **process modelling perspectives** are:
* **Control-flow**: An approach meets this criterion if variability is captured along activities and decision gateways that might become variation points (e.g., capturing an activity that is skipped in one of the variants).
* **Resources**: The approach captures variability in the participating resources (human or system) that are planned to perform different tasks. In doing so, resources can become variation points (e.g., a typical resource does not perform in some of the process variants). If the approach does not represent them graphically but only mentions them, we say that the approach partially fulfils this criterion [21].
* **Data Objects**: An approach meets this criterion if data objects (i.e., consumed input data objects and produced output data objects) might become variation points. For example, a pay-invoice confirmation is not captured in one of the variants of an order-to-pay process. If the approach does not represent them graphically but only mentions them, we say that the approach partially fulfils this criterion.
**RQ2:** Which supporting technique is used to introduce or capture variability between process models? With respect to this research question, the criteria derived for the **supporting technique** are:
* **Behavioural**: The approach takes as input a collection of process variants and derives a process variant by hiding and blocking process elements. Any behavioural anomalies, such as deadlocks, should be avoided.
* **Structural**: The approach takes as input a base process model and derives a process variant by applying a set of change operations to it. Any structural anomalies, such as disconnected activities, should be avoided.
Depending on the technique supported by a specific approach, some transformations have to be applied to the process model in order to derive a variant. These transformations (by restriction or extension) are categorized by the following criteria:
* **Restriction**: An approach matches this criterion if a process model is configured by restricting its behaviour.
* **Extension**: An approach matches this criterion if a process model is configured by extending its behaviour.
**RQ3:** Which process is a specialization (generalization) of another process? The approach takes as input a collection of process variants and derives a process variant by applying substitution operations to its abstract activities. A valid derived criterion is then:
* **Process Specialization**: Specialization relationship between processes. An approach matches this criterion if a specialization/generalization relationship exists among processes.
From the information in Table 1 we can conclude that all approaches cover the conceptual level of process variants and the control-flow process perspective. The authors in [20] also capture variability in the participating resources (human or system) and in data objects. A supporting technique to introduce or capture variability between process models is proposed by us and by the authors in [20] and [10]; the latter proposed both _restriction_ and _extension_ to capture variability. Only the method in [5] meets the criterion on _process specialization_.
## 5 Conclusions
In this paper we summarized the approaches that have been proposed to explicitly and adequately capture and manage processes with variants. As discussed, reference or customizable process models were introduced to model these variant collections in a way that allows each variant to be derived by inserting, changing or removing an activity according to a process context.
\begin{table}
\begin{tabular}{l c c c c c c c c c c l}
\hline
Main Approaches & \multicolumn{2}{c}{Process Type} & \multicolumn{3}{c}{Process Perspective} & \multicolumn{2}{c}{Supporting Techniques} & \multicolumn{2}{c}{Variability Type} & Process Specialization & Process modelling Language \\
 & Conceptual & Executable & Control-flow & Resources & Objects & Behavioural & Structural & Restriction & Extension & & \\
\hline
**PESOA** [4] & + & – & +/– & – & + & – & – & + & – & – & BPMN, UML ADs \\
**BPFM** [18] & + & – & +/– & – & – & – & – & + & – & – & UML ADs \\
**vBPMN** [10] & + & +/– & + & – & – & + & + & – & + & – & Block-structured BPMN \\
**PF** [17] & + & – & +/– & – & – & – & – & + & – & – & BPMN \\
**C-iEPC** [20] & + & – & + & + & + & + & + & – & – & – & C-iEPC \\
**PV Hierarchy** [5] & + & + & + & – & + & – & + & – & – & + & BPMN \\
\hline
\end{tabular}
\end{table}
Table 1: Comparative analysis of approaches for business process variability management
This survey reviewed the current literature by providing an overview of the meta-modelling approaches that have been extended in order to capture the variations of business processes. Moreover, we provided a comparative analysis of these approaches based on different criteria identified from this inventory. A potential area for future research is investigating the scalability of the proposed approaches to handle large and complex process variants.
|
2308.12507
|
The chaotic four-body problem in Newtonian gravity -- II. An
ansatz-based approach to analytic solutions
|
In this paper, we continue our analysis of the chaotic four-body problem by
presenting a general ansatz-based analytic treatment using statistical
mechanics, where each outcome of the four-body problem is regarded as some
variation of the three-body problem (e.g., when two single stars are produced,
called the 2+1+1 outcome, each ejection event is modeled as its own three-body
interaction by assuming that the ejections are well separated in time). This is
a generalization of the statistical mechanics treatment of the three-body
problem based on the density-of-states formalism. In our case, we focus on the
interaction of two binary systems, after which we divide our results into three
possible outcome scenarios (2+2, 2+1+1, and 3+1). For each outcome, we apply an
ansatz-based approach to deriving analytic distribution functions that describe
the properties of the products of chaotic four-body interactions involving
point particles. To test our theoretical distributions, we perform a set of
scattering simulations in the equal-mass point particle limit using FEWBODY. We
compare our final theoretical distributions to the simulations for each
particular scenario, finding consistently good agreement between the two. The
highlights of our results include that binary-binary scatterings act to
systematically destroy binaries producing instead a single binary and two
ejected stars or a stable triple, the 2+2 outcome produces the widest binaries
and the 2+1+1 outcome produces the most compact binaries.
|
Carlos M. Barrera Retamal, Nathan W. C. Leigh, Nicholas C. Stone
|
2023-08-24T02:26:47Z
|
http://arxiv.org/abs/2308.12507v1
|
The chaotic four-body problem in Newtonian gravity - II. An ansatz-based approach to analytic solutions
###### Abstract
In this paper, we continue our analysis of the chaotic four-body problem by presenting a general ansatz-based analytic treatment using statistical mechanics, where each outcome of the four-body problem is regarded as some variation of the three-body problem (e.g., when two single stars are produced, called the 2+1+1 outcome, each ejection event is modeled as its own three-body interaction by assuming that the ejections are well separated in time). This is a generalization of the statistical mechanics treatment of the three-body problem based on the density-of-states formalism. In our case, we focus on the interaction of two binary systems, after which we divide our results into three possible outcome scenarios (2+2, 2+1+1, and 3+1). For each outcome, we apply an ansatz-based approach to deriving analytic distribution functions that describe the properties of the products of chaotic four-body interactions involving point particles. To test our theoretical distributions, we perform a set of scattering simulations in the equal-mass point particle limit using FEWBODY. We compare our final theoretical distributions to the simulations for each particular scenario, finding consistently good agreement between the two. The highlights of our results include that binary-binary scatterings act to systematically destroy binaries, producing instead a single binary and two ejected stars or a stable triple; that the 2+2 outcome produces the widest binaries; and that the 2+1+1 outcome produces the most compact binaries.
## 1 Introduction
The four-body problem has scarcely been studied analytically. This can be understood, at least in part, upon considering the history of the three-body problem, and its notoriety for being a strong example of chaos in nature. With the two-body problem solved, the temptation to find an analytic solution to the three-body problem attracted many researchers. Generally, the goal was to predict the positions of the particles at any future time, for any set of initial conditions. This eventually led to the understanding that the addition of even one extra particle (relative to the two-body problem) renders the number of variables in the equations of motion greater than the number of equations. The problem is unsolvable, and more particles will only make it worse. Consequently, the four-body problem received little attention for centuries (e.g. Nash & Monaghan, 1980).
More recently, computational advances have allowed for numerical studies of the four-body problem (e.g. Harrington, 1974; Saslaw, Valtonen & Aarseth, 1974; Mikkola, 1983, 1984a,b; Rasio, McMillan & Hut, 1995; Fregeau et al., 2004; Leigh & Geller, 2012; Leigh et al., 2016; Ryu, Leigh & Perna, 2017a,b,c). Ignoring planetary dynamics, most of these studied scattering interactions between two binary star systems. For example, Mikkola (1983) confirmed that stable triple systems form during encounters between identical binaries. Mikkola (1984a) extended this result to include binaries with different initial orbital energies, finding in the process that significantly more triples form as the ratio of binding energies increases from unity.
The primary astrophysical motivation for this paper is binary-binary scatterings in dense stellar systems, such as open, globular, or nuclear star clusters. In such systems, the rate of binary-binary scatterings can dominate the rate of binary-single scatterings provided the binary fraction satisfies \(\rm f_{b}\gtrsim 10\%\)(Sigurdsson & Phinney, 1993; Leigh & Sills, 2011). In this case, binary-binary scatterings are the dominant cluster heating source and the 4-body problem's scattering outcomes become critical for understanding the thermodynamic evolution of the host star cluster. Moreover, one possible outcome of a binary-binary scattering (which does not occur for binary-single scatterings in the point particle limit) is the dynamical formation of a stable triple star system (e.g. Leigh et al., 2016). Such triple systems are of great interest for their susceptibility to the Kozai-Lidov mechanism, which can create accreting inner binaries or exotic astrophysical transients (e.g. Perets & Fabrycky, 2009). The decay products of binary-binary scatterings are thought to have been observed directly, both in the form of stable triples (e.g. Leigh & Sills, 2011; Leigh & Geller, 2013) and runaway O/B stars (e.g. Hoogerwerf et al., 2001; Oh et al., 2015).
The strongly chaotic nature of the generic four-body problem makes an analytic solution to a set of specific initial conditions impossible (except for some fine-tuned sets of measure zero). However, chaos becomes a useful tool if we are interested in _probabilistic distributions_ of outcomes corresponding to _distributions_ of initial conditions. Following the pioneering
work of Monaghan (1976a,b, hereafter the "Monaghan formalism"), we employ statistical mechanics to compute distributions of outcomes for the generic problem of binary-binary scatterings. This paper is a more analytic continuation of our previous work, which examined binary-binary scattering using a large suite of numerical integrations (Leigh et al., 2016, hereafter "Paper I").
In §2, we provide an overview of the geometry and phase space of the problem. In §§3, 4, and 5, we use the Monaghan formalism to compute phase space volumes and parameter distributions for the three possible outcomes of a binary-binary scattering event involving point particles. In §6, we compute branching ratios for these three outcomes. In §7 we present the toolkit FEWBODY and describe the numerical simulations we use to test our analytic distributions. In §8 we show our results, comparing the simulated data with the analytic derivations of the previous sections. In §9 we consider the astrophysical implications of our results, and in §10 we summarize our work.
## 2 Statistical mechanics of the four-body problem
In this paper we consider the outcome of the generic, non-hierarchical four-body problem. We assume all stars are point particles and neglect tidal forces, physical collisions, the effects of general relativity, and non-gravitational forces. The four interacting stars have masses \(m_{\rm a}\), \(m_{\rm b}\), \(m_{\rm c}\), and \(m_{\rm d}\), which may differ from each other. The interacting system has a conserved total energy, \(E_{0}\), and a conserved total angular momentum, \(\widetilde{L}_{0}\). Our approach, which was first developed in the Monaghan formalism to treat the chaotic three-body problem, is to consider the statistical phase space of different outcomes of the four-body problem. By calculating the phase space volume of a single outcome as a function of parameters of interest (e.g. ejection velocities), we can construct distributions of these outcome parameters. By calculating the relative volumes of different outcomes, we can compute branching ratios between them.
There are four possible outcomes of the generic four-body problem. Following a phase of chaotic, non-hierarchical gravitational interactions, the system eventually forms some combination of hierarchical, bound particles and unbound, escaping particles. The four possible combinations are
* One escaping, unbound star and a hierarchically stable triple (the "3+1" outcome).
* Two escaping, unbound stars and a surviving binary (the "2+1+1" outcome).
* Two surviving binaries which are mutually unbound from each other (the "2+2" outcome).
* Four escaping, unbound stars (the "1+1+1+1" outcome).
In isothermal star clusters with Maxwellian velocity distributions, the 1+1+1+1 outcome is almost always energetically forbidden (Leigh et al., 2016), so we ignore it for the remainder of this work.
The primary assumption we make in analyzing the 3+1, 2+1+1, and 2+2 cases is that the strongly chaotic interactions of a non-hierarchical four-body system will uniformly populate the phase space available in each of these outcomes. The same assumption motivated the statistical mechanical treatment of the three-body problem in Monaghan (1976a,b), and was validated by post-hoc checks using numerical orbit integrations. Because the parameter space of the four-body problem is much larger than that of the three-body problem, we cannot hope to fully cover it with numerical integrations, but we will check our results against a suite of four-body scattering experiments.
Although our primary motivation is to make statistical predictions for the outcomes of binary-binary scatterings, our results should apply equally well to other non-hierarchical four-body systems, such as a strong triple-single scattering, or to other, less probable events (such as a simultaneous encounter between a binary and two single stars).
In the following three sections, we consider the 2+2, 3+1, and 2+1+1 outcomes. It is important to note that variables in these sections may sometimes have the same names but different definitions. We define all variables locally at the beginning of their respective sections.
## 3 The 2+2 outcome
We begin with the 2+2 case, as it has the greatest similarity to the classic 2+1 three-body problem. Fig. 1 shows a cartoon of the 2+2 outcome. The final state here is two binaries; the first, composed of masses \(m_{\rm a}\) and \(m_{\rm b}\), has total mass \(m_{\rm B1}=m_{\rm a}+m_{\rm b}\). The second, composed of masses \(m_{\rm c}\) and \(m_{\rm d}\), has total mass \(m_{\rm B2}=m_{\rm c}+m_{\rm d}\). The binaries have individual reduced masses \(\mathcal{M}_{1}=m_{\rm a}m_{\rm b}/m_{\rm B1}\) and \(\mathcal{M}_{2}=m_{\rm c}m_{\rm d}/m_{\rm B2}\), and a joint reduced mass of \(m=m_{\rm B1}m_{\rm B2}/M\), where \(M=m_{\rm B1}+m_{\rm B2}\).
The total energy of this final state is \(E_{0}=E_{\rm s}+E_{\rm B1}+E_{\rm B2}\), where
\[E_{\rm s}= \frac{1}{2}m\dot{\vec{r}}_{\rm s}^{\,2}-\frac{Gm_{\rm B1}m_{\rm B 2}}{r_{\rm s}} \tag{1}\] \[E_{\rm B1}= \frac{1}{2}\mathcal{M}_{1}\dot{\vec{r}}_{\rm ab}^{\,2}-\frac{Gm _{\rm a}m_{\rm b}}{r_{1}}\] \[E_{\rm B2}= \frac{1}{2}\mathcal{M}_{2}\dot{\vec{r}}_{\rm cd}^{\,2}-\frac{Gm _{\rm c}m_{\rm d}}{r_{2}}.\]
In the above equations, \(\vec{r}_{\rm ab}\) is the separation vector between mass a and mass b, \(\vec{r}_{\rm cd}\) is the separation vector between mass c and mass d, and \(\vec{r}_{\rm s}\) is the separation vector between the two binary centers of mass, as is shown in Fig. 1.
There is likewise a well-defined final state angular momentum, \(\widetilde{L}_{0}=\widetilde{L}_{\rm s}+\widetilde{L}_{\rm B1}+\widetilde{L}_{ \rm B2}\), where
\[\widetilde{L}_{\rm s}= m(\vec{r}_{\rm s}\times\dot{\vec{r}}_{\rm s}) \tag{2}\] \[\widetilde{L}_{\rm B1}= \mathcal{M}_{1}(\vec{r}_{\rm ab}\times\dot{\vec{r}}_{\rm ab})\] \[\widetilde{L}_{\rm B2}= \mathcal{M}_{2}(\vec{r}_{\rm cd}\times\dot{\vec{r}}_{\rm cd}).\]
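For concreteness, the following Python sketch evaluates the decomposition of Eqs. (1)-(2) numerically for a given 2+2 configuration (assuming units with \(G=1\)); the function name and the test values are illustrative only.

```python
import numpy as np
G = 1.0

def two_plus_two_energies(m, r, v):
    """m, r, v: length-4 arrays of masses, position vectors and velocity vectors
    for stars (a, b) in binary 1 and (c, d) in binary 2, in the global COM frame.
    Returns E_s, E_B1, E_B2 and the total angular momentum built from its three terms."""
    ma, mb, mc, md = m
    mB1, mB2 = ma + mb, mc + md
    mu1, mu2, mu = ma*mb/mB1, mc*md/mB2, mB1*mB2/(mB1 + mB2)
    # relative separations within each binary and between the binary centres of mass
    r_ab, v_ab = r[0] - r[1], v[0] - v[1]
    r_cd, v_cd = r[2] - r[3], v[2] - v[3]
    R1, V1 = (ma*r[0] + mb*r[1])/mB1, (ma*v[0] + mb*v[1])/mB1
    R2, V2 = (mc*r[2] + md*r[3])/mB2, (mc*v[2] + md*v[3])/mB2
    r_s, v_s = R1 - R2, V1 - V2
    E_B1 = 0.5*mu1*(v_ab @ v_ab) - G*ma*mb/np.linalg.norm(r_ab)
    E_B2 = 0.5*mu2*(v_cd @ v_cd) - G*mc*md/np.linalg.norm(r_cd)
    E_s  = 0.5*mu*(v_s @ v_s) - G*mB1*mB2/np.linalg.norm(r_s)
    L = mu*np.cross(r_s, v_s) + mu1*np.cross(r_ab, v_ab) + mu2*np.cross(r_cd, v_cd)
    return E_s, E_B1, E_B2, L

# Equal-mass example: two binaries separated along x, receding slowly.
m = np.array([1.0, 1.0, 1.0, 1.0])
r = np.array([[0.5, 0, 0], [-0.5, 0, 0], [10.5, 0, 0], [9.5, 0, 0]])
v = np.array([[0, 0.7, 0], [0, -0.7, 0], [0.3, 0.7, 0], [0.3, -0.7, 0]])
print(two_plus_two_energies(m, r, v))
```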
The two final state binaries in this outcome have characteristic semi-major axes \(a_{1}\) and \(a_{2}\). We choose \(a_{1}>a_{2}\). If \(a_{1}\gg a_{2}\), then the 2+2 problem reduces to the standard Monaghan 2+1 formalism with extra degrees of freedom. More specifically, we can apply the loss cone formalism while treating the second binary as a point particle (although we must account for its reservoir of energy and angular momentum).
Operating first in this limit, we simplify further by working also in the low angular momentum limit. In this case, the
density of escape configurations per unit energy is
\[\sigma=\int...\int\delta\left(E_{\rm s}+E_{\rm B1}+E_{\rm B2}-E_{0}\right){\rm d}\vec{r}_{\rm s}\,{\rm d}\vec{p}_{\rm s}\,{\rm d}\vec{r}_{\rm ab}\,{\rm d}\vec{p}_{\rm ab}\,{\rm d}\vec{r}_{\rm cd}\,{\rm d}\vec{p}_{\rm cd}. \tag{3}\]
We eliminate three variables of integration with the following simplification:
\[\int\int\int\delta\left(\frac{p_{\rm s}^{2}}{2m}-\frac{Gm_{\rm B1 }m_{\rm B2}}{r_{\rm s}}+E_{\rm B1}+E_{\rm B2}-E_{0}\right){\rm d}\vec{p}_{\rm s}\] \[=4\pi f_{\rm LC}\int_{0}^{\infty}\delta\left(\frac{p_{\rm s}^{2}} {2m}-\frac{Gm_{\rm B1}m_{\rm B2}}{r_{\rm s}}+E_{\rm B1}+E_{\rm B2}-E_{0} \right)p_{\rm s}^{2}{\rm d}p_{\rm s}\] \[=4\pi f_{\rm LC}m\sqrt{2m\left(E_{0}+\frac{Gm_{\rm B1}m_{\rm B2}} {r_{\rm s}}-E_{\rm B1}-E_{\rm B2}\right)}. \tag{4}\]
Under the assumption that \(a_{1}\gg a_{2}\), we use the areal loss cone factor
\[f_{\rm LC}=\frac{\alpha^{2}a_{1}^{2}}{4r_{\rm s}^{2}}. \tag{5}\]
Here \(\alpha\) is a dimensionless fudge factor of order unity that ultimately must be calibrated from numerical scattering experiments. In the classic 2+1 problem, \(\alpha\approx 7\).
We next evaluate
\[\int\int\int\frac{2^{1/2}\pi\alpha^{2}a_{1}^{2}m^{3/2}}{r_{\rm s}^{2}}\sqrt{E_{0}+\frac{Gm_{\rm B1}m_{\rm B2}}{r_{\rm s}}-E_{\rm B1}-E_{\rm B2}}\,{\rm d}\vec{r}_{\rm s}\] \[=2^{5/2}\pi^{2}\alpha^{2}m^{3/2}a_{1}^{2}\int_{0}^{R}\sqrt{E_{0}+\frac{Gm_{\rm B1}m_{\rm B2}}{r_{\rm s}}-E_{\rm B1}-E_{\rm B2}}\,{\rm d}r_{\rm s}\] \[\approx 2^{7/2}\pi^{2}\alpha^{2}m^{3/2}a_{1}^{2}\sqrt{Gm_{\rm B1}m_{\rm B2}R}. \tag{6}\]
In the last approximate equality, we have taken \(R\lesssim 3a_{1}\). The density of states for this outcome is now
\[\sigma\approx 2^{3/2}\pi^{2}\alpha^{2}m^{2}(GMR)^{1/2}(Gm_{\rm a}m_{\rm b})^{2}\] \[\times\int...\int\frac{{\rm d}\vec{r}_{\rm ab}\,{\rm d}\vec{p}_{\rm ab}\,{\rm d}\vec{r}_{\rm cd}\,{\rm d}\vec{p}_{\rm cd}}{|E_{\rm B1}|^{2}}. \tag{7}\]
Further simplification (making use of isotropy) gives
\[\sigma\approx 4\alpha^{2}\pi^{5}(Gm_{a}m_{b})^{7/2}R^{1/2}m_{B}^{3/2}M^{-3/2}m_{e}^{2}\] \[\times\int\int\frac{{\rm d}E_{\rm B1}\,{\rm d}E_{\rm B2}}{|E_{\rm B1}|^{7/2}|E_{\rm B2}|^{3/2}}L_{1}L_{2}\,{\rm d}L_{1}\,{\rm d}L_{2} \tag{8}\] \[\sigma\approx 2\alpha^{2}\pi^{5}(Gm_{a}m_{b})^{11/2}R^{1/2}m_{B}^{3/2}M^{-3/2}m_{e}^{2}{\cal M}\] \[\times\int\int\frac{{\rm d}E_{\rm B1}\,{\rm d}E_{\rm B2}}{|E_{\rm B1}|^{9/2}|E_{\rm B2}|^{5/2}}e_{1}e_{2}\,{\rm d}e_{1}\,{\rm d}e_{2}. \tag{9}\]
It is important to mention that in this derivation we have written \({\rm m}_{e}\) for the mass of the escaping object (analogous to the three-body case, in which the escaper is a single star); in the present case the escaper is a binary, so \({\rm m}_{\rm B1}={\rm m}_{\rm B2}={\rm m}_{e}\). For brevity we define \({\rm A}:=2\alpha^{2}\pi^{5}(Gm_{a}m_{b})^{11/2}R^{1/2}m_{B}^{3/2}M^{-3/2}m_{e}^{2}{\cal M}\), but it is important to remember that the values of \({\rm m}_{e}\) and \({\cal M}\) differ between the three outcomes.
Eq. 9 encodes the final probability distribution of binary energies (Eq. 8 does not because its limits of integration depend on \(E_{\rm B1}\) and \(E_{\rm B2}\)). Specifically, these probability distributions are
\[P(|E_{\rm B1}|)\,{\rm d}|E_{\rm B1}|=A\frac{7}{2}|E_{0}|^{7/2}|E_{\rm B1}|^{-9/2}{\rm d}|E_{\rm B1}| \tag{10}\] \[P(|E_{\rm B2}|)\,{\rm d}|E_{\rm B2}|=A\frac{3}{2}|E_{0}|^{3/2}|E_{\rm B2}|^{-5/2}{\rm d}|E_{\rm B2}|. \tag{11}\]
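Because Eqs. 10 and 11 are pure power laws on \(|E_{\rm B}|\geq|E_{0}|\), they are straightforward to sample by inversion. The following is a minimal sketch (not from the paper): it treats each distribution as normalized over \(|E_{\rm B}|\geq|E_{0}|\) (i.e. the prefactor A is absorbed into the normalization) and uses arbitrary energy units.

```python
import numpy as np

def sample_power_law_energy(e0, n, size, rng):
    """Draw |E_B| from P(|E_B|) = (n-1) |E_0|^(n-1) |E_B|^(-n) on [|E_0|, inf),
    using the inverse CDF: F(x) = 1 - (|E_0|/x)^(n-1)."""
    u = rng.random(size)
    return e0 * (1.0 - u) ** (-1.0 / (n - 1.0))

rng = np.random.default_rng(0)
e0 = 1.0                                                           # |E_0| in arbitrary units
eb1 = sample_power_law_energy(e0, n=9 / 2, size=100_000, rng=rng)  # Eq. 10 (index 9/2)
eb2 = sample_power_law_energy(e0, n=5 / 2, size=100_000, rng=rng)  # Eq. 11 (index 5/2)
print(np.median(eb1), np.median(eb2))  # ~ e0 * 2^(2/7) and e0 * 2^(2/3)
```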
However, in this simple derivation we have followed Valtonen & Karttunen (2006) and neglected angular momentum conservation, which renders these results inaccurate. We now consider the power-law ansatz of Valtonen & Karttunen (2006),
\[P(|E_{\rm B1}|)\,{\rm d}|E_{\rm B1}|= 2\alpha^{2}\pi^{5}(Gm_{a}m_{b})^{11/2}R^{1/2}m_{B}^{3/2}M^{-3/2}m_{e}^{2}{\cal M}\] \[\times(n-1)|E_{0}|^{n-1}|E_{\rm B1}|^{-n}{\rm d}|E_{\rm B1}| \tag{12}\] \[P(|E_{\rm B2}|)\,{\rm d}|E_{\rm B2}|= 2\alpha^{2}\pi^{5}(Gm_{a}m_{b})^{11/2}R^{1/2}m_{B}^{3/2}M^{-3/2}m_{e}^{2}{\cal M}\] \[\times(\nu-1)|E_{0}|^{\nu-1}|E_{\rm B2}|^{-\nu}{\rm d}|E_{\rm B2}|. \tag{13}\]
In the zero angular momentum limit of the chaotic three-body problem, \(n=3\), leading us to speculate that here \(n=3\) and \(\nu=1\). This, however, would imply a UV divergence in \(P(|E_{\rm B2}|)\) as \(E_{\rm B2}\rightarrow\infty\). We fix this by truncating at a maximum energy \(E_{\rm max}\) motivated by physics beyond the Newtonian point particle limit of this paper (e.g. physical collisions, tidal interactions, relativistically unstable orbits, etc.). Thus,
\[P(|E_{\rm B2}|)\,{\rm d}|E_{\rm B2}|= 2\alpha^{2}\pi^{5}(Gm_{a}m_{b})^{11/2}R^{1/2}m_{B}^{3/2}M^{-3/2}m_{e}^{2}{\cal M} \tag{14}\] \[\times|E_{\rm B2}|^{-1}\ln^{-1}\left(\frac{E_{\rm max}}{E_{0}/2}\right){\rm d}|E_{\rm B2}|.\]
These approximations will begin to break down if \(a_{1}\approx a_{2}\).
## 4 The 3+1 Outcome
In Paper I, we demonstrated that, to a good approximation, the distribution of escaper velocities in the 3+1 outcome can be computed with a straightforward application of the 2+1 Monaghan formalism, considering only the binding energy of the inner binary \(E_{\rm B}\). This approach works because the total binding energy, \(E_{1}\), of a hierarchically stable triple (at least in the equal mass case considered in Paper I) is dominated by that of the inner binary; in other words, \(|E_{1}|-|E_{\rm B}|\equiv|E_{\rm T}|\ll|E_{1}|\). Unfortunately, this approach says little about the properties of the outer orbit of the resulting triple, and this adaptation of the Monaghan formalism is only applicable to the inner binary and the escaping single star.

Figure 1: The configuration of the 2+2 outcome.

A cartoon sketch of the 3+1 outcome is shown in Fig. 2. Masses \(m_{\rm a}\) and \(m_{\rm b}\) form the inner binary component of the stable triple, while \(m_{\rm c}\) is the outer tertiary component. The mass \(m_{\rm d}\) is escaping from the system on an unbound trajectory. We define additional masses \(m_{\rm B}=m_{\rm a}+m_{\rm b}\), \(m_{\rm T}=m_{\rm B}+m_{\rm c}\), \({\cal M}_{\rm B}=m_{\rm a}m_{\rm b}/m_{\rm B}\), and \({\cal M}_{\rm T}=m_{\rm B}m_{\rm c}/m_{\rm T}\).
### Strongly Hierarchical Triples
When masses are comparable, the stable triple produced in the 3+1 outcome can be thought of in the following way: the inner binary contains the bulk of the energy \(E_{1}\), while the outer binary contains the bulk of the angular momentum \(\vec{L}_{1}\). In Paper I, we showed that the standard 2+1 Monaghan formalism applies reasonably well to the binding energy distributions, so, applying the factor A (with the appropriate values for this outcome), we find
\[P(|E_{\rm B}|){\rm d}|E_{\rm B}|=A(n-1)|E_{0}|^{n-1}|E_{\rm B}|^{-n}{\rm d}|E_ {\rm B}|, \tag{15}\]
where \(n=9/2\) if angular momentum conservation is neglected and \(n=3\) in zero-angular momentum ensembles. We can determine the distribution of outer triple binding energies at a similar level of approximation by assuming a thermal distribution of outer eccentricities \(e_{\rm T}\), i.e. \({\rm d}N/{\rm d}e_{\rm T}=2e_{\rm T}\). Then, under the assumption that the outer orbit angular momentum \(L_{\rm T}\gg L_{\rm B}\), we use the relation \(e_{\rm T}^{2}=1-L_{\rm T}^{2}/({\cal M}_{\rm T}^{2}Gm_{\rm T}a_{\rm T})\) to compute
\[\frac{{\rm d}N}{{\rm d}a_{\rm T}}=\frac{{\rm d}N}{{\rm d}e_{\rm T}}\frac{{\rm d }e_{\rm T}}{{\rm d}a_{\rm T}}=\frac{L_{\rm T}^{2}}{{\cal M}_{\rm T}^{2}Gm_{\rm T }a_{\rm T}^{2}}, \tag{16}\]
or, equivalently,
\[\frac{{\rm d}N}{{\rm d}E_{\rm T}}=\frac{2L_{\rm T}^{2}}{{\cal M}_{\rm T}^{2}G^{2}m_{\rm T}m_{\rm B}m_{\rm c}}. \tag{17}\]
For fixed \(L_{\rm T}\), Eq. 17 specifies the distribution of \(|E_{\rm T}|\), which varies from 0 to a maximum value \(\lesssim|E_{1}|/2\).
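The flatness implied by Eq. 17 can be verified with a short Monte Carlo experiment: drawing \(e_{\rm T}\) from a thermal distribution at fixed \(L_{\rm T}\) and converting to \(|E_{\rm T}|\) yields a uniform histogram. The sketch below is purely illustrative, with \(G=m_{\rm B}=m_{\rm c}=1\) and the value of \(L_{\rm T}\) chosen arbitrarily (these are assumptions for the example, not values from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)
G, m_b, m_c = 1.0, 1.0, 1.0          # illustrative units (not values from the paper)
m_t = m_b + m_c
M_t = m_b * m_c / m_t                # reduced mass of the outer orbit
L_t = 0.3                            # fixed outer angular momentum (arbitrary)

e_t = np.sqrt(rng.random(200_000))   # thermal distribution: dN/de_T = 2 e_T
a_t = L_t**2 / (M_t**2 * G * m_t * (1.0 - e_t**2))  # from e_T^2 = 1 - L_T^2/(M_T^2 G m_T a_T)
E_t = G * m_b * m_c / (2.0 * a_t)    # |E_T| = G m_B m_c / (2 a_T)

hist, _ = np.histogram(E_t, bins=20, density=True)
print(hist)                          # approximately constant, as Eq. 17 predicts
```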
We can proceed further under the assumption of small angular momentum in the inner binary; in this limit, the angular distribution of \(\tilde{L}_{\rm B}\) will be approximately isotropic with respect to \(\tilde{L}_{\rm T}\). If we define a reference axis \(\tilde{z}\parallel\tilde{L}_{\rm B}\), then the distribution of misalignment angles
\[\frac{{\rm d}N}{{\rm d}\sin\theta}=\frac{1}{2}, \tag{18}\]
where \(\cos\theta\equiv\tilde{L}_{1}\cdot\tilde{L}_{\rm B}=L_{1}^{z}/L_{1}\). In the remainder of this section, we denote the \(z\) component of a vector with a superscript \(z\), and the components orthogonal to \(\tilde{z}\) with a superscript \(\perp\).
We now complete this perturbative calculation: having assumed a distribution of \(\theta\) which is isotropic, we wish to know the distribution of a different misalignment angle, \(\cos\psi\equiv\tilde{L}_{\rm B}\cdot\tilde{L}_{\rm T}=L_{\rm T}^{z}/L_{\rm T}\). In general, \(\psi\approx\theta\), but we aim here to quantify the leading order deviation from isotropy in \(\psi\). Since \(\tilde{L}_{1}=\tilde{L}_{\rm B}+\tilde{L}_{\rm T}\), we can write \(L_{1}^{z}=L_{\rm B}+L_{\rm T}^{z}\) and \(L_{1}^{\perp}=L_{\rm T}^{\perp}\). This yields
\[\cos\psi=\frac{L_{1}\cos\theta-L_{\rm B}}{(L_{1}^{2}+L_{\rm B}^{2}-2L_{1}L_{ \rm B}\cos\theta)^{1/2}}, \tag{19}\]
and the quadratic formula then provides
\[\cos\theta=\frac{L_{\rm B}}{L_{1}}\sin^{2}\psi\pm\cos\psi\sqrt{1-\frac{L_{\rm B}^{2}}{L_{1}^{2}}\sin^{2}\psi}. \tag{20}\]
The final distribution of interest is \({\rm d}N/{\rm d}\psi=({\rm d}N/{\rm d}\theta)({\rm d}\theta/{\rm d}\psi)\), which evaluates to
\[\frac{{\rm d}N}{{\rm d}\psi}=\frac{1}{2}\cos\theta\left(\frac{{\rm d}\psi}{{ \rm d}\theta}\right)^{-1}, \tag{21}\]
where
\[\frac{{\rm d}\psi}{{\rm d}\theta}=L_{1}^{2}\sin\theta\csc\psi\frac{L_{1}-L_{ \rm B}\cos\theta}{(L_{1}^{2}+L_{\rm B}^{2}-2L_{1}L_{\rm B}\cos\theta)^{3/2}}, \tag{22}\]
and both \(\cos\theta\) and \(\sin\theta\) can be computed from Eq. 20.
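As an illustration of the procedure, the sketch below evaluates Eq. 21 numerically on a grid of \(\psi\) for a chosen ratio \(L_{\rm B}/L_{1}\); taking the positive root in Eq. 20 and restricting to \(0<\psi<\pi/2\) are simplifying assumptions made for this example, not statements from the derivation.

```python
import numpy as np

def dN_dpsi(psi, L1, LB):
    """Evaluate Eq. 21 using Eqs. 20 and 22 (positive root of Eq. 20)."""
    s2 = np.sin(psi) ** 2
    cos_t = (LB / L1) * s2 + np.cos(psi) * np.sqrt(1.0 - (LB / L1) ** 2 * s2)  # Eq. 20
    sin_t = np.sqrt(np.clip(1.0 - cos_t**2, 0.0, None))
    D = (L1**2 + LB**2 - 2.0 * L1 * LB * cos_t) ** 1.5
    dpsi_dtheta = L1**2 * sin_t / np.sin(psi) * (L1 - LB * cos_t) / D          # Eq. 22
    return 0.5 * cos_t / dpsi_dtheta                                           # Eq. 21

psi = np.linspace(0.01, np.pi / 2 - 0.01, 200)
print(dN_dpsi(psi, L1=1.0, LB=0.1)[:5])   # leading-order deviation from isotropy
```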
## 5 The 2+1+1 Outcome
The 2+1+1 case is in some ways the most distinct from the classic 2+1 problem. It differs not only in its additional degrees of freedom, but more fundamentally in its complicated causality. The 2+2, 3+1, and 2+1 scenarios terminate with a single impulsive escape, but the 2+1+1 does not, and two different escape events must be considered. In Fig. 3, we show a cartoon of the 2+1+1 outcome. Here masses \(m_{\rm a}\) and \(m_{\rm b}\) form the surviving binary, while masses \(m_{\rm c}\) and \(m_{\rm d}\) are escaping on unbound orbits. The order of escape matters: the distribution of parameters for the first escaper (\(m_{\rm c}\)) will differ from the parameter distribution for the second escaper (\(m_{\rm d}\)).
Figure 2: The configuration of the 3+1 outcome.

We begin by making an approximation of _sequential escape_: we assume that in general, a metastable triple is formed after the escape of particle C, and that particle D is only ejected after the gravitational influence of C becomes negligible. This approximation lets us apply the standard 2+1 Monaghan formalism in an iterated way. Based on the numerical scattering experiments of Paper I, we believe it to be well justified for low virial ratios (\(k\ll 1\)) but a poor approximation for high virial ratios (\(k\approx 1\)), when both ejected stars are ejected almost simultaneously. We first estimate the distribution of binding energies \(E_{\rm T}\) of the metastable triple:
\[P(|E_{\rm T}|){\rm d}|E_{\rm T}|=\frac{7}{2}|E_{0}|^{7/2}|E_{\rm T}|^{-9/2}{\rm d }|E_{\rm T}|. \tag{23}\]
As before, \(E_{0}\) is the conserved total energy of the four-body encounter. For a given value of \(E_{\rm T}\), we can take the standard 2+1 distribution of binding energies for \(E_{\rm B}\) (the binding energy of the final surviving binary); to obtain the overall distribution of \(E_{\rm B}\), we must then marginalize over \(E_{\rm T}\) by integrating against Eq. 23:
\[P(|E_{\rm B}|){\rm d}|E_{\rm B}|= \int_{|E_{0}|}^{|E_{\rm B}|}\frac{7}{2}|E_{\rm T}|^{7/2}|E_{\rm B }|^{-9/2}P(|E_{\rm T}|){\rm d}|E_{\rm T}|{\rm d}|E_{\rm B}|\] \[= \frac{49}{4}|E_{0}|^{7/2}|E_{\rm B}|^{-9/2}\ln(E_{\rm B}/E_{0}). \tag{24}\]
More generally, if we substitute a power law index \(n\) for the triple binding energy distribution (9/2 above) we find
\[P(|E_{\rm B}|){\rm d}|E_{\rm B}|=A(n-1)^{2}|E_{0}|^{n-1}|E_{\rm B}|^{-n}\ln(E_{\rm B}/E_{0})\,{\rm d}|E_{\rm B}|. \tag{25}\]
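The nested integral leading to Eq. 24 is easy to verify numerically; the sketch below does so in arbitrary units, with \(|E_{0}|=1\) and a trial value of \(|E_{\rm B}|\) chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import quad

e0 = 1.0   # |E_0| (arbitrary units)
eb = 3.0   # a trial value of |E_B|, with |E_B| > |E_0|

# Integrand of Eq. 24: the 2+1 distribution of E_B at fixed E_T, weighted by
# the distribution of metastable-triple binding energies from Eq. 23.
integrand = lambda et: (3.5 * et**3.5 * eb**-4.5) * (3.5 * e0**3.5 * et**-4.5)

numeric, _ = quad(integrand, e0, eb)
closed_form = (49.0 / 4.0) * e0**3.5 * eb**-4.5 * np.log(eb / e0)  # Eq. 24
print(numeric, closed_form)  # the two values agree
```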
Likewise, we can apply the remaining results of the standard 2+1 formalism to each escape in turn.
## 6 Branching ratios
The "branching ratio" defines the probability of obtaining a given outcome, for a given total encounter energy and angular momentum. Hence, for the chaotic four-body problem in the point-particle limit with total energy E \(<0\), there are three branching ratios to consider. In general, the relative fractions for these different outcomes must be determined using numerical scattering simulations. However, as we are about to show, these branching ratios can also be computed analytically, if all particles are identical.
Consider performing \(N_{0}\) simulations of a chaotic four-body interaction involving identical point-particles, with nearly identical initial conditions. Given \(N_{0}\) simulations, we must obtain \(N_{0}\) outcomes. Then, the total number of simulations performed can be written:
\[N_{0}=N_{2+1+1}+N_{3+1}+N_{2+2}, \tag{26}\]
where \(N_{2+1+1}\), \(N_{3+1}\) and \(N_{2+2}\) correspond to the number of simulations resulting in, respectively, the 2+1+1, 3+1 and 2+2 outcomes. Now, by conservation of energy, we must also find that the total amount of energy put in across all simulations is equal to the total energy we get back out. In other words:
\[N_{0}E_{0}= N_{3+1}\int E\frac{dN_{3+1}}{dE}dE+N_{2+1+1}\int E\frac{dN_{2+1+1} }{dE}dE\] \[+N_{2+2}\int E\frac{dN_{2+2}}{dE}dE \tag{27}\]
Similarly, the total angular momentum must be conserved, ensuring that the following must be true:
\[N_{0}L_{0}= N_{3+1}\int L\frac{dN_{3+1}}{dL}dL+N_{2+1+1}\int L\frac{dN_{2+1+1} }{dL}dL \tag{28}\] \[+N_{2+2}\int L\frac{dN_{2+2}}{dL}dL\]
Each term on the right-hand sides of Equations 27 and 28 must be broken up into the individual contributions from each decay product (i.e., single, binary and/or triple star(s)). The limits of the resulting integrals must then be chosen appropriately. For example, for the 2+1+1 outcome, we have:
\[\int E\frac{dN_{2+1+1}}{dE}dE= \int E_{\rm S,1}\frac{dN_{2+1+1}}{dE_{\rm S,1}}dE_{\rm S,1}\] \[+\int E_{\rm S,2}\frac{dN_{2+1+1}}{dE_{\rm S,2}}dE_{\rm S,2} \tag{29}\] \[+\int E_{\rm B}\frac{dN_{2+1+1}}{dE_{\rm B}}dE_{\rm B}\]
\[\int E\frac{dN_{2+1+1}}{dE}dE= \frac{1}{2}m_{\rm S,1}\int_{0}^{\infty}f(E_{\rm S,1})v_{\rm S,1}^ {2}dE_{\rm S,1}\] \[+\frac{1}{2}m_{\rm S,2}\int_{0}^{\infty}f(E_{\rm S,2})v_{\rm S,2}^ {2}dE_{\rm S,2} \tag{30}\] \[+\int_{-\infty}^{0}f(E_{\rm B})E_{\rm B}dE_{\rm B},\]
where the indices 1 and 2 correspond to, respectively, the first and second ejected single stars, and all distributions correspond to those presented for the 2+1+1 outcome. Note as well that \(f(E_{\rm S,i})=f(v_{\rm S,i})/(m_{\rm S,i}v_{\rm S,i})\).
If we divide both sides of all three equations by \(N_{0}\), then Equations 26, 27 and 28 constitute three equations, each with the same three unknowns. Hence, this system of equations is solvable. The factor in front of each term corresponds to the branching ratio for that outcome.
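Once the moment integrals on the right-hand sides of Equations 27 and 28 have been evaluated (analytically or numerically), the branching ratios follow from a 3x3 linear solve. The sketch below illustrates this step; the numerical values of the moments are placeholders, not results from this paper.

```python
import numpy as np

# Unknowns: f = (N_{2+1+1}, N_{3+1}, N_{2+2}) / N_0, the branching ratios.
# Row 1: Eq. 26 (the fractions sum to one).
# Row 2: Eq. 27 (energy conservation), with <E> the mean total energy per outcome.
# Row 3: Eq. 28 (angular momentum conservation), with <L> the mean per outcome.
E0, L0 = -1.0, 0.40                       # total energy and angular momentum (placeholders)
E_mean = np.array([-0.6, -1.4, -1.1])     # <E> for 2+1+1, 3+1, 2+2 (placeholders)
L_mean = np.array([0.55, 0.20, 0.50])     # <L> for 2+1+1, 3+1, 2+2 (placeholders)

A = np.vstack([np.ones(3), E_mean, L_mean])
b = np.array([1.0, E0, L0])
branching = np.linalg.solve(A, b)
print(branching)                          # fractions of the 2+1+1, 3+1 and 2+2 outcomes
```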
We caution that if the particles are not identical, then the formalism presented here for calculating branching ratios for the different outcomes is no longer valid, strictly speaking. We defer this issue to a future paper, along with a more thorough comparison between the predicted branching ratios and the results of numerical scattering simulations.
## 7 Methods
In this section, we describe and present the numerical scattering simulations used to test directly the analytic distribution functions derived in the preceding sections.
Figure 3: The configuration of the 2+1+1 outcome.
### Numerical scattering simulations
The numerical scattering simulations used throughout this paper are the same as presented in Leigh et al. (2016). For completeness, we repeat our description of the code and initial set-up here.
We calculate the outcomes of a series of binary-binary (2+2) encounters using the FEWBODY numerical scattering code1. The code integrates the usual \(N\)-body equations in configuration- (i.e. position-) space in order to advance the system forward in time, using the eighth-order Runge-Kutta Prince-Dormand integration method with ninth-order error estimate and adaptive time-step. For more details about the FEWBODY code, we refer the reader to Fregeau et al. (2004).
Footnote 1: For the source code, see [http://fewbody.sourceforge.net](http://fewbody.sourceforge.net).
The outcomes of these 2+2 encounters are studied for the initial virial ratio \(k=0\), where \(k\) is defined as:
\[k=\frac{T_{1}+T_{2}}{E_{\rm b,1}+E_{\rm b,2}}, \tag{31}\]
where the indices 1 and 2 correspond to the two initial binaries. The initial kinetic energy corresponding to the centre of mass motion of binary \(i\) is:
\[T_{\rm i}=\frac{1}{2}m_{\rm i}v_{\rm inf,i}^{2}, \tag{32}\]
where \(m_{\rm i}=m_{\rm i,a}+m_{\rm i,b}\) is the total binary mass and \(v_{\rm inf,i}\) is the initial centre of mass velocity for binary \(i\). The initial orbital energy of binary \(i\) is:
\[E_{\rm b,i}=-\frac{Gm_{\rm i,a}m_{\rm i,b}}{2a_{\rm i}}, \tag{33}\]
where \(m_{\rm i,a}\) and \(m_{\rm i,b}\) are the masses of the binary components and \(a_{\rm i}\) is the initial orbital separation. Given this definition for the virial ratio, \(k=0\) corresponds to the binaries starting from rest, and maximizes the fraction of longer-lived chaotic interactions (which is a necessary prerequisite to apply the Monaghan formalism).
All objects are point particles with masses of 1 M\({}_{\odot}\). All binaries have \(a_{\rm i}=1\) AU initially, and eccentricities \(e_{\rm i}=0\). We fix the impact parameter at \(b=0\) for all simulations. The angles defining the initial relative configurations of the binary orbital planes and phases are chosen at random.
We use the same criteria as Fregeau et al. (2004) to decide when a given encounter is complete. To first order, this is defined as the point at which the separately bound hierarchies that make up the system are no longer interacting with each other or evolving internally. More specifically, the integration is terminated when the top-level hierarchies have positive relative velocity and the corresponding top-level \(N\)-body system has positive total energy. Each hierarchy must also be dynamically stable and experience a tidal perturbation from other nodes within the same hierarchy that is less than the critical value adopted by FEWBODY, called the tidal tolerance parameter. For this study, we adopt the tidal tolerance parameter \(\delta=10^{-7}\) for all simulations.2 This choice for \(\delta\), while computationally expensive, is needed to maximize the accuracy of our simulations, and ensure that we have converged on the correct encounter outcome (see Geller & Leigh 2015 for more details).
Footnote 2: The more stringent the tidal tolerance parameter is chosen to be, the closer to a “pure” \(N\)-body code the simulation becomes.
Because of the isotropy of our initial conditions, the typical four-body encounter we simulate has \(L_{0}>0\). If one considers a binary-binary scattering event where the two initial binaries have isotropically oriented angular momentum vectors of magnitude \(L_{1}\) and \(L_{2}\) (\(L_{1}\geq L_{2}\) by assumption), then the total angular momentum \(\vec{L}_{0}=\vec{L}_{1}+\vec{L}_{2}\) spans a range of magnitudes from \(L_{1}-L_{2}\) to \(L_{1}+L_{2}\) with a distribution
\[\frac{\mathrm{d}N}{\mathrm{d}L_{0}}=\frac{L_{0}}{2L_{1}L_{2}}. \tag{34}\]
The first moment of this distribution is
\[\langle L_{0}\rangle=L_{1}+\frac{L_{2}}{3L_{1}}L_{2}. \tag{35}\]
If we specialize now to our initial conditions (equal masses \(m\), equal initial semi-major axes, \(L\equiv L_{1}=L_{2}\)), we find that
\[\frac{\langle L_{0}\rangle}{L_{\rm max}}=\frac{4\sqrt{2}}{15}, \tag{36}\]
where we have followed Valtonen & Karttunen (2006) in defining the maximum system angular momentum \(L_{\rm max}\equiv\frac{5}{2}G\sqrt{m^{5}/|E_{0}|}\). Their numerical fitting formula for the classic 2+1 problem predicts \(n=3+18\tilde{L}^{2}\) for ensembles of resonant three-body encounters with angular momentum \(\tilde{L}=L/L_{\rm max}\). This gives us a naive expectation of \(n\approx 5.6\) for our numerical simulations.
## 8 Results
In this section, we compare the results of our numerical scattering simulations to the fitting formulae presented in the previous sections.
### Comparing to the simulations
In Figure 4, we present the final outcome distributions after the interaction between our two initial binaries. We separate our results for different semi-major axis ratios (a\({}_{1}\)/a\({}_{2}\), indicated on the x-axis) and show the fraction of simulations resulting in each of our three possible outcomes (i.e., 3+1, 2+1+1 and 2+2). For each combination of a\({}_{1}\)/a\({}_{2}\) we perform 10,000 scattering simulations. The results for the 2+2, 3+1 and 2+1+1 cases are shown in red, black and blue colour bars, respectively. Note that this colour relationship will be used for all figures in this paper. We note that these branching ratios should be computable explicitly using the equations in Section 6, but we defer this to a future paper, since we first need to re-derive the analytic functions in this paper from first principles as in Stone & Leigh (2019) or Ginat & Perets (2021) to obtain the needed angular momentum dependences.
We see that for the case in which our initial semi-major axes are relatively similar (\(1\leq{\rm a}_{1}/{\rm a}_{2}\leq 4\)) the largest outcome fraction corresponds to the 2+1+1 scenario. This is in agreement with what was shown by Mikkola (1983) and Leigh et al. (2016) for identical initial conditions. The trend changes as the ratio of the semi-major axis increases. For a\({}_{1}\)/a\({}_{2}>8\) the scenario that occurs the most corresponds to the 3+1 case, that is a particle of the system is transformed into an escaper leaving behind a dynamically stable triple system. Finally, the formation of the 2+2 case is always the least probable, tending to decrease as the semi-major axis
ratio increases. This is because for this to occur, both binaries must form simultaneously, which effectively requires that two stars be ejected in similar directions, with similar ejection times and escape velocities, rendering this outcome improbable. Alternatively, this can be viewed as the wider binary struggling more and more to eject the more compact binary (in analogy to the three-body case) as the ratio of semi-major axes increases at fixed particle mass.
In Figure 5, we present normalized histograms of the final distributions of binary orbital energies, parameterized using the total encounter energy E\({}_{0}\) as z = E\({}_{0}\)/E\({}_{B}\). This is the result of our binary-binary scattering experiments, for different values of the semi-major axis ratio (a\({}_{1}\)/a\({}_{2}\), as in Figure 4). For the 2+2 case, we show the orbital energies of both final binaries, divided according to their final orbital energies using the solid and dashed lines for the compact and wide binary, respectively. For the 3+1 case, we show only the orbital energy of the inner binary of the resulting triple.
For the energy range \(0\leq|E_{0}|/|E_{B}|\leq 1\), the analytical distributions reproduce very well what was obtained through the simulations. Especially for the 2+1+1 case, the curve is wholly reproduced, which shows that our assumption that this outcome can be modeled by applying the three-body disintegration scheme twice, with each escape well separated in time, works quite well. The same happens for the 3+1 case, under the assumption that all the interaction energy is held in the inner binary of the triple system, while all the angular momentum is kept in the outer orbit of the same system. In the 2+2 case, we see that the distribution fits well for the compact binary provided \(|E_{0}|/|E_{B}|\leq 1\), while for the wider binary the distribution fits poorly. This is likely because our assumptions here begin to break down such that very few of these interactions are truly chaotic, and hence very few of our simulations should actually produce results that agree with theory. This is supported by the fact that the good agreement between our results and theory begins to decrease as the semi-major axis ratio increases, particularly for a\({}_{1}\)/a\({}_{2}\geq 8\). This last point for the 2+2 configuration, and the distributions for which E\({}_{0}\)/E\({}_{B}>1\), will be taken up again in Section 9.
The binding energy distribution can be used to derive the escape velocity distribution \(f(v_{\rm e})dv_{\rm e}\) for the escaping star(s), as given by Equation 37 with \(m_{\rm e}=m_{\rm b}/3=\) M/4 for the 3+1 case for all identical particles (Leigh et al., 2016). This gives the following functional form (Equations 7.19 and 7.26 in Valtonen and Karttunen, 2006):
\[f(v_{\rm e})dv_{\rm e}=\frac{((n-1)|E_{0}|^{n-1}(m_{\rm e}M/m_{\rm b}))v_{\rm e }dv_{\rm e}}{(|E_{0}|+\frac{1}{2}(m_{\rm e}M/m_{\rm b})v_{\rm e}^{2})^{n}}. \tag{37}\]
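For reference, Eq. 37 can be evaluated and its normalization checked numerically, as in the sketch below; the equal-mass values \(m_{\rm e}=M/4\), \(m_{\rm b}=3M/4\) and the arbitrary units are illustrative assumptions, and \(n=4.5\) matches the value used in the comparisons that follow.

```python
import numpy as np

def escaper_velocity_pdf(v, e0, m_e, M, m_b, n=4.5):
    """Evaluate Eq. 37 for an escaper of mass m_e leaving a bound remnant of mass m_b."""
    mu = m_e * M / m_b
    return (n - 1.0) * e0 ** (n - 1.0) * mu * v / (e0 + 0.5 * mu * v**2) ** n

M, m_e, m_b = 4.0, 1.0, 3.0          # equal-mass 3+1 case in arbitrary units (m_e = M/4)
e0 = 1.0                             # |E_0| in arbitrary units
v = np.linspace(0.0, 10.0, 2000)
f = escaper_velocity_pdf(v, e0, m_e, M, m_b)
print(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(v)))  # close to 1: the PDF is normalized
```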
In Figure 6 we show, for different initial a\({}_{1}\)/a\({}_{2}\) ratios, the distribution of escape velocities for the single star for a 3+1 outcome (black lines), as well as for both single stars for a 2+1+1 outcome (blue lines), where the solid lines show the simulated data and the dotted lines show the analytic fits. Note that to calculate the distribution analytically using Equation 37, we use the values n = 4.5 to account for the angular momentum dependence and m\({}_{e}\) = m\({}_{b}\)/3 = M/4 for the mass of the escaping particle(s).
Figure 7 shows the distributions of escape velocities for binaries for the 2+2 outcome (red lines). The solid lines show the results of our numerical scattering simulations, whereas the dotted lines show our analytic fits. Note that by conservation of linear momentum, the escape velocity distributions are equivalent for both binaries, since we are dealing with the equal particle mass case.
We see that the analytic distribution of the escape velocities shows clear agreement with what was seen in the simulations. On the other hand, there is a tendency for poorer agreement between the theory and the simulations as the semi-major axis ratio increases (as in Figure 5). The velocity ranges as well as the corresponding outcome fractions for the 3+1 and 2+1+1 cases are very similar to those found in Leigh et al. (2016), but in our case for \(n\) = 4.5. The same is true for the 2+2 case, where our distribution behaves as expected for \(n=4.5\).
In Figure 8, we show the final distributions of orbital eccentricities for every encounter outcome, including the inner orbits of stable triples. The solid black line shows a thermal eccentricity distribution f(e)de = 2e, which matches the simulated data quite well for all orbits and all encounter outcomes.
## 9 Discussion
In this section, we discuss possible caveats of our work.
First, we note that the agreement between our simulations and the analytic fits is consistently good for small initial semi-major axis ratios but becomes poorer as this ratio increases. This is because, at least in part, a smaller fraction of the interactions become chaotic, which is a prerequisite to performing this comparison. Hence, we are left with fewer simulations that should be in agreement with theory and must omit those simulations that ended deterministically. Most of the interactions that cause this turn out to be simple exchanges. In other words, in the limit of a large semi-major axis ratio, the probability that the compact binary will simply be exchanged into the wide binary becomes high, liberating one single star in the process and forming a dynamically stable hierarchical triple system. Hence, fitting a three-body interaction, treating the more compact binary as a heavy single particle, is a more appropriate model in this limit.
Figure 4: The fraction of different outcomes for our binary-binary scattering simulations as a function of the ratio of the initial semi-major axes of the binaries, or a\({}_{1}\)/a\({}_{2}\). We vary the ratio as a\({}_{1}\)/a\({}_{2}\) = [1, 2, 4, 8, 16, 32].
In Figure 5, we show the simulated distributions of left-over binary orbital energies for the most compact orbit in the final outcome and compare to our analytic fits. We see good agreement between the two for E\({}_{0}\)/E\({}_{\rm B}\)\(<\) 1; beyond this limit, significant angular momentum is contained in the final orbit, and our analytic fits do not account for any angular momentum dependence. For example, in the 3+1 case, the inner binary also contains angular momentum, but we assume that it does not. In this case, the angular momentum in the inner and outer orbits ultimately limits the minimum binding energy of the inner binary, because the outer orbit cannot contain enough angular momentum to accommodate a dynamically stable hierarchical triple. This also explains why the 2+2 distributions tend to zero, although at lower minimum binary energies, since wider binaries can be accommodated in this case, there being no analogous requirement of dynamical stability as in the triple case.
In the same figure, we can see that in the case of the wide binary in the 2+2 scenario, our approximation does not fit the simulated values. This occurs because the observed values are given in the domain E\({}_{0}\)/E\({}_{B}\)\(>\) 1, which means that in this binary system there is a large amount of angular momentum. Therefore, since our analytic formalism does not explicitly take into account the angular momentum dependence, it is perhaps not surprising that we do not see good agreement between the simulated data and theory in this domain.
In Section 7, we show that we expect the analytic distributions to match the simulated ones when n \(\approx\) 5.6. When performing our comparisons, we adopt a value of n = 4.5 which shows the best agreement. However, we note that our expected value of n = 5.6 also does a good job of describing the data.
This difference is probably due to the fact that we are not considering the total angular momentum dependence in our derivation.
Figure 5: The distributions of final binary binding energies are shown for each encounter outcome, parameterized using the total encounter energy E\({}_{0}\) as z = E\({}_{0}\)/E\({}_{\rm B}\). The colours are the same as in Figure 4. The solid lines represent the values of the simulations while the dotted lines show the values obtained analytically. The black vertical dashed line shows the ratio E\({}_{0}\)/E\({}_{B}\) = 1. Each panel shows the distributions for a different value of the initial semi-major axis ratio where a\({}_{1}\)/a\({}_{2}\) = [1, 2, 4, 8]. All histograms have been normalized by the total number of simulations that resulted in the corresponding outcome. Note that for the 2+2 case (red color) there is a solid line and another dashed line, which represent the simulated values for the compact and wide binary, respectively.
Figure 6: Comparison between simulations and analytic results for normalized distributions of escape velocities from the single star (in km/s) for the 3+1 (black color) and 2+1+1 (blue color) outcomes. The different insets show the same semi-major axis ratios, number of simulations, line types and colours as in Figure 5. The dotted black line shows the distribution of escape velocities calculated using Equation 37 for a 3+1 outcome and assuming n = 4.5, corresponding to approximately isotropic scattering. The dotted blue line shows the same thing but for the 2+1+1 case assuming n = 4.5. For both analytical curves we assume \(m_{\rm e}\) = \(m_{\rm b}\)/3 = M/4. Note that for the 2+1+1 case there are 2 solid blue lines, which correspond to the 2 escapers in this scenario.
Moreover, for our simulations we incorporated different initial semi-major axes in the four-body interaction, while for the initial derivation we assumed equal semi-major axes.
In Figure 8, we see that the eccentricity distribution tends to be quite similar to the thermal distribution. While this is theoretically expected for the three-body case, assuming a detailed balance between binary creation and destruction (Heggie, 1975), we are not aware of any equivalent expectation for the four-body case. Nevertheless, the reason we see this agreement is probably the same as argued by Heggie (1975) for the three-body case. This is because a thermal distribution is expected for ergodized outcomes in three-body interactions, and since in this paper we treat each decomposition of the four-body case as a variation of the three-body decomposition, this distribution makes sense (e.g., the 2+1+1 case is modeled as two sequential disruptions of three-body systems).
On the other hand, we see that the distribution of eccentricities corresponding to the external component of the triple system (green dashed line) shows a distribution that does not quite match the thermal distribution for values close to 1 (highest eccentricities). Here we see a paucity of triples with high outer eccentricities relative to a thermal distribution because stable triple systems cannot exist if the external component has very large eccentricity. Otherwise, the triple system will tend to break up, ending up in the 2+1+1 configuration. As expected, this tendency is repeated in each inset (a\({}_{1}\)/a\({}_{2}\) = 1, 2, 4, 8), but we see a tendency for the eccentricity distribution of the outer orbits of stable triples to flatten as the ratio of semi-major axes increases. This is likely because we begin with all binaries being initially circular, and when there is a large ratio between their semi-major axes a simple exchange interaction is the most likely outcome, and here the outer orbit is more often left unaffected, remaining approximately circular.
Figure 8: The distributions of final binary orbital eccentricities are shown for each encounter outcome. The solid blue line shows the distribution of eccentricities for the binary for the 2+1+1 case. The solid black line shows the distribution of eccentricities for the inner binary of the triple system for the 3+1 case while the dashed green line shows the same for the outer orbit of the triple system for the 3+1 case. The red solid line shows the eccentricity distribution for the compact binary in the 2+2 case, while the red dashed line shows the distribution of eccentricities for the wide binary in the same case. For comparison, we plot a black dashed line showing a thermal eccentricity distribution f(e)de = 2e. The different insets show the same semi-major axis ratios and number of simulations as in Figure 5.
Figure 7: The same as in Figure 6 but for both binaries in the 2+2 scenario. The different insets show the same semi-major axis ratios, number of simulations, line types and colours as in Figure 5. The dashed red line shows the distribution of escape velocities calculated using Equation 37 and assuming n = 4.5, with m\({}_{\mathrm{e}}\) = \(m_{\mathrm{b}}\) = M/2. Note that there is only one solid red line, since both binaries (wide and compact) have the same escape velocity distribution due to conservation of linear momentum.
## 10 Conclusions
In this paper, we have derived analytic distribution functions using the density of states formalism and an ansatz-based approach for the outcomes of four-body (i.e., binary-binary) scatterings in the equal-mass point-particle limit. We have further confronted our analytic fits with the results of numerical scattering simulations, and find good agreement. The highlights of our results can be summarized as follows:
* We have derived analytic distribution functions (DFs) to describe the properties of the products of chaotic four-body interactions in the equal-mass point particle limit. These DFs include the distributions of orbital energies for the most compact orbit in the left-over binaries and/or triples in the final outcome state, the distributions of ejection velocities, and the orbital parameters of any left-over binaries or triples. We find good agreement between our analytic theory and the simulations for low semi-major axis ratios; for larger ratios, the angular momentum dependence would need to be incorporated into our analytic formalism to obtain good agreement in this limit.
* For most of the relevant parameter space, binary-binary scatterings act to systematically destroy binaries by either forming two ejected singles or a stable hierarchical triple instead.
* The 2+1+1 outcome (i.e., one binary and two singles are formed) tends to form the most compact binaries.
* The 2+2 outcome (i.e., two binaries are produced) is consistently the least likely outcome for all ratios of the initial binary semi-major axes, and tends to produce the widest binaries. This is because, in order to form two binaries in the end, effectively the more compact final binary must eject the other two stars at about the same time, in similar directions and with comparable ejection velocities. Alternatively, this can be viewed as the wider binary having more and more difficulty in ejecting the more compact binary (in analogy to the three-body case) as the ratio of semi-major axes increases (at fixed particle mass).
* All outcomes of binary-binary scatterings produce binaries with a distribution of eccentricities consistent with being thermal. This is the case except for very large initial semi-major axis ratios, for which we find a flat eccentricity distribution for the inner binaries of dynamically-formed triples (while that for the 2+1+1 outcome remains consistent with thermal). Since it is those binary-binary scatterings with the largest semi-major axis ratios that produce the most triples, we naively expect that the inner binaries of dynamically-formed triples should show an approximately flat distribution of orbital eccentricities (in part since we assume initially circular orbits). Finally, for the outer orbits of stable triples, we see a slight deviation from a thermal distribution at high eccentricities, since here very high values for the eccentricity are forbidden if the formed triple is to be dynamically stable.
* We have derived a prediction for the distribution of inclination angles between the inner and outer orbital planes of dynamically-formed stable hierarchical triples. We find that it deviates from an isotropic distribution more and more with increasing angular momentum, potentially allowing for an observational signature to test if triples are primarily formed dynamically (including during the star formation phase).
## Acknowledgements
We very gratefully acknowledge discussions with Barry Ginat and Hagai Perets. CMBR acknowledges financial support from Millenium Nucleus NCN19_058 (TITANs). NWCL gratefully acknowledges the generous support of a Fondecyt Iniciacion grant 11180005 and a Fondecyt Regular grant 1230082, as well as support from Millenium Nucleus NCN19_058 (TITANs) and funding via the BASAL Centro de Excelencia en Astrofisica y Tecnologias Afines (CATA) grant PFB-06/2007. NWCL also thanks support from ANID BASAL projects ACE210002 and FB210003.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2310.08715
|
Toward Joint Language Modeling for Speech Units and Text
|
Speech and text are two major forms of human language. The research community
has been focusing on mapping speech to text or vice versa for many years.
However, in the field of language modeling, very little effort has been made to
model them jointly. In light of this, we explore joint language modeling for
speech units and text. Specifically, we compare different speech tokenizers to
transform continuous speech signals into discrete units and use different
methods to construct mixed speech-text data. We introduce automatic metrics to
evaluate how well the joint LM mixes speech and text. We also fine-tune the LM
on downstream spoken language understanding (SLU) tasks with different
modalities (speech or text) and test its performance to assess the model's
learning of shared representations. Our results show that by mixing speech
units and text with our proposed mixing techniques, the joint LM improves over
a speech-only baseline on SLU tasks and shows zero-shot cross-modal
transferability.
|
Ju-Chieh Chou, Chung-Ming Chien, Wei-Ning Hsu, Karen Livescu, Arun Babu, Alexis Conneau, Alexei Baevski, Michael Auli
|
2023-10-12T20:53:39Z
|
http://arxiv.org/abs/2310.08715v1
|
# Toward Joint Language Modeling for Speech Units and Text
###### Abstract
Speech and text are two major forms of human language. The research community has been focusing on mapping speech to text or vice versa for many years. However, in the field of language modeling, very little effort has been made to model them jointly. In light of this, we explore joint language modeling for speech units and text. Specifically, we compare different speech tokenizers to transform continuous speech signals into discrete units and use different methods to construct mixed speech-text data. We introduce automatic metrics to evaluate how well the joint LM mixes speech and text. We also fine-tune the LM on downstream spoken language understanding (SLU) tasks with different modalities (speech or text) and test its performance to assess the model's learning of shared representations. Our results show that by mixing speech units and text with our proposed mixing techniques, the joint LM improves over a speech-only baseline on SLU tasks and shows zero-shot cross-modal transferability.
## 1 Introduction
Speech and language processing research has largely focused on spoken and written language separately. However, the integration of speech and text in a single model holds potential benefits. Speech data contains prosodic information that does not exist in the text, which can help in modeling dialogues. On the other hand, text data from sources like Wikipedia can provide structural knowledge that is not available in most speech datasets. Moreover, the amount of written text on the internet exceeds the size of any available speech dataset.
The impressive performance of text large language models (LLMs) has caused a revolution in natural language processing (Radford et al., 2019; Brown et al., 2020). On the other hand, generative spoken language models (GSLM) (Lakhotia et al., 2021), which are LMs trained on discrete speech units derived from self-supervised representations (Hsu et al., 2021), are also promising for spoken language modeling.
In this work, we aim to fill the gap between text-only and speech-only LMs by developing and studying design choices for a joint Speech Unit and Text Language Model (SUTLM). For speech, we use a self-supervised learning (SSL) speech model, i.e. HuBERT (Hsu et al., 2021), to convert continuous speech signals into speech units. We then combine the units with text data to train an LM that models speech units and text jointly. We convert speech-only, mixed speech-text, and text-only data into token sequences (as shown in Figure 1 and Table 1), and train the model as an LM.
Figure 1: An illustration of our workflow. We tokenize speech signals into discrete units and mix them with text to create speech-text data. Our SUTLM is then trained on a combination of speech-only, text-only, and speech-text data. More details on the data formats can be found in Table 1.
To evaluate the SUTLM, automatic metrics are developed to quantify the cross-modal ability of the LMs. We also fine-tune our models on downstream tasks for spoken language understanding. We fine-tune the SUTLMs on either the speech or text data and test them on either speech or text to understand how well the models learn to align the two modalities.
Our main contributions are:
* We present a joint autoregressive LM trained on both speech and text (Sec 3).
* We develop automatic metrics that require no fine-tuning for the evaluation of an SUTLM, and show that the proposed metrics are indicative of the model's cross-modal transfer ability on downstream tasks (Sec 4).
* Empirically, we show that units covering a larger span obtained through SentencePiece tokenization Kudo and Richardson (2018) outperform local units learned by existing self-supervised models Hsu et al. (2021) (Sec 5.5.1).
* We find that mixing speech units and text with our proposed techniques (Sec 5.5.3 & Sec 5.5.4) improves the cross-modal ability of the model. (Sec 5.4).
## 2 Related Work
### SSL speech models
Self-supervised pre-training enables speech models to learn the information in speech without paired text transcriptions and show impressive performance on tasks such as automatic speech recognition (ASR) with minimal supervised fine-tuning Baevski et al. (2020); Hsu et al. (2021); Chen et al. (2021). As SSL speech models learn phonetically meaningful speech representations Pasad et al. (2023), they can be used as a feature extractor Yang et al. (2021) or a quantizer to transform continuous speech into discrete units Lakhotia et al. (2021); Lee et al. (2021, 2021); Lin et al. (2022); Chen et al. (2022). In this work, we use the HuBERT model Hsu et al. (2021) along with a quantizer to tokenize continuous speech into discrete representations. The discrete speech units are then combined with text data to train a single LM that is able to model speech and text jointly.
### Textless NLP
Textless NLP Lakhotia et al. (2021); Polyak et al. (2021); Kharitonov et al. (2021) is a framework to model speech in the absence of textual data. It consists of three components: a speech-to-unit tokenizer, a unit LM (uLM), and a unit-to-speech detokenizer. The tokenizer takes speech signals as inputs to generate discrete speech units. A uLM is trained to predict the next token in an utterance given its prior context. Once the uLM is trained, it can be used to generate unit sequences autoregressively. In the end, the detokenizer is used to convert the generated unit sequences to speech signals.
### Joint speech-text transformers
Transformer models have been extremely successful in natural language and speech processing Vaswani et al. (2017); Gulati et al. (2020), with three major configurations: encoder-decoder models Vaswani et al. (2017), encoder-only models Devlin et al. (2018), and decoder-only models Radford et al. (2018).
Previous works on speech-text joint transformers mostly adapt the encoder-decoder Ao et al. (2021); Tang et al. (2022); Cheng et al. (2022) or encoder-only Chung et al. (2020); Bapna et al. (2021); Chen et al. (2022); Zhang et al. (2022) architectures. Compared with decoder-only architectures, the training of these models typically requires multiple losses and explicit alignments between paired speech and transcriptions. This makes the hyper-parameter selection time-consuming. Also, encoder-only and encoder-decoder models are mostly used in the pre-training + fine-tuning paradigm, which limits the use cases of these models.
On the other hand, decoder-only models on text Radford et al. (2019); Brown et al. (2020) show the impressive capability of in-context learning, which also reduces the efforts spent on fine-tuning pre-trained models. In light of this, we explore decoder-only models for speech-text joint training. In this under-explored area, the concurrent work VALL-E Wang et al. (2023) is the only other attempt to build a decoder-only model jointly modeling speech and text. However, VALL-E's purpose is controllable text-to-speech synthesis (TTS), and the work mainly focuses on the acoustic controllability of the generated speech, while our work aims to build a general-purpose joint LM and mainly focuses on modeling the content of spoken language.
## 3 Method
We start with a dataset of sentences \(\mathcal{D}=\{s^{1},s^{2},\ldots,s^{n}\}\), where a sentence \(s^{i}\) is composed of a sequence of \(T_{i}\) tokens \((z^{i}_{1},z^{i}_{2},\ldots,z^{i}_{T_{i}})\), where \(z^{i}_{j}\) can be either text or speech units. The SUTLM is trained to predict the next token \(z^{i}_{j}\) given its prior context \(z^{i}_{<j}\). We maximize the log-probability of the data
\[\sum_{i=1}^{n}\sum_{j=1}^{T_{i}}\log P(z^{i}_{j}|z^{i}_{<j}) \tag{1}\]
In the following sections, we describe how we construct token sequences from speech and text. An example of our data formats can be found in Table 1.
### Speech-only: unit LM (uLM)
Prior work has shown that discrete speech units derived from a pre-trained HuBERT model can be used as compact representations to encode speech content, enabling the training of a unit language model Lakhotia et al. (2021). However, when combining speech with text, the time scales of speech units and text differ. HuBERT units are typically on the phone or sub-phone level, as shown in Table 2. This leads to longer sequences, making it difficult for the model to capture long-term dependencies. On the other hand, subword tokenizers for text generally break text sequences into chunks of a larger size than speech units. This length mismatch between speech and text makes it challenging to model them in a single model. Therefore, we use a subword tokenizer Kudo and Richardson (2018) to combine HuBERT units into larger chunks as in Wu et al. (2022) to mitigate the length mismatch.
The process of generating speech units is as follows. Speech signals are first fed into a HuBERT model. The representations in the final layer are then clustered with the k-means algorithm. The cluster IDs are used as the discrete speech units after removing consecutive repeating units Lakhotia et al. (2021).1 These units are then further combined by the subword SentencePiece tokenizer Kudo and Richardson (2018). The resulting average number of tokens per second can be found in Table 2.
Footnote 1: For example, the unit sequence 13 13 15 80 80 80 becomes 13 15 80 after removing repetitions.
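A minimal sketch of this unit pipeline is shown below; the k-means assignment and the trained SentencePiece model are assumed to already exist, and the helper names are illustrative rather than taken from the authors' codebase.

```python
from itertools import groupby

def deduplicate(units):
    """Collapse consecutive repeated cluster IDs, e.g. [13, 13, 15, 80, 80, 80] -> [13, 15, 80]."""
    return [u for u, _ in groupby(units)]

def units_to_subwords(units, sp_model):
    """Map deduplicated HuBERT units to a string of unit symbols and apply SentencePiece."""
    text_like = " ".join(f"S{u}" for u in deduplicate(units))  # e.g. "S13 S15 S80"
    return sp_model.encode(text_like, out_type=str)            # larger-span unit subwords

print(deduplicate([13, 13, 15, 80, 80, 80]))                   # [13, 15, 80]
```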
### Text-only: text LM (tLM)
We train another SentencePiece tokenizer Kudo and Richardson (2018) using the text-only corpus Sec 5.1.3 to convert text into subword tokens. The resulting vocabulary size of the subword tokens is around 45k.
### Concatenated speech-text (CST)
To present paired speech-text data to the SUTLM, we first convert speech units and their transcriptions into the uLM and tLM formats, respectively, and combine them into one sequence by simply concatenating them as shown in Table 1. The CST format explicitly tells the model the correspondence between paired speech and text and thus encourages the model to learn the dependence between speech units and the corresponding text transcriptions.
### Alternating speech-text (AST)
Aside from simply concatenating the sequences of speech units and text, we also construct mixed speech-text that takes the word-level correspondence into consideration.
We use a pre-trained speech recognizer McAuliffe et al. (2017) to force-align speech and its transcription to obtain the word boundaries in an utterance. We then randomly sample some word boundaries within the utterance2 as the "switching points", which divide the utterance into several chunks. The alternating speech-text (AST) sequence is then constructed by alternatively filling in the chunks with uLM speech units and tLM text tokens, resulting in a sequence that switches modalities at every switching point. Special tokens <U2T> and <T2U> are inserted when switching from speech units to text and text to speech units, respectively.
Footnote 2: For a sentence with \(k\) words, we uniformly sample \(\lfloor N\rfloor\) boundaries as the switching points with \(N\sim\mathcal{N}(\frac{k}{10},1)\).
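A sketch of how an AST sequence might be assembled from a word-level forced alignment is given below; the data layout, helper names, and the choice to start in the text modality are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_ast(words, unit_spans, rng):
    """Alternate between text words and their aligned speech units (Table 1, AST rows).

    words: list of text tokens; unit_spans: per-word lists of HuBERT unit IDs from
    forced alignment. Switching points are sampled at word boundaries.
    """
    k = len(words)
    n_switch = max(int(rng.normal(k / 10.0, 1.0)), 0)                 # N ~ Normal(k/10, 1)
    switch = set(rng.choice(np.arange(1, k), size=min(n_switch, k - 1), replace=False))
    seq, in_text = ["<T_EN>"], True                                   # start in the text modality
    for i in range(k):
        if i in switch:                                               # flip modality here
            seq.append("<T2U>" if in_text else "<U2T>")
            in_text = not in_text
        seq.extend([words[i]] if in_text else [f"S{u}" for u in unit_spans[i]])
    seq.append("<EOS>" if in_text else "<EOU>")
    return seq

rng = np.random.default_rng(0)
print(build_ast(["how", "are", "you"], [[12, 66], [17, 18], [44]], rng))
```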
## 4 Evaluation Metrics
We introduce automatic metrics that require no fine-tuning to evaluate the SUTLM. Fine-tuning is a common approach to assess the quality of pre-trained models Baevski et al. (2020); Hsu et al. (2021); Chen et al. (2021). However, it is a time-consuming process and the reliability of the experiments highly depends on the hyper-parameter selection process. Furthermore, there is no reliable metric to measure the cross-modal ability of LMs.
In light of this, we propose Context Retrieval Accuracy (CRA), a new metric that does not require fine-tuning, to evaluate the cross-modal ability of an SUTLM.
### Context Retrieval Accuracy (CRA)
The motivation of Context Retrieval Accuracy (CRA) comes from the intuition that a good LM should learn to predict the next token based on its prior context. When we divide a sentence into prompt and continuation, a good LM should be able to capture the dependence between them. That is, it should assign a higher conditional probability to the continuation given its corresponding prompt than given a random prompt.
To measure CRA, we gather a collection of \(m\) sentences \(\mathcal{C}=\{s^{1},s^{2},\ldots,s^{m}\}\) and break \(s^{i}\) into a pair of prompt \(x^{i}\) and continuation \(y^{i}\). Given an SUTLM parameterized by \(\theta\), we can measure the conditional probabilities \(P_{\theta}(y^{i}|x^{i})\) with Eq 1. The CRA is then computed as:
\[\frac{1}{m}\sum_{i=1}^{m}\mathbbm{1}[\arg\max_{j\in\{1\ldots m\}}P_{\theta}(y^ {i}|x^{j})=i], \tag{2}\]
That is, the LM is used as a scorer to classify whether the matched prompt-continuation pair has the highest conditional probability among a pool of unmatched prompts.
CRA also has a pointwise mutual information (PMI) interpretation:
\[\arg\max_{j\in\{1\ldots m\}}P_{\theta}(y^{i}|x^{j})=i\] \[\implies\log P_{\theta}(y^{i}|x^{i})\geq\max_{j\in\{1\ldots m\}} \log P_{\theta}(y^{i}|x^{j})\] \[\implies\log\frac{P_{\theta}(y^{i}|x^{i})}{P_{\theta}(y^{i})} \geq\max_{j\in\{1\ldots m\}}\log\frac{P_{\theta}(y^{i}|x^{j})}{P_{\theta}(y^{ i})}\] \[\implies\mathrm{PMI}(x^{i},y^{i})\geq\max_{j\in\{1\ldots m\}} \mathrm{PMI}(x^{j},y^{i})\]
That is, correctly identifying the prompt implies the matched prompt-continuation pair has a higher PMI than all unmatched prompt-continuation pairs.
Ideally, the model should produce similar representations given the same content regardless of the modality. Hence, in addition to the uni-modal CRA, we also consider cross-modal CRA, where the prompt and the continuation are in different modalities. In practice, for example, when we use text as the prompts and speech units as the continuations, we set the probability of emitting text tokens to zero and re-normalize the probability to ensure that the continuation \(y^{i}\) can be only speech units. Cross-modal CRA can be used as a way to measure whether the SUTLM successfully learns shared representations between text and speech.
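Concretely, CRA reduces to an argmax over a matrix of conditional scores; the sketch below shows the computation, where the scoring function is an assumed interface to the SUTLM rather than actual released code.

```python
import numpy as np

def context_retrieval_accuracy(score_fn, prompts, continuations):
    """Eq. 2: fraction of continuations whose own prompt maximizes log P(y_i | x_j).

    score_fn(prompt, continuation) returns the conditional log-probability under the
    SUTLM; for cross-modal CRA, tokens of the undesired modality are masked out and
    the remaining probabilities renormalized inside score_fn.
    """
    m = len(prompts)
    scores = np.array([[score_fn(prompts[j], continuations[i]) for j in range(m)]
                       for i in range(m)])            # scores[i, j] = log P(y_i | x_j)
    return float(np.mean(scores.argmax(axis=1) == np.arange(m)))

# Toy usage with a dummy scorer that prefers matched (i == j) pairs:
toy_score = lambda x, y: 1.0 if x == y else 0.0
print(context_retrieval_accuracy(toy_score, ["a", "b", "c"], ["a", "b", "c"]))  # 1.0
```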
### Perplexity under External LM (PELM)
Following previous work, we use the perplexity under external LM (PELM) to measure the quality of the content of generated samples Lakhotia et al. (2021). We sample a continuation from the SUTLM given each ground truth prompt. We then use an external text LM, OPT-6.7B Zhang et al. (2022), to compute the perplexity of the generated continuations.
\begin{table}
\begin{tabular}{l|c} \hline Task & Example \\ \hline \hline uLM & \textless{}U\_EN\textgreater{} S12 S66 S17 S18... \textless{}EOU\textgreater{} \\ CST & \textless{}U\_EN\textgreater{} S12 S66 S17 S18... \textless{}EOU\textgreater{} \textless{}T\_EN\textgreater{} how are you \textless{}EOS\textgreater{} \\ CST & \textless{}T\_EN\textgreater{} how are you \textless{}EOS\textgreater{} \textless{}U\_EN\textgreater{} S12 S66 S17 S18...\textless{}EOU\textgreater{} \\ AST & \textless{}U\_EN\textgreater{} S12 S66 \textless{}U2T\textgreater{} are you \textless{}EOS\textgreater{} \\ AST & \textless{}T\_EN\textgreater{} how \textless{}T2U\textgreater{} S17 S18... \textless{}EOU\textgreater{} \\ tLM & \textless{}T\_EN\textgreater{} how are you \textless{}EOS\textgreater{} \\ \hline \end{tabular}
\end{table}
Table 1: An example of the formats of unpaired (uLM, tLM) and mixed speech-text (CST, AST) data. For the CST and AST formats, speech units and text can be present in a sequence in different orders. \textless{}U\_EN\textgreater{} and \textless{}T\_EN\textgreater{} are used at the beginning of the unit/text sequence. \textless{}EOU\textgreater{} and \textless{}EOS\textgreater{} are used at the end of the unit/text sequences. \textless{}U2T\textgreater{} and \textless{}T2U\textgreater{} are used when switching from unit to text and text to unit at word boundaries.
\begin{table}
\begin{tabular}{l|c} \hline & Average tokens per second \\ \hline \hline Phone & 20.32 \\ \hline HuBERT & 50.00 \\ + deduplication & 33.33 \\ + SP 10k & 17.67 \\ + SP 32k & 14.33 \\ \hline \end{tabular}
\end{table}
Table 2: The average number of tokens per second for different types of speech units. SP 10k and 32k refer to SentencePiece tokenization Kudo and Richardson (2018) applied to HuBERT units to create a dictionary with 10k and 32k tokens respectively.
The computation is:
\[\hat{y}^{i} \sim P_{\theta}(y|x^{i})\] \[x^{\prime i},y^{\prime i} =\mathrm{T}(x^{i}\parallel\hat{y}^{i})\] \[\mathrm{PELM}(\theta) =2^{\,-\sum_{i}\log_{2}P_{\text{OPT}}(y^{\prime i}|\operatorname{gt}(x^{i}))/\sum_{i}\operatorname{len}(y^{\prime i})} \tag{4}\]
where \(x^{i}\) and \(\hat{y}^{i}\) refer to the prompt and sampled continuation, and \(\theta\) are the parameters of the SUTLM. Similarly to cross-modal CRA, we control the modality of sampled continuations by zeroing out the probability of the tokens in the undesired modality. Since the prompt and the continuation can be either speech units or subword text tokens, we use a transcriber \(\mathrm{T}(\cdot)\) to transcribe the concatenated sequences \(x^{i}\parallel\hat{y}^{i}\) into text \(x^{\prime i},y^{\prime i}\).3\(\operatorname{gt}(\cdot)\) is a function that outputs a ground truth transcription when the input is speech units and is an identity function when the input is text. The external LM is then used to measure the perplexity of the continuation part of the text sequence.
Footnote 3: For both speech units and text tokens, we first invert the SentencePiece tokenization process to get raw HuBERT units and raw text. For speech units, we further use a 12-layer Transformer encoder with a CTC head to map HuBERT units to text. The transformer is trained on LibriSpeech, with a WER of 5.18% on dev-clean, and 11.61% on dev-other.
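To make the metric concrete, here is a minimal sketch of the PELM computation in Eq. (4), assuming the per-continuation log-probabilities under the external LM (e.g., OPT-6.7B) have already been obtained and are expressed in base 2; the scoring and transcription steps themselves are placeholders.

```python
def pelm(continuation_log2probs, continuation_lengths):
    """Perplexity under an external LM: 2 ** (-sum of log-probs / sum of lengths)."""
    total_logprob = sum(continuation_log2probs)   # sum_i log2 P_OPT(y'_i | gt(x_i))
    total_length = sum(continuation_lengths)      # sum_i len(y'_i)
    return 2.0 ** (-total_logprob / total_length)

# toy numbers standing in for three scored continuations
print(pelm([-35.2, -41.0, -28.7], [10, 12, 9]))
```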
### Evaluation on SLUE tasks
We use the SLUE benchmark [22] to evaluate our models on downstream tasks. The benchmark includes two tasks, sentiment analysis (SLUE-SA) and named entity recognition (SLUE-NER), with both speech data and transcriptions provided. After pre-training the SUTLM, we fine-tune it on the SLUE dataset with either speech or text data as inputs to predict the ground-truth labels, and then evaluate it on either speech or text inputs. We evaluate the model on different input modalities to understand the cross-modal ability of the model as in [22, 21, 20]. Fine-tuning details can be found in Sec 5.4.2.
## 5 Experiments
### Data
#### 5.1.1 Speech-only
We use 5% of the dataset used in [23] to match the size of the mixed speech-text and text-only data. The dataset includes Multilingual LibriSpeech (MLS) [20], VoxPopuli [23], Common-Voice [19] and Spotify Podcast & People's Speech [2]. The subsampled dataset consists of 65k hours of speech.
#### 5.1.2 Mixed speech-text (CST and AST)
We use MLS [20] and VoxPopuli [23] to create mixed speech-text data without subsampling. The dataset contains 45k hours of speech and 2.7B words.
#### 5.1.3 Text-only
We combine OPT web data [20], Wikipedia, and LibriLM [20], and then subsample 5% of it, resulting in a total of 8.5B subwords.
### SSL speech tokenizer
We use a HuBERT Base model trained on 221K hours of unlabeled speech in 8 languages as in [21, 22].4 After pre-training, the representations at the last layer (12th) are clustered with k-means using 2000 clusters.
Footnote 4: [https://dl.fbaipublicfiles.com/hubert/mhubert_base_vp_mls_cv_8lang_it3.pt](https://dl.fbaipublicfiles.com/hubert/mhubert_base_vp_mls_cv_8lang_it3.pt)
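A minimal sketch of such a speech tokenizer is given below, assuming frame-level features from the pre-trained encoder are already available (random features stand in for them here); the actual clustering setup may differ in details such as the training subset used for k-means.

```python
import numpy as np
from sklearn.cluster import KMeans

# stand-in for (num_frames, 768) hidden states from the last HuBERT Base layer
train_frames = np.random.randn(50_000, 768).astype(np.float32)

# fit the unit vocabulary with k-means (the paper uses 2000 clusters)
kmeans = KMeans(n_clusters=2000, n_init=1, random_state=0).fit(train_frames)

def frames_to_units(frames, deduplicate=True):
    """Map frame features to discrete unit ids, optionally collapsing consecutive repeats."""
    units = kmeans.predict(frames)
    if deduplicate:
        units = [u for i, u in enumerate(units) if i == 0 or u != units[i - 1]]
    return list(units)

utterance = np.random.randn(250, 768).astype(np.float32)   # roughly 5 s of audio at 50 Hz
print(frames_to_units(utterance)[:20])
```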
### Model architecture and training
We use the 24-layer transformer implementation in fairseq [10] with 16 attention heads. The embedding size is 1024, the feed-forward dimension is 4096, and the dropout probability is set to 0.1. The weights of the embedding layer are tied to the output layer [20]. The model contains 350M parameters.
The model is trained for 500k updates on 32 V100 GPUs with a batch size of 8192 tokens per GPU. We use the Adam optimizer [13] with \((\beta_{1},\beta_{2})\) = (0.9, 0.95). Gradient clipping with a threshold of 1.0 and weight decay of 0.1 are applied to stabilize the training. Since the data size differs across data formats, we resample speech-only, speech-text, and text-only data equally (1/3 for each in every training batch) to prevent the model from being biased toward any of them.
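A minimal sketch of this balanced resampling across the three data formats is shown below; the dataset objects and token sequences are toy placeholders, and the real training pipeline (fairseq) batches by token count rather than by example.

```python
import random

def balanced_batches(speech_only, speech_text, text_only, batch_size, num_batches, seed=0):
    """Draw each example from one of the three data formats with probability 1/3,
    so that no format dominates training regardless of corpus size."""
    rng = random.Random(seed)
    sources = [speech_only, speech_text, text_only]
    for _ in range(num_batches):
        yield [rng.choice(rng.choice(sources)) for _ in range(batch_size)]

# toy corpora of pre-tokenized sequences
speech_only = [["<U_EN>", "S12", "S66", "<EOU>"]] * 100
speech_text = [["<U_EN>", "S12", "<U2T>", "are", "you", "<EOS>"]] * 10
text_only = [["<T_EN>", "how", "are", "you", "<EOS>"]] * 1000

for batch in balanced_batches(speech_only, speech_text, text_only, batch_size=4, num_batches=2):
    print(batch)
```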
### Evaluation setup
#### 5.4.1 Automatic Metrics
We use a subset of the Multilingual LibriSpeech [20] dev set to evaluate the SUTLM. To provide enough context to the SUTLM, we filter out sentences of less than 20 words. For each sentence, we use the first 10 words as the prompt and the rest as continuation. For the CRA experiments, we evaluate the SUTLM with
the 100 shortest utterances in the filtered dataset, while for the PELM experiments, we use the 500 shortest utterances. We use fewer utterances in CRA experiments as the computation of CRA is \(O(N^{2})\) for \(N\) utterances. We constrain ourselves to sentences with moderate lengths because the continuation part becomes less coherent with the prompt as the sequence length grows, which hurts the sensitivity of the proposed metrics.
When sampling the speech or text continuations in the PELM experiments, we use temperature \(t=0.6\) and nucleus sampling (Holtzman et al., 2019) with \(p=0.95\), and truncate the continuation to 10 words (identical to the length of the prompts).
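For reference, a small sketch of temperature-scaled nucleus sampling as used for drawing continuations (t = 0.6, p = 0.95); the logits are random stand-ins for the SUTLM's output distribution.

```python
import numpy as np

def sample_nucleus(logits, temperature=0.6, top_p=0.95, rng=None):
    """Sample one token id using temperature scaling and nucleus (top-p) filtering."""
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]                            # most probable tokens first
    cumulative = np.cumsum(probs[order])
    nucleus = order[: np.searchsorted(cumulative, top_p) + 1]  # smallest set covering top_p
    renormalized = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=renormalized))

logits = np.random.randn(32_000)                               # toy vocabulary of 32k tokens
print(sample_nucleus(logits))
```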
#### 5.4.2 Downstream Tasks
For SLUE-SA, we fine-tune SUTLM by adding a self-attention pooling layer on top of the transformer model after removing the last output layer (Shon et al., 2022). We fine-tune it with a learning rate of 3e-5 for 30k updates and evaluate it with Macro F1 (Shon et al., 2022).
For SLUE-NER, we follow the SLUE official baseline to formulate the task as an ASR problem and train our model to decode special tokens around each named entity (Shon et al., 2022). We concatenate the output (the text transcription with special tokens before and after each named entity) after the input (speech units when fine-tuned on speech, text tokens when fine-tuned on text) and fine-tune our SUTLM as an LM with the same loss function as Eq 1. The loss is only applied to the output part of the sequence. We fine-tune the SUTLM with a learning rate of 3e-5 for 50k updates. During decoding, we use a beam size of 5 to generate the outputs and evaluate them with Micro F1 (Shon et al., 2022). For both SLUE tasks, we report results on the dev set since the test set is not publicly available. We use the fine-tuned HuBERT as the baseline as in (Shon et al., 2022).
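A minimal sketch of the fine-tuning loss for this formulation: the LM objective is applied only to the output part of the concatenated [input ‖ output] sequence, with the input positions masked out. Shapes and the toy tensors are illustrative, not the actual training code.

```python
import torch
import torch.nn.functional as F

def output_only_lm_loss(logits, tokens, output_start):
    """Next-token cross-entropy restricted to the output part of the sequence.

    logits:       (seq_len, vocab) SUTLM predictions.
    tokens:       (seq_len,) concatenated [input || output] token ids.
    output_start: index where the output (e.g., transcription with NER tags) begins;
                  earlier positions (the speech-unit or text input) are ignored.
    """
    targets = tokens[1:].clone()
    targets[: output_start - 1] = -100            # ignored by cross_entropy
    return F.cross_entropy(logits[:-1], targets, ignore_index=-100)

vocab, seq_len = 100, 12
logits = torch.randn(seq_len, vocab)
tokens = torch.randint(0, vocab, (seq_len,))
print(output_only_lm_loss(logits, tokens, output_start=7))
```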
### Results
#### 5.5.1 What kind of speech units works the best?
We utilize HuBERT units described in Sec 5.2 (2000 units) and apply SentencePiece tokenizers on them. Results can be found in rows **(A)**, **(B)**, **(C)** in Table 3 for automatic metrics, Table 4 for SLUE-SA and Table 5 for SLUE-NER.
The model trained with SP 10k has the best performance in terms of PELM, SLUE-SA, and SLUE-NER, but slightly worse CRA than the model using the original HuBERT units. For CRA in the u2u case (unit prompt, unit continuation), we hypothesize that the model uses low-level acoustic information to make predictions, as the CRAs are nearly 1.0 for all types of speech units. Also, HuBERT uses overlapping windows for neighboring tokens, so the first token of the continuation contains information about the previous token.
For the speech continuation (PELM) experiments, the SP 10k-based sequences are shorter than HuBERT unit-based sequences, so the model trained with SP 10k (row **(B)**) can generate more coherent continuations.
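A small sketch of how this sequence-length comparison (cf. Table 2) can be made: average tokens per second of audio for raw versus deduplicated unit sequences. The durations and units are made up, and the SentencePiece step that produces the SP 10k/32k vocabularies is omitted.

```python
def tokens_per_second(unit_sequences, durations_sec, deduplicate=False):
    """Average number of unit tokens emitted per second of audio."""
    total_tokens = 0
    for units in unit_sequences:
        if deduplicate:
            units = [u for i, u in enumerate(units) if i == 0 or u != units[i - 1]]
        total_tokens += len(units)
    return total_tokens / sum(durations_sec)

# toy utterance: 2 seconds of audio at 50 frames/sec with many repeated units
units = [5, 5, 5, 17, 17, 42, 42, 42, 42, 9] * 10
print(tokens_per_second([units], [2.0]))                      # 50.0 tokens/sec
print(tokens_per_second([units], [2.0], deduplicate=True))    # fewer tokens after collapsing repeats
```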
#### 5.5.2 Do we need paired data to learn shared representations?
In this section, we compare models trained with and without paired data to investigate the usefulness of paired data. We can compare the results in rows **(D)** and **(F)** in Table 3 for automatic metrics, Table 4 for SLUE-SA and Table 5 for SLUE-NER. For cross-modal cases (u2t and t2u), in terms of automatic metrics, the model trained with unpaired data alone (row **(D)**) has almost random CRAs and high PELMs, indicating a lack of cross-modal ability.
Similarly, for SLUE-SA, the model trained with unpaired data alone (row **(D)**) shows almost random macro F1 scores for a 3-way classification task when tested on the other modality. For SLUE-NER, the model trained without exposure to paired data (row **(D)**) performs worse than models trained with paired data (row **(F)**) when fine-tuned on speech and shows no transferability between modalities. Row **(D)** also performs worse than its speech-unit-only counterpart (row **(B)**), showing that the model trained solely on unpaired data does not demonstrate any cross-modal transfer ability between speech and text.
#### 5.5.3 Does concatenated speech-text (CST) help learn shared representations?
The next question we want to answer is whether CST is helpful in learning shared representations. Building on the previous findings (rows **(A)**, **(B)**, **(C)**), we utilize SP 10k as our speech unit vocabulary and present the results in row **(E)** in Table 3 for automatic metrics, Table 4 for SLUE-SA, and Table 5 for SLUE-NER. The results show that, compared to using unpaired data alone (row **(D)**), the model trained with CST (row **(E)**) has higher CRAs for u2t and t2u, which indicates that the model captures the relationship between speech and text better than models trained with unpaired data alone.
For SLUE-SA, the model pre-trained with CST shows comparable performance when fine-tuned on one modality and evaluated on the other. The performance when fine-tuning on text and testing on speech is even better than directly fine-tuning on speech (0.51 vs. 0.48). The reason is likely to be that text data provides a less noisy supervisory signal compared to using speech units. The model trained with extra speech-text data (row **(E)**) performs worse than the model trained with only speech units (row **(B)**). The reason may be similar to the "curse of multilinguality" (Conneau et al., 2019), where sharing the capacity of the model with other languages or modalities hurts performance.
For SLUE-NER, concatenated speech-text improves performance over the model trained with only speech units (row **(B)**) when fine-tuned on speech. Unlike SLUE-SA, which is a classification task, here we need to generate the corresponding transcription along with the named entity tags for SLUE-NER. Hence, the model (row **(E)**) fine-tuned on speech benefits directly from the extra speech-text data. We discuss the implications of the fine-tuning results further in Sec 5.7.
For speech / text continuation, when only using concatenated speech-text (CST) as our mixed data, there are no special tokens (<U2T>, <T2U>) to trigger modality switching. As shown in Table 6, the model trained with CST simply transcribes the speech prompt in the u2t case and synthesizes the text prompt into speech units in the t2u case, resulting in low PELMs for u2t and t2u in row **(E)** due to the repetition. PELM thus fails to reflect the quality of the continuation accurately. We discuss this limitation further in Sec 5.6.
#### 5.5.4 Does alternating speech-text (AST) help learn shared representations?
This section discusses the benefits of alternating speech-text (AST). The results are presented in (row **(F)**) in Table 3 for automatic metrics, Table 4
| row | unit | training data | FT: SP, Eval: SP | FT: SP, Eval: TXT | FT: TXT, Eval: SP | FT: TXT, Eval: TXT |
| --- | --- | --- | --- | --- | --- | --- |
| | | Baseline | 0.46 | - | - | - |
| **(A)** | HuBERT | uLM | 0.51 | - | - | - |
| **(B)** | SP 10k | uLM | 0.56 | - | - | - |
| **(C)** | SP 32k | uLM | 0.54 | - | - | - |
| **(D)** | SP 10k | uLM+tLM | 0.52 | 0.33 | 0.35 | 0.49 |
| **(E)** | SP 10k | uLM+CST | 0.48 | 0.42 | 0.51 | 0.52 |
| **(F)** | SP 10k | uLM+CST+AST+tLM | 0.49 | 0.43 | 0.52 | 0.56 |

Table 4: Macro F1 score on SLUE-SA. "FT" indicates whether the model is fine-tuned on speech (SP) or text (TXT). "Eval" denotes whether the fine-tuned model is tested on speech (SP) or text (TXT).
| row | unit | uLM | CST | AST | tLM | u2u CRA | u2u PELM | u2t CRA | u2t PELM | t2u CRA | t2u PELM | t2t CRA | t2t PELM |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Ground truth continuation | | | | | | - | - | - | - | - | - | - | 101.4 |
| **(A)** | HuBERT | ✓ | | | | 1.00 | 193.3 | - | - | - | - | - | - |
| **(B)** | SP 10k | ✓ | | | | 0.96 | 163.6 | - | - | - | - | - | - |
| **(C)** | SP 32k | ✓ | | | | 0.96 | 177.4 | - | - | - | - | - | - |
| **(D)** | SP 10k | ✓ | | | ✓ | 0.94 | 175.9 | 0.03 | 394.9 | 0.01 | 1973.3 | 0.20\*\* | 20.7\*\* |
| **(E)** | SP 10k | ✓ | ✓ | | | 0.95 | 166.0 | 0.37 | 39.1\* | 0.26 | 43.4\* | 0.56 | 34.7 |
| **(F)** | SP 10k | ✓ | ✓ | ✓ | ✓ | 0.97 | 162.8 | 0.70 | 124.7 | 0.81 | 38.7 | 0.67 | 28.2 |

Table 3: Automatic metrics (CRA and PELM). "u2t" denotes that the prompts are speech units and the continuations are text, and so on. (\*): for cross-modal cases (u2t and t2u) in row **(E)**, the PELM is low because the continuation simply repeats the prompt. We discuss this issue in Sec 5.6. (\*\*): The low CRA for t2t is due to the use of MLS as an evaluation set, resulting in a distribution mismatch from the text-only training data. Similarly, the use of OPT data to train the SUTLM results in better PELM on t2t in row (D).
| row | unit | training data | FT: SP, Eval: SP | FT: SP, Eval: TXT | FT: TXT, Eval: SP | FT: TXT, Eval: TXT |
| --- | --- | --- | --- | --- | --- | --- |
| | | Baseline | 54.5 | - | - | - |
| **(A)** | HuBERT | uLM | 62.9 | - | - | - |
| **(B)** | SP 10k | uLM | 64.4 | - | - | - |
| **(C)** | SP 32k | uLM | 62.5 | - | - | - |
| **(D)** | SP 10k | uLM+tLM | 63.2 | 1.5 | 0.0 | 66.8 |
| **(E)** | SP 10k | uLM+CST | 65.0 | 3.6 | 0.5 | 79.5 |
| **(F)** | SP 10k | uLM+CST+AST+tLM | 66.6 | 25.2 | 0.3 | 77.2 |

Table 5: The F1 (%) score on SLUE-NER. "FT" indicates whether the model is fine-tuned on speech (SP) or text (TXT). "Eval" denotes whether the fine-tuned model is tested on speech (SP) or text (TXT).
for SLUE-SA, and Table 5 for SLUE-NER.
By comparing the results of CRA for t2u and u2t in row **(F)** with those in row **(E)** in Table 3, we observe an improvement in CRA when the data is directly constructed to switch modalities at word boundaries. We can also see that CRA is similar for t2u, u2t, and t2t. This suggests that the model learns to match context regardless of modality.
In row **(F)**, PELM for t2u is lower than PELM for u2u as the text prompt is less noisy than speech units. PELM for u2t is only marginally worse than t2t. This shows that the LM trained with AST can continue a sentence regardless of the modality. The worse PELM for u2u and t2u than for u2t and t2t could be attributed to the recognition errors within our unit transcriber.
Regarding SLUE-SA, we observe that AST and tLM further improve the cross-modal transfer performance (trained on text and evaluated on speech, or vice versa) in row **(F)**.
In SLUE-NER, row **(F)** also shows better performance than row **(E)** when fine-tuned on speech and evaluated on speech. There is also non-trivial speech-to-text transfer (fine-tuned on speech and evaluated on text) in row **(F)**, showing that AST helps in learning transferable features between modalities.
In SLUE-NER, when fine-tuned on text and evaluated on speech, there is no transferability between speech and text. The reason can be attributed to the fine-tuning task becoming almost trivial. In text NER, in our formulation, the input and output are nearly identical. The only difference is the named entity tags. Further discussion of downstream task performance can be found in Sec 5.7.
### Limitations of PELM
We use PELM as a metric to measure the quality of continuations. However, although our SUTLM (row **(F)**) shows the ability to continue after a cross-modal prompt, the resulting continuation is still only locally consistent as shown in Table 6. This can be attributed to the use of a 350M-parameter model architecture, which is relatively small in the era of LLMs.
The PELM metric fails to accurately reflect the result in the case of row **(E)**, where the model simply repeats the prompt. It is a known phenomenon that LMs tend to assign high probability to repeated tokens Holtzman et al. (2019).
To quantify repetition, we compute the proportion of bi-grams in continuations that have appeared in the prompt transcription. For row **(E)**, the proportions are 0.02, 0.53, 0.42, and 0.02 for u2u, u2t, t2u, and t2t, respectively. For row **(F)**, the proportions are 0.02, 0.03, 0.01, and 0.03. For row **(E)**, the continuations for u2t and t2u are simply repeating the content of the prompt.
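The repetition statistic can be computed as in the following sketch: the fraction of continuation bi-grams that already occur in the prompt transcription (the word sequences here are toy examples).

```python
def bigram_repetition(prompt_words, continuation_words):
    """Proportion of continuation bi-grams that already appear in the prompt."""
    def bigrams(words):
        return list(zip(words, words[1:]))
    prompt_bigrams = set(bigrams(prompt_words))
    cont_bigrams = bigrams(continuation_words)
    if not cont_bigrams:
        return 0.0
    return sum(bg in prompt_bigrams for bg in cont_bigrams) / len(cont_bigrams)

prompt = "the cat sat on the mat and looked at me".split()
echo = "the cat sat on the mat".split()            # degenerate continuation repeating the prompt
novel = "then it jumped onto the sofa".split()
print(bigram_repetition(prompt, echo))    # 1.0
print(bigram_repetition(prompt, novel))   # 0.0
```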
We can see that the u2t and t2t PELMs are lower than the ground truth PELM. This is because we sample with a softmax temperature of \(0.6\), which likely hurts diversity and coherence, as discussed in Caccia et al. (2018); Lakhotia et al. (2021).
### Implications for SLU Downstream Tasks
We show that mixing speech units and text improves the cross-modal ability of the model. In SLUE-SA, the mixed speech-text data enables the model to transfer zero-shot between speech and text. Note that for SLUE-SA we remove the output layer from the SUTLM and attach a classification head, so the model always outputs a valid class.
In SLUE-NER, using mixed speech-text data directly improves the performance. Since this is a sequence generation task, the mixed speech-text data helps the model generate better text. The transfer from speech to text is non-trivial but not vice versa. This finding aligns with the experiments in Bapna et al. (2022), in which they also find non-trivial transfer from speech to text but not the other way around. However, we note that different fine-tuning strategies can produce different results, as demonstrated in Liu et al. (2021).
## 6 Conclusion
Our study on joint language modeling for speech units and text involved developing evaluation metrics and fine-tuning the model on speech and text data. We found that using mixed speech-text data improves the model's cross-modal ability and performance on both automatic metrics and downstream tasks.
Our study sheds light on the benefits of considering both speech and text in building language models. We hope that this research will motivate the research community to further explore the integration of speech and text data for more comprehensive language modeling.
Future work in this area could involve investigating the optimal balance between speech and text data in model training and exploring ways to handle multi-modal data beyond the speech-text domain.
## 7 Limitations
Our approach involves using a speech tokenizer that can encode phonetic information (HuBERT) and an off-the-shelf speech recognizer to generate word-level alignment. For other, lower-resource languages, these components may be harder to obtain or may not perform as well.
For our proposed automatic metrics, the complexity of CRA grows at a rate of \(O(N^{2})\), which can be expensive when evaluated on a larger number of utterances or when scaling up the model size. PELM, on the other hand, also has limitations as stated in Sec 5.6. For the empirical results on downstream tasks, we test our SUTLMs on the SLUE benchmark, which has only two tasks. Extending the experiments to more downstream tasks may provide more insights.
Finally, we only study relatively small SUTLMs (350M parameters). It is unclear how scaling it up would affect the results.
|
2307.02191
|
Evaluating AI systems under uncertain ground truth: a case study in
dermatology
|
For safety, AI systems in health undergo thorough evaluations before
deployment, validating their predictions against a ground truth that is assumed
certain. However, this is actually not the case and the ground truth may be
uncertain. Unfortunately, this is largely ignored in standard evaluation of AI
models but can have severe consequences such as overestimating the future
performance. To avoid this, we measure the effects of ground truth uncertainty,
which we assume decomposes into two main components: annotation uncertainty
which stems from the lack of reliable annotations, and inherent uncertainty due
to limited observational information. This ground truth uncertainty is ignored
when estimating the ground truth by deterministically aggregating annotations,
e.g., by majority voting or averaging. In contrast, we propose a framework
where aggregation is done using a statistical model. Specifically, we frame
aggregation of annotations as posterior inference of so-called plausibilities,
representing distributions over classes in a classification setting, subject to
a hyper-parameter encoding annotator reliability. Based on this model, we
propose a metric for measuring annotation uncertainty and provide
uncertainty-adjusted metrics for performance evaluation. We present a case
study applying our framework to skin condition classification from images where
annotations are provided in the form of differential diagnoses. The
deterministic adjudication process called inverse rank normalization (IRN) from
previous work ignores ground truth uncertainty in evaluation. Instead, we
present two alternative statistical models: a probabilistic version of IRN and
a Plackett-Luce-based model. We find that a large portion of the dataset
exhibits significant ground truth uncertainty and standard IRN-based evaluation
severely over-estimates performance without providing uncertainty estimates.
|
David Stutz, Ali Taylan Cemgil, Abhijit Guha Roy, Tatiana Matejovicova, Melih Barsbey, Patricia Strachan, Mike Schaekermann, Jan Freyberg, Rajeev Rikhye, Beverly Freeman, Javier Perez Matos, Umesh Telang, Dale R. Webster, Yuan Liu, Greg S. Corrado, Yossi Matias, Pushmeet Kohli, Yun Liu, Arnaud Doucet, Alan Karthikesalingam
|
2023-07-05T10:33:45Z
|
http://arxiv.org/abs/2307.02191v1
|
# Evaluating AI systems under uncertain ground truth: a case study in dermatology
###### Abstract
For safety, AI systems in health undergo thorough evaluations before deployment, validating their predictions against a so-called ground truth. Importantly, this ground truth is assumed known and fixed, i.e., certain. However, especially in health settings, this is actually not the case and the ground truth may be uncertain. Unfortunately, this is largely ignored in standard evaluation of AI models but can have severe consequences such as overestimating the future performance (Gordon et al., 2021). To avoid this, we measure the effects of _ground truth uncertainty_, which we assume decomposes into two main components: _annotation uncertainty_ which stems from the lack of reliable annotations, and _inherent uncertainty_ due to limited observational information. This ground truth uncertainty is ignored when estimating the ground truth by deterministically aggregating annotations, e.g., by majority voting or averaging. In contrast, we propose a framework where aggregation is done using a statistical model. Specifically, we frame aggregation of annotations as posterior inference of so-called _plausibilities_, representing distributions over classes in a classification setting, subject to a hyper-parameter encoding annotator _reliability_. Based on this model, we propose a metric for measuring annotation uncertainty and provide uncertainty-adjusted metrics for performance evaluation. We present a case study applying our framework to skin condition classification from images (Liu et al., 2020) where annotations are provided in the form of differential diagnoses, modeled as partial ranking of conditions. The deterministic adjudication process called inverse rank normalization (IRN) from previous work ignores ground truth uncertainty in evaluation. Instead, we present two alternative statistical models: a probabilistic version of IRN and a Plackett-Luce-based model (Luce, 2012; Plackett, 1975). We find that a large portion of the dataset exhibits significant ground truth uncertainty and standard IRN-based evaluation severely overestimates performance without providing uncertainty estimates. In contrast, our framework provides uncertainty estimates on common metrics of interest such as top-\(k\) accuracy and average overlap, showing that performance can change multiple percentage points depending on the annotator reliability which directly impacts model selection.
## 1 Introduction
Prior to usage, predictive AI models in health are usually evaluated by comparing model predictions on a held-out test set annotated with a corresponding known ground truth. In a supervised classification context, this typically assumes the availability of a single, unique and certain ground truth per example. In almost all benchmarks for supervised learning, this ground truth is derived by _deterministic aggregation_ of multiple human annotations, e.g., using simple majority voting or averaging, often ignoring any uncertainty or disagreement. For example, in medical diagnosis, doctors hired for labeling examples have to make complex decisions with very limited information. Where doctors would usually ask patients questions or perform additional tests, they instead have to come up with a working hypothesis of possible conditions, a so-called differential diagnosis. On ambiguous cases,
doctors with different levels of experience in medicine, different expertise and biases, constrained by an imperfect labeling tool, will come to different conclusions (Freeman et al., 2021).
Instead of ignoring this disagreement and pretending the ground truth to be fixed and certain, we argue that it is important to acknowledge and address this disagreement. This is to ensure the best possible outcome for the patient because disagreement often reflects individual challenges that when ignored pose significant risks. Specifically, we state that _ground truth is uncertain_. This uncertainty can be decomposed into _annotation uncertainty_ and _inherent uncertainty_. The former stems from an imperfect labeling process: even expert annotators can make mistakes; tasks might be subjective, annotators might be biased, inexperienced in using the labeling tool or simply lack experience in the labeling task. The inherent uncertainty, on the other hand, stems from limited observational information. In the example above, there might be ambiguous cases where inferring a clear condition for a patient solely on a single image or a short report, can be extremely difficult. This also includes what related work calls task ambiguity (Uma et al., 2022) (e.g., if classes are not agreed upon (Medeiros et al., 2023; Phene et al., 2019)). In practice, ground truth uncertainty is manifested through _disagreement among annotators_ (e.g., measured by inter-annotator disagreement) and it is challenging to attribute disagreement to annotation or inherent uncertainty (Abercrombie et al., 2023; Rottger et al., 2022). However, while recruiting a larger group of annotators, training them better or improving the labeling tool can reduce annotation uncertainty (usually at a very high cost), inherent uncertainty is generally irresolvable (Schaekermann et al., 2016). This also holds for adjudication (Duggan et al., 2021; Schaekermann et al., 2019). Moreover, deterministically aggregating more annotators may eliminate minority views (Field et al., 2021).
We can observe annotator disagreement in many standard datasets (see Figure 3 for an example
Figure 1: Ground truth uncertainty in supervised AI systems such as classification: The unobserved ground truth \(y^{*}\) is assumed to yield an observation \(x\). This observation can contain limited information giving rise to _inherent uncertainty_ where the true label might not be clearly identifiable purely from \(x\). Nevertheless, experts are asked to provide annotations \(b^{1},\ldots,b^{R}\) based on \(x\). These are subject to _annotation uncertainty_ caused by the lack of reliable annotations. See Figure 5 for a concrete example in skin condition classification. These annotations are then typically (deterministically) aggregated and a single estimate \(y\) of the true label \(y^{*}\) is extracted. Evaluation of AI models uses the estimate \(y\) as ground truth, ignoring any ground truth uncertainty. In contrast, we use statistical aggregation to explicitly model distributions over ground truth labels, so-called plausibilities \(\lambda\), representing categorical distributions over classes. The distribution over \(\lambda\) captures annotation uncertainty; the plausibilities themselves may capture inherent uncertainty. We discuss how to measure annotation uncertainty and evaluate AI systems on top of these plausibilities.
on CIFAR10 (Krizhevsky, 2009; Peterson et al., 2019)). For many of these benchmarks, ground truth uncertainty is limited to a small fraction of examples while the ground truth of the majority of examples can be trusted. In medicine, in contrast, it is common that a significant portion of examples is subject to ground truth uncertainty, observed by high annotator disagreement (Schaekermann et al., 2016). For example, this has recently been shown in skin condition classification (Eng et al., 2019; Jain et al., 2021), but also holds beyond health, e.g., in toxicity classification (see Figure 9). As a result, the problem of ground truth uncertainty has been recognized in several previous works; see e.g. (Cabitza et al., 2020; Davani et al., 2022; Gordon et al., 2021; Leonardelli et al., 2023; Northcutt et al., 2021; Plank, 2022; Sculley, 2007; Uma et al., 2021). Often, however, the focus is on mitigating the symptoms of ground truth uncertainty rather than tackling it directly. For example, there is a large body of work on dealing with label noise (Northcutt et al., 2021). In contrast, we believe that many instances of label errors stem from ignoring the underlying disagreement. In cases where ground truth uncertainty is modeled explicitly, related work focuses on training (Guan et al., 2018; Rodrigues and Pereira, 2018; Welinder et al., 2010) but still assumes certain ground truth for evaluation. This is despite Maier-Hein et al. (2018) explicitly highlighting the impact of ignoring annotator disagreement in evaluation on model selection and ranking. As illustrated in (Gordon et al., 2021), ignoring disagreement by deterministically aggregating annotations may lead to misleading and fragile results when assessing the future performance of AI systems.
We propose here a general framework for measuring and evaluating with ground truth uncertainty based on _statistical aggregation_ of annotations. Essentially, we pose aggregation as a posterior inference task following Figure 1: given multiple annotations and a prior knowledge of _annotator reliability_, we infer so-called _plausibilities_. In a classification setting, these plausibilities represent categorical distributions over classes. The variation in plausibilities captures annotation uncertainty. The plausibilities themselves capture inherent uncertainty depending on the entropy of the corresponding categorical distributions. Reliability can be thought of as a model parameter that provides a prior over the expected variation in plausibilities (cf. Figure 2). It could be informed by domain experts or tuned on data (similar to work in crowd sourcing (Yan et al., 2014; Zheng et al., 2017)). However, as quantifying annotator reliability is difficult, we assume it to be a free parameter during evaluation. Then, we propose a measure of _annotation certainty_ to quantify the uncertainty of any label being the true ground truth. This allows us to quantify annotation uncertainty on individual examples as well as on whole datasets. In addition, we present _uncertainty-adjusted_ variants of common classification metrics such as top-k accuracy or average overlap. Altogether, we provide a comprehensive strategy to evaluate AI systems for health under ground truth uncertainty.
In this paper, we apply our framework to skin condition classification from images in dermatology following the setting of (Liu et al., 2020). Here, annotations are expressed as differential diagnoses, i.e., partial rankings. Because classifying skin conditions purely from images is an incredibly difficult task, there is significant disagreement among annotations for a large portion of the dataset. Following (Liu et al., 2020), we consider inverse rank normalization (IRN) as baseline deterministic aggregation method. IRN can be thought of as providing a plausibility point estimate and prior work typically used the top-1 label as ground truth for evaluation - ignoring both annotation and inherent uncertainty. Instead, we propose two alternative statistical models: a probabilistic interpretation of IRN and a Plackett-Luce-based model (Luce, 2012; Plackett, 1975) specifically adapted to partial rankings. Our experiments highlight the high annotation uncertainty in this setting. Moreover, they show that previous IRN-based evaluation significantly over-estimates classifier performance and disregards large variations in performance due to ground truth uncertainty, hindering model selection. We discussed these results with dermatologists and highlight medical implications such as the inability to clearly categorize cases by risk.
## 2 Framework for evaluation under uncertain ground truth
This section introduces our framework for evaluation of AI systems with ground truth uncertainty. While this paper focuses on a case study in dermatology, we present our framework in general terms and discuss technical details of applying it to the differential diagnosis annotations from (Liu et al., 2020) in Appendix B. We start by introducing notation and give an intuition of our approach on a toy example as well as a standard image recognition task, namely CIFAR10 (Krizhevsky, 2009). We then formalize this intuition by presenting the statistical model we use for modeling how annotator opinions are aggregated. Based on this model, we present measures for annotation uncertainty as well as uncertainty adjusted performance metrics for evaluating AI models.
### Notation and introductory examples
For illustration and introducing notation, we consider the synthetic toy dataset of Figure 2 (left). Here, observations \(x\) are illustrated as one-dimensional on the x-axis and we plot the true distribution \(p(y|x)\) for three different classes (blue, orange and violet) on the y-axis. We highlight three examples, (i), (ii) and (iii), where the latter two are inherently ambiguous, i.e., the corresponding distribution \(p(y|x)\) is not crisp. This is intuitively visualized when plotting the distributions \(p(y|x)\) on a 3-simplex where the corners would correspond to crisp distributions (middle left). Of course, we never observe the true \(p(y|x)\) for these points. Instead, we assume access to a finite set of expert annotations \(b\). For simplicity, in this example we assume these annotations to be single labels (sampled from the true \(p(y|x)\)). Then, we aggregate these opinions to obtain an approximation \(\hat{\lambda}\) of \(p(y|x)\) and select a label to use as ground truth based on, e.g., majority voting. This represents how labels for many common benchmarks such as CIFAR, ImageNet (Russakovsky et al., 2015) or Hateful Memes (Kiela et al., 2020) have been obtained. We refer to \(\hat{\lambda}\) as _plausibilities_ as they construct a distribution \(p(y|\hat{\lambda})\) over labels from which the \(\arg\max\) corresponds to the majority voted label. However, this approach ignores any uncertainty present in the annotations \(b\).
Instead, we assume a distribution \(p(\lambda|b,x)\) over plausibilities. Here, we use a Dirichlet distribution with concentration parameters reflecting the annotator opinions as well as a prior _reliability_ parameter (\(\gamma\) in Figure 2; also see Appendix A for details). This reliability parameter will quantify our a priori
Figure 2: Illustration of annotation certainty on a toy dataset. Left: Synthetic dataset showing one-dimensional observations \(x\) and corresponding true distribution \(p(y|x)\) over labels \(y\) for three imaginary labels Benign1, Benign2 and Cancer. Example (ii) is particularly ambiguous in the sense that \(p(y|x)\) is not crisp. Middle left: For illustration, we plot the distribution \(p(y|x)\) on the 3-simplex. Middle and middle right: Modeling aggregation of annotator opinions \(b\) statistically, see text, allows us to sample plausibilities \(\lambda^{(m)}\sim p(\lambda|b,x)\). The spread of these plausibilities around the actual true distributions \(p(y|x)\) captures the annotation uncertainty. This is influenced by the annotations as well as a prior _reliability_ parameter \(\gamma\). This parameter reflects our a priori trust in the annotators. Right: Measuring how often the top-\(1\) label changes given plausibility samples \(\lambda^{(m)}\), we can compute _annotation certainty_ for all examples. We plot annotation certainty across different reliabilities \(\gamma\) (black to gray) indicating the effect of different prior trust levels.
trust in the annotators. For example, we expect reliability to increase with the number of annotators or their expertise and training. However, choosing or estimating reliability can be challenging or require additional information. Instead, we take the reliability to be a free parameter, allowing us different views on the data1. This is illustrated in Figure 2 (middle and middle right), showing plausibilities \(\lambda^{(m)}\sim p(\lambda|b,x)\), \(m\in[M]\), sampled from the statistical aggregation model on the 3-simplex. The spread in these plausibilities represents annotation uncertainty and can be reduced using a higher reliability. The position on the simplex represents inherent uncertainty which is lower for plausibilities close to the corners such as (i) and higher for plausibilities close to the center such as (ii). Then, we compute the top-1 label for every sampled \(\lambda^{(m)}\), i.e., the label with the largest plausibility \(\arg\max_{k\in[K]}\lambda_{k}^{(m)}\). This allows us to measure _annotation certainty_ (defined formally in Section 2.3) - if the top-1 label is always the same label, there is no annotation uncertainty. Specifically we can measure the fraction of plausibilities \(\lambda^{(m)}\) among all \(M\) samples where the top-1 label is \(k\). Taking the maximum over all labels \(k\) defines _annotation certainty_. In Figure 2 (middle) we plot this annotation certainty for different reliabilities (black and gray) and we clearly see that annotation certainty is consistently low for example (ii), while for (iii) annotation certainty increases with higher reliability. This also allows us to easily summarize annotation uncertainty across the whole dataset.
Footnote 1: We use a single global reliability parameter across all annotators for simplicity and evaluation in Section 3.
To move from this toy example to a real benchmark, we consider CIFAR10 (Krizhevsky, 2009) with the human annotations from (Peterson et al., 2019). As above, each annotator provides a single label such that we can apply the same methodology. Figure 3 (left) reports the corresponding annotation certainty alongside an uncertainty-adjusted version of accuracy. As can be seen, annotators agree for the majority of examples, resulting in certainties close to 1. Nevertheless, on roughly 178 test examples, certainty is below 0.99. As there are 50 annotators per example with only 10 classes, it is likely that these are actually difficult cases, i.e. cases with inherent uncertainty, as confirmed in Figure 3. This corresponds to roughly 0.2% of the examples. Interestingly, recent improvements in accuracy on CIFAR10 are often smaller than 0.2%2. To measure accuracy while taking annotation uncertainty into account, we measure how often the original CIFAR10 ground truth labels coincide with the top-1 labels obtained from sampled plausibilities. We call this _uncertainty-adjusted accuracy_ (formally defined in Section 2.4) and, unsurprisingly, observe that the original labels of (Krizhevsky, 2009) (when interpreted as predictions) often perform poorly for examples with high annotation uncertainty (red in Figure 3). We expect the impact of ground truth uncertainty to be significantly more pronounced in settings with more disagreement, especially in our dermatology case study (Liu et al., 2020), but also in other domains. For example, Appendix A includes an example on Wikipedia Toxicity (Wulczyn et al., 2017) and we refer to (Leonardelli et al., 2021) for more examples in NLP.
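To make the aggregation step concrete for this single-label setting, the sketch below draws plausibility samples from a Dirichlet whose concentration combines the per-class annotation counts with a reliability parameter. The exact parameterization used in the paper's appendix may differ, so treat the concentration formula as an illustrative assumption.

```python
import numpy as np

def sample_plausibilities(label_counts, reliability, num_samples=1000, seed=0):
    """Draw plausibility vectors lambda ~ p(lambda | b) for one example.

    label_counts: per-class annotation counts b (e.g., from 50 CIFAR10-H annotators).
    reliability:  prior trust in the annotators; larger values concentrate the
                  Dirichlet around the empirical label distribution (assumed form).
    """
    rng = np.random.default_rng(seed)
    counts = np.asarray(label_counts, dtype=float)
    concentration = 1.0 + reliability * counts / counts.sum()
    return rng.dirichlet(concentration, size=num_samples)

# 10 classes, 50 annotators, mild disagreement between two classes
counts = [0, 0, 0, 30, 18, 0, 2, 0, 0, 0]
lam = sample_plausibilities(counts, reliability=50.0)
top1 = lam.argmax(axis=1)
print(np.bincount(top1, minlength=10) / len(top1))   # how often each class is the sampled top-1 label
```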
Figure 3: We plot _annotation certainty_ and _uncertainty-adjusted_ (UA) accuracy for CIFAR10-H (Krizhevsky, 2009; Peterson et al., 2019), highlighting the 1000 examples with lowest annotation certainty. There is a small set of 178 images with certainty below 99% for which annotators tend to disagree. Given the large number of annotators (50), this likely indicates examples with inherent uncertainty, as shown for two examples on the right (images and histogram of annotations).
Figure 4 | Graphical model corresponding to the statistical aggregation model at the core of our framework: This describes the joint distribution of the observation \(x\), plausibilities \(\lambda\), annotations \(b=(b^{1},\ldots,b^{R})\) and label \(y\). We observe \(x\) and \(b\), from which one can infer the plausibilities \(p(\lambda|b,x)\) and the label \(p(y|\lambda)\); see text. As discussed in the text, in order to simplify inference of \(\lambda\), we assume a simplified model with \(p(\lambda|b,x)=p(\lambda|b)\).
### Statistical model
The statistical model \(p(\lambda|b,x)\) informally introduced above and summarized in Figure 4 is at the core of our framework. Essentially, we propose to replace deterministic aggregation of annotator opinions with a statistical model. To this end, we view aggregation as computing the posterior distribution \(p(\lambda|b,x)\) over plausibilities, given annotations \(b\) and observations \(x\). In all the examples discussed within this paper, we make the simplifying assumption \(p(\lambda|b,x)=p(\lambda|b)\) such that plausibilities are inferred solely from the annotator opinions. Then, plausibilities represent distributions over labels:
\[p(y|b,x)=\int p(y|\lambda)p(\lambda|b,x)\mathrm{d}\lambda. \tag{1}\]
We compute this posterior by specifying the annotation process, i.e., \(p(b|\lambda)\) and assuming a prior \(p(\lambda)\) independent of the input \(x\). The annotation process specifies how we expect experts to provide their annotations if we knew the underlying distribution \(\lambda\) over labels. Additionally, we assume a _reliability_ parameter as part of our statistical model. In practice, this often corresponds to a temperature parameter in \(p(b|\lambda)\) (e.g., \(\gamma\) in our toy example from Figure 2). However, we interpret it as quantifying the prior trust we put in the annotators. As fixing this parameter based on domain expertise or data is challenging, we treat it as a free parameter that is to be explored during evaluation.
On CIFAR10, we assumed each annotator to provide the top-1 label. Thus, it was simple to derive the posterior in closed form for a Dirichlet prior distribution. Here, the plausibilities \(\lambda\) explicitly correspond to the categorical distribution over classes \(p(y|x)\). In other cases, where annotations do not directly match the label space (see Appendix A for examples), this statistical model might be more complex. For our case study, a skin-condition classification problem detailed in Section 3, the expert annotations are partial rankings instead of top-1 labels. In this case, the plausibilities \(\lambda\) approximate the categorical distribution \(p(y|x)\) as for CIFAR10 but we will rely on a Plackett-Luce model for \(p(b|\lambda)\)(Luce, 2012; Plackett, 1975). Unfortunately, this means that the posterior \(p(\lambda|b)\) is not available in closed-form so specific techniques are needed to sample from it; see Appendix B.
Our model in Figure 4 comes with assumptions and limitations that are important to highlight. First, we assume the annotators to be conditionally independent given the plausibilities. This is a simplification and there is significant work in crowd sourcing and truth discovery considering alternative models (Yan et al., 2014). Second, as discussed previously, in all our examples, we consider a simplified version of the model in Figure 4, assuming conditional independence \(p(\lambda|b,x)=p(\lambda|b)\). This makes inferring the posterior over \(\lambda\) easier but clearly reduces our ability to model input dependent uncertainty and thereby disentangle the different sources of uncertainty outlined above. This means that \(p(\lambda|b)\) can only capture the uncertainty present in the annotations. This reduces our ability to decide whether the uncertainty stems mainly from limited observational information or from the annotation process. For example, it is difficult to distinguish between an inherently ambiguous example where we observed low disagreement by chance and an actually unambiguous example with low disagreement. However, this distinction would be extremely valuable, e.g., to inform relabeling. Finally, it is important to realize that this introduces a model assumption in our
certainty and performance metrics. For example, there is no guarantee that \(p(y|b,x)\) converges to the true \(p(y|x)\) as the number of annotators goes to infinity as the annotation model can be mis-specified. However, all existing benchmarks are based on an assumed annotation model. That this assumption is implicit and often unacknowledged does not change the fact that it has tangible effects on evaluation.
### Annotation certainty
To formalize our measure for annotation uncertainty as described above, we consider a fixed but arbitrary label \(y\in[K]\). This could be a deterministically aggregated ground truth as on CIFAR10 or, in Section 2.4, a prediction from a classifier. Then, we informally define the certainty for a specific label \(y\) as the probability that \(y\) corresponds to the top-1 label of the plausibilities \(\lambda\), given the input \(x\) and annotations \(b\) as well as the chosen statistical model. More formally, we can write this as an expectation over \(p(\lambda|b,x)\):
\[\text{Certainty}(y;b,x)=p(y=\arg\max_{j}\lambda_{j})=\mathbb{E}_{p(\lambda|b, x)}\Big{\{}\delta\left[y=\arg\max_{j}\lambda_{j}\right]\Big{\}}. \tag{2}\]
where \(\delta[\cdot]\in\{0,1\}\) is the indicator function for an event. In practice, we compute it using a Monte Carlo average
\[\text{Certainty}(y;b,x)\approx\frac{1}{M}\sum_{m=1}^{M}\delta\left[y=\arg\max_{j}\lambda_{j}^{(m)}\right],\quad\text{where }\lambda^{(m)}\overset{\text{i.i.d.}}{\sim}p(\lambda|b,x). \tag{3}\]
This estimates certainty for a specified label \(y\). To summarize annotation certainty for an example, we compute the maximum certainty across all possible labels \(y\):
\[\text{AnnotationCertainty}_{1}(b,x)=\max_{y\in[K]}\text{Certainty}(y;b,x). \tag{4}\]
Here, the subscript 1 indicates that annotation certainty refers to the top-1 label from plausibilities. However, we can similarly generalize this to sets of labels \(C\), e.g., top-\(j\) sets with \(|C|=j\):
\[\text{AnnotationCertainty}_{j}(b,x)=\max_{C:\,\text{top-}j\text{ labels}}\text{Certainty}(C;b,x) \tag{5}\]
\[\text{with}\quad\text{Certainty}(C;b,x)=\mathbb{E}_{p(\lambda|b,x)}\left\{\delta\left[C=Y_{\text{top-}|C|}(\lambda)\right]\right\}. \tag{6}\]
This certainty measure can also be estimated using \(M\) Monte Carlo samples. This prevents us from having to enumerate all \(K!/(j!(K-j)!)\) subsets of size \(j\) as seemingly implied by (5), which would be prohibitive. Instead, we only need to consider top-\(j\) sets corresponding to the \(M\) samples. However, larger \(j\) might require a higher \(M\) for reliable estimates of annotation certainty. To measure annotation certainty of a dataset, we can average AnnotationCertainty(\(b,x\)) across examples.
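A minimal sketch of these Monte Carlo estimates (Eqs. 3-5), assuming the plausibility samples have already been drawn from the aggregation model; only the top-\(j\) sets observed among the samples need to be enumerated.

```python
import numpy as np
from collections import Counter

def certainty_of_set(plausibility_samples, label_set):
    """Eq. (6): fraction of samples whose top-|C| set equals the given label set."""
    target = frozenset(label_set)
    j = len(target)
    hits = [frozenset(np.argsort(lam)[::-1][:j].tolist()) == target
            for lam in plausibility_samples]
    return float(np.mean(hits))

def annotation_certainty(plausibility_samples, j=1):
    """Eq. (5): maximum certainty over the top-j sets observed among the samples."""
    observed = Counter(frozenset(np.argsort(lam)[::-1][:j].tolist())
                       for lam in plausibility_samples)
    return observed.most_common(1)[0][1] / len(plausibility_samples)

rng = np.random.default_rng(0)
samples = rng.dirichlet([8.0, 6.0, 1.0, 0.5, 0.5], size=1000)   # M = 1000, K = 5
print(certainty_of_set(samples, {0}))       # certainty that class 0 is the top-1 label
print(annotation_certainty(samples, j=2))   # top-2 annotation certainty
```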
There are some caveats of our measure of annotation certainty to be aware of, most of them depending on the aggregation procedure \(p(\lambda|b,x)\). For example, Equation (5) is always 1 for any input where the model \(p(\lambda|b,x)\) is a point mass; i.e., it gives a single deterministic point estimate \(\hat{\lambda}\) (subject to there being ties, which we ignore for simplicity). This is, by construction the case in any deterministic aggregation procedure for human annotations. Also, with an infinite number of annotators \(p(\lambda|b,x)\) will typically converge to a point mass \(\hat{\lambda}\) such that annotation certainties converge to 1. However, this does not mean that the plausibilities correspond to an unambiguous one-hot distribution over classes. Finally, annotation certainties for the same problem depend on the used statistical aggregation process and the reliability we attribute to annotators. This also means that annotation certainty is not a metric that is to be minimized. This is trivially possible, for example,
by putting infinite trust in annotators and working with a point estimate \(\hat{\lambda}\) (infinite reliability \(\gamma\) in Figure 2).
Considering the 3-simplex in Figure 2, annotation certainty measures the impact of the spread in plausibilities on the top-\(j\) labels. Thereby, annotation certainty implicitly also captures inherent uncertainty: even a small spread in plausibilities changes the top-\(j\) labels more often for inherently uncertain examples, i.e., plausibilities close to the center of the simplex. More generally, this means that annotation certainty will be high for easy, unambiguous examples that all annotators agree on. The reverse, however, is not necessarily true: low annotation certainty _can_ indicate high inherent uncertainty but it does not necessarily have to. For example, many experienced annotators consistently disagreeing might indicate high inherent uncertainty; however, annotators might also disagree for other reasons such as unclear annotation instructions, even if the example is generally easy to classify. This also explains our naming: annotation certainty explicitly measures annotation uncertainty and only implicitly accounts for inherent uncertainty. Partly, this also stems from our assumption that \(p(\lambda|b,x)=p(\lambda|b)\) in Section 2.2, making it more difficult to disentangle annotation and inherent uncertainty.
### Uncertainty-adjusted accuracy
Given a measure of annotation certainty, we intend to take it into account when evaluating the performance of AI models. Specifically, we assume the label set \(C\) used in Equation (5) to be a prediction set from a classifier - for example, corresponding to the top-\(k\) logits of a deep neural network. Then, we wish to measure the quality of this prediction set. If we knew the true plausibilities \(\lambda^{*}=p(y|x)\), our ground truth target \(Y_{\text{top-}j}(\lambda^{*})\) would be chosen as the top-\(j\) elements of \(\lambda^{*}\). The quality of the prediction can then be computed by evaluating the indicator of the event that the target set is contained within the prediction set, \(\delta\left[Y_{\text{top-}j}(\lambda^{*})\subseteq C_{\text{top-}k}(x)\right]\in\{0,1\}\) (assuming \(j\leq k\) for simplicity). For example, the standard top-\(k\) accuracy is an estimate of this probability where we specifically take \(j=1\) and define
\[\text{Accuracy}_{\text{top-}k}=p(Y_{\text{top-}1}(\lambda^{*})\subseteq C_{ \text{top-}k}(x))=\mathbb{E}_{p(\lambda)}\{\delta\left[Y_{\text{top-}1}( \lambda^{*})\subseteq C_{\text{top-}k}(x)\right]\}. \tag{7}\]
However, we acknowledge that we cannot know \(\lambda^{*}\) and that there is uncertainty on \(\lambda\). Luckily, given \(p(\lambda|b,x)\), we can quantify this uncertainty using Certainty from Equation (3). Integrating this into the above definition of accuracy yields the proposed _uncertainty-adjusted_ version:
\[\text{UA-Accuracy}_{\text{top-}k}=p(Y_{\text{top-}1}(\lambda)\subseteq C_{ \text{top-}k}(x))=\mathbb{E}_{p(x)}\mathbb{E}_{p(\lambda|b,x)}\{\delta[Y_{ \text{top-}1}(\lambda)\subseteq C_{\text{top-}k}(x)]\}. \tag{8}\]
Note that this metric now implicitly depends on the annotations \(b\) through our statistical model \(p(\lambda|b,x)\). As with annotation certainty, this metric mainly accounts for annotation uncertainty and is only indirectly aware of inherent uncertainty (see Section 2.3). In order to explicitly account for inherent uncertainty, i.e., the plausibilities \(\lambda\) not having a clear top-\(1\) label, we can compare the top-\(j\) target set \(Y_{\text{top-}j}(\lambda)\) with the top-\(k\) prediction set \(C_{\text{top-}k}(x)\) set. However, to avoid extensive notation, we consider only target sets having the same cardinality as the prediction set. We call this metric _uncertainty-adjusted top-\(k\) set accuracy_:
\[\text{UA-SetAccuracy}=p(Y_{\text{top-}|C|}(\lambda)=C(x))=\mathbb{E}_{p(x)} \mathbb{E}_{p(\lambda|b,x)}\{\delta\left[Y_{\text{top-}|C|}(\lambda)=C(x) \right]\}. \tag{9}\]
This deviates slightly from the notion of standard top-\(k\) (vs. top-\(1\)) accuracy, but can be more appropriate when evaluating under uncertain ground truth where a large part of the uncertainty stems from inherent uncertainty. In such cases, the top-\(1\) label will often be a poor approximation of the ground truth. In practice, we approximate these uncertainty-adjusted metrics using a Monte Carlo estimate.
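A small sketch of these Monte Carlo estimates for a single example, assuming sampled plausibilities and a classifier's prediction set are given; dataset-level metrics average the per-example quantities.

```python
import numpy as np

def ua_topk_accuracy(plausibility_samples, prediction_topk):
    """Eq. (8): how often the sampled top-1 label lies in the top-k prediction set."""
    pred = set(prediction_topk)
    return float(np.mean([int(np.argmax(lam)) in pred for lam in plausibility_samples]))

def ua_set_accuracy(plausibility_samples, prediction_set):
    """Eq. (9): how often the sampled top-|C| target set equals the prediction set."""
    pred = frozenset(prediction_set)
    k = len(pred)
    return float(np.mean([frozenset(np.argsort(lam)[::-1][:k].tolist()) == pred
                          for lam in plausibility_samples]))

rng = np.random.default_rng(0)
samples = rng.dirichlet([6.0, 5.0, 1.0, 0.5], size=1000)   # plausibilities for one example
print(ua_topk_accuracy(samples, prediction_topk=[0, 1, 3]))
print(ua_set_accuracy(samples, prediction_set=[0, 1]))
```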
Uncertainty-adjusted accuracy reduces to standard accuracy as soon as there is no annotation uncertainty or it is ignored through deterministic aggregation (i.e., when a point estimate \(\hat{\lambda}\) is used). As with annotation certainty, uncertainty-adjusted metrics primarily capture annotation uncertainty and do not necessarily capture inherent uncertainty if it is not reflected in the annotations. This is particularly important to highlight for uncertainty-adjusted top-\(k\) accuracy, which only considers the top-\(1\) label from sampled plausibilities \(\lambda^{(m)}\). For very ambiguous examples, this can ignore inherent uncertainty if the annotation uncertainty is low such that most plausibilities \(\lambda^{(m)}\) agree on the top-\(1\) label. To some extent this is mitigated by using our uncertainty-adjusted set accuracy. However, as our approach of constructing uncertainty-adjusted metrics is very general, this can also be addressed by considering different "base" metrics.
For example, we illustrate the applicability of our framework on a class of ranking metrics: Specifically, for large prediction set sizes \(k=|C|\), the annotation certainty can be very low as achieving exact match of large prediction and target sets can be a rare event. Thus, it seems natural to consider less stringent metrics based on the intersection of prediction set \(C\) and ground truth set \(Y\) instead of requiring equality:
\[\text{Overlap}(C,Y)=\frac{|C\cap Y|}{|C|}. \tag{10}\]
As we do not know the target ground truth set, we use instead the expectation under the aggregation procedure \(p(\lambda|b,x)\) where we define an uncertainty adjusted overlap as
\[\text{UA-Overlap}(C,x)=\mathbb{E}_{p(\lambda|b,x)}\left\{\text{ Overlap}(C,Y_{\text{top}-|C|}(\lambda))\right\}. \tag{11}\]
Now, following (Webber et al., 2010; Wu and Crestani, 2003), we consider overlaps of increasingly larger prediction and ground truth sets and define uncertainty adjusted average overlap as
\[\text{UA-AverageOverlap@L}=\mathbb{E}_{p(x)}\left\{\frac{1}{L}\sum_{k=1}^{L} \text{UA-Overlap}(C_{\text{top-}k}(x),x)\right\}. \tag{12}\]
Similar uncertainty-adjusted variants can be defined for other ranking-based metrics (Sakai, 2013), e.g., Kendall's tau or Spearman's footrule (Fagin et al., 2003, 2004; Kumar and Vassilvitskii, 2010; Shieh, 1998; Vigna, 2015). In general, we expect that uncertainty-adjusted variants can easily be defined for almost all relevant metrics in machine learning.
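A sketch of the overlap-based metrics (Eqs. 10-12) for a single example, assuming the classifier provides a ranking of classes and plausibility samples are available; averaging over examples gives the dataset-level metric.

```python
import numpy as np

def overlap(pred_set, target_set):
    """Eq. (10): |C ∩ Y| / |C|."""
    return len(set(pred_set) & set(target_set)) / len(pred_set)

def ua_average_overlap(plausibility_samples, ranked_predictions, L=3):
    """Eqs. (11)-(12) for one example: overlap of top-k prediction and sampled
    top-k target sets, averaged over k = 1..L and over plausibility samples."""
    per_sample = []
    for lam in plausibility_samples:
        ranked_targets = np.argsort(lam)[::-1]
        per_sample.append(np.mean([overlap(ranked_predictions[:k], ranked_targets[:k])
                                   for k in range(1, L + 1)]))
    return float(np.mean(per_sample))

rng = np.random.default_rng(0)
samples = rng.dirichlet([6.0, 5.0, 1.0, 0.5, 0.5], size=1000)
print(ua_average_overlap(samples, ranked_predictions=[1, 0, 2, 3, 4], L=3))
```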
## 3 Case study: skin condition classification
We demonstrate our framework on a case study in dermatology: skin condition classification from images (Liu et al., 2020). Here, several dermatologists provide annotations in the form of differential diagnoses, i.e., partial rankings of classes (see Figure 5). Based on previous work (Eng et al., 2019; Jain et al., 2021), we expect significant disagreement among annotators indicating a high degree of uncertainty in the ground truth. Following (Liu et al., 2020), we initially use inverse rank normalization (IRN) for deterministic aggregation. We then introduce two statistical aggregation models: a probabilistic version of IRN (PrIRN) as well as a Plackett-Luce (Luce, 2012; Plackett, 1975) (PL) based model. Both models include a reliability parameter described in Section 2 that reflects our trust in the annotators. Selecting a range of reliabilities, we evaluate annotation certainty as well as uncertainty-adjusted accuracy of classifiers in a range of different trust scenarios. Across reliabilities, we observe very high annotation uncertainty in the dataset of (Liu et al., 2020) and discuss implications for evaluating and selecting classifiers.
### Dataset and methods
The dataset of (Liu et al., 2020) includes \(K=419\) different conditions which are to be predicted from three \(448\times 448\) pixel images taken with consumer-grade cameras. This is illustrated in Figure 5 (top row), showing one of the input images alongside the corresponding annotations from six dermatologists. Each annotation includes a variable number of conditions combined with a confidence value. As these confidence values are not comparable across dermatologists, previous work (Azizi et al., 2022; Eng et al., 2019; Roy et al., 2022) uses them to obtain (partial) rankings of conditions (made explicit by braces in Figure 5). Details and statistics on the annotations are provided in Appendix C. Following (Liu et al., 2020), we use inverse rank normalization (IRN) to obtain a point estimate of plausibilities. Specifically, IRN weights each condition by its inverse rank, i.e., \(1/j\) for the \(j\)-th condition, sums across annotations and re-normalizes. We use these plausibilities to train several classifiers using the cross-entropy loss by varying architecture and hyper-parameters following (Roy et al., 2022). We randomly selected four of these classifiers for evaluation and comparison. In prior work, evaluation is based purely on the top-1 IRN label (i.e., the classifier is evaluated against the argmax of IRN plausibilities), ignoring both evidence of lower ranked conditions as well as the uncertainty induced through the aggregated annotations.
In contrast, we propose two alternative _statistical_ aggregation models following Section 2: a probabilistic version of IRN (PrIRN), and a Plackett-Luce (PL) based model. PrIRN interprets the IRN plausibilities \(\lambda\) as a maximum likelihood estimate of the parameters of a multinomial distribution. The posterior \(p(\lambda|b)\) for sampling plausibilities is accordingly defined to be a Dirichlet distribution \(\mathcal{D}(y\lambda)\) where \(y\) is the aforementioned reliability parameter. PL, in turn, is a standard model for rankings of conditions that we extend to handle partial rankings. PL assumes a categorical distribution over conditions from which annotators sample without replacement. Assuming independent gamma priors on plausibilities, a Gibbs sampler allows us to efficiently sample plausibilities from the corresponding posterior. Similar to IRN, we consider the maximum likelihood (ML) estimate under the PL likelihood as a deterministic counterpart to modeling the full posterior. Like PrIRN, the PL model can also accommodate a reliability parameter that specifies our trust in the annotators. In both models, higher reliability will result in higher annotation certainty, cf. Equation (2). As it is generally unclear what the "right" reliability should be, we perform experiments across a range of reliabilities, corresponding to different scenarios of how much we trust our annotators. We refer to Appendix B for an in-depth description of both approaches, including several technical contributions in applying PL to partial rankings.
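To illustrate PrIRN, the sketch below draws plausibility samples from the Dirichlet posterior described above; it assumes strictly positive IRN plausibilities (a small epsilon would be needed otherwise), and the `reliability` argument corresponds to the reliability parameter that scales the Dirichlet concentration.

```python
import numpy as np

def sample_prirn(irn_lambda, reliability, num_samples=1000, seed=0):
    """Draw plausibility vectors from the Dirichlet posterior D(reliability * lambda).
    Higher reliability concentrates samples around the IRN point estimate,
    i.e., more trust in the annotators."""
    rng = np.random.default_rng(seed)
    alpha = reliability * np.asarray(irn_lambda)   # requires strictly positive entries
    return rng.dirichlet(alpha, size=num_samples)  # shape: (num_samples, K)

# Samples concentrate around the point estimate as reliability grows.
lam = np.array([0.5, 0.3, 0.2])
for reliability in (10, 100, 1000):
    print(reliability, sample_prirn(lam, reliability).std(axis=0).round(3))
```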
Plausibilities from PrIRN and PL for a particularly difficult case are shown in Figure 5 (second row). Compared to ML and IRN (blue bars), there is clearly significant variation in the sampled plausibilities (green dots) to the extent that the two most likely conditions (Hemangioma and Melanoma) may swap their positions. This also impacts evaluation: considering the two prediction sets from Figure 5 (third row), the first model (A) does _not_ include both conditions, while the second model (B) does. Concretely, model A includes the top-1 label in only 70% of the sampled plausibilities, while model B always includes the top-1 label (100%), cf. Figure 5 (fourth row). In a nutshell, this summarizes our
Figure 6: For PL (top) and PrIRN (bottom), we plot annotation certainty as well as uncertainty-adjusted top-3 accuracy, top-2 and top-3 _set_ accuracy and top-3 average overlap (y-axis). We consider four classifiers, A to D, across various reliabilities (x-axis). We omit exact reliability values as they are inherently not comparable between PL and PrIRN. Annotation certainty and evaluation metrics are averages across all examples and \(M=1000\) plausibility samples. The shaded region additionally reports the standard deviation of metrics after averaging across plausibility samples; colored \(\times\) mark performance against point estimates (ML for PL, IRN for PrIRN). We find that reliability severely impacts evaluated performance and that there is a significant variation induced by annotation uncertainty. Moreover, classifiers tend to perform significantly worse when evaluated against more than the top-1 labels using set accuracy.
evaluation methodology on one specific case with fixed reliability, which we extend to the whole test set and across multiple reliabilities next.
### Results
Our main results focus on evaluating (top-k) annotation certainty (see Equation (5)) alongside uncertainty-adjusted accuracy (see Equation (8)) across classifiers and reliabilities. As reliabilities are inherently not comparable across PL and PrIRN, we omit specific values. Figure 6 (first column) highlights that average annotation certainty clearly increases with higher reliability, eventually reaching 1 at infinite reliability. Also, top-2 and top-3 annotation certainty is significantly lower than top-1 annotation certainty, meaning that there is significant uncertainty not only in the top-1 condition but also in lower ranked ones. This annotation uncertainty also has a clear impact on accuracy: Uncertainty-adjusted top-3 accuracy (second column) reduces significantly for lower reliability. As indicated by the shaded region, there is also high variation in accuracy across plausibility samples. In contrast, the ML and IRN plausibility point estimates, as used in previous work (Liu et al., 2020), typically overestimate performance and cannot provide an estimate of the expected variation in performance.
In order to account for inherent uncertainty in the evaluation, i.e., plausibilities not being crisp distributions, we additionally consider _set_ accuracy in Figure 6 (third and fourth columns). Strikingly, set accuracy is dramatically lower than standard accuracy, highlighting that the trained classifiers perform poorly on conditions likely ranked second or third. This is further emphasized by the fact that the reduction in accuracy is significantly larger than the corresponding drop in annotation certainty. Moreover, shifting focus to set accuracy also impacts the ranking of classifiers. This may have severe implications for hyper-parameter optimization and model selection, which are typically based purely on standard accuracy. We also observe more significant differences between using PrIRN and PL, e.g., in terms of absolute accuracy numbers, their variation, or differences across classifiers. This highlights the impact that the statistical aggregation model can have on evaluation. As we might not want to weight all top-3 ground truth labels equally, Figure 6 (last column) also shows top-3 uncertainty-adjusted average overlap, where the second and third conditions are weighted by \(\nicefrac{{1}}{{2}}\) and \(\nicefrac{{1}}{{3}}\), respectively. Besides resulting in generally higher numbers, this also reduces variation significantly.
In the following, we focus on a fixed, medium reliability (as annotated in Figure 6) and consider annotation certainty and uncertainty-adjusted accuracy across examples. Specifically, Figure 7 (left) plots annotation certainty from PL (blue) and PrIRN (red) over examples (sorted for PL): For at
Figure 7: For a fixed, medium reliability, we present annotation certainty and uncertainty-adjusted top-3 accuracy across examples and plausibility samples. Left: PL top-1 annotation certainty plotted against sorted examples (blue) in comparison to PrIRN (red). While there is a high correlation between PL and PrIRN, there can be significant differences for individual examples. Middle: Uncertainty-adjusted top-3 accuracy for model A against sorted examples (blue). For many examples, the classifier does not consistently include all possible top-1 ground truth labels in its predictions, resulting in values between 0 and 1. Right: Histogram plot of uncertainty-adjusted top-3 accuracy (averaged over examples) across the \(M=1000\) plausibility samples. Between worst- and best-case plausibilities, there can be up to 4% difference in accuracy.
least a quarter of the examples there is significant annotation uncertainty, i.e., top-1 annotation certainty is well below 1. We also observe that annotation certainty is strongly correlated between PL and PrIRN (correlation coefficient 0.9). This indicates that similar examples are identified as having high annotation uncertainty. However, on individual examples, there can still be a significant difference. Similarly, we found that annotation certainty correlates well with annotator disagreement (see Appendix D). Figure 7 (middle) also shows uncertainty-adjusted top-3 accuracy against (sorted) examples. Again, for at least a quarter of examples, uncertainty-adjusted accuracy lies in between 0 and 1, i.e., the top-3 prediction sets do not always include all possible top-1 ground truth labels. In Figure 7 (right), we also show what the variation across plausibility samples implies in terms of aggregate statistics. While Figure 6 depicted the variability across plausibility samples only through standard deviation error bars, these histograms clearly show that accuracy can easily vary by up to 4% between the best and worst case.
Besides performance evaluation across all 419 distinct conditions, previous work (Roy et al., 2022) also put significant focus on classifying risk categories, considering low, medium or high risk conditions. These categories are assigned to each condition independent of the actual case (e.g., Melanoma is a high-risk condition). As recommendations to users (e.g., whether the user should see a specialist) are similar for conditions in the same risk categories, it is often more important to correctly classify risk categories compared to individual conditions. Figure 8 (left), however, shows that these risk categories are also subject to significant uncertainty. This is made explicit by computing top-1 annotation certainty for risks categories (red) derived from the plausibilities over conditions. While this is generally higher than annotation certainty for conditions (blue) due to the smaller label space (3 risks vs. 419 conditions), annotation certainty remains low for many cases. This also has far-reaching consequences for evaluation. For example, evaluation metrics such as accuracy are often conditioned on high-risk cases. That is, for evaluation, we are interested in a classifier's accuracy only considering high-risk cases. This conditioning, however, is not well defined in light of this uncertainty. This is made explicit in Figure 8 (right) which plots _expected risk_: the expected risk assignment for cases, based on plausibility samples, after mapping risk levels to an ordinal scale; low = 0, medium = 1, and high = 2. Most cases do not yield crisp risk assignments as there is typically evidence for multiple risk categories present in the annotations.
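The expected risk shown in Figure 8 (right) can be computed directly from plausibility samples once each condition is mapped to its risk category. The sketch below shows one way to do this, assuming a list mapping condition indices to 'low'/'medium'/'high' strings; the exact aggregation used for the figure may differ.

```python
import numpy as np

RISK_LEVEL = {"low": 0, "medium": 1, "high": 2}

def expected_risk(plausibility_samples, condition_risks):
    """Map each condition to its ordinal risk level (low=0, medium=1, high=2)
    and average over both the condition distribution and plausibility samples."""
    risk_values = np.array([RISK_LEVEL[r] for r in condition_risks])
    per_sample = plausibility_samples @ risk_values  # expected risk per sample
    return float(per_sample.mean())
```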
### Discussion
We also qualitatively evaluated our framework in an informal study with two US board-certified dermatologists familiar with the labeling tool (Liu et al., 2020). Specifically, we discussed individual cases with particularly low annotation certainty by showing input images alongside meta information
Figure 8: Implications of ground truth uncertainty on risk categories. Left: Annotation certainty computed for risks (red) in comparison to conditions (blue). Going from 419 conditions to 3 risk levels clearly increases annotation certainty on average. But for many examples, risk categories remain very uncertain. Right: For many examples, there is evidence for multiple risk categories within the annotations. We plot a histogram of _expected risk_, the expected risk category given the plausibilities, i.e., distributions over conditions.
(sex, age, etc.) and the corresponding annotations (cf. Figure 5). Discussing these cases takes considerable time while the dermatologists try to understand how the annotators came to their respective conclusions. In most cases, the disagreement was attributed to inherent uncertainty, i.e., missing information, inconclusive images, etc. In only a few cases, the disagreement was attributed to annotator mistakes or annotation quality in general - e.g., inexperienced annotators, annotators ignoring meta information, etc. Again, this highlights the difficulty of disentangling annotation and inherent uncertainty in cases with high disagreement (as discussed in Sections 2.2 and 2.3) and is in line with related work on "meta-annotation" for understanding sources of disagreement (Bhattacharya et al., 2019; Sandri et al., 2023). However, this also highlights that our uncertainty-adjusted metrics appropriately take both sources of uncertainty into account.
The results for this case study indicate, using our annotation certainty measure, that a large portion of the dataset exhibits high ground truth uncertainty. The current approach (Liu et al., 2020) of deterministically aggregating annotations using IRN and then evaluating against the corresponding top-1 labels largely ignores this uncertainty. In our framework, using the PrIRN model, this implicitly corresponds to evaluation at _infinite_ reliability, i.e., full trust in all annotators. Instead, our approach to evaluation paints a more complete picture by computing _uncertainty-adjusted_ (top-k) accuracy across a range of reliabilities, corresponding to different trust "scenarios". In practice, this not only allows us to compare models across these different scenarios but also highlights the expected variation in performance. Moreover, we show that performance is always relative to the chosen aggregation model, as highlighted using our alternative Plackett-Luce (PL) (Luce, 2012; Plackett, 1975) based model. As feedback from dermatologists indicates that most of this ground truth uncertainty stems from inherently ambiguous cases, we also explored metrics considering more than the top-1 conditions for evaluation. Here, performance drops rather drastically, indicating potential negative consequences for patients when lower ranked conditions are not appropriate. For example, seemingly random conditions on the 2nd or 3rd place of the prediction set can easily lead to confusion or anxiety. Overall, we believe that our framework will help with model development and make model selection more robust and thereby positively influence patient outcomes.
## 4 Related work
Annotator disagreement has been discussed extensively and early on in medicine (Feinstein and Cicchetti, 1990; McHugh, 2012; Raghu et al., 2019; Schaekermann, 2020) as well as machine learning (Dawid and Skene, 1979; Smyth et al., 1994). Natural language processing, for example, has particularly strong work on dealing with disagreement (Abercrombie et al., 2023; Aroyo and Welty, 2014, 2015; Dumitrache et al., 2019; Reidsma and op den Akker, 2008; Rottger et al., 2022; Schaekermann et al., 2016), see (Pavlick and Kwiatkowski, 2019) for an overview. As crowdsourcing human annotations has become a standard tool in creating benchmarks across the field (Kovashka et al., 2016; Snow et al., 2008; Sorokin and Forsyth, 2008) - though not without criticism (Rottger et al., 2021) - most work focuses on resolving or measuring disagreement and aggregating annotations. Methods for measuring disagreement (Feinstein and Cicchetti, 1990; Powers, 2012; Uma et al., 2021) are often similar across domains. However, measures such as Fleiss'/Cohen's kappa (Cohen, 1960; Fleiss et al., 2003), percent agreement (McHugh, 2012), or intra-class correlation coefficient (Landis and Koch, 1977) are only applicable to annotations with single class responses such that generalized approaches (Braylan et al., 2022) or custom measures are used for more structured annotations (Pavlick and Kwiatkowski, 2019). Resolving disagreement is typically done _computationally_ (e.g., through majority vote). However, recent work has explored domain-specific _interactive_ approaches for resolving disagreement, involving discussions or deliberation (Bakker et al., 2022; Chen et al., 2019; Drapeau et al., 2016; Pradhan et al., 2022; Schaekermann, 2020; Schaekermann et al.,
2019, 2020a, 2020b; Silver et al., 2021) or relabeling (Sheng et al., 2008), and reducing disagreement by co-designing labeling with experts (Freeman et al., 2021). Recent work also considers properly modeling disagreement (Vitsakis et al., 2023) or performing meta analysis (Bhattacharya et al., 2019; Sandri et al., 2023), trying to understand sources of disagreement. For benchmarks, disagreement is generally addressed by aggregating labels from multiple annotators to arrive at what is assumed to be the single correct label. This can involve basic majority voting or more advanced methods (Carvalho and Larson, 2013; Dawid and Skene, 1979; de Marneffe et al., 2012; Gaunt et al., 2016; Pham et al., 2017; Tian et al., 2019; Warby et al., 2014), including inverse rank normalization (IRN) as discussed in this paper. Often, aggregation is also performed using probabilistic models from the crowdsourcing and truth-discovery literature (Bachrach et al., 2012; Chu et al., 2021; Dong et al., 2009; Gordon et al., 2022; Guan et al., 2018; Li et al., 2012; Rodrigues and Pereira, 2018; Wang et al., 2012; Welinder and Perona, 2010; Yin et al., 2008; Zhao et al., 2012), see (Yan et al., 2014; Zheng et al., 2017) for surveys. However, evaluation is often based on point estimates and the impact of annotator disagreement on evaluation is generally poorly understood (Gordon et al., 2021). While, e.g., (Collins et al., 2022; Reidsma and op den Akker, 2008) train models on individual annotators, (de Marneffe et al., 2012; Nie et al., 2020) perform evaluation on aggregated probabilities instead of top-1 labels, and (Gao et al., 2017) trains on label distributions rather than discrete aggregated labels, there is no common understanding for dealing with annotation uncertainty for evaluation. Instead, following (Plank, 2022), disagreement is often treated as label noise. Here, early work (Angluin and Laird, 1987; Kearns, 1998; Kearns and Li, 1993; Lawrence and Scholkopf, 2001) assumes uniform or class-conditional label noise, while more recent work (Beigman and Klebanov, 2009; Oyen et al., 2022) also considers feature-dependent or annotator-dependent noise. Popular methods try to estimate the label noise distributions (Hendrycks et al., 2018; Northcutt et al., 2019) in order to prune or re-weight examples. Such approaches have also been utilized to infer annotator confusion matrices (Tanno et al., 2019; Zhang et al., 2020), similar to annotator quality in crowdsourcing. We refer to (Chen et al., 2019; Zhang et al., 2022) for good overviews and note that there is also some similarity to partial label learning (Cour et al., 2011; Hullermeier and Beringer, 2005; Nguyen and Caruana, 2008; Wang et al., 2022).
Overall, work on handling annotation disagreement is very fragmented, as highlighted above and in recent surveys (Uma et al., 2021). Moreover, many works treat symptoms such as label noise rather than treating annotator disagreement as uncertainty in the ground truth. This is emphasized in recent position papers (Baan et al., 2022; Basile et al., 2021; Plank, 2022) that argue for common frameworks to deal with this challenge. Indeed, recent work (Belz et al., 2023; Cabitza et al., 2020; Maier-Hein et al., 2018; Sculley, 2007) demonstrates that many results in machine learning are not reproducible, in part due to annotation uncertainty. This has also been the basis for several workshops and challenges on directly learning with disagreement (Leonardelli et al., 2023). Closest to our work, (Gordon et al., 2021; Lovchinsky et al., 2020) propose methods to incorporate label disagreement into evaluation metrics. However, their work is limited to binary classification tasks. Moreover, the considered annotations are unstructured, i.e., annotators merely provide single labels. In contrast, our framework for evaluation with annotation uncertainty is independent of task, domain, or annotation format. Contrary to our findings, (Chen et al., 2021) argues that label noise in validation sets can still lead to reliable model selection on rather unambiguous datasets such as CIFAR10. Finally, (Collins et al., 2022) evaluates against labels aggregated from random subsets of annotators on CIFAR10-H, which can be seen as a bootstrapping approach to statistical aggregation in our framework.
## 5 Conclusion
In this paper, we proposed a framework for evaluating AI models under uncertain ground truth. We believe that ground truth uncertainty stems from annotation uncertainty as well as inherent uncertainty and is typically observed in terms of annotator disagreement: in almost all supervised learning tasks, ground truth labels are implicitly or explicitly obtained by aggregating annotations, e.g., counting frequencies and majority voting. Unfortunately, this type of deterministic aggregation typically ignores the underlying uncertainty, which can have severe consequences for safety-critical applications such as health. Instead, we introduce a framework based on a statistical model for aggregating annotations that explicitly accounts for uncertainty. Further, we propose a novel measure of annotation uncertainty and present uncertainty-adjusted metrics for evaluating and comparing AI systems. Applied to a case study in skin condition classification, our framework allowed us to make several important observations that previous work (Liu et al., 2020) missed: First, a large portion of cases exhibits high ground truth uncertainty which, according to dermatologists, often stems from inherent uncertainty. Second, classifier performance often degrades and exhibits significant variation under our uncertainty-adjusted metrics. Third, classifiers perform poorly when inherent uncertainty is taken into account by evaluating against more than the possible top-1 ground truth labels. Our framework can readily be applied to other settings by adapting the statistical aggregation model to the annotations at hand. We believe that properly accounting for ground truth uncertainty in evaluation will play a critical role in successfully tackling more nuanced and ambiguous tasks.
## Acknowledgements
We would like to thank Annisah Um'rani and Peggy Bui for their support of this project as well as Naama Hammel, Boris Babenko, Katherine Heller, Verena Rieser and Dilan Gorur for their feedback on the manuscript.
## Data availability
The de-identified dermatology data used in this paper is not publicly available due to restrictions in the data-sharing agreements.
|
2303.10062
|
Confidence-aware 3D Gaze Estimation and Evaluation Metric
|
Deep learning appearance-based 3D gaze estimation is gaining popularity due
to its minimal hardware requirements and being free of constraint. Unreliable
and overconfident inferences, however, still limit the adoption of this gaze
estimation method. To address the unreliable and overconfident issues, we
introduce a confidence-aware model that predicts uncertainties together with
gaze angle estimations. We also introduce a novel effectiveness evaluation
method based on the causality between eye feature degradation and the rise in
inference uncertainty to assess the uncertainty estimation. Our
confidence-aware model demonstrates reliable uncertainty estimations while
providing angular estimation accuracies on par with the state-of-the-art.
Compared with the existing statistical uncertainty-angular-error evaluation
metric, the proposed effectiveness evaluation approach can more effectively
judge inferred uncertainties' performance at each prediction.
|
Qiaojie Zheng, Jiucai Zhang, Amy Zhang, Xiaoli Zhang
|
2023-03-17T15:44:44Z
|
http://arxiv.org/abs/2303.10062v1
|
# Confidence-aware 3D Gaze Estimation and Evaluation Metric
###### Abstract
Deep learning appearance-based 3D gaze estimation is gaining popularity due to its minimal hardware requirements and being free of constraint. Unreliable and overconfident inferences, however, still limit the adoption of this gaze estimation method. To address the unreliable and overconfident issues, we introduce a confidence-aware model that predicts uncertainties together with gaze angle estimations. We also introduce a novel effectiveness evaluation method based on the causality between eye feature degradation and the rise in inference uncertainty to assess the uncertainty estimation. Our confidence-aware model demonstrates reliable uncertainty estimations while providing angular estimation accuracies on par with the state-of-the-art. Compared with the existing statistical uncertainty-angular-error evaluation metric, the proposed effectiveness evaluation approach can more effectively judge inferred uncertainties' performance at each prediction.
## 1 Introduction
The simplicity in hardware requirements and constraint-free settings make appearance-based gaze estimation attractive for human-machine-interaction (HMI) applications, such as virtual reality [5], driver monitoring systems [14], and assistive robotic arms [16, 4]. Typical fine-angle appearance-based 3D gaze estimation comprises two stages. The first stage adopts facial landmark detection techniques to crop out eye image patches and extract head angles. The second stage uses the cropped eye image patches and head angles to infer pitch and yaw gaze angles measured from the center of the eyes. Recent advancements in deep learning (DL), especially convolutional neural networks (CNN), significantly improve gaze angle inference accuracy and robustness. The state-of-the-art methods[7, 15] achieve average angular accuracies of around 3-8 degrees in occlusion-free and constraint-free datasets, such as MPII [18] and RT-Gene [7].
However, challenges still need to be addressed in handling inaccurate predictions due to the large variability in image quality and individual appearance differences. Existing methods mainly focus on improving angular estimation accuracies but overlook the prediction uncertainties caused by these challenges. These overlooked uncertainty effects will cause catastrophic problems and limit their adoption. For example, in situations where the eye features in images have been heavily corrupted or eliminated, the DL methods will still output unreliable and erroneous gaze estimation and be highly confident about it. Subsequent HMI
Figure 1: The proposed confidence-aware model (top) and the uncertainty effectiveness evaluation approach (bottom). Our model learns to judge the prediction confidence based on eye feature quality in the input images with our proposed loss function. Inference uncertainties are produced together with gaze angle estimates. Our uncertainty effectiveness assessment is based on the asserted causality between eye feature degradation and inference uncertainty. We assess the effectiveness based on the correlation strength between the inferred uncertainty and the severity of intentionally introduced corruptions used to achieve different levels of eye feature degradation.
applications that use these inaccurate estimations may behave unpredictably, losing human trust. To avoid unforeseeable overconfident inference, most HMI applications that critically depend on appearance-based gaze estimation are still performed under controlled environments [16, 12]. Further adoption of appearance-based gaze estimation requires a confidence output in addition to the angular estimation value to inform subsequent decision-making processes about potential errors.
In addition, a competent evaluation approach is lacking to assess the effectiveness of the estimated uncertainty values. Existing evaluation methods only focus on the overall statistical performance of angular estimation errors and are not capable of evaluating the effectiveness of individual uncertainty inferences. Moreover, the uncertainty-angular-error correlation used by existing methods is non-causal. In other words, high inference uncertainty does not necessarily lead to large errors in predicted gaze angles. The inference-error-confidence correlation may be meaningful for evaluating performance at a statistical level but is not suitable for assessing the confidence of each individual inference.
To fill these gaps, this paper proposes a confidence-aware model and new procedures with novel metrics to evaluate the effectiveness of the estimated uncertainties as shown in Figure 1. The confidence-aware model addresses the overconfident inference problem by outputting numerical values for prediction uncertainties together with the original gaze estimates. The proposed model learns detrimental influence factors, such as closed eyes, that corrupt input images and assigns high uncertainty values for their gaze angle estimates. A specially designed loss function enables unsupervised learning for these detrimental features without ground truth labelling. This paper also takes a step further and proposes a novel, more effective evaluation method and metrics to assess the effectiveness of confidence awareness. The proposed evaluation approach is based on evaluating the causal relationship between the severities of intentionally introduced corruptions and the model's inferred uncertainties. This evaluation approach addresses the limitations of the non-causal uncertainty-angular-error correlation used by existing methods. In short, the contributions of this work can be summarized as follows:
1. A confidence-aware model that outputs numerical values for inference uncertainties is proposed. This model can achieve angular accuracies on par with the state-of-the-art method while giving confidence estimates of the inferred gaze angle for subsequent applications.
2. A novel and more effective evaluation approach and metrics to assess the effectiveness of the uncertainty output from the proposed model. The proposed approach introduces a causal metric that measures the correlation between the severities of intentionally introduced corruptions and the model's inferred uncertainty value.
3. Extensive experiments are conducted to demonstrate the advantage of uncertainty estimation and the effectiveness of the uncertainty estimation method. A qualitative evaluation verifies the causality assumption between eye feature degradation and inferred uncertainty.
## 2 Related Work
### Deep learning 3D Gaze Estimations
Zhang et al. [18] first introduced a deep convolutional neural network model together with their constraint-free MPII-Gaze dataset. This network takes eye-region image patches to perform gaze vector inference. In their later work [19], Zhang et al. proposed a new spatial weight network design that takes the full-face image for gaze angle inference. Kellnhofer et al. [10] proposed the Gaze360 dataset, which samples gaze angles in all 360 degrees in outdoor conditions. They also introduced a long-short-term-memory network that takes seven consecutive frames for gaze analysis. Fischer et al. [7] proposed the RTGene dataset that captures subjects at much greater distances than the MPII dataset. Their proposed gaze estimation network used the VGG16 network for feature extraction and implemented an ensemble inference scheme for added robustness and accuracy. Yu et al. [17] proposed the first unsupervised gaze representation learning structure and showed a strong linear correlation between the learned gaze representation and the ground truth angles. Other network designs and learning approaches, such as dilated convolution [3] and meta-learning [15], are also proposed. Most network structure or method improvements in these works aimed to lower the angular prediction error; not much attention was devoted to estimating uncertainties related to the predictions.
### Confidence Estimation Approaches
Recent studies have proposed many uncertainty quantification approaches in deep learning, including Bayesian Neural Networks (BNNs) [8], heteroskedastic maximum likelihood estimation [13], etc. A combination of the Bayesian approach and heteroskedastic MLE can be used to distinguish epistemic and aleatoric uncertainties in some situations [11]. The Bayesian approach, including its dropout approximations, places probability distributions over the network's weights, thus capturing epistemic uncertainty. BNN inferencing requires multiple passes, which slows down the prediction process. The heteroskedastic MLE approach relates uncertainty to the input data, thus capturing aleatoric uncertainty. Unlike BNN inferencing, networks trained with heteroskedastic MLE only require a single pass for inferencing, thus preserving the run time of a regular neural network.
### Confidence Estimation in Gaze Tracking
There are very few works that have incorporated confidence estimation into gaze estimation tasks. All these methods are applied to coarse gaze estimation based on head or facial information rather than the detailed eye features used in this paper. Their uncertainty sources mainly come from eye region occlusion rather than the various image corruptions studied in this work. In [6], the authors perform gaze estimation based on low-dimensional facial landmarks and the confidence estimated by the preceding OpenPose [2] anatomical keypoint detection method. They proposed a Confidence Gated Unit to incorporate confidence information from OpenPose for gaze estimation. The output dimension was expanded from 2 to 3 to accommodate uncertainty estimations. A heteroscedastic MLE with a cosine-similarity-based negative log-likelihood loss function was used to train the model. In [10], the authors applied quantile regression, test-time dropout, and test-time augmentation to various network structures with CNN backbones to estimate confidence ranges for gaze prediction.
### Evaluation of Confidence Effectiveness
Due to the lack of ground truth labels for confidence, there is no simple numerical value to be directly compared to evaluate the performance of confidence estimation in gaze estimation tasks. Previously mentioned methods in [10] and [6] used the correlation between the estimated uncertainties and prediction angular errors. This correlation, however, may not be trustworthy because it is non-causal. These two works' inference confidence changes are caused mainly by eye occlusion rather than inference angular error. Thus, a causal effectiveness measure would relate occlusion severity to inference uncertainty. Although the current uncertainty-angular-error correlation showed a positive correlation between average angular errors and the estimated uncertainties, this value can only be used to statistically demonstrate that there are more samples with high angular estimation errors in the high-uncertainty group. Such positive correlations do not necessarily hold true for individual sample tests because individual samples can have high uncertainty and low angular error at the same time. In other words, a perfect uncertainty model would not achieve a perfect correlation of 1 in this evaluation. A method based on causal correlation is needed to measure the confidence-aware feature more effectively.
## 3 Methodology
### Confidence-aware gaze estimation network structure and training
#### 3.1.1 Confidence-Aware Network Structure
Recall that appearance-based gaze estimation includes two stages: eye region detection and angular gaze estimation with cropped eye image patches. Since the facial landmark detection used for eye region extraction, such as [1], is a relatively mature technology, this paper does not present the architectures used for facial landmark detection and instead focuses on the confidence-aware gaze angle estimation part. The model takes cropped eye patches and head angle information and adopts deep neural networks to predict both gaze angles (pitch and yaw) and their uncertainties, as shown in Figure 2.
The confidence awareness comes from the heteroskedastic assumption of the input data. Each set of inputs into the network is asserted to have unique associated variances to the outputs, which are treated as inference uncertainties. To accommodate the heteroskedastic assumption in 3D gaze angle estimation, the proposed network outputs 4 values, 2 for yaw and pitch angle estimation and the other 2 for their associated inference uncertainties. The maximum between pitch and yaw uncertainties represents the overall inference uncertainty.
The proposed network structure is adapted from the work of Fischer et al. [7]. Left and right eye images are first fed into Resnet18 models for feature extraction. We chose Resnet18 for its small size and fast training time while maintaining comparable output accuracies. The extracted features are represented by a 1024-dimensional vector for each eye. The extracted features are then concatenated and passed through a series of fully connected (FC) layers to perform inference. The head angle vector, which is a 1x2 vector containing head pitch and yaw angles, is concatenated with the output from the first FC layer to be considered for inferencing.
Figure 2: Network structure adapted from [7] for confidence-aware 3D gaze estimation. This network outputs uncertainty values for pitch and yaw angles, respectively. The maximum between the pitch uncertainty and yaw uncertainty represents the overall inference uncertainty.
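A minimal PyTorch sketch of the described architecture is given below. The FC layer widths and the log-variance parameterization of the two uncertainty outputs are illustrative assumptions made for numerical stability; the paper does not specify these details.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ConfidenceAwareGazeNet(nn.Module):
    """Per-eye ResNet18 features, concatenation, FC layers with head pose injected
    after the first FC layer, and 4 outputs: pitch, yaw, and two log-variances."""
    def __init__(self, feat_dim=1024, hidden=512):
        super().__init__()
        def eye_backbone():
            net = models.resnet18(weights=None)
            net.fc = nn.Linear(net.fc.in_features, feat_dim)  # project 512 -> 1024
            return net
        self.left_net, self.right_net = eye_backbone(), eye_backbone()
        self.fc1 = nn.Linear(2 * feat_dim, hidden)
        self.fc2 = nn.Linear(hidden + 2, hidden)  # +2 for head pitch and yaw
        self.out = nn.Linear(hidden, 4)

    def forward(self, left_eye, right_eye, head_pose):
        feats = torch.cat([self.left_net(left_eye), self.right_net(right_eye)], dim=1)
        h = torch.relu(self.fc1(feats))
        h = torch.relu(self.fc2(torch.cat([h, head_pose], dim=1)))
        out = self.out(h)
        angles, log_var = out[:, :2], out[:, 2:]
        return angles, log_var
```

Under this parameterization, the overall inference uncertainty is the maximum of the two per-angle uncertainties, e.g., `torch.exp(log_var).max(dim=1).values`.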
#### 3.1.2 Loss Function
The proposed model is fully differentiable and can be trained end-to-end with only labels for gaze angles. We minimize a customized compounded loss (Equation 1) modified from [13] containing two parts: an uncertainty-regulated angular error loss term \(\frac{l_{n}}{2\sigma^{2}(x)}\) for gaze inference accuracies and an uncertainty regularization term \(\frac{1}{2}\ln\left(\sigma^{2}\left(x\right)\right)\) to avoid unbounded uncertainty regulation to the angular error loss.
\[\mathrm{loss}=\frac{1}{2}\ln\left(\sigma^{2}\left(x\right)\right)+\frac{l_{n} }{2\sigma^{2}\left(x\right)} \tag{1}\]
The uncertainty-regulated angular loss contains an angular loss term \(l_{n}\) in the numerator to represent the difference in inference and ground truth value and an uncertainty term \(\sigma^{2}(\cdot)\) in the denominator to calculate uncertainty based on input data \(x\). When the model outputs angle prediction with large errors, the uncertainty will be increased to lower the overall loss value. To avoid the infinite growth of uncertainty, a regularization term on its natural log values is needed in the overall loss function. To avoid gradient explosion and achieve training stability, the angular loss is calculated from smooth L1 loss depicted in equation 2.
\[l_{n}=\left\{\begin{array}{ll}0.5(\tilde{y}-y)^{2},&\text{for}\;|\tilde{y}- y|<1\\ |\tilde{y}-y|-0.5,&\text{otherwise}\end{array}\right. \tag{2}\]
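A PyTorch sketch of this loss is shown below, assuming the network predicts log-variances so that \(\sigma^{2}=\exp(\cdot)\) stays positive; this parameterization is a common numerically stable choice rather than a detail stated here.

```python
import torch
import torch.nn.functional as F

def confidence_aware_loss(pred_angles, log_var, target_angles):
    """Compounded loss of Eqs. (1)-(2): smooth L1 angular error scaled by the
    predicted variance plus a log-variance regularizer, averaged over the batch.
    All tensors have shape (batch, 2) for pitch and yaw."""
    l_n = F.smooth_l1_loss(pred_angles, target_angles, reduction="none")  # Eq. (2)
    var = torch.exp(log_var)
    loss = 0.5 * log_var + l_n / (2.0 * var)  # Eq. (1), evaluated per angle
    return loss.mean()
```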
### Evaluation of Confidence Awareness
To measure the effectiveness of the network's confidence awareness efficiently and accurately, we propose a novel evaluation approach that depends on a causal relationship between image feature degradations and uncertainties, which we refer to as corruptions and inferred uncertainties. We introduce controllable corruptions with different levels of severities to relatively clean images with little to no corruptions and pass these intentionally corrupted images into the confidence-aware network. We evaluate confidence awareness based on the correlation between inferred uncertainties and the corruption level (Figure 3).
#### 3.2.1 Image Feature Degradation Definition
In the gaze estimation application, image feature degradation causes the input images to contain features that are unfamiliar to the model. Two types of image degradation are considered in gaze estimation: improper image handling and source-level degradation.
Improper image handling attributes the cause of degradation to the process of acquiring eye patch images; it covers general image degradation, such as blurring and noise, and gaze-tracking-specific degradation, such as eye region off-cropping. During image handling, these degradations are applied to a clean source. Clearer eye region images could be captured if improper handling were avoided.
Source level degradation attributes the cause of degradation to the subject from whom the images are collected. Typical source-level degradation can be closed eyes or drastic eye shape differences. The source-level degradation cannot be further reduced due to the corrupted source.
#### 3.2.2 Two Assumptions
The proposed evaluation method is based on two assumptions: 1) most of the training samples are relatively clean from corruption, and 2) inferred uncertainties are positively correlated to the severity of corruption.
The first assumption lets the model learn the image-to-gaze-angle mapping function based on clear eye features. If the training dataset contains too many heavily corrupted images, the model cannot learn about the functionality of eye features in gaze estimation. The inferred uncertainties, therefore, cannot reflect confidence in gaze angle estimations. The second assumption enables us to quantitatively evaluate the model's uncertainty estimation performance. A perfectly trained confidence-aware model should output uncertainty values with a strict positive correlation with the introduced severity level.
#### 3.2.3 Proposed Method and Metric
Based on the two assumptions, we propose the evaluation method as follows:
1. From a dataset, choose images with low inferred uncertainties by the model and visually check the image to ensure it contains clear eye features. The visual check will avoid datasets that have been uniformly corrupted in every image.
2. Apply predefined controllable corruptions at different severities to the clean image and pass these images to the model for uncertainty inference.
3. Calculate the correlation between the severity of introduced corruptions and inferred uncertainty. The correlation should be close to 1 for uncertainty estimations with good performance.
Figure 3: Proposed procedure to evaluate effectiveness in uncertainty estimation. Intentional corruption with controllable severities is introduced to clean images for the confidence-aware model to infer. The model's estimated uncertainties are compared with the corruption severity levels to find effectiveness.
With this evaluation method, we also propose an evaluation metric (Equation 3) to calculate the overall performance based on all types of corruptions introduced to the image. In Equation 3, \(C_{i}\) is Spearman's rank correlation coefficient for the \(i^{th}\) introduced corruption. \(k_{i}\) is the slope between the severity of introduced corruptions and inferred uncertainty when fitted with linear regression. A small slope magnitude indicates that the model is robust against a specific corruption and thus does not show much fluctuation in the inferred uncertainty. We therefore scale each correlation value by its slope so that low-impact corruptions contribute less to the overall score. \(n\) denotes the total number of corruption types introduced.
\[P=\frac{\sum_{i}^{n}k_{i}C_{i}}{\sum_{i}^{n}|k_{i}|} \tag{3}\]
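The following Python sketch computes this score from per-corruption severity and uncertainty measurements using SciPy's Spearman correlation and a least-squares slope; variable names are illustrative rather than the exact implementation.

```python
import numpy as np
from scipy.stats import spearmanr

def effectiveness_score(severities_per_corruption, uncertainties_per_corruption):
    """Slope-weighted Spearman correlation of Eq. (3), summed over all corruption
    types. Each input element is an array of equal length for one corruption."""
    numerator, denominator = 0.0, 0.0
    for severity, uncertainty in zip(severities_per_corruption, uncertainties_per_corruption):
        c_i = spearmanr(severity, uncertainty)[0]       # rank correlation C_i
        k_i = np.polyfit(severity, uncertainty, 1)[0]   # linear-regression slope k_i
        numerator += k_i * c_i
        denominator += abs(k_i)
    return numerator / denominator
```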
## 4 Experiment Setup
We build our model with PyTorch and evaluate the proposed confidence-aware model in 3D gaze estimation tasks and its uncertainty estimation performance with two open-source datasets, the MPII-Gaze and RTGene datasets. The experiment setup and experiment results are described in the following paragraphs.
### Training Settings and Hyperparameters
All model training is performed on the MPII dataset. The initial learning rate was set to 0.0001 with a weight decay factor of 0.1 after epoch 25. An Adam optimizer and a batch size of 64 were used. Network weights are initialized to the pre-trained ones from ImageNet. A leave-one-out training-testing split strategy was applied for the within-dataset test. Images from 14 out of the 15 participants were used for training and validation. We followed an 80-20 splitting rule to distribute the training and validation data. The input image patch was prepared following the method in RTGene by first resizing eye patch images from \(36\times 60\) to \(224\times 224\) to better use the Resnet18 structure. The color channels are then normalized with means of 0.485, 0.456, and 0.406 and standard deviations of 0.229, 0.224, and 0.225, respectively. No data augmentation, such as random crop or color jitter, was performed.
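One way to express this preprocessing with torchvision transforms is sketched below; the input is assumed to be a PIL eye-patch image, and the snippet is illustrative rather than the exact training code.

```python
import torchvision.transforms as T

# Resize the 36x60 eye patch to 224x224 and normalize with the stated ImageNet
# channel statistics; no augmentation is applied.
eye_transform = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```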
### Corruption Methods in Effectiveness Evaluations
Recall that corruptions can be categorized into two groups based on the cause - corruptions due to improper image handling and corruptions due to bad sources. Since it would be nearly impossible to quantify the corruption extent that natively exists in datasets, we designed experiments to simulate their effects of feature degradation with controllable corruptions whose severities are known.
The general image quality degradation and source-level corruptions are simulated with 14 out of the 15 corruption methods proposed in ImageNet-C [9]. The elastic transform is excluded because it does not output visually consistent degradation across the severity levels. These 14 corruptions capture various degradations of feature sharpness, such as the eyelid and iris boundaries, which compose most of the effect caused by general image quality degradation and source-level corruption. These 14 corruptions are simulated with the 5 severity levels described in ImageNet-C. Figure 4 shows all 14 corruptions at the highest corruption severity and the uncorrupted image for comparison.
The gaze-tracking application-specific corruptions typically leave images with sharp eye features but with a partial or entire cutoff of the overall features. These corruptions cannot be reflected by the previously mentioned 14 and require custom implementation. Therefore, a custom implementation of vertical and horizontal eye patch off-cropping is developed to simulate the application-specific off-cropping corruption. These off-cropping corruptions are achieved by intentionally moving the cropping window away from the eye center by predefined distances. Five levels of severities are designed for this type of corruption to cover slight off-cropping to total off-cropping of the eye regions. The off-cropping effects are shown in Figure 5.
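The off-cropping corruption can be sketched as shifting the crop window away from the eye center by a severity-dependent fraction of the patch size, as described above; the function below is illustrative (NumPy image indexing, simplified boundary handling) rather than the exact implementation.

```python
import numpy as np

def off_crop(image, eye_center, severity, axis="x", patch_h=36, patch_w=60):
    """Shift the crop window away from the eye center; severity 5 shifts by a
    full patch width/height (complete off-crop), intermediate levels are spaced evenly."""
    shift = severity / 5.0 * (patch_w if axis == "x" else patch_h)
    cx, cy = eye_center
    cx, cy = (cx + shift, cy) if axis == "x" else (cx, cy + shift)
    top = max(0, int(round(cy - patch_h / 2)))
    left = max(0, int(round(cx - patch_w / 2)))
    return image[top:top + patch_h, left:left + patch_w]
```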
### Evaluation Metrics
The quantitative evaluation metric is based on Equation 3. In this experiment setup, we consider all introduced corruptions; that is, \(n\) in Equation 3 is 16. When the model can effectively capture inference uncertainty, the correlation score calculated from Equation 3 will be close to 1. The existing effectiveness evaluation method, which calculates the correlation between angular inference error and inferred uncertainty, is also computed to provide baseline values for comparison.
Figure 4: Visualization of 14 image corruptions. The top left shows the uncorrupted image for reference. These corruptions are adopted from ImageNet-C [9]. The elastic transform corruption is not used because of its inconsistent behavior across the corruption severities.
Lastly, a qualitative evaluation is presented to determine the effectiveness of the confidence-aware algorithm on corruptions that natively occur in the dataset. Images are sorted by the model's inferred uncertainty value, and selected images from each confidence quantile are displayed for human judgment. The qualitative evaluation is also used to verify the causality assumption between the corruption severity and the inferred uncertainty magnitude.
## 5 Result
### Design of Experiments
The experiments are designed to study 1) the effectiveness of the confidence awareness of the newly proposed confidence-aware model on different corruptions, 2) the differences between the new and existing evaluation approaches, 3) the generalizability of the model and evaluation metric across persons and datasets, and 4) the qualitative behavior of the confidence-aware model on unquantifiable corruptions that occur natively in the dataset.
### Experiment Result
#### 5.2.1 Measuring Effectiveness of Confidence-Aware Model on Each Corruption
The effectiveness of the confidence-aware model on single corruptions is judged following the same concept using the correlation score and slope described in 3.2.3. We first analyze the effectiveness by plotting the corruption severity against the inferred uncertainty to visualize the correlation magnitude and slope in Figure 6. Datapoints in this figure were obtained by performing inference on a single image that is relatively clean (image 13322, person 0 in MPII dataset). Because we assume that uncertainties are proportional to the severities of introduced corruptions, we expect a perfect model to output uncertainties that are strictly positively correlated with corruption severity. Among these trendlines for corruption, several show near-zero slopes because the model is relatively robust and insensitive to these corruptions. These corruptions include defocus blur, glass blur, motion blur, zoom blur, brightness change, pixelation, and JPEG compression. Figure 4 shows that these corruptions cause very light eye feature degradation compared to some of the heavy ones. These degradations are not severe enough to cause high uncertainties for model inferencing.
All other trendlines show that the model is sensitive to the rest of the corruptions. All these corruptions caused heavy eye feature degradation or even elimination in some extreme situations. These corruptions caused the model to be unable to extract desired eye features and output high uncertainties. Some trendlines displayed a downward trend near the end, such as snow, because the eye features are overwhelmed by corruptions causing the model to perform inference outside of the designed region. Since very few samples in the training dataset contain such features, the model does not have enough knowledge to estimate the corresponding uncertainties correctly. It should be noted that although the uncertainty values dipped at these locations, their magnitude is still well above the baseline values, indicating non-trustworthy gaze angle estimation.
Using the newly designed evaluation metric in equation 3, the performance on effectiveness is 0.9534 (\(>0.8\)), demonstrating strong capability in detecting unreliable inferences. This high correlation effectiveness score is on par with trendline behaviors unaffected by the model's robustness against some corruptions.
#### 5.2.2 Comparison with Baseline Evaluation Method
To compare the effectiveness between the existing and new proposed evaluation methods for uncertainty estimates, we
Figure 5: Custom implementation of off-cropping image corruption with 5 levels of severity. The leftmost column contains uncorrupted images. The most severe off-cropping completely crops out eye features by moving the crop center by the width or height of the patch. The remaining 4 severities are spaced evenly in crop-center distance between no off-crop and the most severe one.
Figure 6: Correlation behaviors between the corruption severities and the inferred uncertainties for 16 intentionally introduced corruptions in the MPII dataset. The introduced corruptions can be grouped into two categories based on the model's behavior - insensitive corruptions and sensitive corruptions. The model's behaviors on insensitive corruptions are shown as lines with small slopes and consistently low uncertainty values; light feature degradation in these corruptions caused such behaviors. The model's behaviors on sensitive corruptions are depicted as large slopes with some inconsistency when the corruption severity is beyond 3; heavy feature degradation caused the large slopes. The inconsistent trends at medium to high severities of the sensitive corruptions are caused by the eye features being overwhelmed. The correlation score calculated using Equation 3 is 0.9534, suggesting a very strong correlation.
plot the angular error values against introduced corruption severity to judge the correlation strength with the uncertainty source in Figure 7. This figure is intended to study the correlation strength between angular errors and the root causes for inference uncertainties. Although general upward trends can be observed for all uncertainty values, trendline behaviors for the "sensitive corruptions" are different and much less consistent across the board. Using the proposed correlation calculation in equation 3, we calculated the correlation strength between the angular errors and severity levels to be 0.7109. The less consistent behaviors in Figure 7 and the lower correlation number suggest that angular prediction errors correlate with uncertainty sources weakly. Since the inferred uncertainty should be caused by image corruption severity, this low correlation score suggests that correlating angular values to inferred uncertainty may not reflect the model's actual performance.
In addition to calculating the correlation strength using the proposed method, we also performed a correlation study that links inferred uncertainty to angular errors as used in [10, 6]. The overall correlation is displayed in a scatter plot in Figure 8. No apparent trend can be discovered between the uncertainty and angular error. The Spearman's correlation calculated with the data points shown in Figure 8 is 0.5384, which suggests a medium to low correlation. Therefore, using angular errors to evaluate the effectiveness of a model is not representative.
#### 5.2.3 Consistency Study
Confidence awareness must be able to achieve consistent performance across different people to be applicable. This consistency study used a leave-one-out strategy to study within-dataset variations and used weights trained on MPII to perform a cross-dataset study on RTGene. Due to the great number of test combinations, this paper only selects two corruptions, motion blur and contrast degradation, to represent light and heavy feature degradation corruptions among all types. These two corruptions are applied to 100 images randomly selected from the upper 20% confidence percentile, i.e., images with the 20% lowest uncertainties, to ensure the original images are relatively free of preexisting noise. These images are selected from the 5 testing subjects with the most image data and are shown in Figure 9.
First, results from the cross-person consistency study are presented in Figure 10. The trendline for light corruption (motion blur, on the left) shows highly similar behavior across all samples. This similarity is due to the feature-preserving nature of the motion blur corruption. On the other hand, trendlines diverge for heavy corruption at high severity. The divergence is caused by out-of-range inference since training datasets rarely have completely corrupted images without eye features. Similarities in trends between Figure 6, where a single image is used, and Figure 10 suggest consistency across samples within the dataset. The effectiveness scores for each of the 5 test subjects using all 16 corruptions are calculated from Equation 3 and shown in Table 1.
All effectiveness scores are relatively high, indicating an effective confidence-aware model. It should be noted that the effectiveness score for test subject 1 is lower than the rest. This is caused by differences in eye features due to differences in eye shape. As shown in the top row of Figure 9, eye features from sample 1 are not as obvious as the rest, which may cause the confidence-aware model to be less effective on test sample 1. Due to the scarcity of similar eye features in the training dataset, the model performs inference with higher uncertainties on subject 1.
Figure 8: Scatter plot for visualizing the correlation between prediction uncertainties and the prediction angular error. Data points in this figure do not show a strong trend. The Spearman’s rank correlation score was calculated to be 0.5384, suggesting a medium to low correlation between these two values.
Figure 7: Correlation behaviors between corruption severity and prediction angular accuracy on 16 intentionally introduced corruptions. The trendline are much less consistent than those from Figure 6 where correlations between corruption severity and inferred uncertainties are studied. The lack of consistency suggests a weak correlation between angular accuracies and the severity of introduced corruption. A lower correlation score of 0.7109 calculated from equation 3 also suggests a lower correlation.
Figure 9: Comparison of the test subjects from two datasets
Next, the results of the cross-dataset consistency study are presented in Figure 11. The model behaves differently on the RTGene dataset than on the MPII dataset. Due to the high corruption levels natively existing in all samples of the RTGene dataset, as shown in the bottom row of Figure 9, the model's estimated uncertainties on the uncorrupted images are much higher than those from MPII. The preexisting corruptions also caused the model's behavior to be less consistent than on MPII. The high uncertainty values suggest that inference results produced on RTGene should not be considered trustworthy.
#### 5.2.4 Qualitative Evaluation
To verify the effectiveness of the confidence-aware model against the preexisting unquantifiable corruptions, we performed qualitative evaluations to visually examine the corruptions that exist in each confidence quantile. Figure 12 shows a qualitative evaluation by displaying selected images from each confidence quantile. The lowest confidence quantile contains images with the most severe corruptions and has much higher quantile-wise average uncertainty values compared with the other three quantiles. Common corruptions in the lowest confidence quantile involve closed eyes, complete off-cropping, or drastic lighting condition changes. A cross-dataset qualitative evaluation is conducted on the RTGene dataset with weights trained on MPII. Because images in the RTGene dataset are corrupted with much more severe noise, the overall uncertainty magnitude is about 10 times that of MPII. The most confident quantile on the right of Figure 12 still shows certain eye features, albeit heavily corrupted.
It should be noted that the most confident quantile in the RTGene evaluation has higher uncertainty scores than the least confident quantile in MPII, which contains little to no eye features. Based on this comparison, any inference performed on the RTGene dataset with the MPII-trained weights should not be trusted. This qualitative evaluation also verifies that the assumed causal link between corruption severity and inference confidence holds in the proposed confidence-aware model: higher uncertainty values are assigned to heavily corrupted images, even when the corruption is unquantifiable and comes from other test participants or datasets.
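The quantile-based inspection described above can be reproduced schematically as follows; this is a minimal sketch that assumes only an array of per-image predicted uncertainties, with all names and synthetic values being illustrative.

```python
import numpy as np

def confidence_quantiles(uncertainties, n_quantiles=4):
    """Group image indices into quantiles of predicted uncertainty.

    Returns a list of index arrays, from most confident (lowest uncertainty)
    to least confident, plus the per-quantile mean uncertainty.
    """
    order = np.argsort(uncertainties)               # ascending uncertainty
    groups = np.array_split(order, n_quantiles)
    means = [float(uncertainties[g].mean()) for g in groups]
    return groups, means

# Illustrative: MPII-like uncertainties vs. roughly 10x larger RTGene-like ones.
rng = np.random.default_rng(1)
u_mpii = np.abs(rng.normal(0.02, 0.01, 500))
u_rtgene = np.abs(rng.normal(0.20, 0.08, 500))
_, m_mpii = confidence_quantiles(u_mpii)
_, m_rtgene = confidence_quantiles(u_rtgene)
print("MPII quantile means:  ", [f"{m:.3f}" for m in m_mpii])
print("RTGene quantile means:", [f"{m:.3f}" for m in m_rtgene])
```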
## 6 Conclusion
This work introduced an effective confidence-aware gaze estimation model against image corruption and a novel, accurate
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline Subject & 0 & 1 & 2 & 3 & 6 \\ \hline Correlation & 0.95 & 0.85 & 0.91 & 0.91 & 0.94 \\ \hline \end{tabular}
\end{table}
Table 1: Uncertainty Effectiveness Score for subjects from MPII
Figure 11: Correlation behaviors between the corruption severities and the inferred uncertainties on the RTGene dataset. The model is trained on the MPII dataset. The 5 test subjects with the most images are used for testing with light and heavy corruption. The trendline behavior differs significantly from that on MPII due to the high level of preexisting corruption. The high predicted uncertainty values suggest that gaze angle predictions from models trained on MPII are very unreliable on RTGene.
Figure 12: Qualitative evaluation of the model on the MPII and RTGene datasets with weights trained on the MPII dataset. Based on the left plot, the model can successfully distinguish heavily corrupted images from the rest. In the right plot, although all images are affected by very heavy corruption, the model can still place images with faint eye features in the most confident quantile. The overall uncertainty in the right plot is almost 10 times as large as in the left one, suggesting that the inference results are not trustworthy.
Figure 10: Within-dataset consistency of confidence awareness on the MPII dataset. The 5 subjects with the most images are studied with light (motion blur) and heavy (contrast) corruptions. The model shows consistent behavior for light corruption because motion blur preserves the eye features. The model behaves differently when images are processed with heavy corruption because the corruption overwhelms the eye features, causing the model to infer outside the training range and leading to inconsistent results. The trends for both corruptions resemble those in Figure 6, where a single image is used. This similarity suggests that the model achieves consistent behavior across samples within the dataset.
evaluation approach for determining the effectiveness of confidence awareness on each inference. The evaluation approach involves intentionally introducing controllable corruptions whose severities are correlated with the inference confidence for effectiveness evaluation. The model shows consistent behavior across samples and datasets and has demonstrated its capability under the newly proposed evaluation methods. This confidence-aware model can make HMI safer by avoiding passing erroneous gaze information to the machine and can improve the adoption rate for critical applications.
|
2301.11505
|
Design of an FPGA-based USB 3.0 device controller
|
Traditional FPGA-based USB 3.0 communication uses an external chip as a USB
PHY or as a USB controller that includes a USB PHY. This paper realizes a USB
3.0 controller using FPGA resources, in which FPGA logic implements the serial
interface engine and an FPGA internal transceiver serves as the USB PHY. The
slice utilization after implementation is 4.59% on a Kintex-7 325T. Test results
show that the USB 3.0 speed exceeds 320 MB/s for both bulk-in and bulk-out transfers.
|
Zhe Ning, Yunhua Sun
|
2023-01-27T02:48:21Z
|
http://arxiv.org/abs/2301.11505v1
|
# Design of an FPGA-based USB 3.0 device controller
###### Abstract
Traditional FPGA-based USB 3.0 communication uses an external chip as a USB PHY or as a USB controller that includes a USB PHY. This paper realizes a USB 3.0 controller using FPGA resources, in which FPGA logic implements the serial interface engine and an FPGA internal transceiver serves as the USB PHY. The slice utilization after implementation is 4.59% on a Kintex-7 325T. Test results show that the USB 3.0 speed exceeds 320 MB/s for both bulk-in and bulk-out transfers.
USB 3.0; FPGA; Transceivers.
|
2302.08504
|
PersonNeRF: Personalized Reconstruction from Photo Collections
|
We present PersonNeRF, a method that takes a collection of photos of a
subject (e.g. Roger Federer) captured across multiple years with arbitrary body
poses and appearances, and enables rendering the subject with arbitrary novel
combinations of viewpoint, body pose, and appearance. PersonNeRF builds a
customized neural volumetric 3D model of the subject that is able to render an
entire space spanned by camera viewpoint, body pose, and appearance. A central
challenge in this task is dealing with sparse observations; a given body pose
is likely only observed by a single viewpoint with a single appearance, and a
given appearance is only observed under a handful of different body poses. We
address this issue by recovering a canonical T-pose neural volumetric
representation of the subject that allows for changing appearance across
different observations, but uses a shared pose-dependent motion field across
all observations. We demonstrate that this approach, along with regularization
of the recovered volumetric geometry to encourage smoothness, is able to
recover a model that renders compelling images from novel combinations of
viewpoint, pose, and appearance from these challenging unstructured photo
collections, outperforming prior work for free-viewpoint human rendering.
|
Chung-Yi Weng, Pratul P. Srinivasan, Brian Curless, Ira Kemelmacher-Shlizerman
|
2023-02-16T18:57:17Z
|
http://arxiv.org/abs/2302.08504v1
|
# PersonNeRF: Personalized Reconstruction from Photo Collections
###### Abstract
We present PersonNeRF, a method that takes a collection of photos of a subject (e.g. Roger Federer) captured across multiple years with arbitrary body poses and appearances, and enables rendering the subject with arbitrary novel combinations of viewpoint, body pose, and appearance. PersonNeRF builds a customized neural volumetric 3D model of the subject that is able to render an entire space spanned by camera viewpoint, body pose, and appearance. A central challenge in this task is dealing with sparse observations; a given body pose is likely only observed by a single viewpoint with a single appearance, and a given appearance is only observed under a handful of different body poses. We address this issue by recovering a canonical T-pose neural volumetric representation of the subject that allows for changing appearance across different observations, but uses a shared pose-dependent motion field across all observations. We demonstrate that this approach, along with regularization of the recovered volumetric geometry to encourage smoothness, is able to recover a model that renders compelling images from novel combinations of viewpoint, pose, and appearance from these challenging unstructured photo collections, outperforming prior work for free-viewpoint human rendering.
## 1 Introduction
We present a method for transforming an unstructured personal photo collection, containing images spanning multiple years with different outfits, appearances, and body poses, into a 3D representation of the subject. Our system, which we call PersonNeRF, enables us to render the subject under novel unobserved combinations of camera viewpoint, body pose, and appearance.
Free-viewpoint rendering from unstructured photos is a particularly challenging task because a photo collection can contain images at different times where the subject has different clothing and appearance. Furthermore, we only have access to a handful of images for each appearance, so it is unlikely that all regions of the body would be well-observed for any given appearance. In addition, any given body pose is likely observed from just a single or very few camera viewpoints.
We address this challenging scenario of sparse viewpoint and pose observations with changing appearance by modeling a single canonical-pose neural volumetric representation that uses a shared motion weight field to describe how the canonical volume deforms with changes in body pose, all conditioned on appearance-dependent latent vectors. Our key insight is that although the observed body poses have different appearances across the photo collection, they should all be explained by a common motion model since they all come from the same person. Furthermore, although the appearances of a subject can vary across the photo collection, they all share common properties such as symmetry so embedding appearance in a shared latent space can help the model learn useful priors.
To this end, we build our work on top of HumanNeRF [46], which is a state-of-the-art free-viewpoint human rendering approach that requires hundreds of images of a subject without clothing or appearance changes. Along with regularization, we extend HumanNeRF to account for sparse observations as well as enable modeling diverse appearances. Finally, we build an entire personalized space spanned by camera view, body pose, and appearance that allows intuitive exploration of arbitrary novel combinations of these attributes (as shown in Fig. 1).
## 2 Related Work
**3D reconstruction from unstructured photos** Reconstructing static scenes from unstructured photo collections is a longstanding research problem in the fields of computer vision and graphics. The seminal Photo Tourism system [39] applies large-scale structure-from-motion [36] to tourist photos of famous sites, enabling interactive navigation of the 3D scene. Subsequent works leveraged multi-view stereo [10, 37] to increase the 3D reconstruction quality [1, 38]. Recently, this problem has been revisited with neural rendering [19, 43, 40, 44]. In particular, Neural Radiance Fields (NeRFs) [32] have enabled photorealistic view synthesis results of challenging scenes, including tourist sites [27] and even city-scale scenes [42]. In addition to static scenes, unstructured photo collections have also been used to model human faces [15, 20] or even visualize scene changes through time [24, 25, 29].
Our method builds on top of NeRF's neural volumetric representation of static scenes, and extends it to model dynamic human bodies from unstructured photo collections.
**3D reconstruction of humans** Many early works in image-based rendering [41] have addressed the task of rendering novel views of human bodies. These techniques are largely based on view-dependent texture mapping [7], which reprojects observed images into each novel viewpoint using a proxy geometry. The image-based rendering community has explored many geometry proxies for rendering humans, including depth maps [14, 49], visual hulls [28], and parametric human models [3]. An alternative technique for 3D reconstruction and rendering of humans is to use 3D scanning techniques to recover a signed distance field representation [6, 9], and then extract and texture a polygon mesh [5, 11, 26]. Recently, neural field representations [47] have become popular for modeling humans since they are suited for representing surfaces with arbitrary topology. Methods have reconstructed neural field representations of humans from a variety of different inputs, including 3D scans [4, 23, 31, 35, 45], multi-view RGB observations [18, 21, 34], RGB-D sequences [8], or monocular videos [13, 46]. Our work is most closely related to HumanNeRF [46], which reconstructs a volumetric neural field from a monocular video of a moving human. We build upon this representation and extend it to enable reconstructing a neural volumetric model from unstructured photo collections with diverse poses and appearances.
## 3 Method
In this section, we first review HumanNeRF [46] (Sec. 3.1), explain how we regularize it to improve reconstruction from sparse inputs (Sec. 3.2), and then describe how we model diverse appearances (Sec. 3.3 and 3.4). Finally, we describe how we build a personalized space to support intuitive exploration (Sec. 3.5).
### Background
**HumanNeRF** The recently-introduced HumanNeRF method represents a moving person as a canonical volume \(F_{c}\) warped to a body pose \(\mathbf{p}\) to produce a volume \(F_{o}\) in observed space:
\[F_{o}(\mathbf{x},\mathbf{p})=F_{c}(T(\mathbf{x},\mathbf{p})), \tag{1}\]
where \(T:(\mathbf{x}_{o},\mathbf{p})\rightarrow\mathbf{x}_{c}\) defines a motion field mapping points from observed space back to canonical space, and \(F_{c}:\mathbf{x}\rightarrow(\mathbf{c},\sigma)\) maps position \(\mathbf{x}\) to color \(\mathbf{c}\) and density \(\sigma\), represented by \(\mathrm{MLP}_{\theta_{c}}(\gamma(\mathbf{x}))\) taking \(\gamma(\mathbf{x})\), a sinusoidal positional encoding of \(\mathbf{x}\), as input, with parameters \(\theta_{c}\).
The motion field \(T\) is further decomposed into skeletal motion \(T_{\mathrm{skel}}\) and non-rigid motion \(T_{\mathrm{NR}}\):
\[T(\mathbf{x},\mathbf{p})=T_{\mathrm{skel}}(\mathbf{x},P_{\mathrm{pose}}( \mathbf{p}))+T_{\mathrm{NR}}(\mathbf{x}_{\mathrm{skel}},\mathbf{p}), \tag{2}\]
where \(\mathbf{x}_{\mathrm{skel}}=T_{\mathrm{skel}}(\mathbf{x},P_{\mathrm{pose}}( \mathbf{p}))\), \(T_{\mathrm{NR}}\) represented by \(\mathrm{MLP}_{\theta_{\mathrm{NR}}}\) predicts a non-rigid offset \(\Delta\mathbf{x}\), and \(P_{\mathrm{pose}}(\mathbf{p})\) corrects the body pose \(\mathbf{p}=(J,\Omega)\) with the residual of joint angles \(\Delta_{\Omega}\) predicted by \(\mathrm{MLP}_{\theta_{\mathrm{pose}}}(\Omega)\) taking joint angles \(\Omega\) as input.
The skeletal motion \(T_{\mathrm{skel}}\) maps an observed position to the canonical space, computed as a weighted sum of \(K\) motion bases \((R_{i},\mathbf{t}_{i})\):
\[T_{\mathrm{skel}}(\mathbf{x},\mathbf{p})=\ \sum_{i=1}^{K}w_{o}^{i}(\mathbf{x})(R_ {i}\mathbf{x}+\mathbf{t}_{i}), \tag{3}\]
where \((R_{i},\mathbf{t}_{i})\), explicitly computed from \(\mathbf{p}\), indicates the rotation and translation that maps \(i\)-th bone from observation to canonical space and \(w_{o}^{i}\) is the corresponding weight in observed space.
Each \(w_{o}^{i}\) is approximated using weights \(w_{c}^{i}\) defined in canonical space:
\[w_{o}^{i}(\mathbf{x})=\frac{w_{c}^{i}(R_{i}\mathbf{x}+\mathbf{t}_{i})}{\sum_{k =1}^{K}w_{c}^{k}(R_{k}\mathbf{x}+\mathbf{t}_{k})}. \tag{4}\]
HumanNeRF stores the set of \(\{w_{c}^{i}(\mathbf{x})\}\) and a background class into a single volume grid \(W_{c}(\mathbf{x})\) with \(K+1\) channels, generated by a convolution network \(\mathrm{CNN}_{\theta_{\mathrm{skel}}}\) that takes as input a random (constant) latent code \(\mathbf{z}\).
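To make Eqs. (3)-(4) concrete, the following minimal NumPy sketch warps observed-space points to canonical space as a weighted sum of per-bone rigid transforms, with observation-space weights obtained by normalizing canonical-space weights. The bone transforms and the Gaussian weight function here are toy placeholders, not HumanNeRF's CNN-generated weight volume.

```python
import numpy as np

def skeletal_warp(x_obs, rotations, translations, canonical_weight_fn, eps=1e-8):
    """Map observed-space points to canonical space (sketch of Eqs. 3-4).

    x_obs:        (P, 3) query points in observation space
    rotations:    (K, 3, 3) per-bone rotations, observation -> canonical
    translations: (K, 3)    per-bone translations
    canonical_weight_fn: callable (P, 3) -> (P, K) canonical-space weights w_c^i
    """
    # Candidate canonical positions under each bone transform: R_i x + t_i
    cand = np.einsum('kij,pj->pki', rotations, x_obs) + translations[None]      # (P, K, 3)
    # Canonical weights evaluated at the candidate positions, then normalized (Eq. 4)
    w_c = np.stack([canonical_weight_fn(cand[:, k])[:, k] for k in range(len(rotations))], axis=1)
    w_o = w_c / (w_c.sum(axis=1, keepdims=True) + eps)
    # Weighted sum of candidates gives the skeletal motion T_skel (Eq. 3)
    return (w_o[..., None] * cand).sum(axis=1)

# Toy example: two bones with Gaussian canonical weights around fixed centers.
centers = np.array([[0.0, 0.0, 0.0], [0.0, 0.5, 0.0]])
weight_fn = lambda x: np.exp(-np.sum((x[:, None] - centers[None]) ** 2, -1) / 0.05)
R = np.stack([np.eye(3), np.eye(3)])
t = np.array([[0.0, 0.0, 0.0], [0.0, -0.1, 0.0]])
print(skeletal_warp(np.array([[0.0, 0.45, 0.0]]), R, t, weight_fn))
```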
**Volume Rendering** The observed volume \(F_{o}\) that produces color \(\mathbf{c}\) and density \(\sigma\) is rendered using the volume rendering equation [32]. The expected color \(\mathbf{C}(\mathbf{r})\) of a ray \(\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}\) with \(G\) samples is computed as:
\[\begin{split}\mathbf{C}(\mathbf{r})&=\sum_{i=1}^{G}( \prod_{j=1}^{i-1}(1-\alpha_{j}))\alpha_{i}\mathbf{c}(\mathbf{x}_{i}),\\ \alpha_{i}&=f(\mathbf{x}_{i})(1-\exp(-\sigma( \mathbf{x}_{i})\Delta t_{i})),\end{split} \tag{5}\]
where \(\Delta t_{i}=t_{i+1}-t_{i}\) is the sample interval, and \(f(\mathbf{x})=\sum_{k=1}^{K}w_{c}^{k}(R_{k}\mathbf{x}+\mathbf{t}_{k})\) is the foreground likelihood. Finally, HumanNeRF optimizes the network parameters \(\Theta=\{\theta_{c},\theta_{\mathrm{skel}},\theta_{\mathrm{NR}},\theta_{\mathrm{pose}}\}\) through the MSE loss, \(\mathcal{L}_{\mathrm{MSE}}\), and the LPIPS [48] loss, \(\mathcal{L}_{\mathrm{LPIPS}}\), by comparing renderings with the inputs.
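A minimal sketch of the compositing in Eq. (5) is given below, with the foreground likelihood folded into the per-sample alphas; the densities, colors, and sample spacing are toy values rather than outputs of the actual networks.

```python
import numpy as np

def composite(colors, sigmas, fg, dt):
    """Alpha-composite samples along a ray (sketch of Eq. 5).

    colors: (G, 3) per-sample colors, sigmas: (G,) densities,
    fg: (G,) foreground likelihood f(x_i), dt: (G,) sample intervals.
    """
    alphas = fg * (1.0 - np.exp(-sigmas * dt))
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])   # prod_{j<i}(1 - alpha_j)
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# Toy ray with 128 samples: a single dense, red "surface" in the middle.
G = 128
sigmas = np.where(np.abs(np.arange(G) - 64) < 4, 20.0, 0.0)
colors = np.tile([1.0, 0.1, 0.1], (G, 1))
rgb, w = composite(colors, sigmas, fg=np.ones(G), dt=np.full(G, 0.02))
print(rgb, w.sum())
```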
### Unseen view regularization
Although HumanNeRF [46] works well given monocular videos, we observe it produces poor results on unstructured photo collections due to insufficient observations: we usually only have a handful of photos of a subject's outfit (\(<\) 25 images in our case) while HumanNeRF relies on videos with a large number of video frames (\(>\) 300 frames).
We find HumanNeRF struggles in our setting for two reasons: (1) its non-rigid motion does not generalize well to novel viewpoints since there are too few pose observations to sufficiently constrain this pose-dependent effect; (2) the reconstructed canonical-pose human body geometry is incorrect due to insufficient viewpoint observations, resulting in inconsistent appearance in rendered novel viewpoints.
Figure 2: Given an input personal photo collection, our method optimizes for a canonical volume that can render diverse appearances. We represent the canonical volume with an MLP conditioned on an appearance embedding, and use a shared pose-dependent motion field that maps from observation to canonical space. Additionally, we use a pose correction MLP that takes the estimated body pose and a pose embedding and outputs appearance-dependent pose residuals. Finally, to improve rendering quality from sparse observations, we regularize the volumetric representation to have smooth and opaque geometry with \(\mathcal{L}_{\mathrm{geom}}\) and \(\mathcal{L}_{\mathrm{opacity}}\), which we apply to renderings from uniformly-sampled unobserved camera viewpoints. _Photo credits to Getty Images_.
We address the first limitation by simply removing the non-rigid component and only use skeletal motion:
\[T(\mathbf{x},\mathbf{p})=T_{\text{skel}}(\mathbf{x},P_{\text{pose}}(\mathbf{p})) \tag{6}\]
We address the second limitation by regularizing the body geometry as rendered in novel views. Specifically, inspired by RegNeRF [33], we encourage the geometry to be smooth by enforcing a depth smoothness loss on rendered depth maps. We generate novel camera poses by first sampling an angle \(\phi\) from a uniform distribution, \(\phi\sim U(0,2\pi)\), and rotate the input camera with \(\phi\) around the up vector with respect to the body center.
We render a pixel's depth value by calculating the expected ray termination position, using the same volume rendering weights used to compute the pixel's color (Eq. 5):
\[D(\mathbf{r})=\sum_{i=1}^{G}(\prod_{j=1}^{i-1}(1-\alpha_{j}))\alpha_{i}t_{i}. \tag{7}\]
Likewise, we compute a pixel's alpha value as:
\[A(\mathbf{r})=\sum_{i=1}^{G}(\prod_{j=1}^{i-1}(1-\alpha_{j}))\alpha_{i}. \tag{8}\]
Our proposed depth smoothness loss is formulated as:
\[\begin{split}\mathcal{L}_{\text{geom}}=\sum_{i,j=1}^{H-1}&\left(A(\mathbf{r}_{i,j})A(\mathbf{r}_{i,j+1})(D(\mathbf{r}_{i,j})-D(\mathbf{r}_{i,j+1}))\right)^{2}\\ &+\left(A(\mathbf{r}_{i,j})A(\mathbf{r}_{i+1,j})(D(\mathbf{r}_{i,j})-D(\mathbf{r}_{i+1,j}))\right)^{2}.\end{split} \tag{9}\]
where the loss is evaluated over patches of size \(H\), as we use patch-based ray sampling similar to HumanNeRF. Note that this loss only penalizes depth discontinuities when the alphas of neighboring points are high, which effectively constrains the loss to points on the surface.
In practice, we find the depth smoothness term improves geometry and rendering but introduces "haze" artifacts around the subject. This problem arises because the loss encourages small alphas - all zero alpha would in fact minimize this term - biasing toward transparent geometry.
To address this problem, we use an opacity loss inspired by Neural Volumes [22] that encourages binary alphas:
\[\mathcal{L}_{\text{opacity}}=\sum_{i,j}\log(A(\mathbf{r}_{i,j})+\epsilon)+\log(1-A(\mathbf{r}_{i,j})+\epsilon)-C, \tag{10}\]
where \(C=\log(\epsilon)+\log(1+\epsilon)\) to ensure non-negativity.
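The two regularizers can be sketched directly from Eqs. (7)-(10), assuming the per-pixel expected depth \(D\) and accumulated alpha \(A\) have already been rendered for an \(H\times H\) patch from an unseen view; the patch values below are synthetic and purely illustrative.

```python
import numpy as np

def depth_smoothness_loss(D, A):
    """Alpha-masked depth smoothness over an H x H patch (sketch of Eq. 9)."""
    horiz = (A[:, :-1] * A[:, 1:] * (D[:, :-1] - D[:, 1:])) ** 2
    vert = (A[:-1, :] * A[1:, :] * (D[:-1, :] - D[1:, :])) ** 2
    return horiz.sum() + vert.sum()

def opacity_loss(A, eps=1e-3):
    """Encourage binary accumulated alphas (sketch of Eq. 10); C keeps each term non-negative."""
    C = np.log(eps) + np.log(1.0 + eps)
    return (np.log(A + eps) + np.log(1.0 - A + eps) - C).sum()

# Toy 8x8 patch rendered from an unseen view.
rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, (8, 8))
D = 2.0 + 0.1 * rng.standard_normal((8, 8))
print(depth_smoothness_loss(D, A), opacity_loss(A))
```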
### Appearance modeling
We take as input photos of a subject taken at different times; these photos are subdivided into _appearance sets_ corresponding to photos taken around the same time, i.e., with the same clothing, etc.
When modeling diverse appearances of a subject, we want to achieve two goals: (1) **appearance consistency**: synthesizing consistent texture in unobserved regions in one appearance set with the help of the others; (2) **pose consistency**: a motion model that keeps the rendered pose consistent when switching the subject's appearance.
A naive approach is to train a separate network on each appearance set. This approach does not perform well: (1) the canonical MLP sees very few images in the training, resulting in artifacts in unobserved regions, thus degrading appearance consistency (Fig. 3-(a)); (2) the learned motion weight volume overfits body poses in each (small) appearance set and does not generalize well to the other sets, leading to poor pose consistency (Fig. 3-(b)).
Instead, we propose to train all photos with different appearances into a single network. Specifically, we enforce the shared canonical appearance \(\mathrm{MLP}_{\theta_{c}}\) to be appearance-dependent but optimize for a single, universal motion weight volume \(W_{c}\) across all images. The shared, appearance-conditioned canonical MLP synthesizes consistent textures by generalizing over the full set of images seen in training, while the universal motion weight volume significantly improves pose consistency, as it is trained on the full set of body poses.
To condition the canonical MLP, inspired by Martin-Brualla et al. [27], we adopt the approach of Generative Latent Optimization [2], where each appearance set (with index \(i\)) is bound to a single real-valued appearance embedding vector \(\ell_{(i)}^{\text{app}}\). This vector is concatenated with \(\gamma(\mathbf{x})\) as input to the canonical \(\mathrm{MLP}_{\theta_{c}}\). As a result, the canonical volume \(F_{c}\) is appearance-dependent:
\[F_{c}(\mathbf{x},\ell_{(i)}^{\text{app}})=\mathrm{MLP}_{\theta_{c}}(\gamma( \mathbf{x}),\ell_{(i)}^{\text{app}}). \tag{11}\]
Similarly, we introduce pose embedding vector \(\ell_{(i)}^{\text{pose}}\) to condition the pose correction module on each appearance set and concatenate this vector with \(\Omega\) as input to \(\mathrm{MLP}_{\theta_{pose}}\).
The appearance embeddings \(L^{\text{app}}=\{\ell_{(i)}^{\text{app}}\}_{i=1}^{S}\) as well as pose embeddings \(L^{\text{pose}}=\{\ell_{(i)}^{\text{pose}}\}_{i=1}^{S}\) are optimized alongside other network parameters, where \(S\) is the number of appearance sets.
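A minimal PyTorch-style sketch of the appearance-conditioned canonical MLP of Eq. (11) is given below; the layer widths, number of positional-encoding bands, and output heads are illustrative assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn

class AppearanceConditionedCanonicalMLP(nn.Module):
    """Sketch of Eq. (11): F_c(x, l_app) = MLP(gamma(x), l_app).

    Per-appearance-set embeddings are free parameters optimized with the
    network (Generative Latent Optimization style); sizes are placeholders.
    """
    def __init__(self, num_sets, embed_dim=256, pe_bands=10, hidden=256):
        super().__init__()
        self.appearance = nn.Embedding(num_sets, embed_dim)
        in_dim = 3 * 2 * pe_bands + embed_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                     # RGB + density
        )
        self.pe_bands = pe_bands

    def positional_encoding(self, x):
        freqs = 2.0 ** torch.arange(self.pe_bands, dtype=torch.float32, device=x.device) * torch.pi
        ang = x[..., None] * freqs                    # (N, 3, L)
        return torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(-2)

    def forward(self, x, set_idx):
        emb = self.appearance(set_idx)                # (N, embed_dim)
        h = torch.cat([self.positional_encoding(x), emb], dim=-1)
        out = self.mlp(h)
        return out[..., :3].sigmoid(), out[..., 3:].relu()   # color, density

model = AppearanceConditionedCanonicalMLP(num_sets=10)
color, sigma = model(torch.rand(4, 3), torch.tensor([0, 0, 3, 3]))
print(color.shape, sigma.shape)
```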
### Optimization
**Loss function** Our total loss is a combination of the previously-discussed losses:
\[\mathcal{L}=\mathcal{L}_{\text{LPIPS}}+\lambda_{1}\mathcal{L}_{\text{MSE}}+ \lambda_{2}\mathcal{L}_{\text{geom}}+\lambda_{3}\mathcal{L}_{\text{opacity}}. \tag{12}\]
**Objective** Given input images \(\{I_{1},I_{2},...,I_{N}\}\), appearance set indices \(\{s_{1},s_{2},...,s_{N}\}\), body poses \(\{\mathbf{p}_{1},\mathbf{p}_{2},...,\mathbf{p}_{N}\}\), and cameras \(\{\mathbf{e}_{1},\mathbf{e}_{2},...,\mathbf{e}_{N}\}\), we
optimize the objective:
\[\min_{\Theta}\sum_{i=1}^{N}\mathcal{L}(\Gamma[F_{c}(T(\mathbf{x},\mathbf{p}_{i}, \ell_{(s_{i})}^{\mathrm{pose}}),\ell_{(s_{i})}^{\mathrm{app}}),\mathbf{e}_{i}],I _{i}), \tag{13}\]
where \(\mathcal{L}(\cdot)\) is the loss function and \(\Gamma[\cdot]\) is a volume renderer, and we minimize the loss with respect to all network parameters and embedding vectors \(\Theta=\{\theta_{c},\theta_{\mathrm{skel}},\theta_{\mathrm{pose}},L^{\mathrm{app}},L^{\mathrm{pose}}\}\).
We shoot rays toward both seen and unseen cameras. \(\mathcal{L}_{\mathrm{LPIPS}}\) and \(\mathcal{L}_{\mathrm{MSE}}\) are computed from the output of seen cameras, while \(\mathcal{L}_{\mathrm{geom}}\) and \(\mathcal{L}_{\mathrm{opacity}}\) are applied to renderings of unseen ones. We use \(\lambda_{1}=0.2\), \(\lambda_{2}=1.0\), and \(\lambda_{3}=10.0\). Additionally, we stop the gradient flow through the pose MLP when backpropagating \(\mathcal{L}_{\mathrm{geom}}\), as we found it can lead to degenerate pose correction.
### Building a personalized space
Once the optimization converges, we use its result to build a personalized space of the subject spanned by camera view, body pose, and appearance. We allow continuous variation in viewpoint, but restrict body pose and appearance to those that were observed in the set. Every point in the space has a corresponding rendering.
In practice, the space is defined as a cube with size 1 where the coordinate value ranges from 0 to 1. Our goal is to map a point in that cube to the inputs of the network from which we render the subject.
Specifically, assuming the subject has \(N\) body poses and \(S\) appearances, we need to perform mapping on coordinates (\(a,b,c\)) corresponding to position along the axes of appearance, body pose, and camera view, respectively:
(1) **Appearances**: we map the value \(a\) to the index of the \(S\) appearances: \(\mathrm{idx_{a}}=\lfloor\,aS\rfloor\), which is used to retrieve the appearance embedding \(\ell_{(\mathrm{idx_{a}})}^{\mathrm{app}}\) for the canonical \(\mathrm{MLP}_{\theta_{c}}\).
(2) **Body pose**: we map the value \(b\) to the index of \(N\) body poses: \(\mathrm{idx_{b}}=\lfloor\,bN\rfloor\). We get the \(\mathrm{idx_{b}}\) -th body pose \(\mathbf{p}\), corresponding to appearance index \(s_{\mathrm{idx_{b}}}\). We then take pose embedding \(\ell_{(s_{\mathrm{idx_{b}}})}^{\mathrm{pose}}\) as input for pose \(\mathrm{MLP}_{\theta_{\mathrm{pose}}}\).
(3) **Camera view**: we rotate the camera \(\mathbf{e}_{\mathrm{idx_{b}}}\) by \(\phi=2\pi c\) around up vector with respect to the body center to get a viewing camera \(\mathbf{e}_{\mathrm{v}}\).
Finally, we generate a subject rendering corresponding to the position (\(a,b,c\)) by feeding the appearance embedding \(\ell_{(\mathrm{idx_{a}})}^{\mathrm{app}}\), pose embedding \(\ell_{(s_{\mathrm{idx_{b}}})}^{\mathrm{pose}}\), and body pose \(\mathbf{p}\) to the network and producing a volume in observation space rendered by the viewing camera \(\mathbf{e}_{\mathrm{v}}\).
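The coordinate mapping of Sec. 3.5 can be summarized by the short sketch below; the pose list and set indices are placeholders, and the floor-based indexing is clamped so that the cube corners remain valid.

```python
import numpy as np

def map_space_point(a, b, c, num_appearances, poses, pose_set_indices):
    """Map a point (a, b, c) in the unit cube to rendering inputs (sketch of Sec. 3.5).

    poses:            list of N observed body poses
    pose_set_indices: appearance-set index s_i for each observed pose
    Returns (appearance index, body pose, pose-embedding set index, camera angle).
    """
    idx_a = min(int(a * num_appearances), num_appearances - 1)    # appearance axis
    idx_b = min(int(b * len(poses)), len(poses) - 1)               # body-pose axis
    phi = 2.0 * np.pi * c                                           # camera rotation about the up vector
    return idx_a, poses[idx_b], pose_set_indices[idx_b], phi

# Toy example: 10 appearance sets, 200 observed poses.
poses = [f"pose_{i}" for i in range(200)]
sets = [i // 20 for i in range(200)]
print(map_space_point(0.35, 0.5, 0.25, 10, poses, sets))
```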
## 4 Results
### Dataset
In the main paper, we include results on experiments using a photo collection of Roger Federer (more subjects in supplementary material). The Roger Federer dataset contains 10 appearance sets spanning 12 years. We collect photos by searching for a specific game in a particular year (e.g., "2019 Australian Open Final"). We collect 19 to 24 photos for each game, one game per year, and label each set according to the year (_2009, 2012_,..., _2020_).
Following [46], we run SPIN [17] to estimate body pose and camera pose, automatically segment the subject, and manually correct segmentation errors and 3D body poses with obvious errors. Additionally, for images where the subject is occluded by balls or rackets, we label the regions of occluded objects and omit them during optimization.
### Implementation details
We optimize Eq. 13 using the Adam optimizer [16] with hyperparameters \(\beta_{1}=0.9\) and \(\beta_{2}=0.99\). We set the learning rate to \(5\times 10^{-4}\) for \(\theta_{c}\) (the canonical \(\mathrm{MLP}\)), \(L^{\mathrm{app}}\), and \(L^{\mathrm{pose}}\) (embedding vectors), and \(5\times 10^{-5}\) for all the others. We sample 128 points along each ray for rendering. The sizes of the embedding vectors \(\ell^{\mathrm{app}}\) and \(\ell^{\mathrm{pose}}\) are 256 and 16, respectively. We use patch-based ray sampling with 6 patches of size 32x32 for seen cameras and 16 patches of size 8x8 for unseen ones. The optimization takes 200K iterations to converge when training each game with an individual network and 600K iterations when training all games in a single network. Additionally, we delay pose refinement, geometry regularization, and the opacity constraint until after 1K, 1K, and 50K iterations for separate-networks training, and 1K, 10K, and 200K iterations for single-network optimization.
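For illustration, one possible way to set up the two learning-rate groups described above in PyTorch is sketched below; the module and embedding shapes are placeholders standing in for the actual networks.

```python
import torch

# Hypothetical parameter containers standing in for the actual networks.
canonical_mlp = torch.nn.Linear(316, 4)
other_modules = torch.nn.Linear(128, 128)
l_app = torch.nn.Parameter(torch.zeros(10, 256))    # appearance embeddings
l_pose = torch.nn.Parameter(torch.zeros(10, 16))     # pose embeddings

# Two parameter groups: 5e-4 for the canonical MLP and embeddings, 5e-5 elsewhere.
optimizer = torch.optim.Adam(
    [
        {"params": list(canonical_mlp.parameters()) + [l_app, l_pose], "lr": 5e-4},
        {"params": other_modules.parameters(), "lr": 5e-5},
    ],
    betas=(0.9, 0.99),
)
print([group["lr"] for group in optimizer.param_groups])
```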
### Comparison
**Baseline** We compare our method with HumanNeRF [46], the state-of-the-art free-viewpoint method on monocular videos. We run experiments on the individual datasets (_2009, 2012_,..., _2020_). We use the official HumanNeRF implementation with hyperparameters \(T_{s}=2.5K\) and \(T_{e}=5K\) to accommodate the much smaller input dataset size. Because HumanNeRF can only optimize for a single appearance, we do the same in our method. Finally, we train HumanNeRF for 200K iterations, the same number used in our method.
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|} \hline & **2009** & **2012** & **2013** & **2014** & **2015** & **2016** & **2017** & **2018** & **2019** & **2020** \\ \hline \hline HumanNeRF [46] & 70.64 & 80.62 & 75.09 & 73.00 & 93.89 & 83.35 & 82.19 & 69.40 & 67.47 & 73.01 \\ \hline Our method & 59.28 & 63.92 & 68.92 & 63.39 & 77.36 & 71.99 & 71.98 & 58.38 & 58.21 & 61.77 \\ \hline \end{tabular}
\end{table}
Table 1: Comparison to related work: FID is computed per dataset (per year). Lower FID score is better.
**Evaluation protocol** As we lack ground truth when evaluating results rendered from unseen views, we adopt Frechet inception distance (FID) [12] for quantitative comparison. For each input image, we rotate the camera in 10-degree increments around the "up" vector w.r.t. the body center and use these renderings for evaluation.
**Results** Quantitatively, as shown in Table 1, our method outperforms HumanNeRF on all datasets by comfortable margins. The performance gain is particularly significant when visualizing the results, as shown in Fig. 5. Our method is able to create consistent geometry, sharp details, and nice renderings, while HumanNeRF tends to produce irregular shapes, distorted textures, and noisy images, due to insufficient inputs.
**Ablation studies** Fig. 4 shows visually how we outperform HumanNeRF by modifying the model and introducing new losses. By removing non-rigid motion, we get a significant quality boost. We further enhance the shape and texture reconstruction with the geometry and opacity losses. Table 2 quantifies the importance of each element. We get the best performance when including all the refinements.
**Appearance and pose consistency** Fig. 3 illustrates the benefit of training all images with a single network. In contrast to individually trained networks, Fig. 3-(a) shows that it can synthesize compatible textures for unobserved regions as a result of better generalization, thus maintaining appearance consistency; Fig. 3-(b) demonstrates that the unified network is able to keep the rendered body pose consistent across different appearances, thanks to the shared motion weight volume, hence guaranteeing pose consistency.
**Visualization of Federer space** In Fig. 6, we visualize the rebuilt Federer space by keeping the body pose fixed and rendering dense samples in the camera-appearance plane starting from one photo. In this case, only a single image (the one with a red square) is directly observed, showing how sparse the observations used to rebuild the space are. The renderings are sharp with few artifacts, and appearance and pose consistency are well maintained.
## 5 Discussion
**Limitations** Our work builds upon HumanNeRF to account for sparse inputs and diverse appearance. While it is effective in this challenging scenario, it inherits some of HumanNeRF's limitations such as its reliance on the initialized poses, its assumption of relatively diffuse lighting,
\begin{table}
\begin{tabular}{|c|c|c|} \hline & \multicolumn{1}{c|}{FID \(\downarrow\)} \\ \hline \hline HumanNeRF [46] & 76.87 \\ \hline Ours & \(-\) non-rigid & 71.75 \\ \hline Ours & \(-\) non-rigid & 76.84 \\ & \(+\) geometry & 67.01 \\ \hline Ours & \(-\) non-rigid & **65.52** \\ & \(+\) geometry, opacity & **65.52** \\ \hline \end{tabular}
\end{table}
Table 2: Ablation: average FID (lower is better) over 10 datasets.
Figure 4: Ablation study. Removing the non-rigid motion component from HumanNeRF significantly improves reconstruction quality. Adding our geometry loss further refines the shape (green arrow) but introduces “haze” artifacts (red arrow), which we address with the opacity loss.
Figure 3: (a) **Appearance consistency**: training all appearance sets with a single network synthesizes higher quality texture for unobserved regions, while training with separate networks produces incompatible colors (green arrow). (b) **Pose consistency**: In comparison to the source pose reconstruction (i.e., the combination of pose and appearance observed in training), separate-networks training produces unsatisfactory results when combining the pose with unseen appearances; the head orientations are different from the input (red arrow) and the bodies are unnaturally distorted (blue arrow). In contrast, single-network optimization enables consistent output.
and its requirement for manual human segmentation. Additionally, since human body pose estimators typically fail on images with heavily-occluded bodies, we can only use input photos that view the full body.
**Societal impact** In this work, we aim to faithfully produce images of a person with the capability of just rendering unseen views and switching appearance within their own set of appearances. The work does not intend to create motions and animations that didn't happen. While we focus in the paper only on one person and show more examples in the supplementary material, it is important to validate in future work that the method scales to a wide range of subjects.
**Conclusion** We have presented PersonNeRF, which allows rendering a human subject with arbitrary novel combinations of body pose, camera view, and appearance from an unstructured photo collection. Our method enables exploring these combinations by traversing a reconstructed space spanned by these attributes and demonstrates high-quality, consistent results across novel views and unobserved appearances.
Figure 5: Our method produces more convincing renderings with fewer artifacts than those from HumanNeRF [46]. Note how HumanNeRF produces errors in regions occluded from the input view, while our method produces plausible geometry. _Photo credits to Getty Images._
Figure 6: The visualization of the (appearance, camera view) plane of the reconstructed Federer space. Note that only the image in the red square was directly observed in the input data.
**Acknowledgement:** We thank David Salesin and Jon Barron for their valuable feedback. This project is a tribute from the first author, a die-hard tennis fan, to Novak, Rafa, Roger, and Serena. He feels blessed to have lived in their era and wishes it would never come to an end. This work was funded by the UW Reality Lab, Meta, Google, Oppo, and Amazon.
|
2305.04399
|
IT from QUBIT or ALL from HALL?
|
Generalized $1+0$-dimensional Liouvillean dynamics describing deformations of
the Sachdev-Ye-Kitaev (SYK) model, as well as the various $1+1$-dimensional
dilaton and Horava-Lifshitz gravity theories, can all be mapped onto
single-particle quantum mechanics of a non-relativistic charge propagating in a
(generally, curved) $2d$ space and subject to a (generally, non-uniform)
magnetic field. The latter description provides a standard playground for the
phenomenon of Quantum Hall Effect (QHE), thereby elucidating the intrinsically
topological nature of pertinent gravity theories and demystifying their
(pseudo)holographic connection to a broad class of the SYK-like models.
|
D. V. Khveshchenko
|
2023-05-08T00:53:36Z
|
http://arxiv.org/abs/2305.04399v1
|
# IT from QUBIT or ALL from HALL?
###### Abstract
Generalized \(1+0\)-dimensional Liouvillean dynamics describing deformations of the Sachdev-Ye-Kitaev (SYK) model, as well as the various \(1+1\)-dimensional dilaton and Horava-Lifshitz gravity theories, can all be mapped onto single-particle quantum mechanics of a non-relativistic charge propagating in a (generally, curved) \(2d\) space and subject to a (generally, non-uniform) magnetic field. The latter description provides a standard playground for the phenomenon of Quantum Hall Effect (QHE), thereby elucidating the intrinsically topological nature of pertinent gravity theories and demystifying their (pseudo)holographic connection to a broad class of the SYK-like models.
_Holographic mirages_
In light of the slower-than-desired progress in understanding the great many quantum many-body systems, there has long been a dire need for finding a universal geometric (or, possibly, hydrodynamic) description of interacting quantum matter in terms of some semi-classical collective field variables.
Historically, this idea was first implemented in the framework of classical kinetic theory formulated in terms of the Wigner distribution function and its moments, which description could then be further promoted to the (formally exact) phase space path integral over the corresponding field variable. Conceptually, such a construction can be classified as Kirillov-Kostant co-cycle quantization on the orbits of a given system's dynamical symmetry group.
However, the intrinsic complexity of working with such exact, yet often intractable, formalism brought out a variety of approximate techniques, of which the best known one is (non-linear) bosonization by which some aspects of the quantum dynamics of interacting (fermionic) matter would be described in terms of shape fluctuations of the underlying Fermi surface [1].
Albeit being quite different in its appearance, the more recent conjecture of holographic duality has been pursuing a somewhat similar goal. In this revolutionary proposal, the equivalent bosonic variables would be assumed to organize into multiplets reminiscent of the metric, vector, and scalar fields in one higher dimension and governed by some local Einstein-Maxwell-scalar type of action (see [2] and references therein).
Although vigorous attempts to put the general holographic conjecture on a solid ground have been continuing for over two decades, a satisfactory proof still remains elusive. This fact notwithstanding and putting the general burden of proof aside, much of the massive effort exercised under the auspices of the so-called AdS/CMT (Anti-de-Sitter/Condensed Matter Theory) branch of applied holography has been devoted to the heuristic 'bottom-up' approach.
The latter offers a seemingly appealing resolution of the proverbial 'goals and means' dilemma by unequivocally postulating validity of the holographic conjecture in its broadest interpretation. As to the justification, it has been resorting to the steadfast declarations (bordering, at times, on either collective delusion or cargo cult) of its asserted success in explaining the data on a host of (allegedly) strongly coupled condensed matter systems [2].
Specifically, in a great many [3] of the remarkably verbose and look-alike works (even prior to the advent of Chat-GPT) on AdS/CMT, the choice of a dual gravity theory and its bulk metric would be made merely on the basis of technical convenience, such as the known classical solutions and normal modes of their fluctuations, the availability of computational software developed in classical general relativity, etc.
However, the custom of flatly claiming applicability of the holographic ideas to a cornucopia of condensed matter - i.e., neither supersymmetric (SUSY), nor Lorentz-, nor translationally- and/or rotationally-invariant (contrary to the parent Maldacena's conjecture where all these symmetries would necessarily be present [2]) - systems of \(N\sim 1\) 'flavors' (for which values of \(N\) the dual gravity theory would have been anything but local, contrary to the common AdS/CMT assumption) of strongly (albeit, for the most part, not 'very strongly', as required for a classical treatment of the bulk gravity) interacting species has been showing a gradual - yet apparent - demise as of late [3]. Arguably, the opportunistic 'anything goes' approach has run out of steam and the field has finally come to the point where it would need to be pursued differently, by seeking a more solid foundation.
To that end, while being far more scientifically sound, the attempts [4] to construct a general holographic picture from the first principles of quantum information (e.g., tensor networks) implementing the so-called 'IT from QUBIT' concept [2], so far, have not proceeded beyond the exploratory level. In most cases where the bulk metric was definitively ascertained it was found to be of the basic \(AdS\) or Lifshitz kind, thus casting doubts on the possibility of constructing anything as even nearly exotic as, e.g., the 'helical Bianchi \(VII\)' geometry that was repeatedly invoked in the AdS/CMT scenarios of the so-called 'strange metallic' normal state of cuprates [5].
Likewise, the previous attempts [4] to derive holography directly from the scale-dependent renormalization group (RG) flow, thus implementing the holographic 'RG=GR' principle [2], have been largely inconclusive and, thus far, produced either the same plain \(AdS\) or, else, unrecognizable bulk geometries.
Also, the attempts[6] to establish a holographic correspondence between the bulk AdS gravity (in the Lorentz signature) and an ordinary superconductor (even a weakly coupled BCS one) hinge on the formal similarity between the \(2d\) d'Alembertian operator acting in the so-called kinematic space and the mixed second derivative of a bi-local function (see, e.g., Eq.(19) below), thus falling too short of providing a true 'derivation' of holography.
As compared to all the questionable (and, for its most part, easy to debunk[7]) evidence purported to be consistent with AdS/CMT, the recent studies of the holography-like correspondence[8] between the ensemble-averaged quantum mechanical SYK model[9] in \(1+0\) dimensions and Jackiw-Teitelboim (JT) gravity[10] in \(1+1\) dimensions may seem to have finally delivered a strong argument supporting the holographic conjecture (albeit in a form which is rather different from the earlier 'ad hoc' AdS/CMT constructions[2]).
At the very minimum, the following discussion aims at extending the list of holographically dual \(1+0\)- and \(1+1\)-dimensional problems beyond the extensively studied case of SYK-JT. It is argued that, conceivably, this specific example may represent a more general equivalence between a whole class of the deformed SYK models and a certain family of generalized \(2d\) gravities.
Even more importantly, taken at its face value the SYK-JT duality raises an important question as to whether or not any (or all) instances of actually proven - as opposed to merely assumed - cases of holographic correspondence would be limited to those situations where the bulk theory appears to be of a (possibly, implicit) topological nature?
In the specific case of SYK-JT, the bulk system does happen to be intrinsically topological, akin to QHE. Therefore, should the answer to the above question happen to be affirmative, it would naturally explain the otherwise rather baffling duality between certain systems of (ostensibly) different dimensionalities, as per the central holographic conjecture. Also, it would prompt one to look for a hidden 'Hall-ness' in any situation where some holographic features may have been observed.
_From SYK to Liouville via Schwarzian_
Extensions of the original SYK model are described by a generic Hamiltonian
\[\hat{H}=\sum_{q}\sum_{i_{1}\ldots i_{q}}J_{i_{1}\ldots i_{q}}\hat{\chi}_{i_{1} }\ldots\hat{\chi}_{i_{q}} \tag{1}\]
which combines the products of some even numbers \(q\) of the \(N\)-colored Majorana or Dirac fermion operators \(\hat{\chi}_{i}(\tau)\), where \(i=1,\ldots,N\) [9]. In turn, the independent Gaussian-distributed classical random amplitudes \(J_{i_{1}\ldots i_{q}}\) of the all-to-all \(q\)-body entanglement are characterized by the variances
\[\overline{J_{i_{1}\ldots i_{q}}J_{j_{1}\ldots j_{q}}}=J_{q}^{2}\prod_{k=1}^{q}\delta_{i_{k},j_{k}} \tag{2}\]
The analysis of the model (1) typically starts by integrating the fermions out, thereby arriving at the action in terms of the bi-local field \(G(\tau_{1},\tau_{2})\) which represents the fermion propagator and the corresponding self-energy \(\Sigma(\tau_{1},\tau_{2})\)[9]
\[S[G,\Sigma] = \frac{N}{2}\int d\tau_{1}\int d\tau_{2}(\ln(\partial_{\tau_{1}} \delta(\tau_{1}-\tau_{2})-\Sigma(\tau_{1},\tau_{2})) \tag{3}\] \[+ \Sigma(\tau_{1},\tau_{2})G(\tau_{1},\tau_{2}))-F[G(\tau_{1},\tau _{2})]\]
where the functional \(F[G]\) results from the Gaussian averaging. Moreover, Eq.(2) can be further promoted to a retarded and/or non-uniform disorder correlation function, thus introducing a notion of spatial dimensions and space/time-dependent (retarded and/or non-local) entanglement-like couplings[11].
Solving for the self-energy \(\Sigma=\delta F/\delta G\), the Schwinger-Dyson equation derived from (3) can be cast in the form
\[\int d\tau(\partial_{\tau}\delta(\tau_{1}-\tau)+\frac{\delta F}{\delta G(\tau _{1},\tau)})G(\tau,\tau_{2})=\delta(\tau_{1}-\tau_{2}) \tag{4}\]
In the original \(SYK_{q}\) model with \(F[G]=J^{2}G^{q}\) Eq.(4) remains invariant under the infinite group \(Diff(S^{1})\) of reparametrizations (diffeomorphisms) of the thermal circle \(\tau\to f(\tau)\) with the periodicity condition \(f(\tau+\beta)=f(\tau)+\beta\), as long as the derivative term is neglected and provided that \(G\) and \(\Sigma\) transform as
\[G_{f}(\tau_{1},\tau_{2}) = (\partial_{\tau_{1}}f(\tau_{1})\partial_{\tau_{2}}f(\tau_{2}))^{ \Delta}G(f(\tau_{1}),f(\tau_{2})) \tag{5}\] \[\Sigma_{f}(\tau_{1},\tau_{2}) = (\partial_{\tau_{1}}f(\tau_{1})\partial_{\tau_{2}}f(\tau_{2}))^{ 1-\Delta}\Sigma(f(\tau_{1}),f(\tau_{2}))\]
The above properties of Eq.(4) single out a translationally-invariant mean-field solution (hereafter, \(\tau=\tau_{1}-\tau_{2}\) and \(\beta\) is the inverse temperature)[9]
\[G_{0}(\tau_{1},\tau_{2})=(\frac{\pi}{\beta\sin(\pi\tau/\beta)})^{2\Delta} \tag{6}\]
In the zero temperature limit and for \(J\tau\gg 1\) it demonstrates a pure power-law ('conformal') behavior \(G_{0}(\tau_{1},\tau_{2})\sim 1/(J\tau)^{2\Delta}\) with the fermion dimension \(\Delta=1/q\).
This solution spontaneously breaks the full reparametrization symmetry down to its three-dimensional subgroup \(SL(2,R)\) implemented through the Mobius transformations \(\tau\rightarrow(a\tau+b)/(c\tau+d)\) where \(ad-bc=1\), under which the solution (6) and the action (3) remain invariant.
The reparametrization transformations outside the \(SL(2,R)\) subgroup modify the functional form of \(G_{0}\), thus exploring the entire coset \(Diff(S^{1})/SL(2,R)\) and providing it with the structure of a co-adjoint Virasoro orbit. The deviations from (6) are controlled by the short-time expansion
\[\delta G_{f}(\tau_{1},\tau_{2})=\frac{\Delta}{6}\tau^{2}Sch\{f(T),T\}G_{0}^{2} (\tau_{1},\tau_{2})+\ldots \tag{7}\]
Hereafter \(T=(\tau_{1}+\tau_{2})/2\) and \(Sch\) denotes the Schwarzian derivative, \(Sch\{f,x\}=f^{\prime\prime\prime}/f^{\prime}-\frac{3}{2}(f^{\prime\prime}/f^{ \prime})^{2}\) (here \(f^{\prime}=df/dx\)) which obeys the differential 'composition rule' \(Sch\{F(f),x\}=Sch\{F(f),f\}{f^{\prime}}^{2}+Sch\{f,x\}\).
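As a quick symbolic sanity check of the composition rule and of the \(SL(2,R)\) invariance discussed above, the following SymPy sketch (illustrative only) verifies that a Mobius outer map has vanishing Schwarzian and leaves \(Sch\{f,x\}\) unchanged.

```python
import sympy as sp

x, u = sp.symbols('x u')

def schwarzian(expr, var):
    """Sch{f, x} = f'''/f' - (3/2) (f''/f')^2."""
    f1, f2, f3 = [sp.diff(expr, var, n) for n in (1, 2, 3)]
    return sp.simplify(f3 / f1 - sp.Rational(3, 2) * (f2 / f1) ** 2)

# Verify Sch{F(f), x} = Sch{F, f}|_{f=f(x)} * f'(x)^2 + Sch{f, x}
f = sp.tan(x)                      # inner reparametrization
F = (2 * u + 1) / (u - 3)          # an SL(2,R) (Mobius) outer map
lhs = schwarzian(F.subs(u, f), x)
rhs = schwarzian(F, u).subs(u, f) * sp.diff(f, x) ** 2 + schwarzian(f, x)
print(sp.simplify(lhs - rhs))      # 0: composition rule holds
print(schwarzian(F, u))            # 0: Mobius maps have vanishing Schwarzian
```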
The dynamics of the variable \(f(\tau)\) is then governed by the non-reparametrization invariant, yet manifestly geometrical and \(SL(2,R)\)-invariant, action
\[S_{0}[f]=-\frac{N}{Jq^{2}}\int d\tau Sch\{\tan\frac{\pi f}{\beta},\tau\} \tag{8}\]
that stems from the trace of the (infrared-irrelevant in the RG sense) time derivative \(\partial_{\tau}G\) in the gradient expansion of the first term in Eq.(3).
The mean-field ('large-\(N\)') SYK solution (6) is only applicable for \(1/J\ll\tau,\beta\ll N/J\), under which conditions the fluctuations \(\delta G\) about the saddle point \(G_{0}\) remain small. By contrast, in the 'Schwarzian' (long-time, low-temperature, \(N/J\lesssim\tau,\beta\)) regime these fluctuations can grow strong, thereby significantly altering the mean-field behavior [8; 12].
Namely, upon the Langer transformation \(\partial_{\tau}f=e^{\phi}\) the Schwarzian action (8) reduces to the (ostensibly) free expression in terms of the (unbounded) variable \(\phi(\tau)\), \(S_{0}[\phi]\sim\int d\tau(\partial_{\tau}\phi)^{2}\). However, the true Liouvillean action remains strongly non-Gaussian, as follows from the analysis of the products of propagators
\[<G_{f}(\tau_{1},\tau_{2})\ldots G_{f}(\tau_{2p-1},\tau_{2p})>=\int D\phi\,e^{-S_{0}[\phi]}\prod_{i=1}^{p}\frac{e^{\Delta(\phi(\tau_{2i-1})+\phi(\tau_{2i}))}}{(\int_{\tau_{2i-1}}^{\tau_{2i}}d\tau e^{\phi})^{2\Delta}} \tag{9}\]
Computing such amplitudes requires the denominator to be promoted to the exponent in the form of \(2p\) consecutive quenches under the action of the local 'vertex' operators \(e^{\Delta\phi(\tau)}\). The resulting effective action
\[S[\phi]=\frac{N}{Jq^{2}}\int d\tau(\frac{1}{2}(\partial_{\tau}\phi)^{2}+J^{2} e^{2\phi}) \tag{10}\]
then acquires the exponential \(1d\) Liouville potential \(V_{2,2}(\phi)=J^{2}e^{2\phi}\) (hereafter, \(V_{a,b}(\phi)\) denotes a potential which behaves as \(e^{a\phi}\) and \(e^{b\phi}\) in the limits \(\phi\rightarrow\pm\infty\), respectively).
The action \(S[\phi]\) can be further quantized by switching to the Hamiltonian picture and substituting the momentum \(\pi=\delta S/\delta\partial_{\tau}\phi\) with \(-i\partial_{\phi}\). Consequently, one arrives at the (static) eigenvalue equation
\[-\partial_{\phi}^{2}\psi+V_{2,2}(\phi)\psi=E\psi \tag{11}\]
The spectrum of Eq.(11) is continuous, \(E_{k}=k^{2}\), and consists of the eigenstates \(\psi_{k}(z)\sim\sqrt{\nu\sinh\pi\nu}K_{i\nu}(kz)\) where \(\nu=\sqrt{E-1/4}\) and \(z=J\beta e^{\phi}\). The exact wave functions can be used to compute the various matrix elements \(<0|e^{\Delta\phi}|k>\) explicitly. Their calculation reveals a universal behavior of the averaged products of an arbitrary number of propagators in the long-time/low-temperature regime (\(N/J\lesssim\tau,\beta\)) where one finds \(<G_{f}^{p}(\tau,0)>\sim 1/(J\tau)^{3/2}\) for any \(p\geq 1\) and \(q\geq 2\) [12]. This behavior is markedly different from the (non-universal) mean-field one at short times/high temperatures (\(1/J\ll\tau,\beta\ll N/J\)), \(<G_{f}^{p}(\tau)>\sim G_{0}^{p}(\tau)\sim 1/(J\tau)^{2p/q}\).
The intrinsically non-Gaussian nature of the action (10) is manifest. Indeed, in the absence of the exponential term in (10) the latter amplitude would have been governed by the non-logarithmic correlator \(<\phi(\tau)\phi(0)>_{G}\sim J\tau\), thus demonstrating exponential, rather than algebraic, decay, \(<G_{f}^{p}(\tau)>_{G}\sim G_{0}^{p}(\tau)\exp(-\frac{1}{2}p^{2}\Delta^{2}<\phi( \tau)\phi(0)>_{G})\) which is also non-universal as a function of \(p\) and \(q\).
Notably, the \(1d\) action (10) is for just one variable representing the fluctuations of a single soft (energy) mode. It can be readily extended to include other degrees of freedom - as, e.g., in the case of the complex-valued ('Dirac', as opposed to 'Majorana') variant of the SYK model, the additional \(U(1)\) scalar field corresponding to the charge fluctuations [8].
_SYK deformations_
A deformation of the 'potential' part of the bi-local Liouvillean action (3) can generally be represented in terms of a two-time integral with the kernel
\[F[G]=\sum_{n}c_{n}\int d\tau_{1}\int d\tau_{2}G^{n}(\tau_{1},\tau_{2}) \tag{12}\]
with the coefficients \(c_{n}\sim NJ_{n}^{2}/q^{2}\) which results from the ensemble-averaged partition function and generalizes the original SYK model described by the single \(n=q\) term. Additional powers of \(G\) could also emerge if random amplitudes of the \(n\)- and \(m\)-body terms developed some (physically quite plausible) cross-correlations, resulting in \(\overline{J_{n}J_{m}}\neq 0\).
Beyond the Liouville point in the multi-dimensional Hamiltonian parameter space, the previous analyses of the action given by Eqs.(3) and (12) have been largely limited to the \(SYK_{q}-SYK_{q/2}\) model with only two non-zero coefficients, \(c_{q}=J^{2}N/2q^{2}\) and \(c_{q/2}=2\Gamma^{2}N/q^{2}\). For \(q=4\) it has been rather extensively discussed in the context of random tunneling between two SYK quantum dots [13; 14], the amplitude \(\Gamma\) being a variance of the tunneling amplitude. This action also finds its applications in theoretical cosmology ('traversable wormhole') and discussions of the \(1+1\)-dimensional analog of the Hawking-Page curve [15].
Moreover, in most of the previous analyses for \(\Gamma\ll J\) and \(\Gamma\gg J\) the terms with \(n=q/2\) and \(n=q\) would be treated, respectively, as small perturbations of one another. Specifically, for \(\Gamma\ll J\) and at relatively short times, \(1/J\ll\tau\ll J/\Gamma^{2}\), the value of the fermion dimension \(\Delta=1/q\) would be determined by the \(n=q\) term, while for \(\tau\gg J/\Gamma^{2}\) the \(n=q/2\) term takes over, thus causing a faster decay governed by \(\Delta=2/q\).
Such analysis can be potentially misleading, though, as it focuses on the soft ('angular' or 'along-the-valley') fluctuations about a chosen mean-field solution, while under a perturbation the mean-field solution itself might undergo a significant change which would then require a tedious account of the hard ('radial' or 'out-of-the-valley') fluctuations. It can be avoided, though, by using the proper solution of the mean-field equation derived with the use of the entire functional (12).
Of a particular interest are crossovers between different conformal fixed points where all pertinent coupling constants are of the same order. Such 'SYK transits' are not directly amenable to perturbation theory in the vicinity of the fixed points in question but can still be explored in the large-\(q\) limit (see the next Section). To that end, one can utilize the already available -and seek out new - non-perturbative (in general, non-conformal) mean-field solutions [16].
By analogy with the pure Liouvillean action (10), the canonical quantization procedure applied to the action \(S_{0}+\Delta S\) given by the sum of Eqs.(8) and (12) substitutes its non-Gaussian part with the ordinary single-time integral \(\Delta S(\phi)=\int d\tau V(\phi)\) where
\[V(\phi)=\sum_{n}c_{n}e^{2n\phi/q} \tag{13}\]
For one, the aforementioned two-term action with the non-zero coefficients \(c_{q}\) and \(c_{q/2}\) features the Morse potential [8; 16]
\[V_{2,1}(\phi)=c_{q}e^{2\phi}+c_{q/2}e^{\phi} \tag{14}\]
For both \(c_{q}\) and \(c_{q/2}\) positive the potential (14) features a continuous positive definite spectrum, \(E_{\nu}=\nu^{2}+\lambda^{2}+1/4\), with \(\lambda=c_{q/2}/c_{q}\) and the eigenstates
\[\psi_{\nu}(\phi)\sim\sqrt{\nu\sinh(2\pi\nu)}\Gamma(1/2-\lambda+i\nu)W_{ \lambda,i\nu}(2\lambda e^{\phi}) \tag{15}\]
where \(W_{\lambda,i\nu}\) is the Whittaker function. For \(c_{q/2}=0\) Eq.(15) reduces to the aforementioned eigenstates of the Liouvillean potential given by the modified Bessel functions.
By contrast, for \(c_{q/2}\) negative the potential develops a minimum and the spectrum includes \({\cal N}=[\lambda-1/2]\) bound states at the negative energies \(E_{n}=\lambda^{2}+1/4-(n-\lambda+1/2)^{2}\) where \(n=0,\ldots,{\cal N}\). The corresponding eigenstates are given by the associated Laguerre polynomials
\[\psi_{n}(\phi)\sim e^{(\lambda-n-1/2)\phi-e^{\phi}}L_{n}^{2\lambda-2n-1}(2e^{ \phi}) \tag{16}\]
At low temperatures (\(\Gamma^{2}\beta/J\gg 1\)) the number \({\cal N}\) of bound states increases and they become nearly equidistant, as in the harmonic oscillator potential.
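For orientation, the bound-state formula quoted above can be evaluated directly; the following sketch simply tabulates \(E_{n}\) over the quoted range \(n=0,\ldots,[\lambda-1/2]\) for illustrative values of \(\lambda\) and shows the level spacings becoming nearly equidistant as \(\lambda\) grows.

```python
import numpy as np

def morse_bound_states(lam):
    """Energies E_n = lam^2 + 1/4 - (n - lam + 1/2)^2 for n = 0, ..., floor(lam - 1/2)."""
    n_max = int(np.floor(lam - 0.5))
    if n_max < 0:
        return np.array([])
    n = np.arange(n_max + 1)
    return lam ** 2 + 0.25 - (n - lam + 0.5) ** 2

# Larger lam (lower temperature) gives more, nearly equidistant levels.
for lam in (1.2, 4.7):
    E = morse_bound_states(lam)
    print(f"lam = {lam}: E_n = {E.round(3)}, spacings = {np.diff(E).round(3)}")
```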
Notably, for \(J=2\Gamma\) the aforementioned monotonic and non-monotonic Morse potentials conspire to form a doublet of super-partners \(V_{\pm}(\phi)=W^{2}(\phi)\pm\partial_{\phi}W(\phi)\) with \(W(\phi)\sim e^{\phi}\). The ground state of the binding potential \(V_{-}\) then takes on the form \(\psi_{0}(\phi)\sim\exp(-\int Wd\phi)\).
Conceivably, the effective action \(S(\phi)\) may develop other interesting regimes at the points of still higher symmetry. Albeit being special, the integrable potentials may also provide insight into the general behaviors. A similar situation has long been known in the physics of integrable \(1d\) spin chains of arbitrary on-site spin.
In particular, below we demonstrate the emergence of the Toda-like action (in cosmology, a.k.a. 'oscillatory tracker model') described by the classically solvable two-term potential
\[V_{2,-2}(\phi)=c_{q}e^{2\phi}+c_{-q}e^{-2\phi} \tag{17}\]
which has only discrete levels. Its linearly independent solutions are given by the approximate formulas \(\psi_{\pm}(\phi)\sim\exp(\pm e^{\phi}-\phi/2)\). Notably, both Eqs.(14) and (17) belong to the still broader family of 'quasi-solvable' potentials \(V(\phi)=c_{q}e^{2\phi}+c_{q/2}e^{\phi}+c_{-q/2}e^{-\phi}+c_{-q}e^{-2\phi}\).
In the context of the problem of tunneling between two SYK quantum dots [13], going into the strong-coupling regime and taking into account multiple tunneling processes can be achieved by replacing \(G\) computed to zeroth order in tunneling with the all-order expression \(G/(1+i\sigma G)\) where \(\sigma\) is the tunneling conductance [14].
The corresponding potential \(V(\phi)\) can then consist of an infinite number of terms. In that regard, especially interesting is the 'hypersymmetric' Hulten potential \(V_{0,1}(\phi)\sim e^{\phi}/(1-e^{\phi})\) with all the coefficients being equal, \(c_{nq/2}=c\) for \(n\geq 1\). It develops the \(\sim 1/\phi\) behavior at small \(\phi\), reminiscent of the Coulomb potential. Unlike the latter, though, it features only a finite number \([\lambda]\) of bound states at \(E_{n}=-((\lambda^{2}-n^{2})/2\lambda n)^{2}\).
Another interesting ('variable scaling') model was proposed in Ref. [17]. It includes an infinite number of terms with the coefficients \(c_{nq/2}\sim n^{\nu}\). Performing an approximate summation over \(n\) one obtains a power-law potential \(V_{\infty,0}(\phi)=\sum_{n}n^{\nu-1}e^{n\phi}\sim 1/(-\phi)^{\nu}\) generalizing the Coulomb one.
_Large \(q\) limit_
An alternate approach to the generalized SYK-like models and a further justification of substituting Eq.(13) for (12) exploits the large-\(q\) approximation to the propagator [8]
\[G(\tau_{1},\tau_{2})=\frac{1}{2}sgn\tau(1+\frac{2}{q}g(\tau_{1},\tau_{2})+\ldots) \tag{18}\]
The higher order terms \(O(1/q^{2})\) can also be evaluated, albeit at increasingly prohibitive costs [8]. The path integral over the field \(g\) is governed by the action
\[S(g)=\frac{N}{q^{2}}\int d\tau_{1}\int d\tau_{2}\Big(\frac{1}{2}\partial_{\tau_{1}}g\,\partial_{\tau_{2}}g+V(g)\Big) \tag{19}\]
where the potential is given by Eq.(13) as a function of the bi-local field \(g(\tau_{1},\tau_{2})\).
The complete theory (19) is genuinely two-dimensional, the relative \(\tau\) and 'center-of-mass' \(T\) time variables playing the roles of the effective 'radial' and 'angular' coordinates in the \(2d\) 'kinematic space', respectively [18]. It is only by focusing on the former dependence and neglecting the latter that one can reduce the low-energy sector of (19) to the \(1d\) action akin to that given by Eqs.(8) and (13).
This way, one arrives at the equation of motion
\[\partial_{\tau}^{2}g(\tau)=-\partial_{g}V(g(\tau)) \tag{20}\]
whose solutions correspond to the mean-field configurations, thus yielding the mean-field propagator \(G_{0}(\tau)=\exp(2g(\tau)/q)\). A solution to Eq.(20) provides one with the means to probe the system's thermodynamics. To that end, by solving (20)
\[\tau=\int_{g_{0}}^{0}\frac{dg}{\sqrt{V(g_{0})-V(g)}} \tag{21}\]
and putting \(\tau=\beta/2\) one can compute the mean-field energy [17]
\[E=\frac{N}{4q^{2}}(\beta V(g_{0})-2^{3/2}\int_{g_{0}}^{0}dg\sqrt{V(g_{0})-V(g)}) \tag{22}\]
where \(g_{0}<0\) is the turning point of the potential \(V(g)\).
In the case of the Morse potential (14) with \(g\) substituted for \(\phi\) the explicit saddle point solution of (20) reads [17]
\[g_{0}(\tau)=\ln\frac{2A\sin^{2}\theta}{\cos(2\omega\tau/\beta-\omega)+\cos\theta} \tag{23}\]
where \(A=\sqrt{(\omega/\beta J)^{2}+(\Gamma/J)^{4}}\), \(\theta=\tan^{-1}(\omega J/\beta\Gamma^{2})\), and \(\omega\) obeys the equation \(2\omega^{2}=(\beta\Gamma)^{2}+A(\beta J)^{2}\cos\omega\). For \(\Gamma\ll J\) it takes the values \(\omega=\pi/2-O(1/\beta J)\) and \(\omega=\pi/2-O(\Gamma^{2}\beta/J)\) for \(1/J\ll\beta\ll 1/\Gamma\) and \(1/\Gamma\ll\beta\ll J/\Gamma^{2}\), respectively.
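The transcendental condition for \(\omega\) can also be solved numerically; a minimal sketch follows, with illustrative parameter values chosen in the regime \(\Gamma\ll J\), \(1/J\ll\beta\ll 1/\Gamma\) (they are not taken from the text):

```python
import numpy as np
from scipy.optimize import brentq

def omega_equation(w, beta, J, Gamma):
    # Residual of 2*w^2 = (beta*Gamma)^2 + A*(beta*J)^2*cos(w),
    # with A = sqrt((w/(beta*J))^2 + (Gamma/J)^4)
    A = np.sqrt((w / (beta * J))**2 + (Gamma / J)**4)
    return 2 * w**2 - (beta * Gamma)**2 - A * (beta * J)**2 * np.cos(w)

J, Gamma, beta = 1.0, 0.05, 10.0      # illustrative values with Gamma << J and 1/J << beta << 1/Gamma
omega = brentq(omega_equation, 1e-6, np.pi / 2, args=(beta, J, Gamma))
print(omega, np.pi / 2 - omega)       # omega approaches pi/2 as beta*J grows
```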
In the zero-temperature limit, Eq.(18) yields
\[G_{0}(\tau)=\frac{1}{2}\frac{sgn\tau}{(1+\sqrt{J^{2}+4\Gamma^{2}}\tau+\Gamma^ {2}\tau^{2})^{2/q}} \tag{24}\]
As compared to the approximate conformal propagator characterizing the original SYK model, this expression is UV-finite and naturally regularized at \(\tau\lesssim min[1/J,1/\Gamma]\). Also, in contrast with the perturbative results of Refs. [12; 13], the saddle-point solution (24) is applicable at all \(\Gamma/J\), large and small.
Gaussian fluctuations \(\delta g(\tau)\) about the saddle-point solution of Eq.(20) are governed by the action
\[\delta S=\frac{N}{2q^{2}}\int d\tau((\partial_{\tau}\delta g)^{2}+W(g_{0}( \tau))\delta g^{2}) \tag{25}\]
featuring the potential \(W(g(\tau))=\partial_{g}^{2}V(g)=\sum_{n}c_{n}n^{2}e^{ng}\) which is functionally similar to \(V(g)\) given by (13) and has to be evaluated at the solution \(g_{0}(\tau)\) of Eq.(20).
In contrast to the Schwarzian action (10) the fluctuations are scale-invariant and their strength is independent of temperature, being instead controlled by the numerical parameter \(N/q^{2}\) and decreasing/increasing with increasing \(N\) and \(q\), respectively. As compared to the fluctuations about the mean-field solution (6) those associated with the one given by \(g_{0}(\tau)\) correspond to the pseudo-Goldstone excitations about the fixed 'valley' in the space of field configurations which no longer needs to be adjusted.
Another uniquely simple (and previously unexplored) situation is the case of the Toda potential which, upon a global anisotropic coordinate rescaling, reduces to \(V_{T}(g)=J^{2}\cosh 2g\) and coincides with its second derivative up to a factor. Its classical equation of motion assumes the form of the celebrated sinh-Gordon equation, \(\partial_{\tau}^{2}g=-J^{2}\sinh g\) whose solution satisfying the initial condition \(g(0)=0\) reads
\[g_{0}(\tau)=-\ln\tan(J\tau+\frac{\pi}{4}) \tag{26}\]
Other known (quasi)solvable potentials are likely to provide novel mean-field solutions, alongside the associated actions for their fluctuations.
_Particle in magnetic field_
The Hamiltonians reminiscent of those discussed in the previous section routinely arise in the problem of a non-relativistic particle subject to a certain \(2d\) static geometry \(g_{ij}(x,y)\) and a vector potential \(A_{i}(x,y)\). By exploiting this analogy one can then replace a field-theoretical path integral over the fluctuating variable \(\phi(\tau)\) with a worldline one governed by the single-particle action
\[S[X]=\int d\tau(\frac{1}{2}g_{ij}\partial_{\tau}X^{i}\partial_{\tau}X^{j}+ \partial_{\tau}X_{i}A^{i}) \tag{27}\]
where \(X_{\mu}=(x,y)\). This equivalence is limited to the contributions of all single-valued (non-self-intersecting) curves which indeed dominate for low temperatures.
In the hyperbolic plane (\(H^{2}\)) geometry such a connection between the 'particle-in-magnetic-field' (PMF) problem and the SYK model has been extensively utilized before [8; 9]. It can be further extended towards a broader class of metrics and magnetic field configurations. As a technical simplification one can first explore the class of diagonal bulk metrics, \(g_{ij}(x,y)=diag[g_{xx}(x),g_{yy}(x)]\), and vector potentials in the Landau gauge, \(A_{i}(x,y)=(0,A_{y}(x))\), which choices facilitate a separation of variables in the corresponding Schroedinger equation with the Hamiltonian
\[H_{PMF}=\frac{1}{2}g^{xx}(x)\pi_{x}^{2}+\frac{1}{2}g^{yy}(x)(\pi_{y}-A_{y}(x)) ^{2} \tag{28}\]
where \(\pi_{i}\) is the conjugate momentum.
For the sake of the following discussion the background fields can be further restricted to the power-law functions
of the \(x\)-coordinate (here \(l\) is a characteristic length scale akin to the '\(AdS\) radius')
\[g^{xx}=(x/l)^{2\alpha},\quad g^{yy}=(x/l)^{2\beta},\quad A_{y}=Bl(l/x)^{\gamma} \tag{29}\]
so that the interval in this (Euclidean and, in general, anisotropic) metric reads \(ds^{2}=dx^{2}(l/x)^{2\alpha}+dy^{2}(l/x)^{2\beta}\).
In general, the Hamiltonian dynamics described by Eq.(28) develops in the \(4d\) phase space spanned by two pairs of canonically conjugated variables, \((x,\pi_{x})\) and \((y,\pi_{y})\). However, in the chosen gauge the \(y\) variable becomes cyclic and the conjugate momentum \(\pi_{y}=k\) is conserved, as in a translationally-invariant plane-wave solution propagating along the \(1d\) boundary of a \(2d\) region.
By comparison, the \(y\) variable can be paralleled with the aforementioned 'center-of-mass' time \(T\). In contrast, dynamics in the \(x\) direction remains non-trivial and is analogous to the dependence on the'relative' time \(\tau\).
The magnetic flux through the semi-space \(x\geq 0\)
\[\Phi=\int dxdy\sqrt{g}(\partial_{x}A_{y}-\partial_{y}A_{x})=B\int dxdy(\frac{l}{x})^{\gamma+1-\alpha-\beta} \tag{30}\]
scales with the area provided that \(\gamma+1=\alpha+\beta\).
A uniform magnetic field in flat space corresponds to \(\alpha=\beta=0\) and \(\gamma=-1\), while its much-studied counterpart on a hyperbolic plane \(H^{2}\) can be attained for \(\alpha=\beta=\gamma=1\).
Quantizing the PMF Hamiltonian (28) and factorizing its eigenstates, \(\Psi(x,y)=\psi(x)e^{iky}\), one arrives at the Schroedinger equation with the quasi-\(1d\) Hamiltonian
\[H=\frac{1}{2}x^{2\alpha}\pi_{x}^{2}+\frac{1}{2}(x^{2\beta}k^{2}-2x^{2\beta-\gamma}Bk+B^{2}x^{2\beta-2\gamma}) \tag{31}\]
which contains a triad of algebraic terms with the exponents \(2\beta,2\beta-\gamma\), and \(2\beta-2\gamma\).
Moreover, the Hamiltonian (31) could acquire still higher powers of \(x\) stemming from the relativistic corrections proportional to \((\pi_{i}-A_{i})^{2n}\) with \(n>1\).
For \(\alpha=1\) and with the use of the logarithmic reparametrization \(x=e^{z}\) one can cast Eq.(31) in the form of the ordinary \(1d\) Schroedinger equation in flat space with the potential \(V(z)\) given by Eq.(13). Incidentally, the metric takes the form \(ds^{2}=dz^{2}+e^{-2\beta z}dy^{2}\).
In contrast, for \(\alpha\neq 1\) the corresponding \(2nd\) order differential equation would exhibit a power-law potential \(V(z)=\sum_{n}c_{n}z^{n}\) after the reparametrization \(z=x^{1-\alpha}/(1-\alpha)\) and rescaling \(y\to y(1-\alpha)^{-\beta/(1-\alpha)}\), in which coordinates the metric takes the form \(ds^{2}=dz^{2}+z^{-2\beta/(1-\alpha)}dy^{2}\).
Moreover, for \(\alpha=1\) and non-zero \(k\) and \(B\) the three-term potential in Eq.(31) reduces to only two terms, provided that the other two exponents are related as \(\beta=0\), \(\beta=\gamma\), or \(\beta=\gamma/2\).
In the first two cases one obtains the Morse potential (14) with \(\lambda=kl/2\gamma\) and \(\lambda=Bl^{2}/2\gamma\), respectively. Thus, the Morse scenario extends beyond the well-known case of a constant field and \(H^{2}\) space of constant negative curvature. Nonetheless, the magnetic flux \(\Phi\) can only be proportional to the area \(\int dxdy\) for \(\beta=\gamma\), but not in the other two cases.
By contrast, the third combination of the parameters yields the Toda potential (17) with \(c_{q}=B^{2}l^{4}\) and \(c_{-q}=k^{2}l^{2}\) which conforms to the symmetric potential \(\cosh 2z\) upon uniform re-scaling \(z\to z+\frac{1}{2}\ln(k/Bl)\).
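The bookkeeping behind these reductions is straightforward: with \(\alpha=1\), \(x=e^{z}\), and the factors of \(l\) suppressed as above, the potential terms of Eq.(31) become
\[\frac{k^{2}}{2}e^{2\beta z}-Bk\,e^{(2\beta-\gamma)z}+\frac{B^{2}}{2}e^{(2\beta-2\gamma)z},\]
so that \(\beta=0\) or \(\beta=\gamma\) leaves, up to an additive constant, two exponentials with exponents in the ratio \(2:1\) (the Morse form (14)), while \(\beta=\gamma/2\) leaves the symmetric pair \(e^{\pm\gamma z}\) (the Toda form (17)).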
For a given PMF Hamiltonian much information can be inferred from its resolvent which allows for a spectral expansion over its \(2d\) eigenstates
\[D_{E}(x,y|x^{\prime},y^{\prime})=<x,y|\frac{1}{E-H+i0}|x^{\prime },y^{\prime}>=\] \[=\int dke^{ik(y-y^{\prime})}\sum_{n/\nu}\frac{\psi_{k,\nu}(x) \psi_{k,\nu}^{*}(x^{\prime})}{E-E_{k,\nu}+i0} \tag{32}\]
where \(\cosh d=1+((x-x^{\prime})^{2}+(y-y^{\prime})^{2})/2xx^{\prime}\) and the sum/integral \(\Sigma_{n/\nu}\) is over the discrete and/or continuous parts of the spectrum.
In the case of the \(1d\) Morse potential, Eq.(32) can be computed in a closed form
\[D_{E}(x,y|x^{\prime},y^{\prime})\sim(\cosh\frac{d}{2})^{2i\nu-1}\frac{\Gamma(\frac{1}{2}+\lambda-i\nu)\Gamma(\frac{1}{2}-\lambda-i\nu)}{\Gamma(1-2i\nu)}F(\frac{1}{2}+\lambda-i\nu,\frac{1}{2}-\lambda-i\nu,1-2i\nu,\frac{1}{\cosh^{2}d/2}) \tag{33}\]
where \(E=\nu^{2}+\frac{1}{4}+\lambda^{2}\) and \(F\) is the hypergeometric function.
Fourier transforming (33) one obtains a fundamental solution for the Morse potential
\[d_{E,k}(x,x^{\prime})=\int dye^{ik(y-y^{\prime})}D_{E}(x,y|x^{ \prime},y^{\prime})\sim \tag{34}\] \[\frac{\Gamma(\frac{1}{2}-\lambda-i\nu)}{k\Gamma(1-2i\nu)\sqrt{x_{<} x_{>}}}M_{\lambda,-i\sqrt{E}}(2kx_{<})W_{\lambda,i\sqrt{E}}(2kx_{>})\]
In the zero field limit, Eqs.(33) and (34) reduce to
\[D_{E}(x,y|x^{\prime},y^{\prime})\sim Q_{-1/2-i\nu}(\cosh d) \tag{35}\]
and
\[d_{E,k}(x|x^{\prime})\sim I_{-i\nu}(kx_{<})K_{i\nu}(kx_{>}) \tag{36}\]
where \(x_{>/<}\) is the larger/smaller value of \(x\), respectively.
Also, in the flat space limit, \(kl\to\infty\), Eq.(33) reproduces the well-known result
\[D_{E}(x,y|x^{\prime},y^{\prime})\sim\frac{\Gamma(1/2-E/B)}{\sqrt{Br^{2}}}W_{E/B, 0}(Br^{2}) \tag{37}\]
for the energies \(E_{n}=B(2n+1)\) corresponding to the degenerate Landau levels, as all the scattering states are pushed to infinity.
Another important calculable is the thermodynamic propagator ('heat kernel')
\[K_{\beta}(x,y|x^{\prime},y^{\prime})=<x,y|e^{-\beta H}|x^{\prime },y^{\prime}>=\] \[=\int dk\sum_{n/\nu}e^{ik(y-y^{\prime})-\beta E_{k,n/\nu}}\psi_{k, \nu}(x)\psi_{k,\nu}(x^{\prime}) \tag{38}\]
At zero field (i.e., in the case of the Liouville potential) it simplifies to
\[K_{\beta}(x,y|x^{\prime},y^{\prime})\sim\exp(-\frac{r^{2}}{\beta}-\frac{\beta}{l^ {2}})\sqrt{\frac{r/l}{\beta^{2}\sinh r/l}} \tag{39}\]
and can be used for studying the system's thermodynamic properties.
_Thermodynamics and chaos_
A partition function for the (generalized) SYK action given by Eqns.(8) and (13) is represented by the field-theoretical path integral
\[Z_{SYK}(\beta)=\int d\phi\int_{\phi(0)=\phi}^{\phi(\beta)=\phi}D\phi(\tau)e^{- \int_{\tau}S_{SYK}[\phi(\tau)]} \tag{40}\]
Alternatively, it can be computed in terms of the eigenfunctions/values \(\psi_{n/\nu}(\phi)\) and \(E_{n/\nu}\) of the corresponding \(1d\) Schroedinger equation
\[Z_{SYK}(\beta)=\int d\phi\sum_{n/\nu}|\psi_{n/\nu}(\phi)|^{2}e^{-\beta E_{n/ \nu}} \tag{41}\]
On the other hand, the PMF partition function is represented by the world-line path integral
\[Z_{PMF}(\beta)=\int dxdy\int_{x,y}^{x,y}Dx(\tau)Dy(\tau)e^{-\int_{\tau}S_{PMF} [x(\tau),y(\tau)]} \tag{42}\]
where \(S_{PMF}\) is constructed from the same Hamiltonian (28). With the use of the eigenfunctions \(\Psi_{k,n/\nu}(x,y)=\psi_{k,n/\nu}(x)e^{iky}\) it can be cast in a form similar to (41)
\[Z_{PMF}(\beta)=\int dxdy\int dk\sum_{n/\nu}|\Psi_{k,n/\nu}(x,y)|^{2}e^{-\beta E _{k,n/\nu}} \tag{43}\]
thus establishing some form of equivalence between the (generalized) SYK and PMF problems.
Alternatively, instead of performing a direct spectral summation the partition function can be deduced from the density of states (DOS)
\[Z(\beta)=\int_{0}^{\infty}dE\rho(E)e^{-\beta E} \tag{44}\]
In turn, the (many-body) DOS of the SYK-like system can be read off from its single-particle PMF counterpart (32)
\[\rho(E)=\frac{1}{2\pi}ImD_{E}(x,y|x,y) \tag{45}\]
In the Morse case, using the exact resolvent Eq.(33) one obtains the DOS in a closed form [19]
\[\rho_{M}(E)\sim\frac{\sinh 2\pi\sqrt{E}}{\cosh 2\pi\sqrt{E}+\cos 2\pi\lambda} \tag{46}\]
For \(\lambda=0\), one then finds the well-known low-energy behavior of the DOS in the SYK model, \(\rho(E)\sim\sqrt{E}\)[8; 9]. In contrast, for \(\lambda=1/2\) the DOS diverges as \(\rho(E)\sim 1/\sqrt{E}\). Notably, this behavior is reminiscent of that found in the SUSY version of the SYK model [8]. On the other hand, a periodic dependence on \(\lambda\) could be spurious and remains to be better understood.
For \(\lambda=0\), by performing an (inverse) Laplace transformation on (46) one can reproduce the low temperatures partition function of the Liouville model \(Z_{L}(\beta)\sim\exp(O(l^{2}/\beta))/\beta^{3/2}\) for \(\beta\gg 1/J\), while for \(\beta\ll 1/J\) it yields \(Z_{L}(\beta)\sim\exp(O(l^{2}/\beta))/\beta\). Thus, specific heat defined as \(C=\beta^{2}\partial_{\beta}^{2}\ln Z(\beta)\) decreases with increasing temperature from \(C=3/2\) down to \(C=1\).
In contrast, the thermodynamic properties of the Morse model appear to be markedly different. Namely, for \(\lambda=1/2\) specific heat rises from \(C=1/2\) for \(\beta J\gg 1\) to \(C=1\) for \(\beta J\ll 1\). Together with the aforementioned behavior of the density of states this might be suggestive of a phase transition at \(\lambda_{c}=1/2\).
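The quoted limits of the specific heat can be checked numerically from the closed-form DOS (46). Below is a minimal sketch, in units where the energies in (46) are dimensionless; the overall DOS normalization drops out of \(C\):

```python
import numpy as np
from scipy.integrate import quad

def rho_morse(E, lam):
    # Density of states of Eq. (46), up to an overall normalization
    s = 2.0 * np.pi * np.sqrt(E)
    return np.sinh(s) / (np.cosh(s) + np.cos(2.0 * np.pi * lam))

def specific_heat(beta, lam, E_max=400.0):
    # C = beta^2 (<E^2> - <E>^2), averages taken with weight rho(E) exp(-beta*E);
    # this equals beta^2 d^2 ln Z / d beta^2 for Z of Eq. (44)
    E_hi = min(E_max, 50.0 / beta)       # the Boltzmann factor suppresses larger energies anyway
    moment = lambda n: quad(lambda E: E**n * rho_morse(E, lam) * np.exp(-beta * E),
                            0.0, E_hi, limit=200)[0]
    Z, E1, E2 = moment(0), moment(1), moment(2)
    return beta**2 * (E2 / Z - (E1 / Z)**2)

for lam in (0.0, 0.5):
    # text: C -> 1 at high temperature; C -> 3/2 (lam = 0) or 1/2 (lam = 1/2) at low temperature
    print(lam, round(specific_heat(0.05, lam), 2), round(specific_heat(200.0, lam), 2))
```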
Such a conductor-to-insulator transition in the \(SYK\) double-dot system has been studied both without [13] and with [14] such a realistic factor as Coulomb blockade taken into account. Conceivably, a bulk counterpart of this transition has long been predicted to occur between the SYK non-Fermi liquid and the disordered Fermi liquid in a granular array of randomly \(SYK_{2}\)-coupled \(SYK_{4}\) clusters [20].
A difference between the two phases on the opposite sides of this purported transition can be elucidated with the use of out-of-time-order correlators (OTOC). Generically, the OTOC amplitudes are expected to demonstrate some initial short-time/high-temperature exponential growth
\[\frac{<G_{f}(\tau_{1},\tau_{3})G_{f}(\tau_{2},\tau_{4})>}{<G_{f}(\beta/2,0)>^{2 }}=1-O(\frac{\beta J}{N})e^{\lambda_{L}t} \tag{47}\]
revealed by summing the 'causal' ladder series and controlled by the chaotic Lyapunov exponent \(\lambda_{L}\)[8].
The latter can be deduced directly from Eq.(11) for a general potential \(V_{a,b}\) upon restoring a dependence of the fluctuating normal mode \(\delta g(\tau,T)\sim e^{\lambda_{L}T}\chi(\tau)\) on the 'center-of-mass' time \(T\) and then continuing it analytically, \(\tau\to it+\beta/2\)[8; 9].
This way, one arrives at the eigenvalue equation in terms of the variable \(u=\tau/\beta\)
\[(\partial_{u}^{2}-W(g_{0}(u\beta)))\chi=(\frac{\lambda_{L}\beta}{2\pi})^{2}\chi \tag{48}\]
where \(W(g(\tau))\) was defined after Eq.(11).
In the case of the Morse potential, one obtains the equation
\[\partial_{u}^{2}\chi+(\frac{\cos\theta}{\cosh u+\cos\theta}+\frac{2\sin^{2} \theta}{(\cosh u+\cos\theta)^{2}})\chi=(\frac{\lambda_{L}\beta}{2\pi})^{2}\chi \tag{49}\]
where the effective potential crosses over from \(W_{q}=-2/\cosh^{2}u\) in the pure \(SYK_{q}\) limit (\(\theta\to\pi/2\)) with the ground state \(\chi_{q}\sim 1/\cosh u\) to \(W_{q/2}=-1/2\cosh^{2}(u/2)\) in the pure \(SYK_{q/2}\) one (\(\theta\to 0\)). In both limits, the Lyapunov exponent approaches its maximal value \(\lambda_{L}^{max}=2\pi/\beta\) [8] as \(\lambda_{L}/\lambda_{L}^{max}=1-O(max[1/\beta J,\Gamma^{2}\beta/J])\) for \(\Gamma\ll J\) and \(1/J\lesssim\beta\lesssim J/\Gamma^{2}\), or \(1-O(J/\Gamma^{2}\beta)\) for \(\Gamma\gg J\) and \(J/\Gamma^{2}\lesssim\beta\lesssim 1/\Gamma\) [17].
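A simple way to extract \(\lambda_{L}\) from Eq.(49) at any \(\theta\) is to diagonalize the corresponding \(1d\) Schroedinger operator on a grid and read off its ground-state energy, since Eq.(48) gives \((\lambda_{L}\beta/2\pi)^{2}=-E_{0}\). A minimal finite-difference sketch (the box size and grid spacing are illustrative choices):

```python
import numpy as np

def lyapunov_ratio(theta, L=30.0, n=2000):
    # Solve -chi'' + W(u) chi = E chi, with W(u) read off from Eq. (49);
    # the ground-state energy E0 gives lambda_L * beta / (2*pi) = sqrt(-E0)
    u = np.linspace(-L, L, n)
    h = u[1] - u[0]
    W = -(np.cos(theta) / (np.cosh(u) + np.cos(theta))
          + 2.0 * np.sin(theta)**2 / (np.cosh(u) + np.cos(theta))**2)
    H = (np.diag(2.0 / h**2 + W)
         - np.diag(np.ones(n - 1) / h**2, 1)
         - np.diag(np.ones(n - 1) / h**2, -1))
    E0 = np.linalg.eigvalsh(H)[0]
    return np.sqrt(max(-E0, 0.0))

print(lyapunov_ratio(np.pi / 2))   # pure SYK_q limit: ratio -> 1, i.e. lambda_L -> 2*pi/beta
print(lyapunov_ratio(0.3))         # an intermediate crossover value
```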
In the special case of \(q=4\), though, the fixed-point \(SYK_{2}\) behavior corresponds to the disordered but non-chaotic Fermi liquid where \(\lambda_{L}\) is expected to vanish.
In the intermediate regime and for \(q>4\) the Lyapunov exponent appears to take lower, yet non-zero, values[16]. It does not vanish at any finite temperature, though, thus calling for a closer look at any scenario of a genuine finite-temperature phase transition - or a zero-temperature one predicted to occur at a critical ratio \(\Gamma/J\) vanishing at large \(N\) as a power of \(1/N\)[12; 20]. In that regard, it would be particularly interesting to compute \(\lambda_{L}\) at the supersymmetric point \(J=2\Gamma\).
Also, in the aforementioned 'variable scaling' model[17] some non-maximal and non-universal, yet temperature-independent and growing with the increasing integer parameter \(n\), values of \(\lambda_{L}\) were reported on the basis of the numerical solution of (48). In turn, the Hulten potential falls somewhere in between the 'super-symmetric' point of the \(SYK_{q}-SYK_{q/2}\) model and the 'variable scaling' one[16].
As an interesting consistency check, the eigenfunction equation for the Toda potential \(V_{2,-2}\) evaluated on the solution (26) satisfies the same Eq.(49) apart from the constant shift, \(W_{T}=-2/\cosh^{2}u+1\), which raises the ground state energy to zero, thus implying \(\lambda_{L}=0\). This observation is consistent with the non-chaotic nature of the discrete spectrum consisting only of the bound states.
_Dual gravities_
In \(2d\), a powerful gauge invariance under local coordinate diffeomorphisms eliminates any bulk degrees of freedom, thereby making such theories locally quantum trivial in the absence of matter. Such bulk theories appear to be topological and allow for explicit classical solutions, thereby providing natural candidates for testing out the foundations of the holographic principle.
Moreover, the gauge symmetry leaves only one independent metric component (e.g., \(g_{01}=g_{10}=0,g_{00}=1/g_{11}\)), thus reducing (up to a conformal factor) all the static (Euclidean) metrics to the set \(ds^{2}=e^{2\nu(x)}d\tau^{2}+e^{-2\nu(x)}dx^{2}\) parametrized by a single function \(\nu(x)\).
However, a \(2d\) gravity theory can still develop a non-trivial boundary behavior as a result of introducing either an additional dilaton, Liouville, or scalar matter field. Alternatively, it requires anisotropic space vs time scaling, \(\tau\sim x^{z}\), characterized by a dynamical critical index \(z\). Thus, such extensions can be sought-out not only in the context of generalized JT but also the Horava-Lifshitz (HL)[21] theories.
The original (ostensibly \(2d\)) JT model is well known to be described by the Schwarzian boundary action (8) providing a natural holographic connection to the edge modes propagating along the \(1d\) boundary[8; 10]. Indeed, the Schwarzian can be directly related to the extrinsic curvature of a fluctuating closed \(1d\) boundary of a \(2d\) region, \(K=1+Sch\{\tan\pi f/\beta,\tau\}+...\).
In practical terms, establishing generalized holographic duality with a given Liouville-type theory described by Eqns.(8) and (13) can be formulated as a task of constructing the bulk theory whose boundary dynamics is governed by the same \(1d\) Hamiltonian as that of the conjectured dual \(1d\) quantum system.
In that regard, the boundary actions of SUSY and higher spin extensions of the \(2d\) dilaton gravities were argued to represent certain specific limits of the generalized JT model, including its non- and ultra-relativistic variants. Alternatively, the complex SYK model was argued to have a possible flat space bulk dual[10].
The most general action incorporating a dynamical dilaton[10]
\[S_{d}=\int dxd\tau\sqrt{g}(RU(\Phi)+V(\Phi)(\partial\Phi)^{2}+W(\Phi)) \tag{50}\]
is parametrized in terms of the functions \(U\), \(V\), and \(W\) of the dilaton field \(\Phi\). Such generalized dilaton gravities have been encountered among the deformations and compactifications of the higher-dimensional theories. Among them, there is an important family of potentials \(U\sim V\sim W\sim\Phi\) which may allow for the \(AdS_{2}\) ground state.
Moreover, the so-called \(F(R)\)-gravities with the generic action \(S_{F(R)}=\int dxd\tau F(R)\) were argued to be all equivalent to the 'minimal' JT action with \(F(R)=\Phi R-V(\Phi)\), with the expectation value of the dilaton field being related to the Ricci curvature, as per the equation \(R=\partial V/\partial\Phi\)[22].
Alternatively, the intrinsically topological nature of the JT gravity can be made manifest with the use of its \(1st\) order formulation[23]
\[S_{JT}=\int(\Phi\epsilon_{\mu\nu}d^{\mu}\omega^{\nu}+W(\Phi) \epsilon^{ab}\epsilon_{\mu\nu}e^{\mu}_{a}e^{\nu}_{b}+\] \[X^{a}\epsilon_{\mu\nu}d^{\nu}e^{\mu}_{a}+X^{a}\epsilon^{b}_{a} \omega_{\mu}e^{\mu}_{b}) \tag{51}\]
in terms of the vielbein \(e^{\mu}_{a}\) and spin-connection \(\omega^{\mu}\) which is independent of the background metric \(g_{\mu\nu}=\eta_{ab}e^{a}_{\mu}e^{b}_{\nu}\) (here \(\eta_{ab}\) is the flat space metric). Notably, the action (51) shares its topological nature with the \(3d\) gravity that can be cast in terms of the (twinned) Chern-Simons theory[24].
Another viable candidate to the role of a \(2d\) gravity dual to a generalized SYK model can be sought out in the form of the (Lorentz non-invariant) Horava-Lifshitz action[21]
\[S_{HL}=\int dxd\tau\sqrt{g}N(aK^{2}+b\Lambda+c(N^{\prime}/N)^{2}) \tag{52}\]
where \(a,b,c\) are numerical parameters, \(\Lambda\) is the \(2d\) cosmological term, \(N\) and \(N_{x}\) are the lapse and shift functions, \(h=\sqrt{g_{xx}}\), and \(K=-(\dot{h}/h-N^{\prime}_{x}/h^{2}+N_{x}h^{\prime}/h^{3})/N\), the dots and primes standing for the time and space derivatives, respectively. In contrast to the dilaton gravity (51), Eq.(52) is only invariant under the foliation-preserving diffeomorphisms \(\tau\rightarrow\tau^{\prime}(\tau)\) and \(x\to x^{\prime}(x,\tau)\).
Certain previously proposed \(F(R)\)-HL theories provide the Lifshitz-type black hole solutions with a constant negative curvature \(R=-2z^{2}/l^{2}\).
Under the projectability condition[21] one can choose \(N=N(\tau)\) to be a global (spatially uniform) variable which then gives rise to \(K^{\prime}=0\) (hence, \(K=K(\tau)\)). Furthermore, by using the coordinate gauge symmetry one can fix \(N=1\) and \(N_{x}=0\). The number of primary and secondary Hamiltonian constraints then equals the dimension of the phase space, thus reducing the number of the dynamical bulk degrees of freedom to zero. From that one infers that the conjugate momentum \(p=p(\tau)\) is independent of the spatial position as well.
Canonically quantizing the action (52) one then arrives at the effectively \(1d\) PMF-like Hamiltonian
\[H_{HL}=aqp^{2}+b\Lambda q+c\frac{P}{q^{w}} \tag{53}\]
where \(q(\tau)=\int dxh(x,\tau)\) and \(p(\tau)\) constitute a pair of conjugate canonical variables while the \(2nd\) variable \(Q(\tau)\) is cyclic and paired up with a conserved conjugate momentum \(P(\tau)=P\).
In the context of Friedmann-Robertson-Walker (FRW) cosmology, the Hamiltonian (53) emerges in the Wheeler-DeWitt equation, the parameter \(w\) taking values \(1,0,-1\) for radiation, matter, and dark energy, respectively. Contrasting Eq.(53) against Eq.(31) one finds that the essential terms in the two expressions match for, e.g., \(\alpha=1/2,\beta=-w-1/2,\gamma=-1-w\).
Adding matter to (53) introduces another pair of conjugate variables, similar to the formulation of the PMF problem in a non-separable gauge. In particular, the projectable HL action with an additional scalar field \(Q(x,\tau)=Q(\tau)\) governed by a potential \(V(Q)\) and paired with a conjugate momentum \(P(\tau)\) produces the Hamiltonian which resembles the \(2d\) PMF Hamiltonian
\[H_{HL+S}=aqp^{2}+b\Lambda q+c\frac{P^{2}}{q}+qV(Q) \tag{54}\]
_Summary_
The orthodox holographic scenario requires a bulk gravity to have non-trivial dynamics that gets quenched and turns classical only in a certain ('large-\(N\)') limit[2].
In that regard, the SYK-JT duality would often be referred to as the case of 'bona fide' low-dimensional holographic correspondence. It is generally agreed, though, that such equivalence does not quite rise to the level of the full-fledged AdS/CMT holographic duality, as the JT bulk dual is non-dynamical and determined by the boundary degrees of freedom, thus making both systems effectively \(1d\).
This note argues that similar (pseudo)holographic relationships can be established between the various extensions of the original SYK model and more general (JT, \(F(R)\)-, HL, etc.) \(2d\) gravities. The correspondence between their low-energy sectors presents a form of equivalence between different realizations of the co-adjoint orbits of the (chiral) Virasoro group.
Formally, both sides of such duality can be described in terms of some \(1d\) Liouvillean quantum mechanics, thus generalizing the pure Schwarzian action which description can also be mapped onto an equivalent (single particle) PMF problem. From the practical standpoint, certain analytically solvable quantum-mechanical potentials can then be related to the physically relevant SYK deformations, such as the action given by Eqns.(8) and (14) for a double SYK quantum dot.
The PMF analog picture allows for direct access to the resolvent \(D_{E}\) and heat kernel \(K_{\beta}\) functions, thus allowing one to compute the density of states \(\rho(E)\), partition function \(Z(\beta)\), and other thermodynamic properties of the boundary SYK-like system of interest. By further utilizing this approach one can also study the various quantifiers of entanglement, quantum chaos, and even more subtle \(n\geq 2\)-body correlations.
Furthermore, the tangle of (pseudo)holographic relationships between the \(SL(2,R)\)-symmetric boundary (Schwarzian/Liouville-like) and bulk (JT/HL-like) models can be viewed as different forms of embedding (at fixed radial and angular vs temporal and angular coordinates, respectively) into the global \(AdS_{3}\) space[18]. Importantly, a similar relationship also exists between the \(1+2\)-dimensional gravity with its Banados-Teitelboim-Zanelli black hole backgrounds and the various (e.g., Korteweg-de-Vries) families of solvable \(1+1\)-dimensional quantum systems[24]. Among other things, such equivalence can be utilized to study non-linear hydrodynamics of the soliton-like edge states of generalized bulk QHE systems[25].
Thus, when seeking out genuine implementations of the central holographic 'IT from QUBIT' paradigm one might first want to make sure that the conjectured duality does not appear to be of the 'ALL from HALL' variety. Indeed, discovering a (possibly, hidden) topological origin of holographic correspondence could greatly help to demystify this otherwise fascinating, yet baffling, concept.
|
2310.01043
|
Decoding the Manhattan Project's Network: Unveiling Science,
Collaboration, and Human Legacy
|
The Manhattan Project was one of the largest scientific collaborations ever
undertaken. It operated thanks to a complex social network of extraordinary
minds and it became undoubtedly one of the most remarkable intellectual efforts
of human history. It also had devastating consequences during and after the
atomic bombings of Hiroshima and Nagasaki. Despite the loss of hundreds of
thousands of human lives during the bombing and the subsequent events, the
scientific journey itself stands as a testament to human achievement, as
highlighted in Christopher Nolan's film portrayal of Oppenheimer.
|
Milan Janosov
|
2023-10-02T09:42:20Z
|
http://arxiv.org/abs/2310.01043v1
|
# Decoding the Manhattan Project's Network: Unveiling Science, Collaboration, and Human Legacy
###### Abstract
The Manhattan Project was one of the largest scientific collaborations ever undertaken. It operated thanks to a complex social network of extraordinary minds and it became undoubtedly one of the most remarkable intellectual efforts of human history. It also had devastating consequences during and after the atomic bombings of Hiroshima and Nagasaki. Despite the loss of hundreds of thousands of human lives during the bombing and the subsequent events, the scientific journey itself stands as a testament to human achievement, as highlighted in Christopher Nolan's film portrayal of Oppenheimer.
network science, social network analysis, Manhattan project, data science
_Published in Nightingale, Journal of the Data Visualization Society, September 12, 2023.[5] Edited by Kathryn Hurchla._
The scientific literature on collaboration, particularly the role of network connections in achieving success, is robust and has been further enriched by the current data boom. This wealth of data, represented, for instance, by millions of scientific papers, is exemplified in works such as "The Science of Science" by D. Wang and A. L. Barabasi.[1] Utilizing network analysis to uncover the intricate connections within the Manhattan Project aligns with my perspective as a physicist turned network scientist. Without further ado, here's how I mapped the Manhattan Project into data and used that to create a network visualization of this historically significant collaborative project.
## 1 Collecting Data
As with many data science projects, the first question revolved around data selection. While scientific publication data might seem logical, given the project's scientific nature, this approach proved inadequate. The main reason for this was two-fold: First, some of the most important documents and papers could still be classified; and also, not everyone was active in science, as the operation was also heavily intertwined with politics and the military. Thus, resorting to collective wisdom, my focus shifted to Wikipedia, a global crowdsourced encyclopedia and a potential data source. Wikipedia offers a list of notable personnel connected to the project,[2] encompassing more than 400 contributors from various fields. I used a straightforward web-scraping technique to collect data from Wikipedia--a total of 452 usable profiles. Then I manually categorized each person based on occupation, leading to the distribution outlined in Table 1.
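A minimal sketch of the scraping step is shown below. The exact URL of the Wikipedia list page and the link-filtering heuristics are assumptions made for illustration; the article itself only cites "a list of notable personnel" [2].

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical source page (an assumption, not stated in the article)
LIST_URL = "https://en.wikipedia.org/wiki/List_of_Manhattan_Project_people"

def collect_profiles(list_url=LIST_URL):
    """Return {name: profile_url} for people linked from the list page."""
    soup = BeautifulSoup(requests.get(list_url, timeout=30).text, "html.parser")
    profiles = {}
    for a_tag in soup.select("li > a[href^='/wiki/']"):
        href = a_tag["href"]
        if ":" in href:                      # skip File:, Category:, Help: and similar namespaces
            continue
        profiles[a_tag.get_text(strip=True)] = "https://en.wikipedia.org" + href
    return profiles

profiles = collect_profiles()
print(len(profiles), "candidate profiles collected")
```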
The list, not entirely surprisingly, is topped by physicists, followed by chemists and engineers. However, the stories of the scientists at the forefront of the Project will have to wait a moment; let's first take the stories from the "Other" category. This group collects contributors' primary occupations that appeared infrequently and seemed unrelated to a scientific project focused on weaponry development. Among these unconventional contributors are Wolfrid Rudyerd Boutton, an American ornithologist, who also happened to become responsible for monitoring the supply of uranium ore from the Belgian Congo, and Edith Warner, a tea room owner in Los Alamos whose role was said to have profoundly impacted researchers' morale.
Some other notable "other" figures include Charlotte Serber, a journalist, statistician, librar
\begin{table}
\begin{tabular}{c c}
**Occupation** & **Ratio** \\ Physicist & 51.99\% \\ Chemist & 17.7\% \\ Engineer & 9.29\% \\ Other & 6.19\% \\ Army officer & 5.53\% \\ Mathematician & 3.54\% \\ Biologist & 1.99\% \\ Spy & 1.55\% \\ Physician & 1.11\% \\ Computer scientist & 1.11\% \\ \end{tabular}
\end{table}
Table 1: Occupation Distribution of Notable Manhattan Project Contributors.
ian, and the sole female laboratory group leader in Los Alamos. Ben Porter defies categorization, too, embracing roles as an artist, writer, publisher, performer, and physicist--later exhibiting work at New York's Museum of Modern Art. The selection concludes with James Edward Westcott, a notable Manhattan Project photographer, and Donald Lindley Harvey, a professional basketball player turned Army member contributing to the project.
## 2 Constructing the Network
With the data in hand, I picked network science,[3] the science of connections that is perfect for elegantly deciphering complex structures such as the Manhattan Project's collaboration patterns. Each network comprises nodes (entities) and links (references) that weave the intricate social fabric of the collaborating people. In this context, each node symbolizes a Manhattan Project contributor, with links forming between individuals whose Wikipedia pages reference one another. The number of shared references determines the link's strength. Employing this straightforward framework, I arrived at a network of 316 individuals connected by 1,099 ties of various strengths.
Figure 1: The collaboration network behind the Manhattan Project. Each node represents a contributor, where two nodes are linked if their Wikipedia pages reference each other. The top 50 nodes with the largest number of connections are labeled.
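A sketch of this construction is shown below, assuming each contributor's page text has already been downloaded into a dictionary `page_text` keyed by name; whether one-way mentions suffice or mutual references are required is a detail the article leaves open, so the sketch simply uses the total mention count as the link weight.

```python
import itertools
import networkx as nx

def build_network(page_text):
    """page_text: {name: plain text of that contributor's Wikipedia page}."""
    G = nx.Graph()
    G.add_nodes_from(page_text)
    for a, b in itertools.combinations(page_text, 2):
        # Count how often each page mentions the other; the total sets the link weight
        weight = page_text[a].count(b) + page_text[b].count(a)
        if weight > 0:
            G.add_edge(a, b, weight=weight)
    G.remove_nodes_from(list(nx.isolates(G)))   # keep only the connected contributors
    return G

# Labeling the 50 best-connected contributors, as in Figure 1:
# top50 = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:50]
```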
## 3 Infusing Color into Insight
The next phase enriches the network visualization by introducing color--each hue representing a distinct network community or cluster. Defining these communities hinges on the methodology, but the general premise remains: Communities consist of nodes with a higher density of internal links than external ones [4, 6]. In other words, nodes mostly linked to each other--as opposed to the rest of the network--belong to one community. The resulting visual, presented in Figure 2, uncovers how contributors organize into closely connected clusters within the expansive Manhattan Project. In this Figure, each color encodes different communities.
Figure 2: The collaboration network behind the Manhattan Project shown in Figure 1, where each node is colored based on the network community it belongs to.
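The coloring step can be reproduced with any standard modularity-based community detection; below is a sketch using networkx's built-in greedy modularity routine (the article does not state which algorithm was used, so this is a stand-in choice):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def color_by_community(G):
    """Return {node: community_index}, usable as a categorical color map."""
    communities = greedy_modularity_communities(G, weight="weight")
    return {node: idx for idx, group in enumerate(communities) for node in group}

# node_color = [color_by_community(G)[n] for n in G.nodes]   # feed to nx.draw or a Gephi export
```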
## 4 Deciphering the Network's Narrative
With the vibrant visualization in Figure 2, we are ready to read the collaboration network. Key players in modern physics pop out immediately, including Nobel laureates Arthur Compton, Enrico Fermi, Niels Bohr, and Ernest Lawrence, alongside geniuses like J. Robert Oppenheimer and Edward Teller. Yet, there is much more to the story and the patterns behind the connections than just a handful of hubs.
At the core of this network diagram lies the red community centered by the legendary Niels Bohr. Here, Bohr's connections reveal his instrumental role in supporting refugee scientists during World War II, who also joined the Project, including people like Felix Bloch, James Franck, and George Placzek, all marked by red. Adjacent to Bohr's realm resides a green cluster, highlighted by the Italian physicist Enrico Fermi. Fermi, together with his collaborators like Anderson, Szilard, Compton, and Zinn, reached the milestone of the self-sustaining chain reaction using uranium and gave birth to the first nuclear reactor, the Chicago Pile-1.
While Eugene Wigner was most famous for his contribution to Chicago Pile-1, his links tie him closer to the purple community that seems to be scattered around the network. Wigner can be seen prominently in the upper-right corner of the network. This more decentralized community, having no one else but Oppenheimer as its key figure, also links the famous mathematician John von Neumann, shown in purple in the top-center part of Figure 3. (He, along with Wigner, was unfortunately left out of the blockbuster movie by Nolan.) With purple, we see several other leading scientists, such as James Chadwick in the bottom-center, who led the British team on the Project; Robert Wilson right next to Oppenheimer, who became the head of its Cyclotron Group; and the American physicist Robert Serber directly above Oppenheimer,
who created the code names for all three design projects and the bomb, such as "Little Boy" and "Fat Man." Finally, a few words about the gray cluster, which turned out to be the Theoretical Division, with stars like Edward Teller in the center, and Nobel laureates Richard Feynman (my personal favorite scientist) in the top left, and Hans Bethe in the center.
One last observation on a personal note: At first sight, the connections between the Hungarian immigrant Martians[7] Teller, Wigner, Szilard, and von Neumann were hard to spot, despite their foundational role in the dawn of the atomic era and countless joint projects. However, once I highlighted them on the network, my expectations were quickly confirmed. They are all closely linked, though not exclusively, meaning that they were also very well embedded in the American scientific community at that time (Figure 4). This is probably best illustrated by the so-called Einstein-Szilard letter, written by Szilard, who also consulted with Teller and Wigner, and which was ultimately signed by Einstein and sent to President Roosevelt. A fun fact about this letter: during those days, Einstein was spending his vacation on the beach, so Szilard visited him right there. And as Szilard didn't own a driver's license, Teller was driving him.[8]
Figure 3: A close-up of the collaboration network behind the Manhattan Project colored by network communities shown in Figure 2, where each node is labeled.
## 5 Closing
Beyond the pages of history, the project embodies the convergence of human endeavor--distinguished minds across varied disciplines united for a common goal. This analysis sheds some light on the complex patterns of collaboration and joint efforts that allowed such great minds to connect, work in teams, and succeed at such an enormous scale. Additionally, the way I built this network illustrates how network science can be applied to nearly any social system, quantitatively capturing the invisible relations and putting them into a quantitative context.
Figure 4: A variant of Figure 2 highlighting the Martians – Edward Teller, Eugene Wigner, Leo Szilard, and John von Neumann.
## 6 Disclaimer
Several parts of this text were upgraded by AI tools, namely, Grammarly and ChatGPT 3.5, while the whole text was initially drafted and later updated by the human author.
|
2307.13088
|
Direct measurement of the Husimi-Q function of the electric-field in the
time-domain
|
We develop the theoretical tools necessary to promote electro-optic sampling
to a time-domain quantum tomography technique. Our proposed framework
implements detection of the time evolution of both the electric-field of a
propagating electromagnetic wave and its Hilbert transform (quadrature). Direct
detection of either quadrature is not strictly possible in the time-domain,
detection efficiency approaching zero when an exact mode-matching to either
quadrature is reached. As all real signals have a limited bandwidth, we can
trace out the irrelevant sampling bandwidth to optimize the detection
efficiency while preserving quantum information of the relevant signal. Through
the developed understanding of the mode structure of the amplitude and Hilbert
transform quadratures, we propose multiplexing and mode-matching operations on
the gating function to extract full quantum information on both quantities,
simultaneously. The proposed methology is poised to open a novel path toward
quantum state tomography and quantum spectroscopy directly in the time domain.
|
Sho Onoe, Stéphane Virally, Denis V. Seletskiy
|
2023-07-24T19:18:13Z
|
http://arxiv.org/abs/2307.13088v1
|
# Direct measurement of the Husimi-Q function of the electric-field in the time-domain
###### Abstract
We develop the theoretical tools necessary to promote electro-optic sampling to a time-domain quantum tomography technique. Our proposed framework implements detection of the time evolution of both the electric-field of a propagating electromagnetic wave and its Hilbert transform (quadrature). Direct detection of either quadrature is not strictly possible in the time-domain, detection efficiency approaching zero when an exact mode-matching to either quadrature is reached. As all real signals have a limited bandwidth, we can trace out the irrelevant sampling bandwidth to optimize the detection efficiency while preserving quantum information of the relevant signal. Through the developed understanding of the mode structure of the amplitude and Hilbert transform quadratures, we propose multiplexing and mode-matching operations on the gating function to extract full quantum information on both quantities, simultaneously. The proposed methodology is poised to open a novel path toward quantum state tomography and quantum spectroscopy directly in the time domain.
## I Introduction
Light can be used to detect and transport quantum information. In order to reveal this information, various quantum metrological techniques must be applied. All information can in fact be recovered via quantum-state tomography of the incoming light. The photocounting theory of Glauber, Kelley, and Kleiner [1; 2] was one of the first attempts at a quantum mechanical interpretation of the detection of the radiation field. However, even with unit quantum efficiency, this technique cannot extract the full quantum information of mixed or even pure states due to its insensitivity to phase/quadratures. The seminal work by Yuen and Shapiro [3] introduced the quantum mechanical interpretation of homodyne detection, which can detect both quadrature fields and their correlations at a desired frequency \(\omega_{0}\). This interpretation has promoted our understanding of quantum field theory from a particle- to a field-based point-of-view incorporating the important role of the phase/quadratures.
Although its statistics can be utilized to reproduce the Wigner function [4; 5; 6], allowing full quantum-state reconstruction of the desired frequency, experimental implementations and extraction of quantum information have been met with difficulty due to the presence of noise. A leap in experimental progress was made with the introduction of the balanced-homodyne detection scheme [7], capable of suppressing the technical noise in the self-homodyne method. Rigorous analysis of the (quantum) noise associated with the balanced homodyne detection [8; 9; 10; 11], together with the introduction of the spectral decomposition method [12], paved the way toward quantum sensing in the frequency domain. This research led to the realization that efficient frequency-domain homodyne detection of wavepackets requires an understanding of the space-time character of the signal, to which the detection must be mode-matched. This insight led to the use of a quasi-monochromatic probe with a duration that is matched to the temporal duration of the signal, allowing efficient extraction of quantum information [13; 14] from the state-under-study. Detection of non-classical states, starting with squeezed vacuum states [10; 15; 16; 17], and the reproduction of their Wigner function [18] established balanced homodyne detection as one of the most reliable quantum sensing techniques in modern physics [19; 20; 21].
Despite its success in the (quasi-)frequency domain, homodyne detection has not been successfully implemented for quasi-time-domain measurements [22]. For monochromatic light, the quadratures are well-defined, and the transition between the two (e.g. \(\sin\) and \(\cos\)) can be achieved via a swap of parity or a simple time-delay, typically implemented in experiments via a change of group-velocity phase. In the time-domain, we can no longer swap between the two quadratures via a time-delay [23]: this simply changes the detection time and does not affect the parity of the measurement. As a result, the technical hurdle towards time-domain measurement is not only a broadening of the probe spectrum, but we must also correctly identify the quadrature fields in the quasi-time-domain, and seek the optimal spectral phases and amplitudes to mode-match the probe to those quadratures.
Electro-optic sampling (EOS)[24; 25; 26; 27; 28] has the ability, via nonlinearity, to isolate the electric-field from the Hilbert field (the quadrature orthogonal to the electric-field) at sub-cycle resolution. These features established EOS as one of the most reliable classical time-domain sensing techniques in the mid-infrared (MIR) frequency range, relevant for spectroscopic studies of molecular fingerprints [29], semiconductors [30] and various light-matter interactions [31]. Its success in the classical regime has motivated its promotion to the quantum domain, where a team led by Leitenstorfer achieved the first milestone towards quantum sensing: the direct detection of the MIR quantum vacuum in the time-domain with sub-cycle resolution [32]. Since then, direct detection of the terahertz quantum vacuum as well as its two-point correlations has been reported [33] based on the EOS variants. However, there remain several milestones this technique must reach in order to establish itself as a reliable MIR quantum sensing technique in the time domain. Prominent among those are the identification of the quasi-time-domain quadratures, the ability to detect both field quadratures [23] and their correlations, and the experimental demonstration of the extraction of quantum information from the field.
Our Letter develops the theoretical framework for the former by accurately identifying the quadrature fields in the time domain and recasting the measurement protocols of EOS to be mode-matched to the detection of both simultaneously, hence achieving the direct measurement of the Husimi-Q function. The insights gained on mode-matching are exemplified through a predicted improvement upon a recent experimental proposal for the detection of a quantum squeezed state [34], which addresses the latter.
The remainder of this Letter is organized as follows: We first identify the quasi-time-domain quadratures as the frequency-filtered time-domain electric-field and its Hilbert transform in Sec. II. We then optimize the mode-matching and coupling efficiency in Sec. III, to mitigate the introduction of undesirable vacuum noise and increase sensitivity to squeezing. In Sec. IV we promote EOS to efficiently couple to both the frequency-filtered electric field and its Hilbert transform simultaneously, giving the statistics of the Husimi-Q function, which can be utilized for full quantum-state reconstruction. We numerically demonstrate its effectiveness in quantum-state reconstruction of a squeezed state.
## II Electric field and its conjugate in the time-domain
Fields in (3+1) dimensions can be simplified to travelling-wave modes in (1+1) dimensions in the paraxial approximation, where they are considered to propagate along a single direction [19; 35; 36; 23]. In such scenario we can decompose the electric field operator, \(\hat{E}\), as
\[\hat{E}_{\sigma}(t,x)=\int_{-\infty}^{\infty}\mathrm{d}\omega\,E_{\omega}(t,x)\,\hat{a}_{\omega,\sigma}\,, \tag{1}\]
and its Hilbert transform (the conjugate quadrature) as
\[\hat{H}_{\sigma}(t,x)=i\int_{-\infty}^{\infty}\mathrm{d}\omega\,\mathrm{sign}(\omega)\,E_{\omega}(t,x)\,\hat{a}_{\omega,\sigma}\,. \tag{2}\]
The bandwidth-limited quadratures, restricted to frequencies \(|\omega|\leq\omega_{m}\),
are defined as:
\[\begin{split}\hat{E}_{BL,I}(t,x)&=\int_{-\omega_{m}}^{\omega_{m}}\mathrm{d}\omega\,E_{\omega}(t,x)\,\hat{a}_{\omega,I}\,,\text{ with}\\ \hat{H}_{BL,I}(t,x)&=i\int_{-\omega_{m}}^{\omega_{m}}\mathrm{d}\omega\,\mathrm{sign}(\omega)\,E_{\omega}(t,x)\,\hat{a}_{\omega,I}\,.\end{split} \tag{3}\]
This simplification renders both \(\hat{E}(t)\) and \(\hat{H}(t)\) to be detectable (refer to Fig. 1), without losing any relevant information of the signal under study. Furthermore, the finite bandwidth introduces the notion of mode-matching, which allows us to quantify how accurately we couple to these quadrature modes.
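The relation between the two band-limited quadratures in Eq. (3) amounts to an \(i\,\mathrm{sign}(\omega)\) spectral filter. A minimal numerical sketch on a sampled waveform follows; the few-cycle Gaussian test pulse and its parameters are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(-200e-15, 200e-15, 4096, endpoint=False)   # time axis [s]
dt = t[1] - t[0]
E = np.exp(-(t / 50e-15)**2) * np.cos(2*np.pi*20e12*t)      # illustrative few-cycle waveform

# Conjugate (Hilbert) quadrature via the i*sign(omega) filter of Eq. (3)
w = np.fft.fftfreq(t.size, d=dt)
H = np.real(np.fft.ifft(1j * np.sign(w) * np.fft.fft(E)))

# Cross-check against scipy's analytic-signal construction (opposite sign convention)
print(np.allclose(H, -np.imag(hilbert(E)), atol=1e-9))
```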
## III Mode-matching and detection efficiency of EOS
Electro-optic sampling (EOS) is one of the most promising candidates in ultra-fast photonics for the detection of MIR electric field in the time-domain [26; 27; 28; 24; 25; 24; 26; 27; 28]. It utilizes a \(\chi^{(2)}\) based interaction between near-infrared (NIR) and mid-infrared (MIR) via sum- and difference- frequency generation. By driving this interaction with a coherent probe-pulse with an envelope smaller than a single wavelength of the signal (i.e. sub-cycle with respect to the MIR signal), the sub-cycle quantum information is up-converted to the NIR. [25]. This information is then analyzed via the ellipsometry scheme. For a long time, experimental efforts have been made towards the direct detection of MIR electric field in the time-domain by utilizing an ultra-short pulse, reaching sub-cycle resolution under 6 fs [32]. This experimental effort for shorter pulses was motivated by the idea that the ideal probe is a Dirac-delta distribution. The introduction of \(\hat{E}_{BL}\) allows us to take a different narrative approach.
Let us consider a MIR signal with a central frequency of \(\Omega_{0}=20\) THz. The information content of such a pulse is generally limited to frequencies under \(2\Omega_{0}\) [37; 22]. When this assumption is satisfied, we can neglect the quantum information of frequencies above \(2\Omega_{0}\) when analyzing this signal, and as a result we can utilize the bandwidth-limited electric field (refer to Eq. 3) with \(\omega_{m}=2\Omega_{0}\). Our goal is now tailored towards the detection of \(\hat{E}_{BL}\). We can detect \(\hat{E}_{BL}\) extremely accurately, with a mode-matching efficiency of 99%, by utilizing the EOS scheme with a sinc probe pulse. In Fig. 2 we investigate the optimal bandwidth of the probe pulse to detect \(\hat{E}_{BL}\) by analysing the effect of probe-pulse bandwidth on the bandwidth-limited coupling intensity, \(\theta_{E}\), and mode-matching efficiency, \(\gamma_{E}\), which are defined as:
\[\gamma_{E}=:[\hat{E}_{BL},Tr_{BL}(\hat{\mathbf{E}})]:/\sqrt{ \theta_{e}\theta_{E}}\,, \tag{4}\] \[\theta_{E}=:[Tr_{\omega_{m}}(\hat{\mathbf{E}}),Tr_{BL}(\hat{ \mathbf{E}})]:\,,\] (5) \[\theta_{e}=:[\hat{E}_{BL},\hat{E}_{BL}]:\,. \tag{6}\]
Here \(\hat{\mathbf{E}}\) is the detected electric-field via EOS (refer to Eq. (10)), \(Tr_{BL}[\hat{a}_{\omega}]=(1-\Pi((\omega)/(2\omega_{m})))\hat{a}_{\omega}\), \(\Pi(x)=1\,\forall-0.5\leq x\leq 0.5\), otherwise \(\Pi(x)=0\), and we have defined \(:[,]:\) as the normal-ordered commutator, with \(:[\hat{a}_{\pm|\omega|},\hat{a}_{\mp|\omega^{\prime}|}]:=[\hat{a}_{|\omega|}, \hat{a}_{|\omega^{\prime}|}^{\dagger}]\).
We find in Fig. 2 that when we keep the probe photon-number constant (at \(5\times 10^{9}\)) the bandwidth-limited coupling intensity plateaus at about 4% for large bandwidth. In comparison, the bandwidth-unlimited coupling intensity keeps increasing, implying that a larger probe bandwidth introduces additional noise from undesired frequencies. The amount of unwanted noise can be explicitly taken as the difference between the bandwidth-limited and -unlimited detection efficiencies. This implies we are using our resources in an inefficient manner, and we follow our analysis by considering the same plot, but with a constant intensity of the probe pulse, motivated by the damage threshold of the EOX crystal. In this scenario, a smaller-bandwidth (roughly 70 THz) probe pulse shows a significant increase in coupling intensity, reaching roughly 9%. It is noted that increasing the photon number does not change the ratio between the wanted and unwanted noise.
We also plot the bandwidth-limited mode-matching efficiency, which shows that a mode-matching of 99% is achieved when the full bandwidth is utilized. We constrain ourselves to a mode-matching efficiency at least equal to that achieved by the full bandwidth. Under this condition, the optimal coupling intensity of 8% is achieved at around 120 THz. By considering the appropriate spectral support we suppress the admixture of unwanted vacuum fluctuations, and by efficiently mode-matching our probe specifically to the signal in question, we improve the detection efficiency [34] roughly twofold without the cost of accuracy.
Figure 2: We demonstrate the effect of bandwidth on coupling intensity and mode-matching efficiency. The black (dashed) lines correspond to the constant-photon-number (bandwidth-limited) coupling intensity plots. The grey (dashed) lines correspond to the constant-intensity (bandwidth-limited) coupling intensity plots. The red line corresponds to the bandwidth-limited mode-matching efficiency as a function of bandwidth.
## IV Full tomography
One of EOS's strength over homodyne detection is its ability to isolate the electric-field from the Hilbert quadrature regardless of the probe pulse's phase. This introduces the notion of an absolute phase [23; 38] of the signal and we lose the meaning of relative phase.
In contrast, for homodyne detection we are agnostic to whether we are coupling directly to the electric-/Hilbert-field, but we obtain information on the relative phase between the signal and probe. In homodyne detection, the relative phase is altered by a time-delay to the signal. For EOS, this method cannot be used to alter the absolute phase, as it simply shifts the time that is sampled without affecting the quadrature (e.g. \(\hat{E}_{BL}(t)\rightarrow\hat{E}_{BL}(t^{\prime})\)). In the following sub-section we identify how EOS can be altered to detect the Hilbert transform.
### Detection of the Hilbert transform
When we are analysing the electric field, we can obtain its localized property in the time-domain through a mode with even parity. Its Hilbert transform is represented by a mode with odd parity, which is a delocalized property of the electric-field (refer to Fig. 1). From a motivational standpoint, EOS was aimed at extracting the local property of the electric-field and therefore fails to extract the Hilbert transform. There are two fundamental pillars of EOS that need to be removed for the detection of the Hilbert transform: its ability to detect phase-locked (in frequency) electric-field modes and localized modes. The former can be resolved by the introduction of spectral filters, adding a different phase contribution to different frequencies. Changing from a flat phase with respect to the electric field to an odd-phase contribution (positive above the central frequency, negative below the central frequency) is desirable for the detection of an odd function (e.g. the Hilbert transform). As the Hilbert quadrature is a delocalized quantity of the electric-field, we benefit from utilizing a delocalized probe-pulse with a strong central frequency contribution (refer to Fig. 3a for the exact waveform). In App. A, we analyse the effect of bandwidth on detection efficiency and coupling strength, and find that a peak mode-matching of 99.9% is achieved at 80 THz bandwidth with a coupling intensity of roughly 2%.
### Simultaneous detection of both field quadratures
So far, we have established a means to detect the field and its Hilbert transform independently. We can conduct both measurements simultaneously via multiplexing, assigning a different frequency regime to the detection of each. We demonstrate this experimental setup in Fig. 3b. In this paper we evaluate the effect of the optical elements in the Heisenberg picture. Driving the EOX crystal with a strong coherent field \(\alpha_{z}(t,x)\) leads to a Hamiltonian of the following form [36]:
\[\hat{H}_{\chi}(t)=\int_{-\infty}^{\infty}\mathrm{d}x\lambda\alpha_{z}(t,x) \hat{E}_{S}(t,x)\hat{E}_{S}(t,x)\Pi\left(\frac{x}{L}\right), \tag{7}\]
where \(\lambda=\frac{A\epsilon_{0}d}{2}\), with the coupling constant \(d=-n^{4}r_{41}\), expressed in terms of the electro-optic susceptibility coefficient, \(r_{41}\), and the refractive index of the crystal, \(n\)[36; 39]. We consider, without loss of generality, a zincblende-type crystal with a length of \(L=7\,\mu\)m. Its electro-optic coefficient is taken as \(r_{41}=4\) pm/V (for the particular case of ZnTe). The refractive index, \(n_{\Omega}\), varies only slightly (from 2.55 to 2.59) in the MIR [39]. We utilize a fit for the refractive index in the NIR frequency range, \(n_{\omega}\)[36; 40]. We compute the effect of this Hamiltonian by utilizing a unitary correction [36] to second-order perturbation theory [41; 42]. This is followed by a spectral filter, splitting the output into three frequency components:
\[\hat{N}_{i,\sigma}=\int_{\omega_{i,min}}^{\omega_{i,max}}\mathrm{d}\omega\; \hat{a}_{\omega,\sigma}^{\dagger}\hat{a}_{\omega,\sigma}\,, \tag{8}\]
Each port goes through either a half- or quarter- wave-plate:
\[\hat{U}(\theta_{i},\phi_{i})=e^{i\phi_{i}\left(\cos(\theta_{i})\hat{N}_{i,S}+\sin(\theta_{i})\hat{N}_{i,Z}\right)}\,. \tag{9}\]
Details of \(\theta_{i}\) and \(\phi_{i}\) can be found in Fig. 3b. In each branch, the waveplate is followed by a Wollaston prism and a balanced pair of photo-detectors. The E-field is given by
\[\hat{\mathbf{E}}(t)=\frac{\hat{N}_{z,1}^{\prime}-\hat{N}_{s,1}^{\prime}}{ \sqrt{\langle\hat{N}_{z,1}^{\prime}+\hat{N}_{s,1}^{\prime}\rangle}}\,, \tag{10}\]
and its Hilbert transform,
\[\hat{\mathbf{H}}(t)=\frac{\hat{N}_{z,2}^{\prime}-\hat{N}_{s,2}^{\prime}}{ \sqrt{\langle\hat{N}_{z,2}^{\prime}+\hat{N}_{s,2}^{\prime}\rangle}}+\frac{ \hat{N}_{s,3}^{\prime}-\hat{N}_{z,3}^{\prime}}{\sqrt{\langle\hat{N}_{z,3}^{ \prime}+\hat{N}_{s,3}^{\prime}\rangle}}\,. \tag{11}\]
Simultaneous detection of both quadratures allows the reconstruction of the Husimi-Q function. In App. B, we discuss two other promising schemes that can be implemented for signals with larger bandwidth. One of these schemes is analogous to the method discussed in this section, utilizing a beam-splitter instead of a spectral filter (refer to App. B.1), while the other can be used for full tomography with an arbitrary phase (refer to App. B.2), allowing the extraction of the Wigner function.
## V Numerical results
The sensitivity to the \(n\)th moment in the standard EOS setup scales as \(\epsilon^{n}\), where \(\epsilon=\gamma_{E}\) is the coupling efficiency. This makes it extremely challenging for EOS to be implemented for quantum sensing beyond the second moment. However, a recent novel approach to EOS utilizing a quantum probe has shown an enhancement of the sensitivity to the higher moments [43]. For the purpose of extracting the information of a Gaussian state, it is sufficient to compute the second moment [44; 45], extracted through:
\[\begin{split}&\Delta V_{E}=\left(\langle\hat{\mathbf{E}}(t)^{2} \rangle-\langle\hat{\mathbf{E}}(t)\rangle^{2}\right)-\left(\langle\hat{ \mathbf{E}}_{0}(t)^{2}\rangle-\langle\hat{\mathbf{E}}_{0}(t)\rangle^{2} \right),\\ &\Delta V_{H}=\left(\langle\hat{\mathbf{H}}(t)^{2}\rangle-\langle \hat{\mathbf{H}}(t)\rangle^{2}\right)-\left(\langle\hat{\mathbf{H}}_{0}(t)^{2 }\rangle-\langle\hat{\mathbf{H}}_{0}(t)\rangle^{2}\right),\end{split} \tag{12}\]
where we have defined \(\hat{\mathbf{E}}_{0}/\)\(\hat{\mathbf{H}}_{0}\) as their respective operators \(\hat{\mathbf{E}}/\)\(\hat{\mathbf{H}}\) in the absence of the signal (i.e. in vacuum). We utilize this set-up to analyze the statistics of a squeezed Gaussian signal:
\[\begin{split}&\hat{a}_{G}=\int_{0}^{\infty}\mathrm{d}\Omega\;G(\Omega)\hat{a}_{\Omega,Z}\\ & G(\Omega)=B\sqrt{\Omega}\left(\frac{1}{2\pi\sigma_{G}^{2}}\right)^{1/4}e^{-(\Omega_{0}-\Omega)^{2}/(4\sigma_{G}^{2})}\end{split} \tag{13}\]
with \(\Omega_{0}=20\) THz, \(\sigma_{G}=4\) THz and a squeezing strength of \(r=0.5\). Our approach, utilizing simultaneous measurement of both \(\hat{E}_{BL}\) and \(\hat{H}_{BL}\), allows the reconstruction of the Husimi Q-function (with added noise) in the time domain, showing the correlation between the quadratures. The detected second moments and their correlation are plotted in Fig. 4 and compared to the re-scaled second moments of the field. The results demonstrate EOS's ability to conduct full tomography of a squeezed state with relatively high accuracy. The difference between the two results is associated with imperfect mode-matching, which can be traced back to the use of the same probe pulse for both the \(\chi^{(2)}\) interaction and the ellipsometry, and to other technical effects such as phase-matching.
## VI Conclusion
Our work establishes a method to conduct full tomography of a MIR squeezed state in the time-domain via EOS and introduces a quantitative method to analyse the quality of mode-matching, all of which are important milestones towards promoting EOS to a time-domain quantum sensing technique. Although the direct detection of \(\hat{E}(t)\) and \(\hat{H}(t)\) cannot be conducted in theory [19; 20; 21], the quantum information of interest will exist within a certain bandwidth. By introducing the same bandwidth to the electric field and its Hilbert transform quadrature, they are rendered detectable: \(\hat{E}_{BL}(t)\) and
Figure 3: a) We show the probe-pulse electric field that is utilised to drive the EOX interaction. The spectral content below/above 275 THz is dedicated to the detection of the electric-field/Hilbert-field quadrature. b) We present a novel schematic set-up for the simultaneous detection of the electric and Hilbert quadratures via EOS. The probe pulse enters the EOX crystal in the Z-polarisation, while the signal enters in the S-polarisation. This is followed by a spectral-filter beam-splitter, separating the three frequency regimes (red for below 275 THz, purple for above 340 THz and blue for frequencies in between). The first port has a half-waveplate at 22.5 degrees from the z-polarisation. The second and third ports have a quarter-waveplate at 45 degrees from the z-polarisation. Each branch is followed by a Wollaston prism which separates the S- and Z-polarised fields, and the intensity difference between the polarisations is recorded.
\(\hat{H}_{BL}(t)\). Contrary to the standard approach to EOS, which favors the use of a shorter probe pulse [34], we have demonstrated that we can improve the detection efficiency with a longer probe pulse, without a cost in accuracy.
The standard EOS set-up was shown to couple very inefficiently to the \(\hat{H}_{BL}(t)\) field, which remains undetectable without an effective spectral filter [46]. We have created an efficient and accurate scheme to detect the Hilbert transform by introducing a phase jump in frequency for the detection and by utilizing a delocalized probe field. Furthermore, we achieved simultaneous detection of both \(\hat{E}_{BL}(t)\) and \(\hat{H}_{BL}(t)\) via multiplexing. An experimental realization of this set-up would be the first example of full tomography in the time domain, and would constitute one of the most crucial advances in quantum sensing for MIR and ultra-fast photonics.
We found that perfect mode-matching to \(\hat{E}_{BL}(t)\) and \(\hat{H}_{BL}(t)\) was not possible when the same probe pulse was utilized for driving the \(\chi^{(2)}\) interaction and the ellipsometry, especially in the presence of phase-matching. For future research, we identify the use of template search [47] and of a different probe pulse for the \(\chi^{(2)}\) interaction and the ellipsometry as viable options to achieve perfect replication of the Husimi-Q function.
|
2306.15908
|
Generalized Bayesian Multidimensional Scaling and Model Comparison
|
Multidimensional scaling is widely used to reconstruct a map with the points'
coordinates in a low-dimensional space from the original high-dimensional space
while preserving the pairwise distances. In a Bayesian framework, the current
approach using Markov chain Monte Carlo algorithms has limitations in terms of
model generalization and performance comparison. To address these limitations,
a general framework that incorporates non-Gaussian errors and robustness to fit
different types of dissimilarities is developed. Then, an adaptive inference
method using annealed Sequential Monte Carlo algorithm for Bayesian
multidimensional scaling is proposed. This algorithm performs inference
sequentially in time and provides an approximate posterior distribution over
the points' coordinates in a low-dimensional space and an unbiased estimator
for the marginal likelihood. In this study, we compare the performance of
different models based on marginal likelihoods, which are produced as a
byproduct of the adaptive annealed Sequential Monte Carlo algorithm. Using
synthetic and real data, we demonstrate the effectiveness of the proposed
algorithm. Our results show that the proposed algorithm outperforms other
benchmark algorithms under the same computational budget based on common
metrics used in the literature. The implementation of our proposed method and
applications are available at https://github.com/nunujiarui/GBMDS.
|
Jiarui Zhang, Liangliang Wang
|
2023-06-28T04:15:35Z
|
http://arxiv.org/abs/2306.15908v1
|
# Generalized Bayesian Multidimensional Scaling and Model Comparison
###### Abstract
Multidimensional scaling is widely used to reconstruct a map with the points' coordinates in a low-dimensional space from the original high-dimensional space while preserving the pairwise distances. In a Bayesian framework, the current approach using Markov chain Monte Carlo algorithms has limitations in terms of model generalization and performance comparison. To address these limitations, a general framework that incorporates non-Gaussian errors and robustness to fit different types of dissimilarities is developed. Then, an adaptive inference method using annealed Sequential Monte Carlo algorithm for Bayesian multidimensional scaling is proposed. This algorithm performs inference sequentially in time and provides an approximate posterior distribution over the points' coordinates in a low-dimensional space and an unbiased estimator for the marginal likelihood. In this study, we compare the performance of different models based on marginal likelihoods, which are produced as a byproduct of the adaptive annealed Sequential Monte Carlo algorithm. Using synthetic and real data, we demonstrate the effectiveness of the proposed algorithm. Our results show that the proposed algorithm outperforms other benchmark algorithms under the same computational budget based on common metrics used in the literature. The implementation of our proposed method and applications are available at [https://github.com/nunujiarui/GBMDS](https://github.com/nunujiarui/GBMDS).
Keywords:Sequential Monte Carlo, dimension reduction, adaptive inference, robustness, skewness, visualization.
## 1 Introduction
Multidimensional scaling (MDS) is a method of dimension reduction that represents objects as points in a multidimensional space using a given collection of pairwise dissimilarities between objects. In MDS, a two- or three-dimensional representation of high-dimensional data can be chosen so that the distance between points in the lower-dimensional space is similar to the distance in the original space. MDS is widely used in various fields, such as psychology, social science, genomics, etc. One use of MDS is visualization, allowing people to explore patterns in the data by creating spatial representations based on distances. By visualizing the spatial arrangement of data points, hidden patterns can be more easily identified. In the context of high-dimensional data, transforming data points into a lower-dimensional space using MDS can facilitate visualization and statistical analysis. Another practical application of MDS is data exploration, where people gain insight into the main dimensions that underlie the dissimilarities.
There are two primary categories of MDS techniques, namely metric and non-metric methods. In metric MDS, the dissimilarities are assumed to be numerical. It is useful when the dissimilarities follow a Euclidean geometry, and the dissimilarity matrix satisfies the metric axioms. On the other hand, nonmetric MDS is often preferred by some researchers for particular applications in which the dissimilarities between objects are of an ordinal or rank-based nature, and where the distances do not have a well-defined Euclidean interpretation. Both metric and non-metric MDS produce a configuration where high-dimensional data points are depicted as lower-dimensional points. This arrangement reflects the similarity relationships among data points. Our study primarily centers on the metric MDS methods. For a comprehensive review of modern MDS methods, refer to Borg and Groenen (2005).
Classical multidimensional scaling (CMDS) is a well-known dimension reduction method for metric MDS developed by Torgerson (1952). CMDS is effective when the given pairwise dissimilarities are precisely equal to the Euclidean distances and when the optimal low-dimensional configuration is accurately specified (Oh and Raftery, 2001). However, these assumptions can limit the performance of CMDS in some cases. Additionally, it is reasonable to assume some errors in the dissimilarities in certain situations.
Oh and Raftery (2001) developed a Bayesian multidimensional scaling (BMDS) method by modeling the observed dissimilarities as equal to Euclidean distances plus measurement errors. Numerical solutions of the objects' locations in the low-dimensional space are obtained via a standard Markov chain Monte Carlo (MCMC) algorithm, a commonly used variate generation technique that provides powerful tools for approximating posterior distributions. Their results indicate that the BMDS demonstrates superior accuracy in fitting certain datasets compared to the CMDS method. The performance enhancement of the BMDS method is particularly significant in cases involving notable measurement errors in data, or violations of the Euclidean assumption, or incorrect specification of the latent dimension.
The Bayesian approach to the MDS problem has become increasingly attractive due to its superior performance and flexibility in accommodating external knowledge by means of prior specification. Oh and Raftery (2007) integrated the BMDS framework in Oh and Raftery (2001) with a Bayesian model-based clustering method to achieve dimension reduction in the clustering of high-dimensional objects. Bakker and Poole (2013) assumed the observed distances follow a log-normal distribution and employ a standard optimization method that minimizes the squared error loss function to isolate a single Bayesian posterior that can subsequently be analyzed using standard MCMC. Lin and Fong (2019) implemented a \(t\)-distribution to model the objects' locations which yields a more robust estimation, and variable selection is accomplished by incorporating a latent multivariate regression structure. Regarding the advancement of the sampling algorithm for BMDS, the differential evolution MCMC algorithm is used in Gronau and Lee (2020) to improve sampling in standard MCMC algorithms and explore the implementation with psychologically interpretable metrics such as the Euclidean and Minkowski metrics. Hamiltonian Monte Carlo (HMC) (Neal et al., 2011) is another sampling algorithm used in Holbrook et al. (2020) for BMDS with applications in phylogenetics. Holbrook et al. (2020) also applied massive parallelization using multi-core
central processing units and graphics processing units to accelerate the computation. However, a comprehensive Bayesian modeling framework has not yet been proposed to incorporate non-Gaussian errors and extend beyond the Euclidean space for dissimilarities. Furthermore, despite the widespread utilization of the MCMC algorithms, there exist certain limitations to the methodology of Markov chains.
One problem that users face is that MCMC algorithms do not easily take advantage of highly parallel computer architectures. Additionally, a limitation shared by MCMC-based algorithms is that their marginal likelihood estimators are generally biased. To better utilize computational power and construct unbiased estimators, researchers have developed Sequential Monte Carlo (SMC) methods to compute Bayesian estimates (see Doucet et al., 2001; Doucet and Johansen, 2009, for an introduction to SMC). In general, an SMC method uses a set of random samples called particles to approximate a sequence of probability distributions of interest (Doucet et al., 2006). It propagates the particles through time using sequential importance sampling with resampling mechanisms and provides a flexible framework for constructing unbiased estimators. One variant of SMC methods that closely resembles standard MCMC is referred to as the annealed sequential Monte Carlo (annealed SMC) algorithm (Del Moral et al., 2006; Wang et al., 2021). It inherits the advantages of SMC and can use any existing MCMC proposals. The annealed SMC also produces unbiased estimators of the marginal likelihood for free as a benefit of adopting the SMC framework. This offers a convenient way to perform model comparison using the Bayes factor (Jeffreys, 1935; Han and Carlin, 2001; Zhou et al., 2016; Wang et al., 2020) that relies on the computation of the marginal likelihood estimates. Additionally, the annealed SMC is less likely to get stuck in local modes compared to MCMC under the same computational budget. It begins with distributions from which it is easy to sample, and then gradually increases complexity to explore the space. This gradual movement avoids getting stuck in local maxima or minima. Previous research has demonstrated the efficiency of annealing approaches in various contexts, such as epidemiology (Del Moral et al., 2012), phylogenetics (Wang et al., 2020), and solving nonlinear differential equation systems (Wang et al., 2021).
The existing BMDS methods have several limitations. First, almost all these methods are based on the Euclidean distance metric. But Euclidean dissimilarity is not always the appropriate metric in various fields. In medical imaging and 3D face recognition, the minimum-distortion mapping between two surfaces is measured by the Gromov-Hausdorff distance in Memoli (2011) and the partial embedding distance in Bronstein et al. (2006). In text mining, the Cosine dissimilarity is often used (Li and Han, 2013), which calculates the dissimilarity between two vectors in an inner product space based on the cosine of the angle between them. The Euclidean distance may not be suitable for comparing small and large text documents, as it would be very large in this case. In contrast, the Cosine dissimilarity reflects the relative comparison of individual vectors in high dimensions regardless of magnitude, which is more suitable than the Euclidean distance. Second, the existing MDS methods mainly rely on the assumption of Gaussian errors, resulting in a lack of robustness and generality. Third, the existing literature on model comparison for BMDS frameworks with diverse dissimilarity modeling distributions is limited. The majority of previous studies have concentrated on comparing Bayesian and frequentist solutions to MDS problems by employing specific
statistics tailored for particular scenarios. Fourth, the rapid progress in data collection and storage has resulted in a vast amount of data, which poses a significant challenge for researchers seeking appropriate inference methods for handling increasingly large datasets. Bayesian inference, while known for its flexibility, is often computationally expensive. Consequently, applying Bayesian inference to MDS methods in the context of large data remains a challenging task.
To address the potential deficiencies of the current BMDS methods, we propose a more comprehensive Bayesian modeling framework, called generalized Bayesian multidimensional scaling (GBMDS), to incorporate general dissimilarity metrics and non-Gaussian errors into BMDS. We design an adaptive inference framework using the annealed SMC algorithm to obtain Bayesian solutions under the proposed GBMDS model. The developed algorithm does not require designing novel proposals, as is often needed in generic SMC methods. Instead, one can directly use the rich resources of Metropolis-Hastings proposals, making it easy to build on existing MCMC approaches. Our adaptive annealed SMC algorithm considers cases where the number of observations and the dimensions of the parameters and hidden variables increase over time. The objective is to conduct sequential inference as new data become available, allowing users to update and refine the most recent results. The proposed adaptive scheme can be readily implemented for datasets with large sample sizes through the division of data into smaller batches. The Bayesian inference can then be conducted sequentially for each batch, allowing for incremental updates.
Our contributions can be summarized as follows. i. We generalize the BMDS model to include the non-Gaussian errors in the pairwise dissimilarities. The proposed model can handle dissimilarities with heavier tails or skewed distributions and exhibit robustness and accuracy. ii. Our proposed GBMDS considers more general distance metrics that are not restricted to Euclidean space. iii. We propose a framework to perform efficient adaptive Bayesian inference for the GBMDS based on annealed SMC, which reduces the overall computational burden for large-scale data scenarios. iv. Our framework can provide unbiased estimators of the marginal likelihood as a byproduct of sampling, which makes the model comparison via the Bayes factor straightforward. v. We employ the adaptive annealed SMC in two simulation studies and three real data applications, showcasing its superior estimation capabilities compared to benchmark methods across diverse dissimilarity metrics.
The rest of this article is organized as follows. Section 2 describes the models for BMDS: we propose the GBMDS model, define the priors, and discuss model comparison and issue of identifiability in Section 2.1 to 2.4. Section 3 depicts the implementation for the GBMDS model: Sections 3.1 and 3.2 detail the initialization and inference procedure; Section 3.3 outlines the adaptive mechanism with annealed SMC algorithm. Simulations and Examples are presented in Sections 4 and 5. The conclusion is in Section 6.
## 2 BMDS Models
Suppose we have a set of \(n\) objects in the study. Let \(\mathbf{Z}=\{\mathbf{z}_{1},\ldots,\mathbf{z}_{n}\}\) be a set of observed points with \(\mathbf{z}_{i}=(z_{i,1},\ldots,z_{i,q})^{\top}\in\mathbb{R}^{q}\) representing the values of \(q\) attributes
in object \(i\). The value of \(q\) is usually high, which makes the visualization of the points in their original dimension hard. Let \(\mathbf{D}\) be the matrix of dissimilarities with entry \(d_{i,j}\) as the dissimilarity between objects \(i\) and \(j\). The dissimilarity matrix \(\mathbf{D}\) is computed from the observed data \(\mathbf{z}_{1},\ldots,\mathbf{z}_{n}\) with specific dissimilarity metrics such as Euclidean metric. Dissimilarity metrics used in this study will be detailed in Section 2.1. A formal definition of the metric space is given in _Supplementary_.
Let \(\mathbf{x}_{i}=(x_{i,1},\ldots,x_{i,p})^{\top}\in\mathbb{R}^{p}\) be the unobserved vector representing the values of \(p\) significant attributes in object \(i\). The goal of MDS methods is to find the set of points \(\mathbf{X}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\) such that \(d_{i,j}\) and \(\|\mathbf{x}_{i}-\mathbf{x}_{j}\|_{p}\) are as close as possible, where \(\|\cdot\|_{p}\) represents the \(L^{p}\) norm. In such a manner, the given dissimilarities are well-reproduced by the resulting configuration. We refer to this process as object configuration (Oh and Raftery, 2001), which describes the estimation of values for objects' significant attributes.
CMDS is a commonly used dimension reduction technique for metric MDS developed by Torgerson (1952). CMDS assumes the dissimilarity to be Euclidean and takes the pairwise dissimilarities as inputs and outputs the coordinates of points in a low-dimensional space up to locations, rotations and reflections. Numerical optimization techniques can be used to find a solution to the minimization problem below:
\[\min\sum_{i\neq j=1,\ldots,n}\left(d_{i,j}-\|\mathbf{x}_{i}-\mathbf{x}_{j}\|_ {p}\right)^{2}. \tag{1}\]
The minimizers can be expressed analytically in terms of matrix eigendecompositions when the input dissimilarities satisfy the metric inequality and can be represented by Euclidean distances. CMDS can retrieve the complete configuration of objects (up to location shift) when the dissimilarities are precisely equal to the distances in the low-dimensional space and the dimension is appropriately specified. However, the dissimilarities between observed points are usually contaminated by errors, and the underlying dimensions are often unknown.
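For reference, the classical solution can be sketched in a few lines of Python via double centering and eigendecomposition; the toy data and the choice \(p=2\) below are arbitrary and serve only to illustrate the procedure.

```
import numpy as np

def classical_mds(D, p=2):
    """Classical (Torgerson) MDS: recover a p-dimensional configuration from a
    matrix of pairwise dissimilarities D (exact when D is Euclidean)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    eigval, eigvec = np.linalg.eigh(B)         # ascending eigenvalues
    idx = np.argsort(eigval)[::-1][:p]         # keep the p largest
    L = np.sqrt(np.clip(eigval[idx], 0.0, None))
    return eigvec[:, idx] * L                  # n x p configuration

# Example: random 5-dimensional points reduced to 2 dimensions.
rng = np.random.default_rng(1)
Z = rng.normal(size=(20, 5))
D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
X_hat = classical_mds(D, p=2)
```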
### Generalized Bayesian multidimensional scaling
While Euclidean distance is one of the most widely used distance measures, it is not scale-invariant, meaning that distances computed from features might be skewed depending on the units. Moreover, Euclidean distance becomes less useful as the dimensionality of the data increases. To satisfy the various needs of different tasks, we develop a general framework that can accommodate different distance metrics and behave robustly when outliers are present in the dissimilarities.
We restrict the dissimilarity measure \(d_{i,j}\) to be always positive, and assume \(d_{i,j}\) to follow some truncated distribution:
\[d_{i,j}\sim g\left(\delta_{i,j}\right)I\left(d_{i,j}>0\right),\qquad i\neq j, \;i,j=1,\ldots,n, \tag{2}\]
where \(I(\cdot)\) is an indicator function. The true dissimilarity measure \(\delta_{i,j}\) is modeled as the distance between object \(i\) and \(j\) using the dissimilarity metric \(\mathcal{D}\):
\[\delta_{i,j}=\mathcal{D}(\mathbf{x}_{i},\mathbf{x}_{j}). \tag{3}\]
The GBMDS framework we propose is general in nature. The various GBMDS models differ from one another based on the selection of dissimilarity metric \(\mathcal{D}\) and the choice of distribution function \(g\).
Compared with the BMDS framework proposed in Oh and Raftery (2001), we do not restrict \(d_{i,j}\) to be accompanied by Gaussian errors. The previous BMDS framework may be inadequate when dealing with dissimilarity measures that are subject to random errors or those that are non-Euclidean in nature. In addition, the assumption of utilizing a truncated Gaussian distribution to model the errors is inadequate in the presence of outliers. The presence of outliers can lead to increased uncertainty surrounding unobserved dissimilarities (\(\delta_{i,j}\)'s) beyond what can be accounted for by the tails of Gaussian distributions. We will refer to the framework in Oh and Raftery (2001) as the standard BMDS throughout this paper.
#### Dissimilarity metrics
The standard choice of dissimilarity metric \(\mathcal{D}\) on \(\mathbb{R}^{p}\) is the Euclidean metric (\(L^{2}\) norm): \(\mathcal{D}(\mathbf{x}_{i},\mathbf{x}_{j})=\|\mathbf{x}_{i}-\mathbf{x}_{j}\| _{2}=\sqrt{\sum_{k=1}^{p}(x_{i,k}-x_{j,k})^{2}}\). It is often used in MDS when the dissimilarity matrix satisfies the metric axioms and has a well-defined Euclidean interpretation.
We generalize the standard BMDS by considering cases where the dissimilarity matrix may not have a well-defined Euclidean interpretation. In this case, we can consider candidate models with non-Euclidean dissimilarity metrics. For example, Cosine metric is defined as \(\mathcal{D}(\mathbf{x}_{i},\mathbf{x}_{j})=1-\left(\sum_{k=1}^{p}x_{i,k}x_{j, k}\right)/\left(\sqrt{\sum_{k=1}^{p}x_{i,k}^{2}}\sqrt{\sum_{k=1}^{p}x_{j,k}^{2}}\right)\). The Cosine metric ranges from 0 to 1. It can be used for text analysis, as word frequencies are non-negative.
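The contrast between the two metrics can be made concrete with a small numerical sketch; the toy word-count vectors below are illustrative only and are assumed non-negative, so the Cosine dissimilarity lies in \([0,1]\).

```
import numpy as np

def euclidean_dissimilarity(Z):
    # Pairwise Euclidean distances between the rows of Z.
    return np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)

def cosine_dissimilarity(Z):
    # One minus the cosine similarity between the rows of Z.
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    return 1.0 - (Z @ Z.T) / (norms * norms.T)

# Toy word counts: a short and a long document with proportional content.
Z = np.array([[2.0, 1.0, 0.0],
              [20.0, 10.0, 0.0],
              [0.0, 1.0, 3.0]])
D_euc = euclidean_dissimilarity(Z)   # dominated by document length
D_cos = cosine_dissimilarity(Z)      # rows 0 and 1 are treated as identical
```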
In our GBMDS framework, a variety of distributions can be considered for \(g\), including both symmetric and skewed distributions. Symmetric distributions, such as Gaussian or Student's \(t\)-distributions, are suitable in some cases, while in other scenarios, a skewed distribution is more appropriate. In what follows, we will focus on the truncated skewed Gaussian distribution. This distribution is a suitable choice when the errors are skewed. Then, we will proceed to investigate the truncated Student's \(t\)-distribution. This distribution is deemed a suitable choice for robust estimation when outliers exist.
#### GBMDS with truncated skewed Gaussian distribution
We consider that some dissimilarities can be simultaneously skewed and positive. We denote the model with truncated skewed Gaussian distribution as \(\mathcal{M}_{TSN}\) and model the dissimilarity \(d_{i,j}\) as follows:
\[d_{i,j}|\mathcal{M}_{TSN}\sim\mathcal{SN}\left(\delta_{i,j},\sigma^{2},\psi \right)I\left(d_{i,j}>0\right),\qquad i\neq j,\ i,j=1,\ldots,n,\]
where \(\sigma^{2}\in\mathbb{R}^{+}\) is the squared scale parameter and \(\psi\in\mathbb{R}\) is the shape parameter. The truncated Gaussian distribution is recovered when \(\psi\) is zero. As the absolute value of \(\psi\)
grows, the absolute skewness of the distribution increases, with negative \(\psi\) producing a left-skewed distribution and positive \(\psi\) generating a right-skewed distribution.
For a given matrix of dissimilarities \(\mathbf{D}\), the likelihood function, \(l\), of the latent variables \(\mathbf{X}=\{\mathbf{x}_{1:n}\}\), unknown parameters \(\sigma^{2}\) and \(\psi\) under \(\mathcal{M}_{TSN}\), can be written as:
\[l\left(\mathbf{D}|\mathbf{X},\sigma^{2},\psi,\mathcal{M}_{TSN}\right)\] \[\propto \left\{\sigma^{2}\left(1-F_{\delta_{i,j},\sigma,\psi}(0)\right) \right\}^{-\frac{m}{2}}\times\exp\left\{-\frac{1}{2\sigma^{2}}\text{SSR} \right\}\times\prod_{i>j}\Phi\left(\psi\frac{d_{i,j}-\delta_{i,j}}{\sigma} \right),i,j=1,\ldots,n \tag{4}\]
where \(F(\cdot)\) is the cdf of skewed Gaussian distribution, \(\text{SSR}=\sum_{i>j}\left(d_{i,j}-\delta_{i,j}\right)^{2}\) is the sum of squared residuals, \(\Phi(\cdot)\) is the standard Gaussian cdf, and \(m=n(n-1)/2\) is the total number of dissimilarities for \(n\) objects.
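For concreteness, the likelihood in Equation 4 can be evaluated as a product of skew-Gaussian densities truncated to the positive half-line. The sketch below is an illustrative evaluation (not the implementation used in this paper); it assumes \(\delta_{i,j}\) is the Euclidean distance between rows of a candidate configuration and uses scipy's skew-normal parameterization with shape \(a=\psi\), location \(\delta_{i,j}\) and scale \(\sigma\).

```
import numpy as np
from scipy.stats import skewnorm

def loglik_tsn(D, X, sigma, psi):
    """Log-likelihood of the pairwise dissimilarities in D under the truncated
    skew-Gaussian model, with delta_ij taken as the Euclidean distance between
    rows i and j of the candidate configuration X (illustrative sketch)."""
    n = X.shape[0]
    rows, cols = np.triu_indices(n, k=1)          # all pairs i < j
    delta = np.linalg.norm(X[rows] - X[cols], axis=1)
    d = D[rows, cols]
    # Skew-normal log-density at d, renormalized to the positive half-line.
    logpdf = skewnorm.logpdf(d, a=psi, loc=delta, scale=sigma)
    log_trunc = np.log1p(-skewnorm.cdf(0.0, a=psi, loc=delta, scale=sigma))
    return np.sum(logpdf - log_trunc)
```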
#### Robust GBMDS with truncated Student's t-distribution
To further relax the assumption of constant Gaussian error variance in the dissimilarity, we introduce the model with truncated Student's \(t\)-distribution. Consequently, we can accommodate different degrees of uncertainty associated with dissimilarity by using different error variances. The \(t\)-distribution is often used as an alternative to the Gaussian distribution as a more robust model to fit data with heavier tails (Lange et al., 1989; Lin and Fong, 2019). In many applications, the outliers add more uncertainty around the tails of the dissimilarity measures. Fitting the truncated \(t\)-distribution provides a longer tail.
The \(t\)-distribution can be written in the form of its scale mixtures of Gaussian representation to demonstrate its robustness property:
\[t_{\text{df}}\left(x;\mu,\sigma^{2}\right)=\int_{0}^{\infty}\mathcal{N}\left( x;\mu,\frac{\sigma^{2}}{\zeta}\right)Gamma\left(\zeta,\frac{\nu}{2},\frac{ \nu}{2}\right)d\zeta. \tag{5}\]
Equation (5) indicates that if a random variable \(x\) follows a \(t\)-distribution with mean \(\mu\), variance \(\sigma^{2}\), and degrees of freedom \(\nu\), then conditioning on \(\zeta\sim\mathit{Gamma}\left(\nu/2,\nu/2\right)\), \(x\) follows a Gaussian distribution with parameters \(\mu\) and \(\sigma^{2}/\zeta\). The \(t\)-distribution down-weighs the observations which are disparate from the majority under the Gaussian distribution. This means that observations that are outliers or significantly different from the majority of the data will have less influence on the overall distribution in the \(t\)-distribution compared to the Gaussian distribution.
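A quick Monte Carlo check makes the scale-mixture representation in Equation 5 concrete; the values of \(\nu\), \(\mu\), \(\sigma\) and the sample size below are arbitrary.

```
import numpy as np
from scipy import stats

# zeta ~ Gamma(nu/2, rate = nu/2) and x | zeta ~ N(mu, sigma^2 / zeta)
# together imply x ~ t_nu(mu, sigma^2), as in Equation 5.
rng = np.random.default_rng(0)
nu, mu, sigma = 5.0, 1.0, 2.0
zeta = rng.gamma(shape=nu / 2, scale=2.0 / nu, size=200_000)   # rate nu/2
x = rng.normal(mu, sigma / np.sqrt(zeta))

# Compare empirical quantiles of the mixture to the Student-t reference.
q = np.array([0.05, 0.25, 0.5, 0.75, 0.95])
empirical = np.quantile(x, q)
reference = stats.t.ppf(q, df=nu, loc=mu, scale=sigma)
```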
We denote the model with truncated \(t\)-distribution as \(\mathcal{M}_{T}\). \(\mathcal{M}_{T}\) models the dissimilarity \(d_{i,j}\) as follows:
\[\zeta_{i,j} \sim\mathit{Gamma}\left(\nu/2,\nu/2\right),\] \[d_{i,j}|\mathcal{M}_{T} \sim\mathcal{N}\left(\delta_{i,j},\sigma^{2}/\zeta_{i,j}\right)I \left(d_{i,j}>0\right),\qquad i\neq j,\;i,j=1,\ldots,n.\]
For a given matrix of dissimilarities \(\mathbf{D}\), the likelihood function of the latent variables \(\mathbf{X}=\left\{\mathbf{x}_{1:n}\right\}\), unknown parameters \(\sigma^{2}\) and \(\zeta_{i,j}\) under \(\mathcal{M}_{T}\), can be written as:
\[l\left(\mathbf{D}|\mathbf{X},\sigma^{2},\zeta_{i,j},\mathcal{M}_ {T}\right)=\left(2\pi\sigma^{2}\right)^{-\frac{m}{2}}\times\] \[\exp\left\{\frac{1}{2}\sum_{i>j}\log\left(\zeta_{i,j}\right)- \frac{1}{2\sigma^{2}}\sum_{i>j}\zeta_{i,j}\left(d_{i,j}-\delta_{i,j}\right)^{2 }-\sum_{i>j}\log\Phi\left(\frac{\delta_{i,j}\sqrt{\zeta_{i,j}}}{\sigma}\right) \right\}, \tag{6}\]
where SSR, \(\Phi(\cdot)\), and \(m=n(n-1)/2\) are defined as in the model \(\mathcal{M}_{TSN}\). The derivation of the likelihood function under the model \(\mathcal{M}_{T}\) is given in _Supplementary_.
### Bayesian inference
#### Prior distributions
Under the Bayesian framework, the prior distributions for the unknown parameters \(\mathbf{x}_{i}\), \(\sigma^{2}\), \(\psi\), and \(\Lambda\) need to be specified in advance. We assume prior independence among parameters. For the prior of \(\mathbf{x}_{i}\), we choose a multivariate Gaussian distribution with mean \(\mathbf{0}\) and a diagonal covariance matrix \(\Lambda=\text{diag}(\lambda_{1},\ldots,\lambda_{p})\). In other words, \(\mathbf{x}_{i}\sim\mathcal{N}\left(\mathbf{0},\Lambda\right)\), independently for \(i=1,\ldots,n\). For the elements along the diagonal covariance matrix, we assume an inverse Gamma distribution for the hyperprior distribution, i.e., \(\lambda_{k}\sim\mathcal{IG}\left(\alpha,\beta_{k}\right)\), independently for \(k=1,\ldots,p\). For \(\sigma^{2}\), we use an inverse Gamma distribution for the prior distribution, i.e., \(\sigma^{2}\sim\mathcal{IG}\left(a,b\right)\). For \(\psi\), we choose a diffuse prior, i.e., \(\psi\sim\mathcal{U}\left(c,d\right)\). We denote the prior distributions of the unknown parameters as \(\pi\left(\mathbf{X}\right)\), \(\pi\left(\sigma^{2}\right)\), \(\pi\left(\psi\right)\) and \(\pi\left(\Lambda\right)\).
We use the same settings for the prior distributions of the unknown parameters \(\mathbf{x}_{i}\) and \(\sigma^{2}\) as in the model \(\mathcal{M}_{TSN}\). In addition, under the setting of model \(\mathcal{M}_{T}\), we use \(Gamma\left(\nu/2,\nu/2\right)\) as the prior distribution for \(\zeta_{i,j}\).
#### Posterior distributions
For simplicity, we introduce a new notation \(\boldsymbol{s}\) to represent all the latent variables \(\mathbf{X}\) and the unknown parameters \(\boldsymbol{\theta}\). Note that the parameters \(\boldsymbol{\theta}\) vary across different models. In the model with truncated skewed Gaussian distribution, \(\boldsymbol{\theta}\) includes \(\sigma^{2}\), \(\Lambda\), and \(\psi\). In the model with truncated Student's \(t\)-distribution, \(\boldsymbol{\theta}\) includes \(\sigma^{2}\), \(\Lambda\), \(\psi\), and \(\zeta\).
In the Bayesian framework, our interest is the posterior distribution on \(\boldsymbol{s}\) given dissimilarity matrix \(\mathbf{D}\), denoted as:
\[\pi\left(\boldsymbol{s}|\mathbf{D}\right)=\frac{\gamma\left(\boldsymbol{s}| \mathbf{D}\right)}{Z}\propto l(\mathbf{D}|\boldsymbol{s})\times\pi(\boldsymbol {s}), \tag{7}\]
where \(\gamma(\boldsymbol{s}|\mathbf{D})\) denotes the unnormalized posterior distribution, \(l(\mathbf{D}|\boldsymbol{s})\) is the likelihood function, \(\pi(\boldsymbol{s})\) is the prior on the parameters, and \(Z=\int\gamma(\boldsymbol{s}|\mathbf{D})d\boldsymbol{s}\) is the marginal likelihood.
The likelihood functions are specified in Equations 4 and 6 for \(\mathcal{M}_{TSN}\) and \(\mathcal{M}_{T}\), respectively. The prior distributions are described in the previous subsection. Since the normalizing constant \(Z\) is intractable, we will use Monte Carlo methods to approximate the posterior distributions, which will be detailed in Section 3.
#### Adaptive Bayesian inference
Let \(\mathbf{X}^{(0)}\) be the hidden variables that are associated with objects \(\mathbf{Z}^{(0)}=\{\mathbf{z}_{1},\ldots,\mathbf{z}_{n_{0}}\}\), and \(\mathbf{X}^{(1)}\) be the hidden variables associated with objects \(\mathbf{Z}^{(1)}=\{\mathbf{z}_{n_{0}+1},\ldots,\mathbf{z}_{n_{0}+n_{1}}\}\). Given dissimilarity metric \(\mathcal{D}\), dissimilarities \(\mathbf{D}^{(0)}\) is obtained from \(\mathbf{Z}^{(0)}\) and \(\mathbf{D}\) is obtained from \(\mathbf{Z}=(\mathbf{Z}^{(0)},\mathbf{Z}^{(1)})\).
In this case, \(\boldsymbol{s}\) is composed of three parts, \(\mathbf{X}^{(0)}\), \(\mathbf{X}^{(1)}\), and \(\boldsymbol{\theta}\). The posterior distribution of \(\boldsymbol{s}\) can be rewritten as:
\[\pi\left(\mathbf{X}^{(0)},\mathbf{X}^{(1)},\boldsymbol{\theta}|\mathbf{D} \right)\propto l\left(\mathbf{D}|\mathbf{X}^{(0)},\mathbf{X}^{(1)},\boldsymbol {\theta}\right)\pi\left(\mathbf{X}^{(0)}\right)\pi\left(\mathbf{X}^{(1)} \right)\pi\left(\boldsymbol{\theta}\right). \tag{8}\]
The adaptive Bayesian inference concerns the inference of \(\pi(\mathbf{X}^{(0)},\mathbf{X}^{(1)},\boldsymbol{\theta}|\mathbf{D})\) using the previous inference for \(\pi(\mathbf{X}^{(0)},\boldsymbol{\theta}|\mathbf{D}^{(0)})\) when dissimilarity data increase from \(\mathbf{D}^{(0)}\) to \(\mathbf{D}\).
When the previous dissimilarity matrix \(\mathbf{D}^{(0)}\) is not available, we denote \(\mathbf{D}^{(0)}=\emptyset\) and \(\mathbf{X}^{(0)}=\emptyset\). With our notation, the posterior distribution in Equation 7 is a special case of Equation 8 when \(\mathbf{D}^{(0)}=\emptyset\), \(\mathbf{X}^{(0)}=\emptyset\). Therefore, we will only focus on the adaptive Bayesian inference with Monte Carlo methods for Equation 8 in Section 3.
We propose to conduct adaptive Bayesian inference in two scenarios. First, in the problem of online inference, we make inference sequentially in time as data arrive, or as additional observations become available to update the most recent results. We refer to such situations as adaptive inference, where we are concerned with the re-computation of results that are only marginally different from those of a previously solved inference problem. The main idea is to use posteriors from the previous iteration to initialize the next iteration. The same idea also applies to the situation where the sample size of the data is large. Instead of running the algorithm with a fixed dimension on all observations at one time, we can split the data into several batches and make inferences sequentially. We expect this to be helpful in the Bayesian multidimensional scaling context, as the visualization of the objects can be created sequentially, which alleviates the computational load.
### Model comparison
As described in the previous section, the function \(g\) can take different forms. In most cases, the optimal form of \(g\), the number of significant attributes \(p\) or the applied dissimilarity metrics are unknown. In this section, we approach the problem of comparing a discrete set of Bayesian models with the Bayes factor. Consider two models \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) with different likelihoods and corresponding sets of parameters \(\boldsymbol{s}_{1}\) and \(\boldsymbol{s}_{2}\). In the context of this paper, \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) would correspond to two competing models. Examples of competing models could be \(\mathcal{M}_{TSN}\) versus \(\mathcal{M}_{T}\), or one model under a different
choice of dimension \(p\). The Bayes factor is defined as the ratio of the posterior odds to the prior odds:
\[\text{Bayes Factor}\left(\mathcal{M}_{1},\mathcal{M}_{2}\right)=\frac{P( \mathcal{M}_{1}|\mathbf{D})/P(\mathcal{M}_{2}|\mathbf{D})}{P(\mathcal{M}_{1})/ P(\mathcal{M}_{2})}.\]
When the two models have equal prior probability, i.e., \(P(\mathcal{M}_{1})=P(\mathcal{M}_{2})\), the Bayes Factor reduces to the ratio of two marginal likelihood estimates, and is given by:
\[\text{Bayes Factor}\left(\mathcal{M}_{1},\mathcal{M}_{2}\right)=\frac{\int P( \boldsymbol{s}_{1}|\mathcal{M}_{1})P(\mathbf{D}|\boldsymbol{s}_{1},\mathcal{M}_ {1})\,d\boldsymbol{s}_{1}}{\int P(\boldsymbol{s}_{2}|\mathcal{M}_{2})P( \mathbf{D}|\boldsymbol{s}_{2},\mathcal{M}_{2})\,d\boldsymbol{s}_{2}}=\frac{P \left(\mathbf{D}|\mathcal{M}_{1}\right)}{P\left(\mathbf{D}|\mathcal{M}_{2} \right)}.\]
Bayes factor can provide support to either model; a Bayes factor greater than 1 indicates support for model 1 over model 2 and vice versa. A rule of thumb, as suggested in Kass and Raftery (1995), can be viewed as guidelines for model selection from a Bayes factor. A typical challenge for using the Bayes factor is the computation of the marginal likelihood estimates, especially for MCMC-based methods (Wang et al., 2020). Marginal likelihood estimation is not straightforward in MCMC-based methods, and additional sampling procedures are needed to obtain these estimates. Several methods have been proposed to address this issue (Chib and Jeliazkov, 2001; Skilling, 2004; Robert and Wraith, 2009), but each has its drawback. One additional limitation shared by all MCMC-based marginal likelihood estimators is that they are generally biased.
There are several reasons for using the Bayesian approach for model selection over the classical tools, such as \(p\)-values and some information criteria. First, the interpretation of Bayes factors is straightforward. The posterior model probabilities can be directly interpreted as probabilities that are readily understandable by even non-statisticians. Second, Bayesian model selection is consistent, while some classical model selection tools do not guarantee consistency. Moreover, as shown in Berk (1966), Bayesian model selection will pick the model with the closest Kullback-Leibler divergence to the true model (asymptotically and under mild conditions). Third, Bayesian model selection naturally penalizes complex models and favours simpler models when the data provides roughly comparable fits. For more discussion and references, see Berger et al. (2001) and Robert et al. (2007).
On the other hand, from the frequentist point of view, STRESS is a commonly used measure of fit for the object configuration problem (Kruskal, 1964). The STRESS value is defined as
\[\text{STRESS}=\sqrt{\frac{\sum_{i>j}\left(d_{i,j}-\hat{\delta}_{i,j}\right)^{2} }{\sum_{i>j}d_{i,j}^{2}}},\]
where \(\hat{\delta}_{i,j}\) is the distance found from the estimated object configuration. MDS methods form an object configuration that minimizes the STRESS values. A smaller STRESS value indicates a better fit.
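A direct implementation of the STRESS criterion is straightforward; the sketch below assumes the fitted distances \(\hat{\delta}_{i,j}\) are Euclidean distances computed from the estimated configuration.

```
import numpy as np

def stress(D, X_hat):
    """Kruskal STRESS between observed dissimilarities D and the distances
    implied by an estimated configuration X_hat."""
    n = D.shape[0]
    rows, cols = np.triu_indices(n, k=1)
    d = D[rows, cols]
    delta_hat = np.linalg.norm(X_hat[rows] - X_hat[cols], axis=1)
    return np.sqrt(np.sum((d - delta_hat) ** 2) / np.sum(d ** 2))
```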
In this work, we will select the optimal Bayesian model and dimension \(p\) using the marginal likelihood estimates. We will compare and evaluate the performances of CMDS and GBMDS using the STRESS value.
### Identifiability in multidimensional scaling
Similar to other dimensional reduction methods, identification issues exist in the posterior inference of GBMDS. For instance, the center and direction of the estimated points can be arbitrary. Given this identification issue, we propose the following way to display the uncertainty measures: We apply the Procrustes transformations (Goodall, 1991) as a standardization process on all the posterior samples of \(\mathbf{x}_{i}\)'s. This transformation aligns configurations with a least-squares criterion by a combination of scaling, rotation, reflection and translation. The credible regions are then constructed from the posterior samples of \(\mathbf{x}_{i}\)'s after this Procrustes transformation for measures of uncertainty.
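A minimal sketch of this standardization is given below; it performs an orthogonal Procrustes alignment (translation, rotation and reflection) of one posterior draw to a reference configuration, omitting the scaling component of the full Procrustes transformation for brevity.

```
import numpy as np

def procrustes_align(X_ref, X):
    """Align configuration X to the reference X_ref by translation and an
    orthogonal (rotation/reflection) map, via the SVD solution of the
    orthogonal Procrustes problem."""
    A = X_ref - X_ref.mean(axis=0)
    B = X - X.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = Vt.T @ U.T                      # optimal orthogonal transformation
    return B @ R + X_ref.mean(axis=0)
```

Applying this alignment to the configuration carried by every particle, with the approximate posterior mode as the reference, yields comparable samples from which pointwise credible regions can be computed.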
## 3 Adaptive Bayesian Inference using Annealed SMC
### Intermediate distributions and particle initialization
To conduct the Bayesian inference for the posterior distribution in Equation 8, we propose to design an artificial sequence of annealing intermediate target distributions following the ideas from the SMC literature (Neal, 2001; Del Moral et al., 2006, 2007; Wang et al., 2020). Specifically, we create a sequence of annealing intermediate target distributions \(\{\pi_{r}(\boldsymbol{s})\}_{0\leq r\leq R}\), such that
\[\pi_{r}\left(\boldsymbol{s}\right)\propto\gamma_{r}\left(\boldsymbol{s}\right) =\left(l\left(\mathbf{D}|\boldsymbol{s}\right)\pi\left(\boldsymbol{s}\right) \right)^{\tau_{r}}\times\tilde{\pi}_{0}\left(\boldsymbol{s}\right)^{1-\tau_{r }}, \tag{9}\]
where \(\tilde{\pi}_{0}(\boldsymbol{s})\) is a _reference distribution_ that is generally easy to sample from (Fan et al., 2011), and \(0=\tau_{0}<\tau_{1}<\ldots<\tau_{R}=1\) is a sequence of annealing parameters. If \(\tau_{r}\) is zero, the distribution becomes the reference distribution \(\tilde{\pi}_{0}(\boldsymbol{s})\). At the other extreme, the distribution is the posterior distribution of interest when the power \(\tau_{r}\) equals \(1\).
In our model, \(\boldsymbol{s}\) is a vector of all the variables in \(\mathbf{X}^{(0)}\), \(\mathbf{X}^{(1)}\), and \(\boldsymbol{\theta}\). The reference distribution can be specified for \(\mathbf{X}^{(0)}\), \(\mathbf{X}^{(1)}\) and \(\boldsymbol{\theta}\) independently:
\[\tilde{\pi}_{0}\left(\boldsymbol{s}\right)=\tilde{\pi}_{0}\left(\mathbf{X}^{ (0)}\right)\tilde{\pi}_{1}\left(\mathbf{X}^{(1)}\right)\tilde{\pi}_{0}\left( \boldsymbol{\theta}\right). \tag{10}\]
Preferably, the reference distributions should possess properties that allow for convenient sampling and proximity to the modes of the target distribution. For simplicity, we choose the reference distribution for \(\boldsymbol{\theta}\) to be its prior distribution, i.e. \(\tilde{\pi}_{0}(\boldsymbol{\theta})=\pi(\boldsymbol{\theta})\), and the reference distributions for \(\mathbf{X}^{(1)}\) to be its prior, \(\tilde{\pi}_{1}\left(\mathbf{X}^{(1)}\right)=\pi\left(\mathbf{X}^{(1)}\right)\); the reference distributions for \(\mathbf{X}^{(0)}\), denoted \(\tilde{\pi}_{0}\left(\mathbf{X}^{(0)}\right)\), is set to be a Gaussian distribution.
With a small value of \(\tau_{r}\), the intermediate target distribution is closer to the reference distribution. For parameters that rely on the prior distribution as the reference distribution, smaller \(\tau_{r}\) can result in flatter intermediate target distributions that facilitate the movement between various modes. The samples are coerced into the posterior distribution as we slowly increase the annealing parameter \(\tau_{r}\). The initialization of particles is summarized in Algorithm 1.
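In log space, the intermediate target in Equation 9 is a simple convex combination of the log posterior and the log reference; the sketch below assumes callables returning \(\log l(\mathbf{D}|\boldsymbol{s})\), \(\log\pi(\boldsymbol{s})\) and \(\log\tilde{\pi}_{0}(\boldsymbol{s})\).

```
def log_gamma_r(s, tau, loglik, logprior, logref):
    """Unnormalized log-density of the annealed intermediate target in
    Equation 9: tau * (log-likelihood + log-prior) + (1 - tau) * log-reference."""
    return tau * (loglik(s) + logprior(s)) + (1.0 - tau) * logref(s)
```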
### Annealed SMC
Next, we will introduce in Algorithm 2 the annealed SMC algorithm along with the adaptive mechanism for choosing the annealing sequence. The annealed SMC algorithm approximates the posterior distribution \(\pi(\mathbf{s}|\mathbf{D})\) in \(R\) steps. At each step \(r\), we approximate \(\pi_{r}(\cdot)\) using a total of \(K\) particles. Each particle \(\mathbf{s}_{r,k}\) is associated with a positive weight. Let \(w_{r,k}\) denote the unnormalized weight for particle \(\mathbf{s}_{r,k}\) and let \(W_{r,k}\) denote the corresponding normalized weight. The normalization is performed by \(W_{r,k}=w_{r,k}/\sum_{k=1}^{K}w_{r,k}\).
We start by sampling initial particles from the reference distributions. Then, the annealed SMC algorithm iterates between _reweighting, propagating_, and _resampling_. The details of the three steps in the annealed SMC algorithm are given as follows.
_Step 1. Weight Update_
The incremental importance weight for particle \(k\) at iteration \(r\) is
\[\tilde{w}_{r,k}=\frac{\gamma_{r}\left(\mathbf{s}_{r,k}\right)\times\kappa^{-} \left(\mathbf{s}_{r,k},\mathbf{s}_{r-1,k}\right)}{\gamma_{r-1}\left(\mathbf{s}_{r-1,k} \right)\times\kappa^{+}\left(\mathbf{s}_{r-1,k},\mathbf{s}_{r,k}\right)}, \tag{11}\]
where the forward kernel \(\kappa^{+}(\mathbf{s}_{r-1,k},\mathbf{s}_{r,k})\) is a \(\pi_{r}\)-invariant Metropolis-Hastings kernel, and \(\kappa^{-}(\mathbf{s}_{r,k},\mathbf{s}_{r-1,k})\) is the backward kernel (Del Moral et al., 2006). The selection of the backward kernel is crucial as it will affect the variance of the normalized weights. A convenient backward kernel that allows easy computation of the weight is
\[\kappa^{-}\left(\mathbf{s}_{r,k},\mathbf{s}_{r-1,k}\right)=\frac{\gamma_{r}\left(\bm {s}_{r-1,k}\right)\times\kappa^{+}\left(\mathbf{s}_{r-1,k},\mathbf{s}_{r,k}\right)}{ \gamma_{r}\left(\mathbf{s}_{r,k}\right)}. \tag{12}\]
This approach simplifies the evaluation of weights since we do not need point-wise evaluations of the backward and forward kernels. The incremental importance weight becomes
\[\tilde{w}_{r,k}=\left[\frac{l\left(\mathbf{D}|\mathbf{s}_{r-1,k}\right)\pi(\mathbf{s }_{r-1,k})}{\tilde{\pi}_{0}(\mathbf{s}_{r-1,k})}\right]^{\tau_{r}-\tau_{r-1}}. \tag{13}\]
The weight update function for particles at iteration \(r\) is
\[W_{r,k}\propto w_{r,k}=w_{r-1,k}\tilde{w}_{r,k}.\]
Note the weight update function only depends on the particles at the previous iteration. This is implemented in Line 9 of Algorithm 2.
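In practice the reweighting is carried out on the log scale to avoid numerical underflow; a minimal sketch of the update and normalization is given below.

```
import numpy as np

def update_log_weights(log_w_prev, log_w_tilde):
    """Weight update w_{r,k} = w_{r-1,k} * w_tilde_{r,k} on the log scale,
    followed by the normalization W_{r,k} = w_{r,k} / sum_k w_{r,k}."""
    log_w = log_w_prev + log_w_tilde
    log_W = log_w - np.logaddexp.reduce(log_w)   # normalized log-weights
    return log_w, log_W
```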
_Step 2. Particle Propagation_
We sample the new particles \(\mathbf{s}_{r,k}\) from \(\pi_{r}\)-invariant Metropolis-Hastings kernels. The annealed SMC algorithm can directly make use of the MCMC proposals in the particle propagation. The full conditional distributions for parameters \(\lambda_{j}\), \(\{\mathbf{x}_{1:n}\}\), \(\sigma^{2}\), \(\psi\), and \(\zeta_{i,j}\) are presented below. In each conditional posterior distribution, we use \(|\cdots\) to denote conditioning on the data and all other parameters and/or indicators. A detailed description of sampling methods is given in _Supplementary_.
The full conditional distribution for \(\lambda_{k}\) is
\[\lambda_{k}|\cdots\sim IG(\alpha+n/2,\beta_{k}+\tau_{r}s_{k}/2), \tag{14}\]
where \(s_{k}/n\) is the sample variance of the \(k\)th coordinates of \(\mathbf{x}_{i}\)'s.
The full conditional posterior distributions of \(\{\mathbf{x}_{1:n}\}\), \(\sigma^{2}\) and \(\psi\) do not admit closed forms, so a random-walk Metropolis-Hastings step is implemented with Gaussian proposal densities.
For \(\mathcal{M}_{TSN}\),
\[\gamma_{r}\left(\{\mathbf{x}_{1:n}\}|\cdots,\mathcal{M}_{TSN} \right) \propto\exp\left\{-\tau_{r}\left(A+\frac{1}{2}\sum_{i=1}^{n} \mathbf{x}_{i}^{\top}\Lambda^{-1}\mathbf{x}_{i}\right)\right\},\] \[\gamma_{r}\left(\sigma^{2}|\cdots,\mathcal{M}_{TSN}\right) \propto\sigma^{-2(a+1)}\exp\left\{-\tau_{r}\left(A+\frac{b}{ \sigma^{2}}\right)\right\},\]
where \(A=\frac{1}{2\sigma^{2}}\operatorname{SSR}+\frac{m}{2}\log\left(\sigma^{2} \left(1-F_{\delta_{i,j},\sigma,\psi}(0)\right)\right)-\sum_{i>j}\log\left(\Phi \left(\psi\frac{d_{i,j}-\delta_{i,j}}{\sigma}\right)\right)\).
For \(\mathcal{M}_{T}\),
\[\gamma_{r}\left(\{\mathbf{x}_{1:n}\}|\cdots,\mathcal{M}_{T}\right) \propto\exp\left\{-\tau_{r}\left(C+\frac{1}{2}\sum_{i=1}^{n} \mathbf{x}_{i}^{\top}\Lambda^{-1}\mathbf{x}_{i}\right)\right\},\] \[\gamma_{r}\left(\sigma^{2}|\cdots,\mathcal{M}_{T}\right) \propto\sigma^{-m}\exp\left\{-\tau_{r}\left(C+\frac{b}{\sigma^{2} }\right)\right\},\]
where \(C=\frac{1}{2\sigma^{2}}\sum_{i>j}\zeta_{i,j}\left(d_{i,j}-\delta_{i,j}\right) ^{2}+\sum_{i>j}\log\Phi\left(\frac{\delta_{i,j}\sqrt{\zeta_{i,j}}}{\sigma}\right)\).
For model \(\mathcal{M}_{T}\), the full conditional distribution for \(\zeta_{i,j}\) is
\[\zeta_{i,j}|\cdots,\mathcal{M}_{T}\sim Gamma((\tau_{r}+\nu)/2,\tau_{r}(d_{i,j} -\delta_{i,j})^{2}/(2\sigma^{2})+\nu/4). \tag{15}\]
_Step 3. Particle Resampling_
To alleviate the issue that all normalized weights converge to 0 except for one particle in sequential importance sampling, we prune particles of low weights when the population becomes too unbalanced. Popular resampling schemes include, but are not limited to, multinomial resampling, systematic resampling, stratified resampling, and residual resampling (Douc and Cappe, 2005). For simplicity, we will use multinomial resampling in our implementation.
Resampling at each iteration will increase the variance of the importance weights. Therefore, the resampling step is performed only when the degeneracy of the particles reaches some threshold \(\epsilon\). At each iteration \(r\), we monitor the degeneracy of particles using the effective sampling size (ESS) (Kong, 1992):
\[\text{ESS}=\frac{1}{\sum_{k=1}^{K}\left(W_{r,k}\right)^{2}}. \tag{16}\]
The relative effective sample size (rESS) normalizes the ESS between zero and one. The rESS at iteration \(r\) can be calculated by \(\text{rESS}=\text{ESS}/K\).
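The degeneracy criterion can be computed directly from the normalized weights; the sketch below works with log-weights, consistent with the reweighting sketch above.

```
import numpy as np

def relative_ess(log_W):
    """Relative effective sample size rESS = ESS / K, with ESS as in Equation 16
    computed from normalized log-weights."""
    W = np.exp(log_W)
    return 1.0 / (np.sum(W ** 2) * W.size)
```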
The annealed SMC algorithm produces a set of particles. After the extra resampling step in the end, the output of the annealed SMC algorithm contains a list of \(K\) particles with equal weight. These particles can be used for the posterior approximation and for constructing the visualization in the lower dimensional space. To find the Bayesian estimate of \(\mathbf{X}\), we take an approximate posterior mode of \(\{\mathbf{x}_{1:n}\}\) as described in Oh and Raftery (2001). Oh and Raftery (2001) observed that the term involving \(SSR\) dominates the posterior density. Thus, the approximate posterior mode can be found by the values of \(\{\mathbf{x}_{1:n}\}\) that minimizes \(SSR\) among all \(K\) particles. The approximate posterior mode retrieves the relative positions of \(\{\mathbf{x}_{1:n}\}\), and they can be considered as the solution to the object configuration. Meaningful absolute positions of \(\mathbf{X}\) may be obtained from some suitable transformation defined by the users if needed.
Some challenges with MCMC-based approximations arise in the context of model comparison via marginal likelihood estimators, as discussed in Section 2.3. MCMC-based algorithms incur additional costs for estimating the marginal likelihood separately, often through complicated formulas. By contrast, model selection can be accomplished effortlessly via the Bayes factor in the proposed annealed SMC algorithm. When resampling is not conducted at every step, the estimated marginal likelihood can be evaluated during the sampling process with the following formula:
\[\widehat{Z}_{R}=\prod_{r=1}^{R}\sum_{k=1}^{K}W_{r-1,k}\tilde{w}_{r,k}.\]
Moreover, the estimates are unbiased when the annealing sequence is fixed. Past work has shown the advantages of SMC over MCMC in the context of model comparison via marginal likelihood estimators (Del Moral, 2004; Del Moral et al., 2006; Doucet and Johansen, 2009).
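The running marginal-likelihood estimate can be accumulated on the log scale as sampling proceeds; a minimal sketch of one update of \(\log\widehat{Z}_{r}\) is shown below.

```
import numpy as np

def update_log_marginal(log_Z_prev, log_W_prev, log_w_tilde):
    """One factor of the product Z_R = prod_r sum_k W_{r-1,k} * w_tilde_{r,k},
    accumulated as log Z_r = log Z_{r-1} + logsumexp(log W_{r-1} + log w_tilde_r)."""
    return log_Z_prev + np.logaddexp.reduce(log_W_prev + log_w_tilde)
```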
```
Input : (a) Initialization of \(K\) particles: \(\{\mathbf{s}_{0,k}\}_{k=1}^{K}\); (b) Priors and reference distributions over model parameters \(\mathbf{s}=\{\{\mathbf{x}_{1:n}\},\sigma^{2},\Lambda,\psi,\zeta\}\); (c) Likelihood function \(l(\mathbf{D}|\mathbf{s})\); (d) rCESS threshold \(\phi\); (e) Resampling threshold \(\epsilon\). Output : (a) Particle population: \(\{(\mathbf{s}_{R,k},W_{R,k})\}_{k=1}^{K}\); (b) Marginal likelihood estimates: \(\hat{Z}_{R}\); (c) Total SMC iterations: R; (d) Sequence of annealing parameter: \(\{\tau_{r}\}_{r=0}^{R}\).
1 Initialize SMC iteration index: \(r\gets 1\), initialize annealing parameter: \(\tau_{0}\gets 0\), initialize marginal likelihood estimate: \(\hat{Z}_{0}\gets 1\), load initial particles \(\{\mathbf{s}_{0,k}\}_{k=1}^{K}\).
2for\(k\in\{1,2,\ldots,K\}\)do
3\(w_{r,k}=1\), \(W_{r,k}=1/K\).
4for\(r\in\{2,3,\ldots\}\)do
5for\(k\in\{1,2,\ldots,K\}\)do
6 Compute incremental importance weights: \(\tilde{w}_{r,k}=\left[\frac{l(\mathbf{D}|\mathbf{s}_{r-1,k})\pi(\mathbf{s}_{r-1,k})}{\tilde{\pi}_{0}(\mathbf{s}_{r-1,k})}\right]^{\tau-\tau_{r-1}}\).
7 Determine the next annealing parameter \(\tau_{r}\) using bisection method with: \(f(\tau)=\text{rCESS}_{r}(W_{r-1,\cdot},w_{r,\cdot})=\phi\).
8for\(k\in\{1,2,\ldots,K\}\)do
9 Compute pre-resampling unnormalized weights: \(w_{r,k}=w_{r-1,k}\times\tilde{w}_{r,k}\).
10 Normalize weights: \(W_{r,k}=w_{r,k}/(\sum_{k=1}^{K}w_{r,k})\).
11 Sample particles \(\mathbf{s}_{r,k}\) from \(\pi_{r}\)-invariant Metropolis-Hastings kernels.
12 Update marginal likelihood estimates \(\hat{Z}_{r}=\hat{Z}_{r-1}\times\sum_{k=1}^{K}W_{r-1,k}\tilde{w}_{r,k}\).
13if\(\tau_{r}=1\)then
14 The total number of SMC iterations \(R\gets r\).
15 return \(R\), \(\{\tau_{r}\}_{r=0}^{R}\), \(\{(\mathbf{s}_{R,k},W_{R,k})\}_{k=1}^{K}\), and \(\hat{Z}_{R}\),
16
17else
18ifparticle degeneracy is too severe, i.e. rESS \(<\epsilon\)then
19 Resample the particles, denoted \(\{\mathbf{s}_{r,k}\}_{k=1}^{K}\);
20 Reset particle weights: \(w_{r,k}=1\), \(W_{r,k}=1/K\).
```
**Algorithm 2**Annealed_SMC
The sequence of intermediate target distributions, as defined in Equation 9, is determined by the choice of the annealing sequence \(\{\tau_{r}\}\). Proper selection of the annealing parameters is one challenge in annealed SMC. A large number of annealing parameters can improve the performance but increases the computational cost. To ensure that the particles proposed at the current iteration can effectively approximate the subsequent intermediate target distribution, it is necessary to transition smoothly from the reference distribution (\(\tau_{0}=0\)) to the posterior distribution (\(\tau_{R}=1\)).
We apply the adaptive annealing parameter scheme discussed in Wang et al. (2020). The main idea is to select an annealing parameter \(\tau\) such that we achieve a controlled increase in particle degeneracy. The particle degeneracy between two successive intermediate distributions is measured by the relative conditional effective sample size (rCESS) (Zhou et al., 2016),
\[\mathrm{rCESS}_{r}\left(W_{r-1,\cdot},\tilde{w}_{r,\cdot}\right)=\frac{\left( \sum_{k=1}^{K}W_{r-1,k}\tilde{w}_{r,k}\right)^{2}}{\sum_{k=1}^{K}W_{r-1,k} \left(\tilde{w}_{r,k}\right)^{2}}. \tag{17}\]
Values of rCESS range from \(1/K\) to \(1\). With the \(\tilde{w}_{r,k}\) in Equation 13, \(\mathrm{rCESS}_{r}\) is a decreasing function of \(\tau_{r}\), where \(\tau_{r}\in(\tau_{r-1},1]\). The value of rCESS over iterations is controlled by choosing the annealing parameter \(\tau\) such that
\[f(\tau)=\mathrm{rCESS}_{r}\left(W_{r-1,\cdot},\tilde{w}_{r,\cdot}\right)=\phi, \tag{18}\]
where \(\phi\in(0,1)\) is a tuning parameter that controls the length of the sequence \(\tau_{r}\). Since there exists no closed-form solution for \(\tau\) by solving \(f(\tau)=\phi\), a bisection method is used to solve this one-dimensional search problem. The search interval is \(\tau_{r}\in(\tau_{r-1},1]\). Given that \(f\) is a continuous function with \(f(\tau_{r-1})-\phi>0\) and \(f(1)-\phi<0\) (otherwise set \(\tau_{r}=1\)), it follows that there must exist an intermediate point \(\tau^{*}\) with \(f(\tau^{*})=\phi\). This is implemented in Line 7 of Algorithm 2.
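As an illustration of this adaptive step, the sketch below (Python; hypothetical function names, not the authors' implementation) evaluates the rCESS of Equation 17 for a candidate \(\tau\) and bisects on \((\tau_{r-1},1]\) until the target \(\phi\) is met, setting \(\tau_{r}=1\) when even the endpoint keeps the rCESS above \(\phi\). It assumes the incremental log-weight of particle \(k\) at a candidate \(\tau\) is \((\tau-\tau_{r-1})\) times a fixed per-particle log-ratio.

```python
import numpy as np

def rcess(W_prev, log_w_tilde):
    """Relative conditional ESS (Eq. 17), computed from log incremental weights."""
    w = np.exp(log_w_tilde - log_w_tilde.max())      # rCESS is scale-invariant
    return np.sum(W_prev * w) ** 2 / np.sum(W_prev * w ** 2)

def next_annealing_parameter(W_prev, log_ratio, tau_prev, phi, tol=1e-10):
    """Bisection solve of rCESS(tau) = phi on the interval (tau_prev, 1]."""
    f = lambda tau: rcess(W_prev, (tau - tau_prev) * log_ratio) - phi
    if f(1.0) >= 0.0:          # a full step to tau = 1 still keeps enough ESS
        return 1.0
    lo, hi = tau_prev, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:       # rCESS still above phi: the step can be larger
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```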
### Adaptive mechanism
In the previous subsection, we presented the annealed SMC algorithm for a fixed dimension. In this section, we will describe an adaptive mechanism to enable the annealed SMC algorithm to handle increasing dimensions. The complete algorithm is presented in Algorithm 3.
When \(\mathbf{D}^{(0)}=\emptyset\) and \(\mathbf{X}^{(0)}=\emptyset\), the reference distribution for \(\mathbf{X}^{(1)}=\mathbf{x}_{1:n_{1}}\) is chosen to concentrate around the CMDS solution,

\[\mathbf{x}_{i}\sim\mathcal{N}\left(\mathbf{x}_{i}^{\mathrm{CMDS}},0.01\boldsymbol{I}\right),\text{ independently for }i=1,\ldots,n_{1},\]
where \(\mathbf{x}_{i}^{\mathrm{CMDS}}\) is the result from fitting CMDS on \(\mathbf{D}^{(1)}\). The variance of the reference distribution is selected in a manner that induces the particles to concentrate around the CMDS outputs. With a small value of the annealing parameter \(\tau_{r}\), the intermediate target distribution is closer to the reference distribution, which is concentrated around the CMDS outputs for \(\mathbf{D}^{(1)}\).
When \(\mathbf{D}^{(0)}\neq\emptyset\) and \(\mathbf{X}^{(0)}\neq\emptyset\), we choose the reference distribution to be a particle approximation to the posterior distribution of \(\mathbf{X}^{(0)}\) given \(\mathbf{D}^{(0)}\). The reference distribution for \(\mathbf{X}^{(0)}=\mathbf{x}_{1:n_{0}}\) is based on the results from all particles:
\[\mathbf{x}_{i}\sim\mathcal{N}\left(\hat{\mathbf{x}}_{i},\hat{\mathbf{\Sigma}} \right),\text{independently for }i=1,\ldots,n_{0},\]
where \(\hat{\mathbf{x}}_{i}\) is the particles' posterior mode and \(\hat{\mathbf{\Sigma}}\) is the estimated covariance matrix of all observations' particles' posterior modes from the previous computation. For the new incremental set \(\mathbf{X}^{(1)}\) in \(\boldsymbol{s}\), we sample initial particles from the reference distribution, which is selected as its prior distribution for simplicity. This completes the specifications of the reference distributions in Algorithm 1. Initialization of particles is implemented from Line 3 to Line 9 of Algorithm 3.
As an example, suppose we have already obtained the posterior samples of \(\mathbf{X}^{(0)}\) from \(n_{0}\) old observations by running the annealed SMC algorithm, and an additional \(n_{1}\) new observations become available. In that case, instead of running the annealed SMC algorithm from scratch using \(n=n_{0}+n_{1}\) observations, we can utilize the information from the posterior samples of \(\mathbf{X}^{(0)}\) to initialize values for the old observations, and use the prior distribution to initialize samples for the new observations \(\mathbf{X}^{(1)}\), as sketched below. An example that illustrates this incremental-dimension setting in detail is given in Section 5.1.
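A minimal sketch of this initialization (Python; hypothetical array names, simplifying Lines 3-9 of Algorithm 3) draws the coordinates of the \(n_{0}\) old observations around their previous posterior summaries and the coordinates of the \(n_{1}\) new observations from the prior.

```python
import numpy as np

def initialize_particles(K, x_hat_old, Sigma_hat, n_new, Lambda, rng=None):
    """Draw K initial particles for the coordinates X = (X^(0), X^(1)).

    x_hat_old : (n0, p) posterior summaries of the old coordinates
    Sigma_hat : (p, p) covariance estimated from the previous particle population
    n_new     : number of newly observed objects
    Lambda    : (p, p) prior covariance for the new coordinates
    """
    rng = np.random.default_rng() if rng is None else rng
    n0, p = x_hat_old.shape
    particles = np.empty((K, n0 + n_new, p))
    for k in range(K):
        # Old objects: reference distribution N(x_hat_i, Sigma_hat).
        particles[k, :n0] = x_hat_old + rng.multivariate_normal(np.zeros(p), Sigma_hat, size=n0)
        # New objects: prior N(0, Lambda).
        particles[k, n0:] = rng.multivariate_normal(np.zeros(p), Lambda, size=n_new)
    return particles
```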
In general, we can consider splitting the data into \(B\) batches, where batch \(b\) has size \(n_{b}-n_{b-1}\), for \(b=1,\ldots,B\), with \(n_{0}=0\). Figure 1 illustrates the setup for the adaptive mechanism when the \(b^{\mathrm{th}}\) batch of data with size \(n_{b}-n_{b-1}\) is observed after the posterior samples of the \(n_{b-1}\) old observations have been obtained. Each batch is processed sequentially as outlined in Algorithm 3.
Figure 1: An illustration of the batch split.
```
Input : (a) Number of batches: \(B\); (b) Data: \(\mathbf{Z}=\{\mathbf{z}_{1},\ldots,\mathbf{z}_{n}\}\); (c) Dissimilarity metric: \(\mathcal{D}\). Output : (a) Marginal likelihood estimates \(\hat{Z}_{R}\); (b) Posterior approximation, \(\hat{\pi}(\boldsymbol{s})=\sum_{k=1}^{K}W_{R,k}\times\delta_{\boldsymbol{s}_{ R,k}}(\boldsymbol{s})\).
1for\(b\in\{1,2,\ldots,B\}\)do
2 Calculate dissimilarities \(\mathbf{D}\) from \(\mathbf{z}_{1:n_{b}}\) with a given dissimilarity metric \(\mathcal{D}\).
3if\(b=1\)then
4 Set both \(\mathbf{D}^{(0)}\) and \(\tilde{\pi}_{0}\left(\cdot\right)\) to \(\emptyset\).
5 Set \(\tilde{\pi}_{1}\left(\mathbf{x}_{i}\right)\) to \(\mathcal{N}\left(\mathbf{x}_{i}|\mathbf{x}_{i}^{\text{CMDS}},0.01\boldsymbol{I }\right)\) for \(i=1,\ldots,n_{1}\).
6else
7 Set \(\tilde{\pi}_{0}\left(\mathbf{x}_{i}\right)\) to \(\mathcal{N}\left(\mathbf{x}_{i}|\hat{\mathbf{x}}_{i},\hat{\boldsymbol{\Sigma}}\right)\) for \(i=1,\ldots,n_{b-1}\).
8 Set \(\tilde{\pi}_{1}\left(\mathbf{x}_{i}\right)\) to \(\mathcal{N}\left(\mathbf{x}_{i}|\mathbf{0},\Lambda\right)\) for \(i=n_{b-1}+1,\ldots,n_{b}\).
9\(\{\boldsymbol{s}_{0,k}\}_{k=1}^{K}\leftarrow\texttt{Particle\_Initialization}\left( \mathbf{D}^{(0)},\mathbf{D},\tilde{\pi}_{0}\left(\mathbf{X}\right),\tilde{\pi} _{1}\left(\mathbf{X}\right),\pi(\boldsymbol{\theta}),K\right)\)
10\(\{(\boldsymbol{s}_{R,k},W_{R,k})\}_{k=1}^{K},\hat{Z}_{R}\leftarrow\texttt{ Annealed\_SMC}\left(\{\boldsymbol{s}_{0,k}\}_{k=1}^{K},\tilde{\pi}_{0} \left(\mathbf{X}\right),\tilde{\pi}_{1}\left(\mathbf{X}\right),\pi(\boldsymbol {\theta}),l(\mathbf{D}|\boldsymbol{s}),\phi,\epsilon\right)\)
11 Posterior approximation: \(\hat{\pi}(\boldsymbol{s}^{(b)})=\sum_{k=1}^{K}W_{R,k}\times\delta_{\boldsymbol {s}_{R,k}}(\boldsymbol{s})\).
12 Compute the weighted mean \(\hat{\mathbf{x}}_{i}\) and covariance \(\hat{\boldsymbol{\Sigma}}\), \(i=1,\ldots,n_{b}\) from \(\{(\boldsymbol{s}_{R,k},W_{R,k})\}_{k=1}^{K}\).
13 Reset \(\mathbf{D}^{(0)}\leftarrow\mathbf{D}\).
```
**Algorithm 3**Adaptive_Annealed_SMC
## 4 Simulation Studies
We established the values for the prior parameters by utilizing empirical Bayes methods, following the recommendations outlined in Oh and Raftery (2001). For the prior of \(\sigma^{2}\), we chose \(a=5\) and \(b=SSR/m\) obtained from CMDS. For the prior of \(\psi\), we chose \(c=-2\) and \(d=2\). For the hyperprior of \(\lambda_{k}\), we set \(\alpha=1/2\) and \(\beta_{k}=\frac{1}{2}s_{k}^{(0)}/n\), where \(s_{k}^{(0)}/n\) is the sample variance of the \(k\)th coordinate of \(\mathbf{X}\) from CMDS. For the mixing distribution of \(\zeta_{i,j}\), we used degrees of freedom \(\nu=5\). These parameter values deliver satisfactory results in the simulation studies, and the same values of the prior parameters are used in all examples unless otherwise specified.
In the Metropolis-Hastings algorithm, the multiplicity constant of the variance of the Gaussian proposal density for generating \(\mathbf{x}_{i}\) and \(\sigma^{2}\) is chosen based on the characteristics of the data to ensure rapid mixing. Since the number of significant attributes \(p\) is often unknown, most examples use \(p=2\) for the purpose of visualization. In the annealed SMC algorithm, we set the number of particles to \(K=200\), the rCESS threshold to \(\phi=0.8\), and the resampling threshold to \(\epsilon=0.5\).
The primary objective of this simulation study is to evaluate the performance of various models under diverse data structures. We compared several candidate models, denoted as \(\mathcal{M}_{g}^{\mathcal{D}}\), where \(g\) represents the model distribution for dissimilarities and \(\mathcal{D}\) is the dissimilarity metric used during estimation. We examined two experimental settings, one with skewed errors and the other with outliers. We present the outcomes from 20 runs with different random seeds.
### Experiment 1: Data with skewed errors
We started by testing how the proposed model performs when data skewness is present. A detailed description of the data generation process with skewed errors is given in the _Supplementary_. We simulated the accurate/unobserved observations \(\mathbf{X}\) from a mixture of Gaussian distributions. Next, the noisy/observed observations \(\mathbf{Z}\) were generated through a two-step process. First, we introduced minor errors into all observations in \(\mathbf{X}\) to simulate the systematic errors arising from data measurement. Second, varying percentages of the observations were subjected to contamination by moderate and significant errors, replicating the scenario in which some observations are inaccurately recorded during data measurement. Specifically, moderate and significant errors were introduced into 20% and 2% of the observations, respectively. The Euclidean metric was then applied to obtain the dissimilarities \(d_{i,j}\) from the noisy observations \(\mathbf{Z}\) and the dissimilarities \(\tilde{d}_{i,j}\) from the accurate observations \(\mathbf{X}\). The errors \(\epsilon_{i,j}\) were computed as:
\[\epsilon_{i,j}=d_{i,j}-\tilde{d}_{i,j},\qquad i\neq j,\;i,j=1,\ldots,n.\]
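A minimal sketch of this error construction is given below (Python; the contamination magnitudes and the simple Gaussian design for \(\mathbf{X}\) are illustrative stand-ins for the process described in the Supplementary).

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n, q = 100, 5
X = rng.normal(size=(n, q))                                   # accurate, unobserved data

Z = X + rng.normal(scale=0.05, size=(n, q))                   # minor errors on every observation
idx_mod = rng.choice(n, size=int(0.20 * n), replace=False)    # 20%: moderate errors
Z[idx_mod] += rng.normal(scale=0.5, size=(len(idx_mod), q))
idx_big = rng.choice(n, size=int(0.02 * n), replace=False)    # 2%: significant errors
Z[idx_big] += rng.normal(scale=2.0, size=(len(idx_big), q))

d_noisy = pdist(Z)          # Euclidean dissimilarities d_{i,j} from noisy data
d_true = pdist(X)           # dissimilarities tilde{d}_{i,j} from accurate data
eps = d_noisy - d_true      # errors epsilon_{i,j}
```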
A histogram of the errors \(\epsilon_{i,j}\) from one run is shown in Figure 2(a). Our goal is to compare the performance of the proposed model \(\mathcal{M}^{\text{Euclidean}}_{TSN}\) with the standard model with truncated Gaussian errors, \(\mathcal{M}^{\text{Euclidean}}_{TN}\), using the log marginal likelihoods when the data are contaminated by skewed errors.
Figure 2(b) shows the performance comparison in terms of the log marginal likelihood when skewed errors are present. The results demonstrate that the model incorporating the truncated skewed Gaussian performs better, since the data under consideration are contaminated primarily by skewed errors. This provides evidence that skewed error distributions are necessary for modeling in certain circumstances.
Figure 2: (a) Histogram of the errors. (b) The boxplots of the log marginal likelihood for different cases and models.
### Experiment 2: Data with outliers
In the following experiment, we used the wine dataset (Dua and Graff, 2017) to investigate the robustness of the proposed models. The wine dataset is obtained from a chemical analysis of wines grown in the same region in Italy but derived from different cultivars. The data contain 129 observations of wines with 13 constituents found in each wine. From a preliminary analysis of the wine types, 10 observations were labelled as outliers. We first calculated the Euclidean dissimilarities from the raw observations. To study the robustness of different models, we added more outliers by randomly selecting a proportion of the dissimilarities and quadrupling their values. We considered two scenarios with varying proportions of outliers in the dissimilarities; the first contains 10% outliers, and the second 20%.
Figure 3(a) shows the histograms of the dissimilarity \(d_{i,j}\)'s under the two scenarios. In scenario 2, the increased percentage of outliers leads to a heavier tail in the dissimilarity histogram. In this simulation, we assume the shape parameter \(\psi=0\) to simplify the truncated skewed Gaussian to the standard truncated Gaussian. We test the robustness of the models \(\mathcal{M}_{TN}^{\text{Euclidean}}\) and \(\mathcal{M}_{T}^{\text{Euclidean}}\) in both scenarios.
Figure 3(b) shows the marginal likelihood in log scale for the two models under the two scenarios. In scenario 1, where the data contain only 10% outliers, the histogram of the dissimilarities does not show a heavy tail. According to the left boxplot in Figure 3(b), the model \(\mathcal{M}_{TN}^{\text{Euclidean}}\) is preferred since it produces higher log marginal likelihoods overall. When the percentage of outliers is increased to 20%, an obviously heavier tail can be observed in the right histogram in Figure 3(a), indicating that fitting the more robust model \(\mathcal{M}_{T}^{\text{Euclidean}}\) is favored. We also find that the model \(\mathcal{M}_{T}^{\text{Euclidean}}\) produces smaller variances across seeds in both scenarios.
Figure 3: (a) The histograms of the dissimilarity \(d_{i,j}\) under Euclidean metrics. The left histogram is from scenario 1, where the data contain 10% outliers. The right histogram is from scenario 2, where the data contain 20% outliers. (b) The boxplots of the log marginal likelihood for different models. Red: \(\mathcal{M}_{TN}^{\text{Euclidean}}\). Blue: \(\mathcal{M}_{T}^{\text{Euclidean}}\). Dimension \(p\) is 2.
## 5 Data Applications
### NIPS text data with incremental dimensions
In the first example, we demonstrate the performance of adaptive inference with the annealed SMC algorithm on text data with incremental dimensions. The text data are derived from real articles from the Conference on Neural Information Processing Systems (NIPS). The NIPS dataset contains NIPS conference papers published between 1987 and 2015 (Dua and Graff, 2017; Perrone et al., 2017). In this study, we focus on a subset of the NIPS dataset, comprising a matrix of word counts extracted from 55 articles. This matrix is referred to as the document-term matrix, which is constructed after tokenization, stop-word removal, and truncation of the corpus to words appearing more than fifty times. The document-term matrix has counts for a list of 15005 words. Instead of the Euclidean dissimilarity, we consider the Cosine dissimilarity, which is suitable for discrete data such as word counts since it measures how dissimilar the documents are irrespective of their sizes.
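The Cosine dissimilarity between two documents can be computed from the document-term matrix as in the short sketch below (Python; illustrative, not the authors' preprocessing pipeline).

```python
import numpy as np

def cosine_dissimilarity(counts):
    """Pairwise cosine dissimilarity 1 - cos(z_i, z_j) between rows of a
    documents-by-terms count matrix; invariant to document length."""
    norms = np.linalg.norm(counts, axis=1, keepdims=True)
    unit = counts / np.clip(norms, 1e-12, None)
    return 1.0 - np.clip(unit @ unit.T, -1.0, 1.0)

# Toy document-term matrix: 3 documents, 4 words.
counts = np.array([[2, 0, 1, 0],
                   [4, 0, 2, 0],     # same direction as document 1 -> dissimilarity ~ 0
                   [0, 3, 0, 1]], dtype=float)
print(np.round(cosine_dissimilarity(counts), 3))
```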
We fitted the model \(\mathcal{M}_{T}^{\text{Cosine}}\) and compared the results in terms of STRESS values and computational times. The number of significant attributes \(p\) is assumed to be 2. In this toy example, we considered the cases with \(n_{0}=10,50\) and \(n_{1}=1,5\). For each combination of \(n_{0}\) and \(n_{1}\), we looked at two cases: one uses the annealed SMC algorithm of fixed dimension on the \(n=n_{0}+n_{1}\) observations with 100 particles, while the other uses the annealed SMC algorithm of incremental dimension with 50 particles, given that the results from the \(n_{0}\) observations are already known. In the second case, we can achieve similar results using a smaller set of particles since we borrow information from the estimation with \(n_{0}\) observations. From Table 1, it can be seen that the STRESS values from both cases are close, which validates the performance of the annealed SMC algorithm for incremental dimension. For the two cases with \(n_{0}=10\), the computation times decreased by an average of 55% when applying the annealed SMC algorithm for incremental dimension. For the two cases with \(n_{0}=50\), the computation times fell by about 50% when applying the adaptive inference.
\begin{table}
\begin{tabular}{c|c c} Observations & STRESS & Time (in sec) \\ \hline n = 11 & 0.8764 & 18.4 \\ \(n_{0}=10,n_{1}=1\) & 0.8568 & 8.3 \\ n = 15 & 0.8128 & 31.6 \\ \(n_{0}=10,n_{1}=5\) & 0.7842 & 13.7 \\ \end{tabular}
\begin{tabular}{c|c c} Observations & STRESS & Time (in sec) \\ \hline n = 51 & 0.7837 & 608.1 \\ \(n_{0}=50,n_{1}=1\) & 0.7734 & 318.4 \\ n = 55 & 0.8003 & 697.3 \\ \(n_{0}=50,n_{1}=5\) & 0.8036 & 328.4 \\ \end{tabular}
\end{table}
Table 1: A summary of the STRESS values and computation times from applying the annealed SMC on GBMDS to observations with different dimensions. The first and third rows present the results from applying the annealed SMC algorithm of fixed dimension to all observations. The results in the second and fourth rows come from running the annealed SMC algorithm of incremental dimension, given that the results from the \(n_{0}\) observations are known. All results are averages over 20 runs.
### Geographical data
The second example aims to study the performance of the proposed method and to present visualizations of the estimates with uncertainty measures. As an illustrative example for comparing the CMDS and BMDS methods, we considered the US cities dataset from the US Census Bureau (Census Bureau, 2021), which contains Latitude and Longitude information for 15 large US cities. To evaluate the performance of GBMDS, we appended 10 noise variables to add complexity. These noise variables are generated from a Gaussian distribution with mean 0 and variance comparable to that of the "Latitude" or "Longitude" variables.
We performed experiments across three scenarios, each of which had a distinct set of noises added to the data. The noises are incorporated into the data by assigning varying weights to the true and noisy variables. We represent the signal-to-noise ratio as \(R_{\text{s:n}}\), which is defined as the ratio of the weight of the true signal to that of the noise. If \(R_{\text{s:n}}>1\), it indicates that there is more signal than noise. In the first scenario, we assigned equal importance to all variables, resulting in \(R_{\text{s:n}}=1\). In this case, the results depend equally on the signal and noisy variables. In the second scenario, we placed more emphasis on crucial variables such as "Latitude" and "Longitude" by decreasing the weights assigned to redundant noises, setting \(R_{\text{s:n}}=4\). In the third scenario, we tested an extreme condition where the majority of the weights were allocated to the "Latitude" and "Longitude" variables, setting \(R_{\text{s:n}}=10\). In all experiments, we normalized the weights to ensure they sum up to 1.
We employed two Bayesian methods, MCMC and annealed SMC (ASMC), to implement our proposed GBMDS. To initialize the GBMDS, we utilized the results from CMDS. For simplicity, we assumed \(\psi=0\) to reduce the model to \(\mathcal{M}_{TN}^{\text{Euclidean}}\). To ensure a fair comparison between the two Bayesian methods, we kept the computational budget constant. Specifically, we first ran annealed SMC with 300 particles and recorded the number of iterations. We then allocated the same budget to MCMC by setting the number of MCMC iterations equal to the product of the annealed SMC iterations and the number of particles.
Table 2 presents the STRESS values obtained from CMDS, GBMDS with MCMC (GBMDS-MCMC), and GBMDS with ASMC (GBMDS-ASMC) for the three scenarios. Our analysis indicates that Bayesian approaches yield lower STRESS values across all three scenarios. Furthermore, we observed that annealed SMC outperformed MCMC in terms of generating smaller STRESS values under the same computational budget in Scenarios 2 and 3. It is interesting to discover that the better performance of annealed SMC is more pronounced when the signal-to-noise ratios are high.
\begin{table}
\begin{tabular}{l l l l} & CMDS & GBMDS-MCMC & GBMDS-ASMC \\ \hline Scenario 1: \(R_{\text{s:n}}=1\) & 0.4557 & **0.3231** & 0.3521 \\ Scenario 2: \(R_{\text{s:n}}=4\) & 0.4680 & 0.4327 & **0.3910** \\ Scenario 3: \(R_{\text{s:n}}=10\) & 0.4726 & 0.4414 & **0.4103** \\ \end{tabular}
\end{table}
Table 2: A summary of the STRESS values for different methods on the US City data under scenarios 1 to 3 with different signal-to-noise ratios.
Figure 4 displays the estimated locations of the 15 US cities obtained by GBMDS-ASMC. Some transformations, such as rotation and reflection, were applied to the estimated locations from GBMDS-ASMC to fit the cities' actual geographical locations. One can observe from Figures 4(a) and 4(d) that under the equal-weight scenario, several cities are geographically misplaced no matter what transformations are applied. The reason behind this mismatch is that the information in the "Latitude" and "Longitude" variables is masked by the remaining variables. The estimated locations shown in Figures 4(e) and 4(f) lead to a closer match when higher weights are assigned to the "Latitude" and "Longitude" variables.
Figure 4: Estimated locations of the 15 US cities from CMDS and GBMDS-ASMC after transformations. Sub-figures (a) to (c) are results from CMDS and (d) to (f) are from GBMDS-ASMC. \(R_{\mathrm{s:n}}=1\) in (a) and (d), \(R_{\mathrm{s:n}}=4\) in (b) and (e), \(R_{\mathrm{s:n}}=10\) in (c) and (f). For GBMDS-ASMC, the ellipses are generated from all the posterior samples as 95% credible regions. The posterior medians of the \(\mathbf{x}_{i}\)'s serve as the estimated coordinates of the 15 US cities in the two-dimensional space.

The Bayesian approach offers several advantages over the classical approach. In addition to producing smaller STRESS values, it enables the estimation of uncertainty by leveraging samples from the posterior distribution. To this end, we performed Procrustes transformations on the posterior samples of \(\mathbf{x}_{i}\) to align each sample as closely as possible to the estimated coordinates, effectively standardizing all the posterior samples of \(\mathbf{x}_{i}\). Using the transformed posterior samples, we constructed credible regions, represented as ellipses in Figures 4(d) to 4(f). In contrast to CMDS, our GBMDS-ASMC method offers uncertainty measures, with tight credible regions in scenarios where the signal-to-noise ratio is high and wider credible regions when more noise exists in the data.
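A minimal sketch of this post-processing step is shown below (Python/SciPy; illustrative rather than the authors' R code). Each posterior draw of the configuration is Procrustes-aligned to the estimated coordinates, after which per-object credible ellipses can be computed from the aligned draws; note that scipy.spatial.procrustes also standardizes the scale of both configurations.

```python
import numpy as np
from scipy.spatial import procrustes

def align_posterior_samples(samples, reference):
    """Procrustes-align posterior samples of the configuration to a reference.

    samples   : (K, n, p) array of posterior draws of the object coordinates
    reference : (n, p) estimated configuration (e.g. posterior medians)
    Returns the aligned draws (on the standardized Procrustes scale), from which
    credible regions can be constructed for each object.
    """
    aligned = np.empty_like(samples)
    for k, sample in enumerate(samples):
        # procrustes centers/scales both inputs and maps `sample` onto `reference`
        _, aligned[k], _ = procrustes(reference, sample)
    return aligned
```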
### NIH text data
In this example, we applied the MDS techniques to text data consisting of documents and words. Our aim was to showcase the application of our proposed method and to investigate the effect of the dimension \(p\). The dataset holds information on research grants awarded by the National Institutes of Health (NIH) in 2014 (Jones, 2021). The raw data contain 100 randomly-sampled grant abstracts and metadata. To preprocess the data, we performed tokenization and removed stop words. This results in a document-term matrix of dimension 23915 by 100, where each column represents an abstract and contains the word counts for all words in that abstract. We then re-weighted the word counts by multiplying by an inverse document frequency vector to adjust for the relative importance of words in the entire collection of documents. The purpose of this reweighting step was to account for the varying frequencies of words across documents.
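The reweighting step can be sketched as follows (Python; an illustrative inverse-document-frequency weighting, not necessarily the exact vector used by the authors).

```python
import numpy as np

def idf_reweight(counts):
    """Reweight a terms-by-documents count matrix by inverse document frequency.

    counts : (n_terms, n_docs) raw word counts, one column per abstract
    Each term (row) is scaled by log(n_docs / n_docs_containing_term), so words
    that occur in many documents are down-weighted.
    """
    n_docs = counts.shape[1]
    doc_freq = np.count_nonzero(counts, axis=1)
    idf = np.log(n_docs / np.maximum(doc_freq, 1))
    return counts * idf[:, None]
```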
We calculated the Cosine dissimilarities of the documents and used them as input to the MDS methods. We varied the dimension \(p\) from 2 to 6 to investigate its effect on the results. Table 3 displays a comparison of CMDS and GBMDS-ASMC using STRESS. For GBMDS-ASMC, we tested four candidate models, and the optimal model (indicated in bold) was selected based on the log marginal likelihood estimates for each dimension. The results indicate that GBMDS-ASMC provides a better representation of the data in lower-dimensional space, with smaller STRESS values than CMDS. Moreover, the optimal model chosen by the log marginal likelihood estimate is consistent with the model with the smallest STRESS value for \(p>3\). For \(p=2,3\), the optimal models selected by the log marginal likelihood estimates have the third and second smallest STRESS values, respectively.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{6}{c|}{Dimension} \\ \cline{2-7} & 2 & 3 & 4 & 5 & 6 \\ \hline \multicolumn{2}{|c|}{CMDS} & 0.8493 & 0.8171 & 0.7892 & 0.7598 & 0.7335 \\ \hline \multirow{4}{*}{GBMDS-ASMC} & \(\mathcal{M}_{T}^{\text{Euclidean}}\) & **0.7558** & **0.6599** & 0.6571 & 0.6057 & 0.5609 \\ \cline{2-7} & \(\mathcal{M}_{T}^{\text{Cosine}}\) & 0.7715 & 0.6901 & 0.6503 & 0.6357 & 0.5720 \\ \cline{1-1} \cline{2-7} & \(\mathcal{M}_{TSN}^{\text{Euclidean}}\) & 0.4921 & 0.4343 & **0.3942** & **0.3804** & **0.3499** \\ \cline{1-1} \cline{2-7} & \(\mathcal{M}_{TSN}^{\text{Cosine}}\) & 0.7024 & 0.6671 & 0.6152 & 0.5974 & 0.5634 \\ \hline \end{tabular}
\end{table}
Table 3: A summary of the STRESS values from applying the different MDS methods on the text data. For the GBMDS-ASMC method, the STRESS value in bold is the optimal model selected by the largest log marginal likelihood estimates in a given dimension.
## 6 Conclusion
In this work, we developed multiple ways to model the dissimilarity measures for multidimensional scaling, proposed a general adaptive annealed SMC algorithm for Bayesian inference, and applied model selection via marginal likelihoods. We considered the problem of adaptive inference with the annealed SMC algorithm for an increasing dimension of the observations and parameters. The simulation results demonstrate a significant reduction in computational time while maintaining accuracy comparable to annealed SMC with a fixed dimension. We leveraged the rich MCMC literature on classical Metropolis-Hastings moves as the basis of proposal distributions in the adaptive annealed SMC. Moreover, the adaptive annealed SMC algorithm is easy to parallelize over batches of particles, which explicitly takes advantage of parallel processors and boosts computational efficiency. Both the simulations and the real data applications demonstrate the accuracy and effectiveness of the proposed adaptive annealed SMC algorithm for dimension reduction and model comparison.
On the basis of the GBMDS estimates of object configuration over various models and a range of dimensions \(p\), we proposed to use the marginal likelihood to choose the optimal combination. Compared with choosing dimensions by a simple Bayesian criterion in Oh and Raftery (2001), our fully Bayesian approach can incorporate prior comparison and directly utilize the unbiased marginal likelihood estimator from the annealed SMC algorithm for the choice of model and dimension. In contrast to the frequentist model selection via STRESS, which generally relies on the specifically-constructed statistics applicable to particular cases, Bayesian model selection has the simplicity of a maximum likelihood method regardless of the data or model being used as well as the advantage of penalizing more complex models. Furthermore, we obtained the marginal likelihood estimate as a byproduct of sampling from the algorithm. This efficient method of estimating the marginal likelihood from the annealed SMC lays the foundation for comparing MDS models with different metrics and dimensions. This also gives a notable computational advantage of the proposed annealed SMC over the existing MCMC-based methods.
We have implemented the Procrustes transformations on the posterior samples to deal with the non-identifiability issue. The transformed posterior samples are used to construct the credible regions to display uncertainty measures. In the geographical data example, we noted that the credible regions for some observations are relatively broad, indicating that one should interpret specific patterns with care. Other appropriate transformations to post-process the posterior samples can be considered to improve the interpretability. In addition, it would be interesting to investigate the influence of the number of particles in the annealed SMC algorithm on the credible regions.
One application of MDS is to visualize objects in a reduced dimensional space. Our proposed framework can help find a good model for the dissimilarities and an optimal \(p\) for the dimension reduction. Typically, the visualization works better for the model with the Euclidean metric in a two-dimensional space. It is beyond the scope of this study to investigate the possible visualizations for the optimal \(p\)-dimensional space with a non-Euclidean metric. This will be left for exploration in future work.
Another interesting application of MDS is to cluster objects, where similar observations are grouped into clusters based on dissimilarity. With MDS techniques, we can
obtain the coordinates of objects in a low-dimensional space. Visual display of clusters in a low-dimensional space is of interest since it may provide helpful information about the relationship between the groups and the underlying data generation process. Model-based clustering with dissimilarities was proposed by Oh and Raftery (2007), and the resulting model is estimated in a Bayesian way using MCMC. One can modify the adaptive annealed SMC algorithm to perform MDS and model-based clustering simultaneously. This model-based clustering algorithm also applies to text clustering when appropriate metrics are used to describe dissimilarities between texts. Moreover, in this study, we have explored the sequential estimation of object configuration using the adaptive annealed SMC algorithm. To enhance the computational efficiency of a single run of the annealed SMC algorithm outlined in Algorithm 2, one can consider implementing the resampling method as seen in Gunawan et al. (2020) or employ more intricate approaches for parallelization. These strategies can enable the scalability of the proposed method to accommodate datasets with substantial sample sizes.
## Supplementary Material
Supplementary Materials to "Generalized Bayesian Multidimensional Scaling and Model Comparison". Supplementary materials include (i) a document with the details of the dissimilarity metrics, likelihood function derivation, details of the particle propagation step, and details of the data-generating process in simulation studies. (ii) R code which implements the proposed model and some demos from Sections 4 and 5.
|
2304.03913
|
Collective flows of clusters and pions in heavy-ion collisions at GeV
energies
|
Within the framework of the quantum molecular dynamics transport model, the
collective flows of clusters and pions in heavy-ion collisions have been
systematically investigated. The clusters are recognized by the Wigner
phase-space density approach at the stage of freeze out in nuclear collisions,
i.e., deuteron, triton, $^{3}$He and $\alpha$. The directed and elliptic flows
of protons and deuterons in the reaction of $^{197}$Au+$^{197}$Au at incident
energy 1.23\emph{A} GeV are nicely consistent with the recent HADES data. The
higher order collective flows, i.e., triangular and quadrangle flows, manifest
the opposite trends with the less amplitude in comparison with the rapidity
distributions of directed and elliptic flows. The flow structure of $^{3}$He
and $\alpha$ is very similar to the proton spectra. The influence of the pion
potential on the pion production is systematically investigated and compared
with the FOPI data via the transverse momentum, longitudinal rapidity and
collective flows in collisions of $^{197}$Au + $^{197}$Au. It is manifested
that the pion yields are slightly suppressed in the domain of mid-rapidity and
high momentum. The antiflow phenomena is reduced by implementing the pion
potential and more consistent with the FOPI data in collisions of
$^{197}$Au+$^{197}$Au at the incident energy 1.5\emph{A} GeV.
|
Heng-Jin Liu, Hui-Gan Cheng, Zhao-Qing Feng
|
2023-04-08T05:14:50Z
|
http://arxiv.org/abs/2304.03913v2
|
# Collective flows of clusters and pions in heavy-ion collisions at GeV energies
###### Abstract
Within the framework of the quantum molecular dynamics transport model, the collective flows of clusters and pions in heavy-ion collisions have been systematically investigated. The clusters are recognized by the Wigner phase-space density approach at the stage of freeze out in nuclear collisions, i.e., deuteron, triton, \({}^{3}\)He and \(\alpha\). The directed and elliptic flows of protons and deuterons in the reaction of \({}^{197}\)Au+\({}^{197}\)Au at incident energy 1.23\(A\) GeV are nicely consistent with the recent HADES data. The higher-order collective flows, i.e., triangular and quadrangular flows, manifest opposite trends with smaller amplitudes in comparison with the rapidity distributions of directed and elliptic flows. The flow structure of \({}^{3}\)He and \(\alpha\) is very similar to the proton spectra. The influence of the pion potential on the pion production is systematically investigated and compared with the FOPI data via the transverse momentum, longitudinal rapidity and collective flows in collisions of \({}^{197}\)Au + \({}^{197}\)Au. It is manifested that the pion yields are slightly suppressed in the domain of mid-rapidity and high momentum. The antiflow phenomenon is reduced by implementing the pion potential and is more consistent with the FOPI data in collisions of \({}^{197}\)Au+\({}^{197}\)Au at the incident energy 1.5\(A\) GeV.
**PACS number(s)**: 24.10.Lx, 25.70.Mn, 25.75.Ld
## I. Introduction
Cluster and particle production in heavy-ion collisions at intermediate energies brings in-medium information on dense matter, i.e., the correlation and binding of a few nucleons, the cluster potential in nuclear matter, the in-medium properties of particles, etc. The collective flows manifest the anisotropic emission of particles from the fireball formed in heavy-ion collisions [1]. The in-medium properties of hadrons and dense nuclear matter might be extracted from the analysis of collective flows, i.e., the rapidity, transverse momentum, and kinetic energy spectra. The collective flows of nuclear fragments, free nucleons, pions, kaons and hyperons have been extensively investigated both in theory and in experiment [2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. Heavy-ion collisions at GeV energies provide the possibility of investigating strongly interacting matter at high temperature and high baryon density in the laboratory, which is significant for extracting the properties of the nuclear equation of state (EOS) and the first-order phase transition between quark-gluon matter and hadronic matter, and for searching for the critical point. The collective flows are obtained from the Fourier decomposition of the azimuthal angle \(\phi\) distribution of emitted particles with respect to the reaction plane [12]. The in-plane and out-of-plane flows of particle emission are usually investigated to extract information on the early stage of heavy-ion collisions, in particular the flow magnitude, slope parameter and flow structure in phase space [13, 14, 15, 16, 17, 18, 19]. In recent years, the higher-order flow harmonics of protons, deuterons and tritons in collisions of \({}^{197}\)Au + \({}^{197}\)Au at the invariant energy of \(\sqrt{s_{NN}}\)=2.4 GeV were measured by the HADES collaboration [20, 21]. Calculations with transport models showed that the triangular flow coefficient \(v_{3}\) exhibits enhanced sensitivity to the nuclear equation of state in heavy-ion collisions at several GeV energies and that a nonvanishing fourth-order coefficient \(v_{4}\) can be used to constrain the nuclear mean-field potential at high baryon densities [17, 18].
On the other hand, the collective flows of hyperons and hypernuclides are promising observables for investigating the hypernuclear formation, hyperon-nucleon interaction potential and hyperon-nucleon scattering [22]. The azimuthal distribution of pions in heavy-ion collisions is influenced by the pion-nucleon potential, delta-nucleon interaction, rescattering cross sections of pions and nucleons and stiffness of symmetry energy. The deep analysis of pion flows is also helpful for shedding light on the uncertainties of constraining the high-density symmetry energy, which has important application in determining the mass-radius relation and the maximal mass of neutron stars, and is associated with the binary merging of neutron stars, frequency of gravitational wave signal, X-ray and neutrino spectra from neutron stars [23].
In this work, the collective flows of nuclear clusters and pions in collisions of \({}^{197}\)Au+\({}^{197}\)Au at 1.23\(A\) GeV and 1.5\(A\) GeV are systematically investigated by the Lanzhou quantum molecular dynamics (LQMD) transport model. The experimental data from the HADES and FOPI collaborations are thoroughly analyzed. The article is organized as follows. In Sec. II we briefly introduce the LQMD model and light fragment recognition. The results are discussed in Sec. III. A summary and an outlook are given in Sec. IV.
## II. The model description
In the LQMD transport model, the production of resonances, hyperons and mesons is coupled in the reactions
of meson-baryon and baryon-baryon collisions, which has been used for the nuclear dynamics in heavy-ion collisions and hadron induced reactions [24; 25]. The temporal evolutions of nucleons and nucleonic resonances are described by Hamilton's equations of motion under the self-consistently generated two-body and three-body interaction potentials with the Skyrme force for the \(i-\)th nucleon in the system as
\[\dot{r_{i}}=\frac{\partial H}{\partial p_{i}},\quad\dot{p_{i}}=-\frac{\partial H }{\partial r_{i}}. \tag{1}\]
The Hamiltonian of baryons consists of the relativistic energy, Coulomb interaction, momentum dependent potential energy and local interaction as follows
\[H_{B}=\sum_{i}\sqrt{{\bf p}_{i}^{2}+m_{i}^{2}}+U_{Coul}+U_{mom}+U_{loc}. \tag{2}\]
Here the \({\bf p}_{i}\) and \(m_{i}\) represent the momentum and the mass of the baryons. The local interaction potential is evaluated from the energy-density functional of
\[U_{loc}=\int V_{loc}(\rho({\bf r}))d{\bf r} \tag{3}\]
with
\[V_{loc}(\rho) = \frac{\alpha}{2}\frac{\rho^{2}}{\rho_{0}}+\frac{\beta}{1+\gamma} \frac{\rho^{1+\gamma}}{\rho_{0}^{\gamma}}+E_{sym}^{loc}(\rho)\rho\delta^{2} \tag{4}\] \[+ \frac{g_{sur}}{2\rho_{0}}(\nabla\rho)^{2}+\frac{g_{sur}^{iso}}{2 \rho_{0}}[\nabla(\rho_{n}-\rho_{p})]^{2},\]
where \(\rho_{n}\), \(\rho_{p}\) and \(\rho=\rho_{n}+\rho_{p}\) are the neutron, proton and total densities, respectively, and \(\delta=(\rho_{n}-\rho_{p})/(\rho_{n}+\rho_{p})\) is the isospin asymmetry of baryonic matter. The parameters \(\alpha\), \(\beta\), \(\gamma\), \(g_{sur}\), \(g_{sur}^{iso}\) and \(\rho_{0}\) are set to -215.7 MeV, 142.4 MeV, 1.322, 23 MeV fm\({}^{2}\), -2.7 MeV fm\({}^{2}\) and 0.16 fm\({}^{-3}\), respectively. This parameter set gives a compression modulus of K=230 MeV for isospin-symmetric nuclear matter at the saturation density (\(\rho_{0}=0.16\) fm\({}^{-3}\)). The third term contributes the local symmetry energy of the form \(E_{sym}^{loc}=\frac{1}{2}C_{sym}(\rho/\rho_{0})^{\gamma_{s}}\), with the parameter \(C_{sym}\) taken to be 52.5 MeV. The exponent \(\gamma_{s}\) can be adjusted by constraining isospin observables, with the values 0.3, 1 and 2 corresponding to the soft, linear and hard symmetry energy, respectively; the linear symmetry energy is adopted in this calculation. Combined with the kinetic energy from the isospin difference of nucleonic Fermi motion, the three forms of symmetry energy cross at the saturation density with the value of 31.5 MeV [24].
A Skyrme-type momentum-dependent potential energy is used as follows
\[U_{mom} = \frac{1}{2\rho_{0}}\sum_{i,j,j\neq i}\sum_{\tau,\tau^{\prime}}C_ {\tau,\tau^{\prime}}\delta_{\tau,\tau_{i}}\delta_{\tau^{\prime},\tau_{j}}\int \int d{\bf p}d{\bf p}^{\prime}d{\bf r} \tag{5}\] \[\times f_{i}({\bf r},{\bf p},t)[\ln(\epsilon({\bf p}-{\bf p}^{ \prime})^{2}+1)^{2}]f_{j}({\bf r},{\bf p}^{\prime},t).\]
Combined with Eq. (4), one can obtain the density, isospin and momentum dependent single-nucleon potential as
\[U_{\tau}(\rho,\delta,{\bf p}) =\alpha\left(\frac{\rho}{\rho_{0}}\right)+\beta\left(\frac{\rho} {\rho_{0}}\right)^{\gamma}+E_{sym}^{loc}(\rho)\delta^{2} \tag{6}\] \[+ \frac{\partial E_{sym}^{loc}(\rho)}{\partial\rho}\rho\delta^{2}+ E_{sym}^{loc}(\rho)\rho\frac{\partial\delta^{2}}{\partial\rho_{\tau}}\] \[+ \frac{1}{\rho_{0}}C_{\tau,\tau}\int d{\bf p}^{\prime}f_{\tau}({ \bf r},{\bf p})[\ln(\epsilon({\bf p}-{\bf p}^{\prime})^{2}+1)]^{2}\] \[+ \frac{1}{\rho_{0}}C_{\tau,\tau^{\prime}}\int d{\bf p}^{\prime}f_{ \tau^{\prime}}({\bf r},{\bf p})\] \[\times [\ln(\epsilon({\bf p}-{\bf p}^{\prime})^{2}+1)]^{2}.\]
Here \(\tau\neq\tau^{\prime}\), \(\partial\delta^{2}/\partial\rho_{n}=4\delta\rho_{p}/\rho^{2}\) and \(\partial\delta^{2}/\partial\rho_{p}=-4\delta\rho_{n}/\rho^{2}\). The nucleon effective (Landau) mass in nuclear matter of isospin asymmetry \(\delta=(\rho_{n}-\rho_{p})/(\rho_{n}+\rho_{p})\) with \(\rho_{n}\) and \(\rho_{p}\) being the neutron and proton density, respectively, is calculated through the potential as \(m_{\tau}^{*}=m_{\tau}/\left(1+\frac{m_{\tau}}{|{\bf p}|}\frac{dU_{\tau}}{d{\bf p }^{\prime}}\right)\) with the free mass \(m_{\tau}\) at Fermi momentum \({\bf p}={\bf p}_{F}\). Here, \(C_{\tau,\tau}=C_{mom}(1+x),C_{\tau,\tau^{\prime}}=C_{mom}(1-x)(\tau\neq\tau^{ \prime})\) and the isospin symbols \(\tau\) and \(\tau^{\prime}\) represent proton or neutron, respectively. The parameters \(C_{mom}\) and \(\epsilon\) were determined by fitting the real part of optical potential as a function of incident energy from the proton-nucleus elastic-scattering data. In the calculation, we take the values of 1.76 MeV and 500 \(c^{2}/GeV^{2}\) for \(C_{mom}\) and \(\epsilon\), respectively, which result in an effective mass ratio \(m^{*}/m=0.75\) in the nuclear media at saturation density for symmetric nuclear matter. The parameter \(x\) as the strength of the isospin splitting with the value of -0.65 is taken in this work, which has the mass splitting of \(m_{n}^{*}>m_{p}^{*}\) in the nuclear medium.
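For illustration, the effective (Landau) mass can be evaluated numerically from any momentum-dependent single-particle potential by taking a finite-difference derivative at the Fermi momentum, as in the sketch below (Python; the toy potential is only schematic and is not the fitted LQMD parameterization of Eq. (6)).

```python
import numpy as np

def effective_mass(U, p_F, m=939.0, dp=1.0):
    """Landau effective mass m* = m / (1 + (m/p) dU/dp) evaluated at p = p_F.

    U   : callable single-particle potential (MeV) as a function of momentum (MeV/c)
    p_F : Fermi momentum in MeV/c; m : free nucleon mass in MeV
    """
    dU_dp = (U(p_F + dp) - U(p_F - dp)) / (2.0 * dp)   # central finite difference
    return m / (1.0 + (m / p_F) * dU_dp)

# Schematic momentum-dependent potential with a logarithmic form (toy numbers).
U_toy = lambda p: 1.76 * np.log(5.0e-4 * p**2 + 1.0) ** 2
print(effective_mass(U_toy, p_F=260.0) / 939.0)        # effective mass ratio m*/m
```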
### 2.1 Light fragment recognition
For the light fragment with \(Z\leq\)2, the Wigner phase-space density is used to evaluate the probability of fragment formation. It is assumed that the cold clusters are created at the freeze-out stage in heavy-ion collisions. The momentum distribution of a cluster with \(M\) nucleons and \(Z\) protons for a system with \(A\) nucleons is given by
\[\frac{dN_{M}}{d{\bf P}} = G_{M}{A\choose M}{M\choose Z}\frac{1}{A^{M}}\int\prod_{i=1}^{Z}f _{p}({\bf r}_{i},{\bf p}_{i})\prod_{i=Z+1}^{M}f_{n}({\bf r}_{i},{\bf p}_{i}) \tag{7}\] \[\times \rho^{W}({\bf r}_{k_{1}},{\bf p}_{k_{1}},...,{\bf r}_{k_{M-1}},{\bf p }_{k_{M-1}})\] \[\times \delta({\bf P}-({\bf p}_{1}+...+{\bf p}_{M}))d{\bf r}_{1}d{\bf p}_{ 1}...d{\bf r}_{M}d{\bf p}_{M}.\]
Here the \(f_{n}\) and \(f_{p}\) are the neutron and proton phase-space density, which are obtained by performing Wigner transformation based on Gaussian wave packet. The relative coordinate \({\bf r}_{k_{1}},...,{\bf r}_{k_{M-1}}\) and momentum \({\bf p}_{k_{1}},...,{\bf p}_{k_{M-1}}\) in the \(M-\)nucleon rest frame are used for
calculating the Wigner density \(\rho^{W}\) [26; 27]. The spin-isospin statistical factor \(G_{M}\) is 3/8, 1/12 and 1/96 corresponding to M=2, 3 and 4, respectively. The root-mean-square radius of the intended cluster is needed for the Wigner density, i.e., 1.61 fm and 1.74 fm for the triton and \({}^{3}\)He, respectively. The method has also been extended to recognize hypernuclide production in heavy-ion collisions [28]. It should be noticed that the nuclear structure effect is neglected by the method and the \(\alpha\) yields are strongly underestimated in comparison with the FOPI data [29]. The clusters can also be created in nucleon-nucleon or nucleon-cluster collisions, which are determined by the correlation of nucleons and are associated with the nuclear density and nucleon momentum [30].
### 2.2 Pion production in heavy-ion collisions
At near-threshold energies, the production of pions is mainly contributed by the direct process and the decay of the resonances \(\Delta(1232)\), \(N^{*}(1440)\) and \(N^{*}(1535)\). The relevant channels are given as follows
\[NN\leftrightarrow N\Delta,\quad NN\leftrightarrow NN^{*},\quad NN \leftrightarrow\Delta\Delta,\] \[\Delta\leftrightarrow N\pi,\ N^{*}\leftrightarrow N\pi,NN \to NN\pi(s-state). \tag{8}\]
The cross sections of each channel for producing resonances are parameterized by fitting the experimental data and calculations with the one-boson exchange model [31]. The energy- and momentum-dependent decay width is used in the calculation. The probabilities of specific decay channels are as follows [32]
\[\Delta^{+}\leftrightarrow\frac{1}{3}\left(\pi^{+}+n\right)+ \frac{2}{3}\left(\pi^{0}+p\right),\] \[\Delta^{0}\leftrightarrow\frac{1}{3}\left(\pi^{-}+p\right)+ \frac{2}{3}\left(\pi^{0}+n\right),\] \[\Delta^{-}\leftrightarrow 1\left(n+\pi^{-}\right),\Delta^{++} \leftrightarrow 1\left(p+\pi^{+}\right). \tag{9}\]
The coefficient of branching ratio is determined by the square of the Clebsch-Gordan coefficients.
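For illustration, the branching in Eq. (9) can be sampled with the squared Clebsch-Gordan weights as in the following sketch (Python; a hypothetical helper, not the LQMD code).

```python
import random

# Branching ratios of Eq. (9) from squared Clebsch-Gordan coefficients.
DELTA_DECAYS = {
    "Delta++": [(1.0, ("p", "pi+"))],
    "Delta+":  [(1 / 3, ("n", "pi+")), (2 / 3, ("p", "pi0"))],
    "Delta0":  [(1 / 3, ("p", "pi-")), (2 / 3, ("n", "pi0"))],
    "Delta-":  [(1.0, ("n", "pi-"))],
}

def sample_decay(resonance, rng=random):
    """Pick a nucleon-pion final state for a Delta resonance by its branching ratio."""
    r, acc = rng.random(), 0.0
    for prob, products in DELTA_DECAYS[resonance]:
        acc += prob
        if r < acc:
            return products
    return DELTA_DECAYS[resonance][-1][1]

print(sample_decay("Delta+"))   # ('n', 'pi+') with probability 1/3, ('p', 'pi0') with 2/3
```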
The transport of pions in the nuclear medium is also described by the Hamiltonian
\[H_{M}=\sum_{i=1}^{N_{M}}\left[V_{i}^{Coul}+\omega\left(p_{i},\rho_{i}\right) \right]. \tag{10}\]
The Coulomb interaction is given by
\[V_{i}^{Coul}=\sum_{j=1}^{N_{B}}\frac{e_{i}e_{j}}{r_{ij}}, \tag{11}\]
where the \(N_{M}\) and \(N_{B}\) are the total numbers of mesons and baryons including charged resonances, respectively. It should be noted that the pion meson is taken as the point particle and the Coulomb interaction between mesons is neglected owing to the limited numbers in comparison with the baryons.
The pion energy in the nuclear medium is composed of the isoscalar and isovector contributions as
\[\omega_{\pi}\left(p_{i},\rho_{i}\right)=\omega_{isoscalar}\left(p_{i},\rho_{ i}\right)+C_{\pi}\tau_{z}\delta\left(\rho/\rho_{0}\right)^{\gamma_{\pi}}. \tag{12}\]
The coefficient \(C_{\pi}=\rho_{0}\hbar^{3}/(4f_{\pi}^{2})=36\) MeV, and the isospin quantities are taken as \(\tau_{z}\)= -1, 0, and 1 for \(\pi^{+}\), \(\pi^{0}\), and \(\pi^{-}\), respectively [33]. The isospin asymmetry \(\delta\)=\(\left(\rho_{n}-\rho_{p}\right)/(\rho_{n}+\rho_{p}\) ) and the quantity \(\gamma_{\pi}\) adjusts the isospin splitting of the pion optical potential. We take \(\gamma_{\pi}\)=2 in the model. For the evaluation of the isoscalar part, we chose \(\Delta\)-hole model [34; 35] which is given by
\[\omega_{isoscalar}\left(p_{i},\rho_{i}\right) =S_{\pi}\left(p_{i},\rho_{i}\right)\omega_{\pi-like}\left(p_{i}, \rho_{i}\right) \tag{13}\] \[+S_{\Delta}\left(p_{i},\rho_{i}\right)\omega_{\Delta-like}\left( p_{i},\rho_{i}\right).\]
The probability of the pion component satisfies the relation by
\[S_{\pi}\left(p_{i},\rho_{i}\right)+S_{\Delta}\left(p_{i},\rho_{i}\right)=1. \tag{14}\]
The value of the probability is determined from the pion self-energy as [36]
\[S\left(p_{i},\rho_{i}\right)=\frac{1}{1-\partial\Pi\left(\omega\right)/ \partial\omega^{2}}, \tag{15}\]
where the pion self-energy is given by
\[\Pi=p_{i}^{2}\frac{\chi}{1-g^{\prime}\chi}, \tag{16}\]
the Migdal parameter \(g^{\prime}\sim 0.6\) and
\[\chi=-\frac{8}{9}\left(\frac{f_{\Delta}}{m_{\pi}}\right)^{2}\frac{\omega_{ \Delta}\rho\hbar^{3}}{\omega_{\Delta}^{2}-\omega^{2}}exp\left(-2p_{i}^{2}/b^{ 2}\right). \tag{17}\]
Here \(\omega_{\Delta}=(m_{\Delta}^{2}+p_{i}^{2})^{1/2}-m_{N}\), and \(m_{\pi}\), \(m_{N}\), and \(m_{\Delta}\) are the pion, nucleon, and delta masses, respectively. The \(\pi N\Delta\) coupling constant is \(f_{\Delta}\sim 2\) and the cutoff factor is \(b\sim 7m_{\pi}\). The two eigenvalues \(\omega_{\pi-like}\) and \(\omega_{\Delta-like}\) are obtained from the pion dispersion relation as
\[\omega^{2}=p_{i}^{2}+m_{\pi}^{2}+\Pi\left(\omega\right). \tag{18}\]
The \(\Delta\)-nucleon interaction is estimated via the nucleon optical potential by
\[U_{\Delta^{-}}=U_{n},\quad U_{\Delta^{++}}=U_{p},\quad U_{\Delta^ {+}}=\frac{1}{3}U_{n}+\frac{2}{3}U_{p},\] \[U_{\Delta^{0}}=\frac{1}{3}U_{p}+\frac{2}{3}U_{n}, \tag{19}\]
where the \(U_{n}\) and \(U_{p}\) are the single-particle potentials for neutron and proton in Eq. (6), respectively. The N\({}^{*}\)-nucleon potential is taken as the same with the nucleon-nucleon potential.
The energy balance in the decay of resonances and reabsorption of pion in nuclear medium is satisfied by the relation
\[\sqrt{m_{R}^{2}+{\bf p}_{R}^{2}}+U_{R}(\rho,\delta,{\bf p}) =\sqrt{m_{N}^{2}+\left({\bf p}_{R}-{\bf p}_{\pi}\right)^{2}}+U_{N}( \rho,\delta,{\bf p}) \tag{20}\] \[+\omega_{\pi}\left({\bf p}_{\pi},\rho\right)+V_{\pi N}^{Coul}.\]
The \({\bf p}_{R}\) and \({\bf p}_{\pi}\) are the momenta of the resonance and pion, respectively. The \(U_{R}\) and \(U_{N}\) are the single-particle potentials for the resonance and nucleon. The last term \(V_{\pi N}^{Coul}\) contributes only to the charged pair channels \(\triangle^{0}\leftrightarrow\pi^{-}+p\) and \(\triangle^{++}\leftrightarrow\pi^{+}+p\). The optical potential can be evaluated from the in-medium energy by \(V_{\pi}^{opt}=\omega_{\pi}({\bf p},\rho)-(m_{\pi}^{2}+{\bf p}^{2})^{1/2}\). Recently, the influence of the pion potential on pion dynamics in heavy-ion collisions has been extensively investigated with different transport models [33; 37; 38; 39; 40; 41; 42]. Shown in Fig. 1 is the pion optical potential as a function of baryon density and of pion momentum (the latter at the saturation density) in neutron-rich nuclear matter with the isospin asymmetry \(\delta=(\rho_{n}-\rho_{p})/(\rho_{n}+\rho_{p})=0.2\). It is obvious that an isospin splitting of the pion potential appears and that the effect is pronounced in the domain of high baryon density, which impacts the charged-pion ratios in heavy-ion collisions. The minimum of the momentum dependence is close to the resonance energy (p=290 MeV/c). The difference between the charged pion potentials is similar to the contribution of the s-wave potential obtained by fitting chiral perturbation theory calculations, with a positive value of \(V_{\pi^{-}}^{opt}-V_{\pi^{+}}^{opt}\) [41].
### 2.3 Collective flows
It is well known that the EOS and the nuclear dynamics in heavy-ion collisions have been widely studied through the analysis of collective flows, such as nucleon, light-particle, and meson flows [4; 5; 6; 10]. The azimuthal distribution of particles produced in intermediate-energy heavy-ion collisions is conveniently parameterised by a Fourier decomposition with coefficients \(v_{n}(p_{t},y)\) as
\[\frac{dN}{N_{0}d\phi}\left(y,p_{t}\right) =1+2v_{1}(y,p_{t})\cos(\phi)+2v_{2}(y,p_{t})\cos(2\phi) \tag{21}\] \[+2v_{3}(y,p_{t})\cos(3\phi)+2v_{4}(y,p_{t})\cos(4\phi),\]
in which \(p_{t}=\sqrt{p_{x}^{2}+p_{y}^{2}}\) and \(y=\frac{1}{2}\ln\frac{E+p_{z}}{E-p_{z}}\) are the transverse momentum and the longitudinal rapidity with the total energy \(E\), respectively. The directed flow \(v_{1}=\left\langle p_{x}/p_{t}\right\rangle=\left\langle\cos(\phi)\right\rangle\) and the elliptic flow \(v_{2}=\left\langle(p_{x}^{2}-p_{y}^{2})/p_{t}^{2}\right\rangle=\left\langle\cos(2\phi)\right\rangle\) manifest the competition between the in-plane (\(v_{2}>0\)) and out-of-plane (\(v_{2}<0\)) particle emissions. The triangular flow \(v_{3}=\left\langle\left(p_{x}^{3}-3p_{x}p_{y}^{2}\right)/p_{t}^{3}\right\rangle=\left\langle\cos(3\phi)\right\rangle\) and the quadrangular flow \(v_{4}=\left\langle\left(p_{x}^{4}+p_{y}^{4}-6p_{x}^{2}p_{y}^{2}\right)/p_{t}^{4}\right\rangle=\left\langle\cos(4\phi)\right\rangle\) manifest the anisotropic distributions in the plane perpendicular to the beam direction. The directed flow in the reaction plane is influenced by the pressure gradient of the nuclear matter formed in heavy-ion collisions. The difference between in-plane and out-of-plane emission of particles is embodied in the elliptic flow distribution. The rapidity and transverse momentum distributions of the flows are related to the collision centrality of the reaction system, the mean-field potential, the scattering and reabsorption of particles in the nuclear medium, the symmetry energy, etc.
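In transport simulations the reaction plane is known exactly, so these coefficients can be estimated by directly averaging the corresponding cosine terms over the emitted particles, as in the minimal sketch below (Python; hypothetical input arrays, with the x-z plane taken as the reaction plane).

```python
import numpy as np

def flow_coefficients(px, py, pz, E, y_window=(-0.05, 0.05), orders=(1, 2, 3, 4)):
    """Estimate v_n = <cos(n*phi)> for particles inside a rapidity window.

    px, py, pz, E : arrays of momentum components and total energies (same units)
    phi is the azimuthal angle measured with respect to the reaction (x-z) plane.
    """
    y = 0.5 * np.log((E + pz) / (E - pz))            # longitudinal rapidity
    sel = (y > y_window[0]) & (y < y_window[1])
    phi = np.arctan2(py[sel], px[sel])
    return {n: float(np.mean(np.cos(n * phi))) for n in orders}
```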
## III. Results and discussion
The cluster production in heavy-ion collisions is associated with the nucleon (cluster) mean-field potential,
Figure 1: (a) Density and (b) momentum dependence of the optical potential of pion in the neutron-rich nuclear matter with the isospin asymmetry \(\delta=\)0.2.
the Pauli principle, nucleon-nucleon (cluster) collisions, etc., in which structure effects influence the cluster configuration and the bound system in the nuclear medium. The formation of clusters receives contributions from the correlation of nucleons in phase space and also from the fragmentation process in nuclear collisions. The statistical decay of primary fragments in spinodal reactions dominates the low-energy cluster formation (below 10 MeV/nucleon). The medium- and high-energy cluster production is described by the coalescence approach at the final stage and is partly attributed to the correlation of nucleons during the collisions. In the past several decades, nuclear fragmentation reactions have been extensively investigated both in experiment and in theory, in particular on the issues of spinodal multifragmentation, the liquid-gas phase transition, properties of highly excited nuclei, the symmetry energy at subsaturation densities, etc. [43; 44; 45; 46; 47]. The nuclear clusters with charge number Z\(\leq\)2 manifest the baryonic matter properties in high-energy heavy-ion collisions and the bound states in the hadronization of the quark-gluon plasma (QGP), and might be probes of the first-order phase transition. On the other hand, the cluster spectra in phase space may shed light on the nuclear equation of state at high baryon density. The collective flows manifest the phase-space distribution of particles, in which the in-medium effects and the EOS influence the flow structure. The collective flows have been measured by the HADES collaboration [20]. We calculated the collective flows of protons and light nuclei produced in \({}^{197}\)Au + \({}^{197}\)Au collisions at the incident energy of 1.23\(A\) GeV in semicentral collisions with the impact parameter b=6 fm. The collective flows of protons are given in Fig. 2 and compared with the HADES data. It is obvious that the directed flows are nicely consistent with the experimental data. The 'U' shape of the elliptic flow \(v_{2}\) is basically reproduced, which is caused by the out-of-plane emission of protons bounced off the reaction plane. The triangular flow \(v_{3}\) exhibits an opposite trend to the structure of \(v_{1}\). The amplitude of the quadrangular flow \(v_{4}\) is very small and the distribution is almost isotropic. The flow coefficients become weaker with increasing flow order. The transverse momentum dependence of the collective flows of protons is shown in Fig. 3. The difference between the calculations and the HADES data
Figure 2: The rapidity distribution of the collective flows of protons in the reaction of \({}^{197}\)Au+\({}^{197}\)Au at incident energy 1.23\(A\) GeV and with the collision parameter of b=6 fm. The results are averaged over the transverse momentum in the domain \(1.0<p_{t}<1.5\) GeV/c. The experimental data are taken from the HADES collaboration [20].
is pronounced at high transverse momenta, which is partially caused by neglecting the construction of clusters from nucleons and the nucleon-cluster correlations in the nuclear collisions. The transverse momentum spectra of protons are also influenced by the Pauli principle in the evolution and by nucleon-nucleon collisions.
The recognition of clusters in heavy-ion collisions needs to include the binding of few-nucleon systems and the Pauli principle, i.e., the construction of deuterons, tritons, \({}^{3}\)He and \({}^{4}\)He, in which the binding energy varies with the baryon density and the dissociation of clusters into nucleons in the nuclear medium takes place during the evolution of the reaction system. The cluster distribution in phase space is also a sensitive probe for extracting the properties of nuclear matter. Shown in Fig. 4 are the collective flows of deuterons within the transverse momentum range \(1.0<p_{t}<1.5\) GeV/c in collisions of \({}^{197}\)Au+\({}^{197}\)Au at 1.23\(A\) GeV, compared with the recent HADES data [20]. Similar to the proton flows, the directed and elliptic flows are nicely reproduced with the Wigner phase-space density method, namely the 'S' and 'U' shape structures. The in-plane emission of deuterons becomes obvious in the projectile-like and target-like regions, and the squeeze-out phenomenon is pronounced in the mid-rapidity domain. The triangular flow manifests an opposite trend to the directed flow. The amplitude of the quadrangular flow is very small and is difficult to reproduce against the HADES data. The first and second order harmonics of the anisotropic expansion are shown in Fig. 5 for \({}^{3}\)He and \({}^{4}\)He production. A slight difference from the proton and deuteron emission is that in-plane formation dominates the \({}^{3}\)He and \({}^{4}\)He production, which is caused by the fact that part of the cluster yields are recognized from the participating and spectator nucleons. The more strongly bound \({}^{4}\)He (binding energy 28.3 MeV) is more favorable for cluster formation, e.g., the yields of \({}^{4}\)He are larger than those of \({}^{3}\)He and tritons in Fermi-energy heavy-ion collisions. However, this nuclear effect becomes weaker with increasing beam energy in competition with direct multifragmentation, in which the cluster yields decrease rapidly with mass number. A more sophisticated investigation of cluster formation (deuteron, triton, \({}^{3}\)He and \({}^{4}\)He) in nuclear collisions is in progress.
Pions in heavy-ion collisions are very important for constraining the stiffness of the symmetry energy. The production of pions near the threshold energy has therefore attracted more and more attention. Before using the pion observables to extract the high-density behavior of the symmetry energy,
Figure 3: The same as in Fig. 2, but for the momentum spectra in the rapidity region of \(-0.25<y_{cm}<-0.15\).
Figure 4: Collective flows of deuterons within the transverse momentum range of \(1.0<p_{\rm t}<1.5\) GeV/c in collisions of \({}^{197}\)Au+\({}^{197}\)Au at \(1.23A\) GeV. The experimental data are taken from the HADES collaboration [20].
Figure 5: Collective flows of \({}^{3}\)He and \({}^{4}\)He within the transverse momentum range of \(1.0<p_{\rm t}<1.5\) GeV/c.
it is necessary to explore the dynamics of pions produced in heavy-ion collisions, such as the rapidity distribution of the yields, the transverse momentum spectra, the collective flows, etc. At beam energies near the threshold value (280 MeV), pions are mainly produced via the decay of the resonance \(\Delta(1232)\). When the pions are emitted from the reaction zone, they undergo the multiple processes \(\Delta\rightarrow\pi(N)\rightarrow\Delta\). The yields are robust observables in experiments for constraining the high-density symmetry energy via the \(\pi^{-}/\pi^{+}\) ratio. The dynamics of the pion emission calculated by transport models is helpful for understanding the experimental observables. We calculated the rapidity and transverse momentum distributions of pions produced in \({}^{197}\)Au + \({}^{197}\)Au at the incident energy of 1.5\(A\) GeV and at the impact parameter of b=5 fm, as shown in Fig. 6. The solid curves denote the results with the pion potential, and the dashed curves represent the ones without the pion potential. It is concluded that the pions are mainly produced in the mid-rapidity region, and the influence of the \(\pi\)-N potential on charged pions is obvious. With the attractive potential, the pions are more likely to be captured by the surrounding nucleons. The inclusion of the optical potential enhances the production of low-energy pions, but the production of high-energy pions is suppressed due to the reabsorption process. The effects of the pion potential on the yields of pions gradually vanish with increasing pion momentum. Considering the combined effects of the Coulomb interaction and the optical potential, characterized by its momentum dependence, it is easy to understand the transverse momentum distribution of pions. The difference between \(\pi^{+}\) and \(\pi^{-}\) is due to the combined effects of the optical potential and the Coulomb interaction in the nuclear medium.
Pions are usually rescattered and reabsorbed by the surrounding nucleons, while the primordial pions are created at the compression stage in nuclear collisions. This gives rise to an apparent pion flow and offers a powerful tool for exploring the pion dynamics in the nuclear medium and the equation of state of nuclear matter [48; 49; 50; 51]. The collective flows of pions were first measured by the Bevalac streamer chamber group in 800 MeV/nucleon Ne induced reactions [49]. At energies near the threshold value, the pions are mainly from the decay of \(\Delta(1232)\), so the \(\Delta\)-nucleon interaction potential and scattering are also important for the pion production. The emission of \(\Delta\) would tend to create a directed flow similar to that of the protons. But the rescattering and reabsorption processes of pions with the spectator nucleons lead to the appearance of antiflows of pions in comparison with the proton flows [52]. The collective flows manifest the phase-space distribution of particles, in which the in-medium effects and the EOS influence the flow structure. The pion flows have been measured and extensively investigated by the FOPI collaboration [9]. We calculated the directed flows of charged pions produced in semicentral \({}^{197}\)Au + \({}^{197}\)Au collisions (b=5 fm) at the incident energy of 1.5\(A\) GeV, as shown in Fig. 7. The reduced impact parameter is obtained by \(b_{0}=b/[1.15(A_{p}^{1/3}+A_{t}^{1/3})]\) with \(A_{p}\) and \(A_{t}\) being the mass numbers of the projectile and target nuclei, respectively. It is obvious that, without the pion potential, the directed flows of both charged pions show countercurrents, as described in the preceding paragraph, while the result is closer to the experimental data when the attractive potential is included. The well-known 'S' shape is apparent in the distributions of the directed flow of \(\pi^{-}\). Also, the directed flow of \(\pi^{+}\) shows the famous shadowing effect due to the existence of the \(\pi\)-N potential. The difference between \(\pi^{-}\) and \(\pi^{+}\) is caused by the Coulomb interaction and the isospin effect. The elliptic flow in \({}^{197}\)Au + \({}^{197}\)Au is also investigated in the same way as the directed flow, as shown in Fig. 8. Although there is a difference between our results and the data, the trends are the same when the pion potential is included. As in Ref. [10], the flow spectra can be well reproduced in near-central collisions. However, the calculations overpredict the experimental data in peripheral collisions owing to the difference in the choices of observables for the impact parameter. It still shows that pions are emitted out of plane. Shown in Fig. 9 are the transverse momentum distributions of the collective flows. It is obvious that the pion potential influences the high-momentum elliptic flows of both \(\pi^{-}\) and \(\pi^{+}\). The difference in the directed flows is very small. The flow structure is similar to the FOPI data [9].
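As a quick worked example of the reduced impact parameter quoted above, the following one-off check (assuming only the mass numbers of the Au+Au system) evaluates \(b_{0}\) for the impact parameters used in this work.

```python
# Quick check (assumption: A_p = A_t = 197 for Au+Au) of the reduced impact
# parameter b0 = b / [1.15 (A_p^{1/3} + A_t^{1/3})] quoted in the text.
Ap, At = 197, 197
for b in (5.0, 6.0):                      # impact parameters in fm
    b0 = b / (1.15 * (Ap**(1/3) + At**(1/3)))
    print(f"b = {b} fm  ->  b0 = {b0:.2f}")
# b = 5 fm gives b0 ~ 0.37, b = 6 fm gives b0 ~ 0.45
```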
## IV Conclusion
In summary, the collective flows of clusters and pions produced in collisions of \({}^{197}\)Au+\({}^{197}\)Au are thoroughly investigated within the LQMD transport model. The experimental data from the HADES and FOPI collaborations are systematically analyzed. The directed and elliptic flows of protons and deuterons are nicely reproduced, and the out-of-plane emission dominates the cluster production in the rapidity domain of -0.5\(<y/y_{proj}<\)0.5. The triangular flow manifests the opposite structure in comparison with the directed flow, but with a smaller amplitude than the HADES data. The positive values of the transverse momentum spectra of the triangular and quadrangular flows are strongly underestimated. The in-plane emission of \({}^{3}\)He and \({}^{4}\)He dominates the cluster production. The attractive pion potential leads to a reduction of the pion production in the mid-rapidity region. The directed flows of pions in the reaction of \({}^{197}\)Au + \({}^{197}\)Au at 1.5\(A\) GeV are nicely reproduced with the inclusion of the pion potential. The strong antiflow of \(\pi^{+}\) is reduced and more consistent with the FOPI data. The more attractive \(\pi^{+}\) potential leads to in-plane emission, and the elliptic flow structure is the same as the experimental
data.
**Acknowledgements** This work was supported by the National Natural Science Foundation of China (Projects No. 12175072 and No. 11722546) and the Talent Program of South China University of Technology (Projects No. 20210115).
|
2308.14162
|
Scaling regimes for wormlike chains confined to cylindrical surfaces
under tension
|
We compute the free energy of confinement ${\cal{F}}$ for a wormlike chain
(WLC), with persistence length $l_p$, that is confined to the surface of a
cylinder of radius $R$ under an external tension $f$ using a mean field
variational approach. For long chains, we analytically determine the behavior
of the chain in a variety of regimes, which are demarcated by the interplay of
$l_p$, the Odijk deflection length ($l_d=(R^2l_p)^{1/3}$), and the Pincus
length ($l_f = {k_BT}/{f}$, with $k_BT$ being the thermal energy). The theory
accurately reproduces the Odijk scaling for strongly confined chains at $f=0$,
with ${\cal{F}}\sim Ll_p^{-1/3}R^{-2/3}$. For moderate values of $f$, the Odijk
scaling is discernible only when ${l_p}\gg R$ for strongly confined chains.
Confinement does not significantly alter the scaling of the mean extension for
sufficiently high tension. The theory is used to estimate unwrapping forces for
DNA from nucleosomes.
|
Greg Morrison, D. Thirumalai
|
2023-08-27T17:46:04Z
|
http://arxiv.org/abs/2308.14162v2
|
# Scaling regimes for wormlike chains confined to cylindrical surfaces under tension
###### Abstract
We compute the free energy of confinement \(\mathcal{F}\) for a wormlike chain (WLC), with persistence length \(l_{p}\), that is confined to the surface of a cylinder of radius \(R\) under an external tension \(f\) using a mean field variational approach. For long chains, we analytically determine the behavior of the chain in a variety of regimes, which are demarcated by the interplay of \(l_{p}\), the Odijk deflection length (\(l_{d}=(R^{2}l_{p})^{1/3}\)), and the Pincus length (\(l_{f}=k_{B}T/f\), with \(k_{B}T\) being the thermal energy). The theory accurately reproduces the Odijk scaling for strongly confined chains at \(f=0\), with \(\mathcal{F}\sim Ll_{p}^{-1/3}R^{-2/3}\). For moderate values of \(f\), the Odijk scaling is discernible only when \(l_{p}\gg R\) for strongly confined chains. Confinement does not significantly alter the scaling of the mean extension for sufficiently high tension. The theory is used to estimate unwrapping forces for DNA from nucleosomes.
## 1 Introduction
There are compelling reasons for understanding the statics and dynamics of confined polymers because of their relevance in filtration, gel permeation chromatography, translocation of polymers and polypeptide chains through microporous membranes, and passage of newly synthesized proteins through the ribosome tunnel. These and other considerations prompted several theoretical investigations, starting with pioneering studies [1, 2], which triggered subsequent theories and simulations that probed the fate of flexible polymers
in pores with regular geometries [3, 4, 5, 6, 7, 8] as well as in the related case of random media [9, 10]; as a result, the behavior of confined flexible polymers is well understood.
The situation is somewhat more complicated when considering semi-flexible polymers, or worm-like chains (WLCs), in confined spaces. Spatial confinement of WLCs plays an important role in many biological systems [11, 12, 13, 14, 15], including histone wrapping in chromatin [16, 17, 18, 19] and nanolithography [13, 20]. Here, we consider a WLC that is wrapped around a cylinder whose radius is \(R\), and is subject to a mechanical force (Fig. 1). Besides the total contour length (\(L\)), there are three length scales that control the statistics of the WLC. The first is the persistence length \(l_{p}\), which is a measure of the resistance of the polymer to bending. For long unconfined chains the free energy is a function of \(l_{p}\). In the mean field theory proposed here, the polymer is globally restricted to be wound around the cylinder, which is equivalent to a restraint enforced by a soft harmonic potential. Consequently, the Odijk length, or deflection length, \(l_{d}=(R^{2}l_{p})^{1/3}\), emerges [21], coupling the chain stiffness and the radius of confinement. In many biological contexts, the system is often under an external field elongating the chain, such as an external tension (\(f\)) unravelling histone-wrapped DNA for replication [19]. An external tension or mechanical force is captured by the Pincus length [22], \(l_{f}=(\beta f)^{-1}\) with \(\beta=1/k_{B}T\). The interplay of \(l_{f}\), \(l_{p}\), and \(l_{d}\) on the conformations of the WLC is not fully
Figure 1: Schematic diagram of the chain backbone, with a fixed distance between monomers \(a\) and resistance to bending characterized by the persistence length \(l_{p}\), the confinement to the surface of a cylinder of radius \(R\), and an external tension \(f\) acting on the endpoints of the chain.
understood. The problem has some relevance to nucleosomes, consisting of DNA wrapped around histone proteins, which are building blocks of chromosomes. Hence, the approximate theory developed here might illustrate an aspect of polymer theory in describing the physics of chromatin [11, 17].
In this paper, we propose a mean-field approach to study the properties of a wormlike chain confined to the surface of a cylinder [23] under the application of an external tension. Because \(l_{p}\) is comparable to the contour length, \(L\), excluded volume interactions may be neglected. We recover the known [21, 24, 20, 25] dependence of the free energy on the deflection length \(l_{d}=(R^{2}l_{p})^{1/3}\) at \(f=0\). The theory predicts the coefficient of the leading term of the scaling of the free energy, which is in good agreement with numerical results [23]. For moderate values of \(f\neq 0\) (possibly relevant to DNA unwrapping from nucleosomes) and strong confinement (\(l_{p}/R\gg 1\)), we show that the free energy scales quadratically with the tension (i.e., as \(l_{f}^{-2}\)). At high external tensions, we find that the effect of confinement is perturbative, with \(1-\langle Z\rangle/L\sim\sqrt{l_{f}/l_{p}}\), independent of the radius of confinement.
## 2 Mean Field Theory
### Formulation and approximations
In order to understand the equilibrium properties of a cylindrically confined WLC under tension, we developed a mean-field theory to arrive at analytically tractable results. The energy of a configuration of the chain is determined by a variety of terms that are sketched in Fig. 1. The system is characterized by the spacing between monomers (\(a\)), the number of bonds in the chain (the chain length \(L=(N-1)a\approx Na\) with \(N\gg 1\)), the persistence length of the unconfined chain (\(l_{p}\)), the confinement radius (\(R\)) aligned with the \(z\)-axis, and the external tension (\(f\)) that is also applied in the \(z\) direction. Let the position of each monomer be \({\bf r}_{i}=(x_{i},y_{i},z_{i})\), and define \({\bf u}_{i}={\bf r}_{i+1}-{\bf r}_{i}\) as the bond vector, and \(\hat{\bf u}_{i}={\bf u}_{i}/|{\bf u}_{i}|\) the unit bond vector. The statistics of a surface-confined chain can be described by a constrained Kratky-Porod (KP) Hamiltonian [26], \(\beta H_{KP}=-\frac{l_{p}}{a}\sum_{i}\hat{\bf u}_{i}\cdot\hat{\bf u}_{i+1}-\beta f(z_{N}-z_{0})\), with confinement to the surface of the cylinder requiring \(x_{i}^{2}+y_{i}^{2}=R^{2}\) for all \(i\).
The KP model is mathematically difficult to work with because of two rigid constraints: the fixed bond length, \(|{\bf u}_{i}|=a\), and the constraint that monomers be spatially constrained transverse to the \(z\) axis, which should
be enforced through the relation, \(x_{i}^{2}+y_{i}^{2}=R^{2}\). On the mean field level, we replace these rigid constraints for the confined WLC with softer harmonic restraints (an approach that has been fruitfully applied previously [27; 25; 28; 29; 30]). The form of the MF Hamiltonian can be found by writing the monomer distribution function explicitly, in order to identify a physically meaningful harmonic approximation to the rigid constraints. It is straightforward to show that (up to a constant) the statistical weight is \(\Psi_{S}=(\prod_{n=0}^{N}\delta[x_{n}^{2}+y_{n}^{2}-R^{2}])\times(\prod_{n=1}^ {N}\delta[|\Delta{\bf r}_{n}|-a])\times(\prod_{n=1}^{N-2}e^{-l_{p}/2a^{3}( \Delta{\bf r}_{n+1}-\Delta{\bf r}_{n})^{2}})\times e^{+\beta f(z_{L}-z_{0})}\) where \(\Delta{\bf r}={\bf r}_{n+1}-{\bf r}_{n}\) and \(\beta=1/k_{B}T\). The first term enforces confinement of the chain to the cylinder (affecting only the \(x\) and \(y\) coordinates of the polymer), the second term enforces the constant monomer spacing constraint, the third term accounts for the WLC's resistance to bending, and the fourth term accounts for the external tension. Each of the \(\delta\) functions may be written as \(\delta(x_{n}^{2}+y_{n}^{2}-R^{2})\propto\int dk_{n}e^{iak_{n}[(x_{n}^{2}+y_{n }^{2})/R^{2}-1]}\) and \(\delta(|\Delta{\bf r}|-a)\propto\int d\lambda_{n}e^{-ia\lambda_{n}[\Delta{\bf r }_{n}^{2}/a^{2}-1]}\), leading to,
\[\Psi_{S}\propto\int_{-i\infty}^{i\infty}\prod_{n}dk_{n}d\lambda_{n}\exp\bigg[-a\sum_{n=1}^{N}k_{n}\bigg(\frac{x_{n}^{2}+y_{n}^{2}}{R^{2}}-1\bigg)\] \[-a\sum_{n=1}^{N-1}\lambda_{n}\bigg(\frac{\Delta{\bf r}_{n}^{2}}{a^{2}}-1\bigg)-\beta f(z_{L}-z_{0})-\frac{1}{2}a\sum_{n=1}^{N-2}l_{p}\frac{(\Delta{\bf r}_{n+1}-\Delta{\bf r}_{n})^{2}}{a^{4}}\bigg]. \tag{1}\]
The \(\delta\) functions enforce a fixed monomer separation and a fixed transverse distance in the confined dimensions. Here, we focus solely on surface confinement, but note that a MF approach has been applied to confinement to the interior and to the surface [27] of a sphere. An equivalent volume constraint could be applied to the interior of the cylinder through the constraint \(x_{i}^{2}+y_{i}^{2}\leq R^{2}\). However, a harmonic constraint on volume confinement requires an estimate of the average distance of each monomer in the transverse direction (which cannot be predicted at the mean field level).
Although exact, the formulation in Eq. 1 is difficult to work with directly. Analytical progress becomes possible by assuming that the integrals over the Fourier variables are sharply peaked, in the same manner as in our previous studies [27; 31]. The partition function \(Z=\int\prod_{n}d^{3}{\bf r}_{n}\,\Psi_{S}(\{{\bf r}_{n}\})\equiv\int\prod_{n}d\lambda_{n}dk_{n}\exp(-{\cal F}[\{\lambda_{n},k_{n}\}])\) defines the nondimensional free energy functional \({\cal F}\) as an integral over all the monomer coordinates, and the linearity of the Fourier transform allows us to write \({\cal F}={\cal F}_{x}+{\cal F}_{y}+{\cal F}_{z}-a\sum_{n=1}^{N}k_{n}-a\sum_{n=1}^{N-1}\lambda_{n}\), with \(e^{-{\cal F}_{x}}\equiv\int\prod_{n}dx_{n}e^{-{\cal H}_{c}[\{x_{n}\}]}\) and \(e^{-{\cal F}_{z}}\equiv\int\prod_{n}dz_{n}e^{-{\cal H}_{u}[\{z_{n}\}]}\)
where we define the confined Hamiltonian as \({\cal H}_{c}[\{x_{n}\}]\equiv a\sum_{n=1}^{N-2}l_{p}(\Delta x_{n+1}-\Delta x_{n})^{2}/2a^{4}+a\sum_{n=1}^{N-1}\lambda_{n}\Delta x_{n}^{2}/a^{2}+a\sum_{n=1}^{N}k_{n}x_{n}^{2}/R^{2}\). Similarly, the unconfined Hamiltonian is \({\cal H}_{u}[\{z_{n}\}]\equiv a\sum_{n=1}^{N-2}l_{p}(\Delta z_{n+1}-\Delta z_{n})^{2}/2a^{4}+a\sum_{n=1}^{N-1}\lambda_{n}\Delta z_{n}^{2}/a^{2}+\beta f\sum_{n=1}^{N-1}\Delta z_{n}.\) If the free energy is sharply peaked around some \(\{\lambda_{n},k_{n}\}=\{\lambda_{n}^{*},k_{n}^{*}\}\) that minimizes \({\cal F}\), the partition function can be written approximately as \(Z\approx Z^{*}=e^{-{\cal F}^{*}}\). Because the Hamiltonians \({\cal H}_{c}\) and \({\cal H}_{u}\) are uncoupled and quadratic in the monomer coordinates, it is straightforward to integrate over the internal coordinates of the polymer exactly. Along the confined axes, we find \({\cal F}_{x}={\cal F}_{y}=\frac{1}{2}\log[{\rm Det}({\bf Q})]\), where the elements of the matrix \(({\bf Q})_{nm}\) are the coefficients associated with \(x_{n}x_{m}\) in \({\cal H}_{c}\). The explicit form of \(({\bf Q})_{nm}\) is given in Appendix A of an earlier work [27]. In the unconfined direction, completing the square and noting the translational invariance of the system along the cylinder axis allows us to write \({\cal F}_{z}=\frac{1}{2}\log[{\rm Det}({\bf P})]-\frac{1}{4}(\beta f)^{2}a\sum_{n=1}^{N-1}\lambda_{n}^{-1}\), with [31] \(({\bf P})_{ij}=\lambda_{i}\delta_{ij}-l_{p}\delta_{i,j\pm 1}/2a^{2}\). These expressions can in principle be used to determine the stationary phase values for the Fourier variables by setting \(\partial{\cal F}/\partial\lambda_{n}|_{\{\lambda_{n},k_{n}\}=\{\lambda_{n}^{*},k_{n}^{*}\}}=\partial{\cal F}/\partial k_{n}|_{\{\lambda_{n},k_{n}\}=\{\lambda_{n}^{*},k_{n}^{*}\}}=0\). However, the resulting \(2N-1\) equations become intractable for large \(N\), necessitating additional approximations.
The matrices \({\bf P}\) and \({\bf Q}\) are tridiagonal and pentadiagonal, respectively, with a regular structure except near the endpoints of the chain. The high symmetry of the matrices that underlie the equations suggests that we should seek symmetric solutions for the stationary values of \(\lambda_{n}^{*}\) and \(k_{n}^{*}\). In both the unconfined [31] and spherically confined [27] cases, it was shown that tractable equations reproducing exact theoretical results could be found by separating the mean field parameters into bulk terms and endpoint terms [31, 27], with \(\lambda_{n}^{*}=\lambda\) and \(k_{n}^{*}=k\) for the interior points of the chain (those with \(2<n<N-2\)). Excess endpoint fluctuations generally require that \(\lambda_{1}=\lambda_{N-1}\neq\lambda\) and \(k_{1}=k_{N}\neq k\neq k_{2}=k_{N-1}\) (with the equalities due to symmetry arguments). This approximation allows us to write \({\cal H}_{c}[\{x_{n}\}]={\cal H}_{c}^{(b)}[\{x_{n}\}]+{\cal H}_{c}^{(e)}[\{x_{n}\}]\) and \({\cal H}_{u}[\{\Delta z_{n}\}]={\cal H}_{u}^{(b)}[\{\Delta z_{n}\}]+{\cal H}_{u}^{(e)}[\{\Delta z_{n}\}]\), where the superscript \(b\) denotes an extensive bulk term, depending on the mean field parameters \(\lambda\) and \(k\), and the superscript \(e\) denotes an intensive endpoint term, depending on the more complicated values of the mean field variables near the ends of the chain.
In the continuum limit, it can be shown that the bulk forms of the Hamiltonians become,
\[{\cal H}_{c}^{(b)}[x(s)] = \int_{0}^{L}ds\biggl{(}\frac{l_{p}}{2}\ddot{x}^{2}(s)+\lambda\dot{x }^{2}(s)+k\frac{x^{2}(s)}{R^{2}}\biggr{)} \tag{2}\] \[{\cal H}_{u}^{(b)}[\dot{z}(s)] = \int_{0}^{L}ds\biggl{(}\frac{l_{p}}{2}\ddot{z}^{2}(s)+\lambda\dot {z}^{2}(s)-\beta f\dot{z}(s)\biggr{)}. \tag{3}\]
The form of \({\cal H}_{c}^{(b)}\) is identical to the spherically confined Hamiltonian of the wormlike chain in our previous work [27], while the form of \({\cal H}_{u}^{(b)}\) is identical to that for an unconfined chain [31]. Note that \(\lambda\) is the same in the two Hamiltonians in Eqs. 2 and 3 because the condition \(\langle{\bf u}^{2}\rangle=\langle u_{x}^{2}+u_{y}^{2}+u_{z}^{2}\rangle=1\) couples the three components of the bond vector. However, the mean field parameter \(k\) occurs only in the confined Hamiltonian and enforces the \(\langle x^{2}+y^{2}\rangle=R^{2}\) constraint. The assumption that \(\lambda\) is isotropic causes inaccuracies in the predicted mean extension of a chain using the MF approach [32]. A MF theory that avoids this assumption has been developed [30], which does so at the cost of greater complexity in the model. As we will show below, the expected scalings of the free energy in the limits of high tension and strong confinement are both recovered using the isotropic assumption, suggesting that the overall scaling laws will be accurate but with potentially inaccurate coefficients. We also ignore the endpoint effects by neglecting the Hamiltonians \({\cal H}_{c}^{(e)}\) and \({\cal H}_{u}^{(e)}\). This approximation simplifies the mathematics greatly, but restricts our analysis to very long chains. Neglecting the endpoint effects, the mean field equations for the extensive contribution to the free energy in the continuum are \(\partial{\cal F}/\partial\lambda=\partial{\cal F}/\partial k=0\), with \(e^{-{\cal F}}=\int{\cal D}[{\bf r}(s)]e^{-{\cal H}_{c}^{(b)}[x(s)]-{\cal H}_{c}^{(b)}[y(s)]-{\cal H}_{u}^{(b)}[z(s)]+L(\lambda+k)}\).
### Calculation of the Free Energy
It is straightforward to perform the path integrals over the confined [27] and unconfined [31] dimensions to explicitly compute the partition function \(e^{-{\cal F}}\), from which it is possible to derive the mean field equations for constant \(\lambda\) and \(k\). One can readily recognize that Eq. 3 describes a quantum harmonic oscillator after a change of variables, for which an exact propagator is known [33]. In the confined dimension, it is straightforward to show [27] that the action in Eq. 2 is minimized by a path satisfying \(\frac{l_{p}}{2}x^{(4)}(s)-\lambda\ddot{x}(s)+\frac{k}{R^{2}}x(s)=0\). It is readily observed (after some tedious mathematics) that the solution is expressible in terms of the frequencies \(\omega_{\pm}^{2}=\frac{\lambda}{l_{p}}\big(1\pm\sqrt{1-2kl_{p}/(\lambda^{2}R^{2})}\big)\). It is possible to integrate over the internal degrees of freedom of the chain exactly [27], which results in an unwieldy expression. However, with the
assumption that \(\sinh(L\omega_{\pm})\approx\cosh(L\omega_{\pm})\approx e^{L\omega_{\pm}}/2\) (satisfied for \(L/l_{p}\gg 1\)), it is possible to simplify the confinement free energy as,
\[\frac{\mathcal{F}}{L}\approx\bigg{(}\omega_{+}+\omega_{-}-k\bigg{)}+\bigg{(} \sqrt{\frac{\lambda}{2l_{p}}}-\lambda-\frac{(\beta f)^{2}}{4\lambda}\bigg{)}, \tag{4}\]
where the first term is the contribution from integration over the chain configurations along the two confined axes, and the second is the contribution from the single unconfined axis. Endpoint effects are neglected in deriving Eq. 4, and is only valid for very long chains (where \(L\) is larger than all other length scales in the problem). The average extension of the chain, under tension, can be calculated using,
\[\langle Z\rangle=\frac{\partial\mathcal{F}}{\partial(\beta f)}=\frac{\beta fL}{ 2\lambda(l_{p},R,f)}, \tag{5}\]
where we have explicitly included the dependence of the mean field solution of \(\lambda\) on the physical parameters at play. Eq. 5 is similar to the result found in the case of an unconfined chain under an external tension[28], which is straightforward to evaluate once the mean field solution for \(\lambda\) is known.
To determine the free energy and mean extension, we must solve the MF equations \(\partial\mathcal{F}/\partial k=\partial\mathcal{F}/\partial\lambda=0\). The resulting equations are greatly simplified by noting that
\[k=\frac{l_{p}R^{2}}{2}\omega_{+}^{2}\omega_{-}^{2},\qquad\lambda=\frac{l_{p}}{2}(\omega_{+}^{2}+\omega_{-}^{2}), \tag{6}\] \[\frac{\partial\omega_{\pm}}{\partial\lambda}=\pm\frac{\omega_{\pm}}{l_{p}(\omega_{+}^{2}-\omega_{-}^{2})},\qquad\frac{\partial\omega_{\pm}}{\partial k}=\mp\frac{1}{l_{p}R^{2}\omega_{\pm}(\omega_{+}^{2}-\omega_{-}^{2})}. \tag{7}\]
After some algebra, the variational equations for \(\lambda\) and \(k\), respectively, become
\[\frac{1}{l_{p}(\omega_{+}+\omega_{-})}=1-\frac{1}{\sqrt{8\lambda l_{p}}}-\frac{(\beta f)^{2}}{4\lambda^{2}},\qquad\frac{1}{l_{p}(\omega_{+}+\omega_{-})}=R^{2}\omega_{+}\omega_{-}. \tag{8}\]
As the left-hand side of both equalities in the above equation is identical, we can readily solve for \(k\) in terms of \(\lambda\), with,
\[k=\frac{l_{p}}{2R^{2}}\bigg{(}1-\frac{1}{\sqrt{8\lambda l_{p}}}-\frac{1}{4} \left(\frac{\beta f}{\lambda}\right)^{2}\bigg{)}^{2} \tag{9}\]
so that the confinement parameter \(k\) at the mean field level can be determined exactly. The mean field parameter \(\lambda\), enforcing the inextensibility of the chain, requires the solution of the complicated equation in Eq. 8 after substitution of the exact solution for \(k\). Although it may not be possible to solve for the exact values analytically for all \(l_{p}\), \(R\), and \(f\), it is straightforward to determine the mean field values numerically and accurately for any \(l_{p}\), \(R\), and \(f\). The asymptotic behavior of the roots can be readily determined in certain limits, as discussed in the next section.
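To illustrate this last point, a minimal numerical sketch is given below. It is not the authors' code: it assumes units with \(k_{B}T=1\) and lengths measured in units of \(R\), uses illustrative parameter values, finds a root of the first equality in Eq. 8 (with \(k\) eliminated via Eq. 9), and then evaluates \(\mathcal{F}/L\) from Eq. 4 and \(\langle Z\rangle/L\) from Eq. 5.

```python
# Sketch (assumed units k_B T = 1, lengths in units of R; illustrative values
# only): solve the mean-field condition of Eq. 8 for lambda, using the exact
# solution for k in Eq. 9, then evaluate F/L (Eq. 4) and <Z>/L (Eq. 5).
import numpy as np
from scipy.optimize import brentq

def k_exact(lam, lp, R, bf):
    """Eq. 9: confinement parameter k as a function of lambda."""
    bracket = 1.0 - 1.0 / np.sqrt(8.0 * lam * lp) - 0.25 * (bf / lam) ** 2
    return lp / (2.0 * R ** 2) * bracket ** 2

def omega_sum(lam, k, lp, R):
    """omega_+ + omega_-, using omega_+^2 + omega_-^2 = 2 lam / lp and
    omega_+ omega_- = sqrt(2 k / (lp R^2)), both of which follow from Eq. 6."""
    prod = np.sqrt(2.0 * k / (lp * R ** 2))
    return np.sqrt(2.0 * lam / lp + 2.0 * prod)

def residual(lam, lp, R, bf):
    """First equality of Eq. 8 with k eliminated via Eq. 9."""
    k = k_exact(lam, lp, R, bf)
    rhs = 1.0 - 1.0 / np.sqrt(8.0 * lam * lp) - 0.25 * (bf / lam) ** 2
    return 1.0 / (lp * omega_sum(lam, k, lp, R)) - rhs

def solve_mean_field(lp, R, bf):
    lam_lo = 0.1251 / lp                  # just above the 1-d limit 1/(8 lp)
    lam_hi = 1e4 * (1.0 / lp + bf + 1.0 / R)
    lam = brentq(residual, lam_lo, lam_hi, args=(lp, R, bf))
    k = k_exact(lam, lp, R, bf)
    f_per_L = (omega_sum(lam, k, lp, R) - k) + (
        np.sqrt(lam / (2.0 * lp)) - lam - 0.25 * bf ** 2 / lam)       # Eq. 4
    extension = bf / (2.0 * lam)                                      # Eq. 5
    return lam, k, f_per_L, extension

for lp_over_R in (1.0, 10.0, 100.0):
    lam, k, f_per_L, ext = solve_mean_field(lp=lp_over_R, R=1.0, bf=1.0)
    print(f"lp/R={lp_over_R:6.1f}  lambda={lam:.4f}  k={k:.4f}  "
          f"F/L={f_per_L:.4f}  <Z>/L={ext:.4f}")
```

The bracketing interval and the assumption of a single physical root are illustrative choices; the asymptotic limits derived in the next section provide good starting estimates in the weakly and strongly confined regimes.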
## 3 Results
### Scaling for weak confinement and weak tension
An asymptotic solution to the mean field equations can be determined in a variety of parameter regimes. The simplest scenario is the limit where both the cylindrical confinement and the external tension are weak. In the limit of \(R/l_{f}\ll 1\) (with \(l_{f}=k_{B}T/f\) the Pincus length) and \(l_{p}/R\ll 1\), the chain is weakly confined and we expect the mean field solution to be a perturbation on the solution of the unconfined three-dimensional mean field theory. It has been shown [31] that \(\lambda\sim 9/8l_{p}\) in the absence of external force or confinement. In order to determine the asymptotic behavior of the mean field parameter, we expand in a series for small values of the rescaled variables \(\tilde{l}_{p}\equiv l_{p}/R\ll 1\) and \(\varphi=R/l_{f}\ll 1\), with \(\lambda R=\frac{9}{8l_{p}}+\sum_{m,n=0}^{\infty}b_{nm}\tilde{l}_{p}{}^{n}\varphi^{m}\) for some coefficients \(b_{nm}\). Substituting the exact solution for \(k\) in Eq. 9 into the MF equation for \(\lambda\) in Eq. 8 allows us to perform a series expansion to higher order in \(\tilde{l}_{p}\) and \(\varphi\), both assumed to be small. Iteratively solving for the lowest order coefficients \(b_{ij}\) shows \(\lambda\approx\frac{9}{8l_{p}}-\frac{4l_{p}}{9R^{2}}+\frac{4l_{p}}{9l_{f}^{2}}+O(l_{p}^{3}/R^{4})\), to leading order in \(\tilde{l}_{p}\) and \(\varphi\). To leading order, the free energy under weak confinement and weak external tension is
\[\frac{\mathcal{F}}{L}\approx\frac{9}{8l_{p}}+\frac{2l_{p}}{9R^{2}}\left(1-\frac{8l_{p}^{2}}{81R^{2}}\right)-\frac{2l_{p}}{3l_{f}^{2}}\qquad(l_{p}\ll R,\;R/l_{f}\ll 1). \tag{10}\]
Note that retaining higher order terms in \(\lambda\) does not affect the scaling coefficients in Eq. 10. It is interesting to note that the leading order contribution of the tension is independent of the radius of the cylinder, with the confinement entering the free energy only through its coupling with the persistence length. This is due to the distinct axes over which each energetic contribution acts, each of which is perturbative in this limit. Not surprisingly for weakly confined chains, the deflection length \(l_{d}=(l_{p}R^{2})^{1/3}\) does not enter
into the free energy. We note that these scaling coefficients are not likely to be precise, as has been previously noted for the MF solutions in multiple contexts [28, 30, 32]. However, the scaling with each variable is expected to be accurate.
For a weakly confined chain, the average extension in eq. 5 becomes
\[\frac{\langle Z\rangle}{L}\approx\frac{4}{9}\frac{l_{p}/l_{f}}{1-2\left(\frac{4 l_{p}}{9R}\right)^{2}}, \tag{11}\]
to fourth order in \(\tilde{l}_{p}\) and \(\varphi\), growing linearly with \(f\) as long as \(l_{p}\ll R\). The linear increase for low forces and weak confinement is shown in the purple triangles of Fig. 2 (A), satisfying the expected linear scaling.
Figure 2: (A) Average chain extension (Eq. 5) as a function of \(f\) (with \(R=1\) held fixed) for various values of \(l_{p}\): \(l_{p}/R=1\) (purple triangles), \(10\) (red circles), and \(100\) (blue squares). The extension increases linearly with force for small \(f\), and the linear scaling is unperturbed by confinement effects. (B) The approach to full extension, \(1-\langle Z\rangle/L\), as a function of \(f\) scales as \(f^{-1/2}\) for large external tension, but deviations from this scaling occur for very stiff chains.
### Strongly confined chains under weak tension
It is also possible to determine the scaling behavior of the polymer under strong confinement (with \(l_{p}\gg R\)) while still constraining the external tension to be weak (with \(R\beta f\ll 1\)). A one-dimensional WLC in the absence of tension on the mean field level will satisfy \(\lambda\sim 1/8l_{p}\), and we expect that \(\lambda\) must converge to this value for sufficiently small \(R\) or sufficiently large \(l_{p}\) (since transverse fluctuations must vanish in either limit). The effect of confinement is contained in the mean field variable \(k\), which should capture the transverse statistics of the chain. It is known that the Odijk length scale, \(l_{d}\propto(R^{2}l_{p})^{1/3}\), emerges for WLCs strongly confined to the interior of a cylinder [21, 25]. A stiff chain will predominantly be aligned with the cylinder axis, and \(l_{d}\) would be the typical distance along the chain for transverse fluctuations (a consequence of the mean field approach) to encounter the walls (causing a deflection). A similar scaling has been observed for surface-confined chains [23], and we expect the deflection length to emerge at the MF level because the hard constraint is replaced by a soft harmonic potential. As a consequence, the stiff chain 'deflects' off the soft harmonic potential rather than a rigid wall.
In the limit of \(\tilde{l}_{p}\equiv l_{p}/R\gg 1\) and \(\varphi=R/l_{f}\ll 1\) we expect that the leading order behavior of \(\lambda\) can be recovered with the ansatz \(\lambda\approx\frac{1}{8l_{p}}+\sum_{n=4}^{\infty}c_{n}\tilde{l}_{p}^{-n/3}+\sum_{m=1}^{\infty}d_{m}\varphi^{m}\). This is similar to the expression for \(\lambda\) in the weakly confined case but with the expansion in terms of small \(\varphi\) and large \(\tilde{l}_{p}^{1/3}\), and ignores cross-terms (assumed to be higher order, as was the case for weak confinement and low tension). The restriction of \(n\geq 4\) in the sum over \(\tilde{l}_{p}\) ensures we recover the expected 1-d scaling for \(R=0\), since the leading order behavior is expected to be \(\lambda=1/8l_{p}\) for \(R=0\). Substitution into Eq. 8 using Eq. 9 and taking the limit of small \(\varphi\) and large \(\tilde{l}_{p}\) shows that \(c_{4}=0\) and \(c_{5}=1/(4\times 2^{1/3})\) cancel the lowest order terms in \(\tilde{l}_{p}\), while \(d_{1}=0\) and \(d_{2}=4\) cancel the leading term in \(\varphi\). Substitution of this solution into the free energy yields
\[\frac{{\cal F}}{L} \approx \frac{3}{2^{5/3}l_{d}}-\frac{2l_{p}}{l_{f}^{2}}+\frac{l_{d}}{l_{f }^{2}}+\frac{1}{8l_{p}}, \tag{12}\]
with the first term the leading order contribution from the confinement, the second the leading order contribution from the tension, the third term the lowest order coupling between the competing length scales \(l_{d}\) and \(l_{f}\), and the fourth term the one-dimensional MF free energy. In the absence of force, this expression for the free energy scales as \(L/l_{d}\), agreeing with the known result for a
strongly confined WLC [21]. The coefficient of the leading term in Eq. 12 is \(3/2^{5/3}\approx 0.945\), which will dominate in the limit of \(R\to 0\). This compares reasonably well with the theoretical prediction for a surface-confined chain, Eq. 38 in a previous study [23], which predicts a coefficient of \(0.8416\) by solving an exact nonlinear Fokker-Planck equation. We also note that the coefficient of the leading term of Eq. 12 is exactly twice the value in Eq. 8 of an earlier report on DNA confined to the interior of a cylinder [25]. This agreement, along with the accurate prediction of the scaling coefficient in the previously studied case of spherical confinement using the mean field theory [27], suggests that even the calculated numerical coefficients may generally be reliable.
The scaling is confirmed by the numerical calculation for \(l_{p}/R\gtrsim 10\) at \(f=0\)
Figure 3: (A) Numerical roots for the extensive component of the mean field free energy for \(R\beta f=0\) (black stars), 1 (purple triangles), 10 (red circles), and 100 (blue squares) as \(l_{p}\) is varied and \(R\) held fixed. The scalings derived for the three asymptotic regimes are indicated by the gray dashed lines. The free energy is shifted by \(L/l_{f}\) to counter the leading term in Eq. 15. (B) The coefficients \(\alpha\) for the curves in (A) show that the scalings \({\cal F}\sim l_{p}^{-1}\) and \({\cal F}\sim l_{p}^{-1/3}\) are recovered for sufficiently weak or sufficiently strong confinement, respectively. Furthermore, \({\cal F}\sim l_{p}^{-1/2}\) is observed only for very large forces.
in the black stars in Fig. 3, and for non-zero forces the scaling is recovered for sufficiently strong confinement. Fig. 3(B) shows that non-zero tensions can significantly alter the onset of the Odijk regime. Even an external tension as low as \(R/l_{f}=1\) (the purple triangles) delays the onset of the transition to the Odijk scaling, \(\alpha=\partial\log({\cal F})/\partial\log(l_{p}/R)=-\frac{1}{3}\), by orders of magnitude in \(l_{p}/R\). In particular, we note that modeling the histone as a long cylinder of radius \(R\approx 3.15\)nm and DNA with \(l_{p}\approx 50\)nm gives a free energy of confinement satisfying the Odijk scaling at \(f=0\) (since \(l_{p}/R\approx 16\)). The binding of DNA to histones is known to be interrupted by forces on the order of \(f\approx 10\)pN [19], and the unbinding of DNA already bound to histones occurs [19, 18] at forces of \(\approx 20-30\)pN (or \(7.7\lesssim\frac{R}{l_{f}}\lesssim 23.0\)). The intermediate external tension in the red circles of Fig. 3 (with \(R/l_{f}=10\)) falls within this range, indicating that the Odijk scaling may not be discernible under tensile forces over a wide range of biologically relevant conditions. It is interesting to note that confinement does not have a similarly strong effect on the scaling of the extension of a WLC under tension. The extension of the chain for small \(f\) and large \(l_{p}/R\) becomes
\[\frac{\langle Z\rangle}{L}\approx\frac{4l_{p}/l_{f}}{1+\left(\frac{2R}{l_{p}} \right)^{2/3}}, \tag{13}\]
with the variations in the denominator being weak due to the approximation \(l_{p}/R\gg 1\). Note that the scaling is linear in \(l_{p}\), despite the emergence of the deflection length as the dominant length scale in the free energy in Eq. 12. We expect that even stiff chains will deviate only slightly from linear scaling (confirmed by the red circles and blue squares of Fig. 2(B)). The leading term in Eq. 13 differs from the extension of a weakly confined chain (Eq. 11) by a factor of 9, which implies that strongly confined chains are expected to have a greater average extension at the same external tension than weakly confined chains (a physically sensible result). This is confirmed in Fig. 4, which shows that a decrease in the radius of the cylinder at fixed \(l_{p}\) decreases the midpoint of extension (\(f_{m}\), the force at which \(\langle Z\rangle/L=0.5\)) from \(l_{p}\beta f_{m}\sim 2\) by nearly an order of magnitude. For parameters matching the wrapping of DNA around a histone core, with \(l_{p}/R\approx 16\) and \(R\approx 3.15\)nm, we predict that the midpoint of the transition is greatly reduced from its unconfined value, with \(f_{m}(R\rightarrow\infty)\approx 0.16\)pN for the unconfined chain [28] and \(f_{m}(R=3.15)\approx 0.025\)pN.
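A quick unit-conversion check of the nucleosome numbers quoted above is sketched below; the value of \(k_{B}T\) at room temperature is an assumption not stated in the text.

```python
# Quick check of the quoted nucleosome numbers. Assumption: k_B T ~ 4.11 pN nm
# at room temperature (not specified in the text); R = 3.15 nm, l_p = 50 nm.
kBT = 4.11          # pN nm (assumed)
R, lp = 3.15, 50.0  # nm
print(f"l_p/R = {lp / R:.1f}")                    # ~16, Odijk regime at f = 0
for f in (10.0, 20.0, 30.0):                      # pN
    l_f = kBT / f                                 # Pincus length in nm
    print(f"f = {f:4.1f} pN -> l_f = {l_f:.3f} nm, R/l_f = {R / l_f:.1f}")
# f = 10-30 pN gives R/l_f between roughly 7.7 and 23, as quoted in the text.
```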
### Confined chains under high tension
Finally, we consider the limit of external tensions that are strong enough to be comparable to confinement effects, satisfying \(\frac{R}{l_{f}}\gtrsim l_{p}/R\). To determine the asymptotic form of the free energy in this limit, we define \(\tilde{l}_{p}=l_{p}/R\equiv\sigma\times R/l_{f}=\sigma\varphi\) for some unconstrained \(\sigma\) (assumed finite, but not restricted to be large or small), and expand \(\lambda\) in the limit of large \(\varphi\). In this limit, it is straightforward to show that \(\lambda R\approx\frac{\varphi}{2}+\frac{3}{8\sqrt{\sigma}}+\frac{(9\sigma-8)}{32\sigma\varphi}\) to leading order in \(\varphi\gg 1\). In this high-tension limit, the free energy becomes,
\[\frac{\mathcal{F}}{L} \approx -\frac{\varphi}{R}+\frac{3}{2R\sqrt{\sigma}}+\frac{1}{2R\varphi} \left(1+\frac{9}{16\sigma}\right) \tag{14}\] \[= -\beta f+\frac{3}{2}\sqrt{\frac{\beta f}{l_{p}}}+\frac{9}{32l_{p} }+\frac{1}{2\beta fR^{2}}. \tag{15}\]
Note that the free energy here does not depend on the deflection length \(l_{d}\sim(l_{p}R^{2})^{1/3}\) that was seen for strongly confined chains with low tension. Thus, a strong force significantly alters the free energy of confinement. The predicted scaling is confirmed by the blue squares in Fig. 3 for very large forces (with \(R\beta f=100\)). We do not observe the expected limits of \(\mathcal{F}\sim(l_{p}/l_{f})^{-1/2}\) or \(\mathcal{F}\sim(l_{p}R^{2})^{-1/3}\) for strong confinement and intermediate tension (the purple triangles or red circles with \(R/l_{f}=1\) or \(10\), respectively), but instead
Figure 4: Extension \(\langle Z\rangle/L\) for fixed \(l_{p}=1\) as \(R\) and \(f\) are varied. The midpoint of the force extension curve for sufficiently large \(R/l_{p}\sim 10^{3}\) occurs near \(l_{p}\beta f_{m}\sim 2\), but is reduced by nearly a factor of \(8.8\) for small \(R/l_{p}\sim 10^{-3}\) to \(l_{p}\beta f_{m}\sim 0.23\), in good agreement with the predicted reduction by a factor of \(9\) between weakly and strongly confined chains.
an extended crossover region between the expected scaling laws emerges. The average extension of the chain in this limit is similar to the unconfined WLC, with
\[\frac{\langle Z\rangle}{L}=1-\frac{3}{4\sqrt{l_{p}\beta f}}, \tag{16}\]
with the highest order dependence on the confinement radius scaling as \((\frac{R}{l_{f}})^{-2}\), which is assumed to be small in this limit. Note that the inextensibility of the chain is respected in the mean field theory (since \(\langle Z\rangle\to L\) as \(f\rightarrow\infty\)), and that Eq. 16 (valid for sufficiently high forces) is identical to that of the unconfined chain. The approach to full extension is shown in Fig. 2(B), confirming the scaling of \(1-\langle Z\rangle/L\sim f^{-1/2}\) for large \(f\) regardless of the strength of confinement. Deviations from this scaling occur only for much weaker forces (with \(l_{p}\beta f\lesssim 1\)) for strongly confined chains.
## 4 Conclusions
In this paper, we calculated the free energy and linear extension of a wormlike chain confined to the surface of a cylinder with an applied external tension using a mean field approach. We conclude with the following additional comments. (1) Our method recovers the Odijk scaling of the free energy in the absence of force, and the one-dimensional extension profile for a WLC in the limit of small cylinder radius. (2) The coefficient of the leading term in the free energy expression in Eq. 12 obtained here is fairly close to the result obtained previously [23], which shows that the mean field theory not only predicts the scaling relation but also yields coefficients that are fairly accurate. (3) For a nucleosome, \(\frac{l_{p}}{R}\) lies in the range (5-10) depending on the concentration and valence of counterions. Moreover, experiments [18] show that the first stage of DNA unwrapping occurs at \(f\approx(3-5)\) pN. For these parameters, the theory predicts that \(\alpha=\partial\log(\mathcal{F})/\partial\log(l_{p}/R)\approx-0.5\) (red curve in Fig. 3B), which corresponds to strong external tension and modest confinement.
**Acknowledgements:** DT is grateful to Fyl Pincus for free lessons, with historical connotations, on polymer physics. This work was supported by grants from the National Science Foundation (CHE 2320256, PHY 2014141 and PHY 2019745) and the Welch Foundation through the Collie-Welch Chair (F-0019).
|
2303.09527
|
Fairness-aware Differentially Private Collaborative Filtering
|
Recently, there has been an increasing adoption of differential privacy
guided algorithms for privacy-preserving machine learning tasks. However, the
use of such algorithms comes with trade-offs in terms of algorithmic fairness,
which has been widely acknowledged. Specifically, we have empirically observed
that the classical collaborative filtering method, trained by differentially
private stochastic gradient descent (DP-SGD), results in a disparate impact on
user groups with respect to different user engagement levels. This, in turn,
causes the original unfair model to become even more biased against inactive
users. To address the above issues, we propose \textbf{DP-Fair}, a two-stage
framework for collaborative filtering based algorithms. Specifically, it
combines differential privacy mechanisms with fairness constraints to protect
user privacy while ensuring fair recommendations. The experimental results,
based on Amazon datasets, and user history logs collected from Etsy, one of the
largest e-commerce platforms, demonstrate that our proposed method exhibits
superior performance in terms of both overall accuracy and user group fairness
on both shallow and deep recommendation models compared to vanilla DP-SGD.
|
Zhenhuan Yang, Yingqiang Ge, Congzhe Su, Dingxian Wang, Xiaoting Zhao, Yiming Ying
|
2023-03-16T17:44:39Z
|
http://arxiv.org/abs/2303.09527v1
|
# Fairness-aware Differentially Private Collaborative Filtering
###### Abstract.
Recently, there has been an increasing adoption of differential privacy guided algorithms for privacy-preserving machine learning tasks. However, the use of such algorithms comes with trade-offs in terms of algorithmic fairness, which has been widely acknowledged. Specifically, we have empirically observed that the classical collaborative filtering method, trained by differentially private stochastic gradient descent (DP-SGD), results in a disparate impact on user groups with respect to different user engagement levels. This, in turn, causes the original unfair model to become even more biased against inactive users. To address the above issues, we propose **DP-Fair**, a two-stage framework for collaborative filtering based algorithms. Specifically, it combines differential privacy mechanisms with fairness constraints to protect user privacy while ensuring fair recommendations. The experimental results, based on Amazon datasets, and user history logs collected from Etsy, one of the largest e-commerce platforms, demonstrate that our proposed method exhibits superior performance in terms of both overall accuracy and user group fairness on both shallow and deep recommendation models compared to vanilla DP-SGD.
Collaborative Filtering, Fairness, Differential Privacy +
[MISSING_PAGE_POST]
unfairness aggravation brought by DP-SGD, called **DP-Fair**. Specifically, the first stage of DP-Fair applies noise perturbation to the user and item embeddings separately, thereby improving utility over vanilla DP-SGD while still providing privacy guarantees. In the second stage, we apply a post-processing step to enforce user group fairness on the final recommendation list by solving an integer programming problem. Our experimental results on Amazon benchmark datasets and user history logs collected on Etsy demonstrate that our proposed algorithm outperforms vanilla DP-SGD based collaborative filtering in terms of both overall recommendation performance and user-side group fairness.
## 2. Preliminaries
### Collaborative Filtering Models
Let \(\mathcal{U}=\{u_{1},\cdots,u_{n_{1}}\}\) and \(\mathcal{V}=\{v_{1},\cdots,v_{n_{2}}\}\) be the sets of users and items, respectively. Let \(\mathcal{H}_{u}\subseteq\mathcal{V}\) denote the set of items that user \(u\) had positive interactions with. It is worth noting that we treat all interactions as binary implicit feedback (e.g., one if there is a click). Explicit feedback such as a rating \(r\) (e.g., 1-5) is converted to one if \(r>3\) and zero otherwise. Let \(\mathcal{H}\) denote the collection of all \(\mathcal{H}_{u}\). Let \(n\) be the total number of positive interactions, i.e., \(|\mathcal{H}|=n\). Let \(\mathbf{x}_{u}\in\mathbb{R}^{n_{1}}\) and \(\mathbf{x}_{v}\in\mathbb{R}^{n_{2}}\) be the one-hot encodings of user \(u\) and item \(v\), respectively. Let \(\mathbf{z}_{u}=U\mathbf{x}_{u}\in\mathbb{R}^{d}\) and \(\mathbf{z}_{v}=V\mathbf{x}_{v}\in\mathbb{R}^{d}\) denote the corresponding latent embeddings. We also employ \(W\) to denote any other potential feature extraction parameters and \(\Theta=(U,V,W)\) to denote all learnable parameters.
A collaborative filtering latent factor model \(f_{\Theta}:\mathcal{U}\times\mathcal{V}\rightarrow\mathbb{R}\) is learned to infer the implicit feedback pattern once a learning-to-rank loss \(l\) is given. In this work, we focus on the classic Bayesian Personalized Ranking (BPR) (Srivastava et al., 2014) loss, as follows,
\[l(f_{\Theta})=\sum_{u,v,v^{\prime}}-\log\sigma(f_{\Theta}(\mathbf{x}_{u}, \mathbf{x}_{v})-f_{\Theta}(\mathbf{x}_{u},\mathbf{x}_{v^{\prime}}))+\frac{ \lambda}{2}\|\Theta\|^{2},\]
where \(\sigma\) is the sigmoid function, \(\lambda\) is the regularization parameter, \(v\in\mathcal{H}_{u}\), and \(v^{\prime}\in\mathcal{H}_{u}^{-}=\mathcal{V}\backslash\mathcal{H}_{u}\) denotes an item for which user \(u\) did not provide implicit feedback. Since the positive interactions are usually much sparser, we slightly abuse notation and let \(\mathcal{H}^{-}\) also denote a uniformly sub-sampled subset of itself such that \(|\mathcal{H}^{-}|=|\mathcal{H}|=n\). After learning, a recommendation list \(\mathcal{R}^{k}_{u}\subseteq\mathcal{V}\) for each user \(u\) is produced based on the top-\(k\) ranking scores \(\{f_{\Theta}(\mathbf{x}_{u},\mathbf{x}_{v}):v\in\mathcal{V}\}\).
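For concreteness, a minimal sketch of the BPR objective on (user, positive item, negative item) triples is given below; it assumes a plain matrix factorization scorer and PyTorch, neither of which is prescribed by the text beyond the BPR-MF baseline used later.

```python
# Minimal BPR sketch (assumptions: plain matrix factorization scorer, PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MF(nn.Module):
    def __init__(self, n_users, n_items, d=32):
        super().__init__()
        self.U = nn.Embedding(n_users, d)   # user latent factors
        self.V = nn.Embedding(n_items, d)   # item latent factors

    def score(self, u, v):
        return (self.U(u) * self.V(v)).sum(-1)   # f_Theta(x_u, x_v)

def bpr_loss(model, u, v_pos, v_neg, lam=1e-4):
    """-log sigma(f(u, v) - f(u, v')) summed over triples, plus L2 regularization."""
    diff = model.score(u, v_pos) - model.score(u, v_neg)
    reg = 0.5 * lam * (model.U.weight.norm() ** 2 + model.V.weight.norm() ** 2)
    return -F.logsigmoid(diff).sum() + reg

# toy usage with random indices
model = MF(n_users=100, n_items=200)
u = torch.randint(0, 100, (16,))
v_pos = torch.randint(0, 200, (16,))
v_neg = torch.randint(0, 200, (16,))
loss = bpr_loss(model, u, v_pos, v_neg)
loss.backward()
```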
### Differential Privacy and DP-SGD
We first introduce the definition of differential privacy, which is given as follows.
**Definition 1**.: For any \(\epsilon,\delta>0\), a (randomized) algorithm \(\mathcal{A}\) is said to be \((\epsilon,\delta)\)-differentially private if, for all neighboring datasets \(D,D^{\prime}\) that differ by at most one example and for all possible output sets \(\Theta\) of \(\mathcal{A}\), there holds
\[\mathbb{P}[\mathcal{A}(D)\in\Theta]\leq\exp(\epsilon)\mathbb{P}[\mathcal{A}( D^{\prime})\in\Theta]+\delta.\]
where \(\epsilon\) denotes the privacy budget (smaller values indicate a stronger privacy guarantee) and \(\delta\) denotes the tolerated probability that the privacy guarantee fails. In practice, it is often required that \(\delta\ll\frac{1}{n}\). Since users' sensitive information can be inferred from the interaction data, the private dataset in this case is \(D=\mathcal{H}\cup\mathcal{H}^{-}\).
At each iteration \(t\), DP-SGD performs gradient norm clipping with some bound \(C\) and Gaussian noise addition with variance \(\sigma^{2}\) on the received gradients \(G_{t}\), and then performs regular SGD on the model parameter \(\Theta_{t}\) based on the new gradients \(\tilde{G}_{t}\). If one randomly samples a batch \(\mathcal{B}_{t}\subseteq\mathcal{H}\) of size \(m\), then for each example \(S\in\mathcal{B}_{t}\), DP-SGD runs as
\[\bar{G}_{t}(S)=G_{t}(S)/\max\{1,\|G_{t}(S)\|_{2}/C\}, \tag{1}\] \[\tilde{G}_{t}(S)=\bar{G}_{t}(S)+\mathcal{N}(0,\sigma^{2}\mathrm{I}). \tag{2}\]
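The per-example clipping and noise-addition steps in Eqs. (1)-(2) can be sketched as follows. This is a simplified NumPy illustration, not the Opacus implementation used in the experiments (which, e.g., ties the noise scale to \(C\) and adds a single noise draw to the clipped sum); the batch averaging and learning rate are assumptions.

```python
# Simplified sketch of Eqs. (1)-(2): per-example gradient clipping with bound C,
# Gaussian noise addition, then an ordinary SGD update on the averaged noisy
# gradients. Illustration only, not the Opacus implementation.
import numpy as np

def dp_sgd_step(theta, per_example_grads, C, sigma, lr, rng):
    """per_example_grads: shape (m, dim), one gradient G_t(S) per example S."""
    noisy = np.empty_like(per_example_grads)
    for i, g in enumerate(per_example_grads):
        g_clipped = g / max(1.0, np.linalg.norm(g) / C)        # Eq. (1)
        noisy[i] = g_clipped + rng.normal(0.0, sigma, g.shape)  # Eq. (2)
    return theta - lr * noisy.mean(axis=0)                     # SGD update

rng = np.random.default_rng(0)
theta = np.zeros(8)
toy_grads = rng.normal(size=(32, 8))   # toy per-example gradients for one batch
theta = dp_sgd_step(theta, toy_grads, C=1.0, sigma=1.0, lr=0.1, rng=rng)
```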
### User-side Fairness
Let \(\mathcal{R}^{K}_{u}\) denote the original recommendation list for user \(u\). We utilize the non-parity unfairness measure initially introduced in Kamishima et al. (2014), stated as follows:
**Definition 2**.: Given a recommendation evaluation metric \(\mathcal{M}\), the user group fairness with respect to groups \(A\) and \(B\) is defined as
\[\mathbb{E}_{u}[\mathcal{M}(\mathcal{R}^{K}_{u})|u\in A]=\mathbb{E}_{u}[ \mathcal{M}(\mathcal{R}^{K}_{u})|u\in B].\]
Empirically, the user group fairness is measured by
\[\mathcal{F}_{U}(\mathcal{R}^{K};A,B)=\Big{|}\frac{1}{|A|}\sum_{u\in A}\mathcal{ M}(\mathcal{R}^{K}_{u})-\frac{1}{|B|}\sum_{u\in B}\mathcal{M}(\mathcal{R}^{K}_{u}) \Big{|}.\]
Due to different user activity levels, recommender systems would usually underperform against users who have less historical interactions. This bias can be amplified when differential privacy is incorporated in the model as demonstrated in (Bang et al., 2017). Following the classical 80/20 rule, we select the top 20% users based on their engagement activities as the frequent/active group \(A\) and the rest as the infrequent/inactive group \(B\).
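A small sketch of the empirical group-fairness gap \(\mathcal{F}_{U}\) for an arbitrary per-user metric \(\mathcal{M}\) (e.g., F1@k or NDCG@k), with the 80/20 split described above, might look like the following; all variable names are illustrative.

```python
# Sketch: empirical fairness gap F_U(R^K; A, B) = |mean_A M - mean_B M|, with
# the active group A chosen as the top 20% of users by interaction count.
import numpy as np

def split_active_inactive(interaction_counts, top_frac=0.2):
    order = np.argsort(interaction_counts)[::-1]          # most active first
    n_active = max(1, int(top_frac * len(order)))
    active = np.zeros(len(order), dtype=bool)
    active[order[:n_active]] = True
    return active                                          # True = group A

def fairness_gap(per_user_metric, active_mask):
    """per_user_metric: M(R^K_u) for every user u (e.g., F1@k or NDCG@k)."""
    return abs(per_user_metric[active_mask].mean()
               - per_user_metric[~active_mask].mean())

# toy usage
rng = np.random.default_rng(0)
counts = rng.poisson(20, size=1000)
metric = rng.uniform(0, 1, size=1000)
print(f"F_U = {fairness_gap(metric, split_active_inactive(counts)):.4f}")
```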
## 3. Fairness-Aware Differentially Private Collaborative Filtering
There are two stages in our proposed framework, DP-Fair. Given a private dataset \(D\) with user provided privacy budgets \((\epsilon,\delta)\), the first
Figure 1. NDCG@10 (%) on various datasets (See descriptions in Section 4) between active users (Blue) and inactive users (Green). In each subplot, left two bars labeled with Non-DP are NeuMF model trained by standard SGD. Right two bars are learned by DP-SGD with \(\epsilon=1\).
stage applies DP-SGD for training the BPR loss and providing the privacy guarantee. It is worth noting that at Line 6 we replace the uniform DP-SGD step with separate ones for the user and item parameters. This procedure can avoid unnecessary norm clipping and noise addition (Hou et al., 2019; Wang et al., 2020), since user and item gradients may differ in scale during training. Overall, this tailored DP step leads to better utility than vanilla DP-SGD because the gradients are less perturbed.
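One plausible reading of this separated clipping step is sketched below; the algorithm listing referenced as Line 6 is not reproduced in the text, so the parameter names and noise placement here are assumptions.

```python
# Sketch (assumed reading of the separated clipping step): user- and item-
# embedding gradients are clipped with their own bounds C_u and C_v before
# Gaussian noise is added, instead of clipping one concatenated gradient
# with a single global bound.
import numpy as np

def clip(g, C):
    return g / max(1.0, np.linalg.norm(g) / C)

def separated_dp_grads(grad_user, grad_item, C_u, C_v, sigma, rng):
    """Per-example gradients w.r.t. the user and item embedding blocks."""
    noisy_u = clip(grad_user, C_u) + rng.normal(0.0, sigma, grad_user.shape)
    noisy_v = clip(grad_item, C_v) + rng.normal(0.0, sigma, grad_item.shape)
    return noisy_u, noisy_v
```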
In the second stage, in order to mitigate the unfair treatment, we employ a post-processing approach (Hou et al., 2019). At Line 12, once top-\(K\) ranking lists \(\mathcal{R}^{K}\) are available, we re-rank them by maximizing the sum of prediction scores under the user group fairness constraint
\[\max_{\mathcal{R}^{k}} \sum_{u\in\mathcal{U}}\sum_{v\in\mathcal{R}^{k}_{u}}f_{\Theta}( \mathbf{x}_{u},\mathbf{x}_{v})\] \[\text{s.t.} \mathcal{F}_{U}(\mathcal{R}^{k};A,B)\leq\alpha\text{ and }\mathcal{R}^{k}_{u}\subseteq\mathcal{R}^{K}_{u}\quad\forall u\in \mathcal{U}. \tag{3}\]
One can consider \(\mathcal{R}^{K}\) as a binary matrix \(R\in\mathbb{R}^{n_{1}\times K}\) where \(R_{u,v}=1\) means item \(v\) is recommended to user \(u\). Hence the optimization problem (3) can be translated into, and solved as, a \(0-1\) integer programming problem. For the ranking metric \(\mathcal{M}\), we pick the commonly used F1 score, which makes the computation more efficient than NDCG since it avoids the position-discounting effect. It is worth noting that since this method is a post-processing step based on the recommendation list only, it will not break the differential privacy guarantee over the learned parameters \(f_{\Theta}\) (Bordes and Kastner, 2019).
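A compact sketch of this 0-1 re-ranking program is shown below, using PuLP as a stand-in for the Gurobi solver used in Section 4. The input structures and the simplification that, for a fixed list size \(k\), F1@k of user \(u\) equals \(2\cdot\text{hits}_u/(k+|\text{relevant}_u|)\) (and is therefore linear in the selection variables) are assumptions made here for illustration.

```python
# Sketch of the 0-1 re-ranking program in Eq. (3), with PuLP standing in for
# the Gurobi solver mentioned in the experiments. Assumption: for fixed k,
# F1@k of user u is linear in the selection variables, so the constraint
# |mean_A F1 - mean_B F1| <= alpha becomes two linear constraints.
import pulp

def rerank(scores, candidates, relevant, group_A, group_B, k, alpha):
    # scores[u][v]: f_Theta(u, v); candidates[u]: the top-K list R^K_u
    prob = pulp.LpProblem("dp_fair_rerank", pulp.LpMaximize)
    x = {(u, v): pulp.LpVariable(f"x_{u}_{v}", cat="Binary")
         for u in candidates for v in candidates[u]}

    # objective: total predicted score of the selected items
    prob += pulp.lpSum(scores[u][v] * x[u, v] for (u, v) in x)

    # exactly k items kept per user, drawn from R^K_u
    for u in candidates:
        prob += pulp.lpSum(x[u, v] for v in candidates[u]) == k

    def mean_f1(group):
        terms = [2.0 / (k + len(relevant[u]))
                 * pulp.lpSum(x[u, v] for v in candidates[u] if v in relevant[u])
                 for u in group]
        return pulp.lpSum(terms) * (1.0 / len(group))

    gap = mean_f1(group_A) - mean_f1(group_B)
    prob += gap <= alpha          # |gap| <= alpha encoded as two constraints
    prob += -gap <= alpha
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return {u: [v for v in candidates[u] if x[u, v].value() > 0.5]
            for u in candidates}
```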
## 4. Experiments
### Experimental Setup
#### 4.1.1. **Datasets**
We utilized two distinct sources of data. Firstly, we employ a benchmark dataset, namely the Amazon review dataset (5-core), which includes product reviews from the _Grocery & Gourmet Food_ and _Beauty_ categories (Krishnam et al., 2019). Since both are encoded with explicit feedback through ratings, we transform them into binary feedback. Secondly, we collect and sample one month's worth of user history logs from two categories--_Home & Living_ and _Craft Supplies & Tool_, on Etsy, one of the largest e-commerce platforms. We consider users' clicks as positive feedback in both datasets.
#### 4.1.2. **Implementation Details**
In this experiment, we perform a randomized 8:1:1 split of the datasets to create training, validation, and test sets. We consider both shallow and deep recommendation models, namely BPR-MF (Hou et al., 2019) and NeuMF (Wang et al., 2020). To find the best clipping bounds, we follow the DP tuning strategy in McMahan et al. (Mahan et al., 2019) via pre-training. We set the pre-ranker \(K=20\) and the re-ranker \(k=10\). We fix the privacy parameter \(\delta=\frac{1}{n^{1.5}}\) and employ the Opacus1 module to conduct DP-SGD steps. We also employ the Gurobi2 solver to solve the re-ranking problem in Eq. (3).
\begin{table}
\begin{tabular}{l|l|l|l|l} \hline \hline Name & \# User & \# Item & \# Interactions & Sparsity \\ \hline Home \& Living & 6,538 & 3,924 & 100,855 & 99.61\% \\ Craft Supplies \& Tool & 4,488 & 5,569 & 159,445 & 99.36\% \\ Grocery \& Gourmet Food & 14,681 & 8,713 & 151,254 & 99.88\% \\ Beauty & 22,363 & 12,101 & 198,502 & 99.93\% \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of datasets.
Figure 2. Top: Performance results with respect to different values of the clipping bound \(C\) in terms of NDCG. Bottom: Performance results with respect to different levels of the fairness constraint \(\alpha\) in terms of F1.
### Experimental Analysis
#### 4.2.1. **Main Results.**
Based on Table 2, several conclusions can be drawn. Firstly, the results are reported for three different privacy budget settings: the non-private setting (\(\epsilon=\infty\)), a loose privacy budget setting (\(\epsilon=10\)), and a tight budget setting (\(\epsilon=1\)). As expected and consistent with the literature, the utility in terms of NDCG and F1 degrades as the privacy constraint is tightened. Secondly, our proposed DP-Fair algorithm outperforms DP-SGD in terms of overall utility for both NDCG and F1, regardless of the privacy budget \(\epsilon\). This improvement is attributed to our custom noise addition and gradient clipping technique, which is also applicable in the non-private setting where the algorithm is identical to SGD with a re-ranking step. While it may seem counter-intuitive that enforcing fairness would improve utility, our results show that this is due to the improvement in the utility of inactive users, who make up 80% of all users, thereby boosting the overall performance. Finally, we observe a significant reduction in the \(\mathcal{F}_{U}\) gap by DP-Fair over DP-SGD. This can be attributed to the post-processing step in DP-Fair, which identifies \(k=10\) fairness-aware items out of the \(K=20\) list.
#### 4.2.2. **Hyperparameter Effects.**
In this experiment, we fix the privacy budget \(\epsilon\) to 1. Our first objective is to examine the selection of the clipping bound. To simplify the analysis, we impose \(C_{u}=C_{v}=C\). Based on the results presented in Figure 2, we conclude that both excessively small and excessively large values of \(C\) have a negative impact on NDCG. When the clipping parameter is too small, the average clipped gradient can be biased. Conversely, increasing the norm bound \(C\) leads to the addition of more noise to the gradients. In addition, we investigate the impact of the fairness level selection. As the fairness requirements become stricter, the performance of the active group decreases, while that of the inactive group improves.
## 5. Conclusions
In this paper, we empirically observe that the unfairness gap between active and inactive users is widened by the incorporation of DP-SGD in classical collaborative-filtering-based recommendation models. We propose a custom differentially private gradient mapping combined with an integer programming scheme to enhance fairness between active and inactive users. Experiments on real-world e-commerce datasets show that DP-Fair outperforms DP-SGD in both utility and fairness metrics.
\begin{table}
\begin{tabular}{l l l l c c c c c c c c c c c} \hline \hline \multirow{3}{*}{Model} & \multirow{3}{*}{Metric} & \multicolumn{1}{c}{} & \multicolumn{4}{c}{\(\epsilon=\infty\)} & \multicolumn{4}{c}{\(\epsilon=10\)} & \multicolumn{4}{c}{\(\epsilon=1\)} \\ \cline{3-14} & & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Total} & \multicolumn{1}{c}{Act.} & \multicolumn{1}{c}{\(\uparrow\)} & \multicolumn{1}{c}{InAct.} & \multicolumn{1}{c}{\(\uparrow\)} & \multicolumn{1}{c}{\(\mathcal{F}_{U}\)} & \multicolumn{1}{c}{Total} & \multicolumn{1}{c}{Act.} & \multicolumn{1}{c}{\(\uparrow\)} & \multicolumn{1}{c}{InAct.} & \multicolumn{1}{c}{\(\uparrow\)} & \multicolumn{1}{c}{\(\mathcal{F}_{U}\)} & \multicolumn{1}{c}{Total} & \multicolumn{1}{c}{Act.} & \multicolumn{1}{c}{\(\uparrow\)} & \multicolumn{1}{c}{InAct.} & \multicolumn{1}{c}{\(\uparrow\)} & \multicolumn{1}{c}{InAct.} & \multicolumn{1}{c}{\(\uparrow\)} & \multicolumn{1}{c}{\(\mathcal{F}_{U}\)} & \\ \hline \multicolumn{14}{c}{Home \& Living} \\ \hline \multirow{3}{*}{BPR-MF} & NDCG & DP-SGD & 11.57 & 15.17 & 10.73 & 4.44 & 10.77 & 14.93 & 9.67 & 5.25 & 10.21 & 14.44 & 9.15 & 5.29 \\ & & DP-Fair & **12.04** & 14.18 & **11.52** & **2.57** & **11.19** & 14.09 & **10.44** & **3.64** & **10.61** & 13.55 & **9.87** & **3.68** \\ & F1 & DP-SGD & 4.85 & 7.17 & 4.39 & 2.77 & 4.56 & 6.96 & 3.90 & 2.97 & 4.31 & 6.84 & 3.67 & 3.17 \\ & & DP-Fair & **4.89** & 6.57 & **4.51** & **2.07** & **4.61** & 6.38 & **4.11** & **2.27** & **4.33** & 6.30 & **3.83** & **2.47** \\ \hline \multirow{3}{*}{NeuMF} & NDCG & DP-SGD & 12.24 & 17.36 & 11.12 & 6.21 & 11.61 & 16.73 & 10.17 & 7.19 & 11.19 & 16.64 & 9.82 & 6.82 \\ & & DP-Fair & **12.93** & 16.12 & **12.13** & **3.99** & **12.05** & 15.78 & **11.12** & **4.66** & **11.59** & 15.01 & **10.74** & **4.27** \\ & F1 & DP-SGD & **5.03** & 7.65 & 4.37 & 3.28 & **4.78** & 7.47 & 4.10 & 3.36 & 4.58 & 7.61 & 3.82 & 3.79 \\ & & DP-Fair & **5.08** & 7.28 & **4.53** & **2.75** & **4.81** & 7.20 & **4.21** & **2.99** & **4.64** & 6.96 & **4.05** & **2.91** \\ \hline \multicolumn{14}{c}{Craft Supplies \& Tools} \\ \hline \multirow{3}{*}{BPR-MF} & NDCG & DP-SGD & 12.63 & 16.51 & 11.72 & 4.79 & 11.76 & 16.28 & 10.57 & 5.72 & 11.15 & 15.75 & 10.00 & 5.75 \\ & & DP-Fair & **13.59** & 16.02 & **13.01** & **3.01** & **12.64** & 15.91 & **11.80** & **4.11** & **11.98** & 15.32 & **11.15** & **4.17** \\ & F1 & DP-SGD & 5.30 & 7.89 & 4.80 & 3.09 & 4.99 & 7.30 & 4.27 & 3.61 & 4.71 & 7.48 & 4.02 & 3.45 \\ & & DP-Fair & **5.36** & 7.22 & **4.95** & **2.16** & **5.06** & 7.01 & **4.52** & **2.49** & **4.75** & 6.91 & **4.21** & **2.70** \\ \hline \multirow{3}{*}{NeuMF} & NDCG & DP-SGD & 13.57 & 19.12 & 12.37 & 6.74 & 12.88 & 18.37 & 11.32 & 7.05 & 12.41 & 18.35 & 10.93 & 7.41 \\ & & DP-Fair & **14.55** & 17.76 & **13.65** & **4.11** & **13.56** & 17.62 & **12.51** & **5.11** & **13.16** & 17.49 & **12.08** & **5.40** \\ & F1 & DP-SGD & 5.91 & 9.01 & 5.14 & 3.86 & 5.62 & 8.98 & 4.78 & 4.19 & 5.39 & 8.71 & 4.56 & 4.15 \\ & & DP-Fair & **5.98** & 8.55 & **5.33** & **3.32** & **5.64** & 8.41 & **4.95** & **3.46** & **5.46** & **4.71** & **3.73** \\ \hline \multicolumn{14}{c}{Grocery \& Gourmet Food} \\ \hline \multirow{3}{*}{BPR-MF} & NDCG & DP-SGD & 10.65 & 14.24 & 9.80 & 4.45 & 9.90 & 14.04 & 8.81 & 5.23 & 9.21 & 13.57 & 8.12 & 5.45 \\ & & DP-Fair & **11.34** & 13.88 & **10.85** & **3.03** & **10.54** & 13.27 & **9.70** & **3.58** & **9.79** & 12.64 & **9.08** & **3.56** \\ & F1 & DP-SGD & 4.22 & 6.04 & 3.79 & 2.26 & 4.03 & 5.92 & 3.52 & 2.39 & 3.78 & 5.74 
& 3.28 & 2.46 \\ & & DP-Fair & **4.28** & 5.78 & **3.91** & **1.87** & **4.09** & 5.71 & **3.67** & **2.05** & **3.82** & 5.44 & **3.41** & **2.03** \\ \hline \multirow{3}{*}{NeuMF} & NDCG & DP-SGD & 11.40
|
2301.07373
|
When every finitely generated ideal is S-principal
|
In this paper, we introduce the concept of $S$-B\'ezout ring, as a
generalization of B\'ezout ring. We investigate the relationships between
$S$-B\'ezout and other related classes of rings. We establish some
characterizations of $S$-B\'ezout rings. We study this property in various
contexts of commutative rings including direct product, localization, trivial
ring extensions and amalgamation rings. Our results allow us to construct new
original classes of $S$-B\'ezout rings subject to various ring theoretical
properties. Furthermore, we introduce the notion of nonnil $S$-B\'ezout ring
and establish some characterizations.
|
Mohamed Chhiti, Salah Eddine Mahdou, Moutu Abdou Salam Moutui
|
2023-01-18T08:42:11Z
|
http://arxiv.org/abs/2301.07373v2
|
# On S-Bezout rings
###### Abstract.
In this paper, we introduce the concept of \(S\)-Bezout ring, as a generalization of Bezout ring. We investigate the relationships between \(S\)-Bezout and other related classes of rings. We establish some characterizations of \(S\)-Bezout rings. We study this property in various contexts of commutative rings including direct product, localization, trivial ring extensions and amalgamation rings. Our results allow us to construct new original classes of \(S\)-Bezout rings subject to various ring theoretical properties. Furthermore, we introduce the notion of nonnil \(S\)-Bezout ring and establish some characterizations.
Key words and phrases:\(S\)-Bezout, localization, direct product, \(S\)-principal ideal, pullback, trivial ring extension, amalgamation 2010 Mathematics Subject Classification: 13A15, 13B99, 13E15
## 1. Introduction
Throughout this paper, all rings are assumed to be commutative with nonzero identity and all modules are nonzero unital. In the last forty years, there have been many extensions of classical notions used in multiplicative ideal theory, with the aim of generalizing many properties, including those of Bezout domains, to broader contexts. The notion of Bezout ring is central in commutative algebra, and enlarging this class as well as producing new original examples can be of some interest. In [4], Anderson and Zafrullah extended the class of Bezout domains in the following way: they called a domain \(A\) an almost Bezout domain if, given any two elements \(a,b\in A,\) there exists an integer \(n\geq 1\) such that the ideal \((a^{n},b^{n})\) of \(A\) is principal. Later, Mahdou, Mimouni and Moutui enlarged the notion of almost Bezout domain to rings with zero divisors. In recent years, the concept of \(S\)-property has taken an important place in commutative algebra and has drawn the attention of several authors. In [5], Anderson and Dumitrescu introduced the concept of \(S\)-finite modules, where \(S\) is a multiplicatively closed subset, as follows: an \(R\)-module \(M\) is called an \(S\)-finite module if there exist a finitely generated \(R\)-submodule \(N\) of \(M\) and \(s\in S\) such that \(sM\subseteq N.\) Also, they introduced the concept of \(S\)-Noetherian rings as follows: a ring \(R\) is called \(S\)-Noetherian if every ideal of \(R\) is \(S\)-finite. Recently, in [14], Bennis and El Hajoui investigated the \(S\)-versions of finitely presented modules and coherent modules, which are called, respectively, \(S\)-finitely presented modules and \(S\)-coherent modules.
Throughout, \(\mathcal{H}\) denotes the class of rings \(R\) whose nilradical \(Nil(R)\) is a divided prime ideal of \(R\). For a ring \(R\in\mathcal{H}\), let \(T(R)\) denote its total quotient ring, set \(K:=R_{Nil(R)}\), and let \(\phi:T(R)\longrightarrow K\) be the map defined by
\(\phi(\frac{a}{b})=\frac{a}{b}\) for every \(a\in R\) and every \(b\in R\setminus Z(R)\). Then \(\phi\) is a ring homomorphism from \(T(R)\) into \(K\), and \(\phi\) restricted to \(R\) is also a ring homomorphism from \(R\) into \(K\) given by \(\phi(x)=\frac{x}{1}\) for every \(x\in R\). Observe that if \(R\in\mathcal{H}\), then \(\phi(R)\in\mathcal{H}\), \(Ker(\phi)\subseteq Nil(R)\), \(Nil(T(R))=Nil(R)\), \(Nil(R_{Nil(R)})=\phi(Nil(R))=Z(\phi(R))\), \(T(\phi(R))=R_{Nil(R)}\) is quasilocal with maximal ideal \(Nil(\phi(R))\), and \(R_{Nil(R)}/Nil(\phi(R))=T(\phi(R))/Nil(\phi(R))\) is the quotient field of \(\phi(R)/Nil(\phi(R))\).
In this paper, we introduce the concept of \(S\)-Bezout ring, as a generalization of Bezout ring. We investigate the relationships between \(S\)-Bezout and other related classes of rings. We establish some characterizations of \(S\)-Bezout rings. We study this property in various contexts of commutative rings including direct product, localization, trivial ring extensions and amalgamation rings. Our results allow us to construct new original classes of \(S\)-Bezout rings subject to various ring theoretical properties. Moreover, we introduce the notion of nonnil \(S\)-Bezout ring and establish some characterizations. For a ring \(R\), we denote respectively by \(Nil(R)\), \(U(R)\), \(Z(R)\) the ideal of all nilpotent elements of \(R\), the multiplicative group of units of \(R\) and the zero divisor elements of \(R\). For an integral domain \(R\), we denote by \(qf(R)\), the quotient field of \(R\).
## 2. Main Results
Let \(R\) be a ring and let \(S\) be a multiplicatively closed subset of \(R\). Recall that a ring \(R\) is Bezout if every finitely generated ideal of \(R\) is principal. First, we introduce a weak version of Bezout ring called \(S\)-Bezout ring in the following way:
**Definition 2.1**.: _Let \(R\) be a ring and \(S\) be a multiplicative set of \(R\). Then \(R\) is said to be \(S\)-Bezout if every finitely generated ideal of \(R\) is \(S\)-principal._
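Concretely, an ideal \(I\) of \(R\) is \(S\)-principal when there exist \(s\in S\) and \(a\in R\) such that
\[sI\subseteq Ra\subseteq I;\]
this is the form in which the condition is used in the proofs below.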
It is worthwhile noting that any Bezout ring is \(S\)-Bezout for every multiplicative set \(S\) of \(R\). However, the converse is not true in general as shown in the following examples which illustrate non-Bezout \(S\)-Bezout rings.
**Example 2.2**.: _Let \(R:=\mathbb{Z}\propto(\mathbb{Z}/2\mathbb{Z})^{\infty}\) be the trivial ring extension of \(\mathbb{Z}\) by the \(\mathbb{Z}\)-module \((\mathbb{Z}/2\mathbb{Z})^{\infty}\) and \(S:=\{(2^{n},0)/n\in\mathbb{N}\}\) be a multiplicative set of \(R\). Then:_
1. \(R\) _is an_ \(S\)_-Bezout ring._
2. \(R\) _is not a Bezout ring._
**Proof.**
1. Let \(J\) be a finitely generated proper ideal of \(R\) and set \(I:=\{a\in\mathbb{Z}/(a,e)\in J\) for some \(e\in(\mathbb{Z}/2\mathbb{Z})^{\infty}\}\). Two cases are then possible: Case 1. \(I\neq 0\).
In this case, say \(I=n\mathbb{Z}\) for some \(n\in\mathbb{Z}-\{0\}\). Let \(e\in(\mathbb{Z}/2\mathbb{Z})^{\infty}\) be such that \((n,e)\in J\). Then \(J\) is \(S\)-principal, since \[(2,0)J\subseteq R(n,e)\subseteq J,\] which follows from \((2,0)J=R(2n,0)=R(n,e)(2,0)\subseteq R(n,e)\), as desired. Case 2. \(I=0\). Then \(J\) is again \(S\)-principal, since \[(2,0)J\subseteq R(0,0)\subseteq J,\] which follows from \((2,0)J\subseteq(2,0)(0\propto(\mathbb{Z}/2\mathbb{Z})^{\infty})=R(0,0)\), as desired.
2. Let \(e,f\) be linearly independent elements of the \(\mathbb{Z}\)-module \((\mathbb{Z}/2\mathbb{Z})^{\infty}\) and set \(J=R(0,e)+R(0,f)\). It is clear that \(J\) is not a principal ideal of \(R\), as desired.
**Example 2.3**.: _Let \(R_{1}\) be a Bezout ring, \(R_{2}\) be a non-Bezout ring, \(R:=R_{1}\times R_{2}\), and let \(S:=\{(1,1),(1,0)\}\) be a multiplicative set of \(R\). For instance, take \(R_{1}:=K[x]\) the polynomial ring with coefficients in a field \(K\), and \(R_{2}:=F[X,Y]\) the polynomial ring in two variables \(X,Y\) over a field \(F\). It is well known that \(R_{2}\) is a non-Bezout domain. Then:_
1. \(R\) _is an_ \(S\)_-Bezout ring._
2. \(R\) _is a non-Bezout ring._
**Proof.**
1. We claim that \(R\) is an \(S\)-Bezout ring. Indeed, let \(I:=I_{1}\times I_{2}\) be a finitely generated ideal of \(R:=R_{1}\times R_{2}\). Hence, \(I_{1}\) is a finitely generated ideal of a Bezout ring \(R_{1}\) and so \(I_{1}=R_{1}a_{1}\) for some \(a_{1}\in R_{1}\). Hence, \[(1,0)(I_{1}\times I_{2})\subseteq I_{1}\times 0(=R(a_{1},0))\subseteq I_{1} \times I_{2}\] and so \(I:=I_{1}\times I_{2}\) is an \(S\)-principal ideal, as desired.
2. \(R:=R_{1}\times R_{2}\) is a non-Bezout ring since \(R_{2}\) is a non-Bezout ring.
Recall that a ring \(R\) is called an \(S\)-PIR if every ideal is \(S\)-principal, that is for every ideal \(I\) there exist a principal ideal \(J\) and \(s\in S\) such that \(sI\subseteq J\subseteq I\). Clearly, any \(S\)-PIR is an \(S\)-Bezout ring for any multiplicatively closed subset \(S\) of \(R\).
**Example 2.4**.: _Let \(R\) be any non-Bezout domain and let \(S:=R-\{0\}\), a multiplicative set of \(R\). Then:_
1. \(R\) _is_ \(S\)_-PIR. In particular,_ \(R\) _is_ \(S\)_-Bezout ring._
2. \(R\) _is not a Bezout ring._
**Proof.**
1. Let \(I\) be a proper ideal of \(R\) and let \(s\in I-\{0\}\). Hence, \(sI\subseteq Rs\subseteq I\) and so \(I\) is \(S\)-principal since \(Rs\) is a principal ideal of \(R\), as desired.
2. By hypothesis.
Recall that an ideal \(I\) is said to be \(n\)-generated if it can be generated by \(n\) elements. It is well known that a Bezout ring is a ring in which every \(n\)-generated (resp., \(2\)-generated) ideal is principal. Now, we generalize this result to \(S\)-Bezout rings.
**Proposition 2.5**.: _Let \(R\) be a ring and \(S\) be a multiplicative set of \(R\). Then \(R\) is an \(S\)-Bezout ring if and only if every \(2\)-generated ideal is \(S\)-principal._
**Proof.** If \(R\) is an \(S\)-Bezout ring, then every \(2\)-generated ideal is \(S\)-principal. Conversely, assume that every \(2\)-generated ideal is \(S\)-principal. It suffices to show that if every \(n\)-generated ideal is \(S\)-principal, then every \((n+1)\)-generated ideal is \(S\)-principal.
Assume that every \(n\)-generated ideal is \(S\)-principal and let \(I=\sum_{i=1}^{n+1}Ra_{i}\) be an \((n+1)\)-generated ideal of \(R\). Then there exist \(s\in S\) and \(a\in R\) such that
\[s\sum_{i=1}^{n}Ra_{i}\subseteq Ra\subseteq\sum_{i=1}^{n}Ra_{i}\]
by hypothesis since \(\sum_{i=1}^{n}Ra_{i}\) is an \(n\)-generated ideal of \(R\). Using the fact that \(sRa_{n+1}\subseteq Ra_{n+1}\), we have
\[s\sum_{i=1}^{n+1}Ra_{i}\subseteq s\sum_{i=1}^{n}Ra_{i}+Ra_{n+1}\subseteq Ra+ Ra_{n+1}\subseteq\sum_{i=1}^{n+1}Ra_{i}.\]
On the other hand, \(Ra+Ra_{n+1}\) is \(2\)-generated, then by hypothesis, there exist \(s^{\prime}\in S\) and \(b\in R\) such that
\[s^{\prime}(Ra+Ra_{n+1})\subseteq Rb\subseteq Ra+Ra_{n+1}(\subseteq\sum_{i=1}^ {n+1}Ra_{i}).\]
But, we have \(ss^{\prime}\sum_{i=1}^{n+1}Ra_{i}\subseteq s^{\prime}(Ra+Ra_{n+1})\). Therefore,
\[ss^{\prime}\sum_{i=1}^{n+1}Ra_{i}\subseteq Rb\subseteq\sum_{i=1}^{n+1}Ra_{i}\]
as \(ss^{\prime}\in S\), which completes the proof.
The next theorem establishes another characterization of \(S\)-Bezout ring.
**Theorem 2.6**.: _Let \(R\) be a ring and \(S\) be a multiplicative set of \(R\). Then \(R\) is an \(S\)-Bezout ring if and only if every \(S\)-finite ideal of \(R\) is \(S\)-principal._
**Proof.** If every \(S\)-finite ideal of \(R\) is \(S\)-principal, then it is clear that \(R\) is \(S\)-Bezout. Conversely, assume that \(R\) is \(S\)-Bezout and let \(I\) be an \(S\)-finite ideal of \(R\). Then there exist a finitely generated ideal \(J\) of \(R\) and \(s\in S\) such that:
\[sI\subseteq J\subseteq I.\]
Hence, \(J\) is \(S\)-principal and so there exist \(s^{\prime}\in S\) and \(a\in R\) such that
\[s^{\prime}J\subseteq Ra\subseteq J.\]
Therefore,
\[ss^{\prime}I\subseteq s^{\prime}J\subseteq Ra\subseteq J\subseteq I\]
and so \(I\) is \(S\)-principal, as desired.
Next, we investigate the stability of \(S\)-Bezout ring property under localization.
**Proposition 2.7**.: _Let \(R\) be a ring and \(S\subseteq R\) be a multiplicative set of \(R\). Then:_
1. _If_ \(R\) _is an_ \(S\)_-Bezout ring, then_ \(S^{-1}R\) _is a Bezout ring._
2. _Assume that_ \(R\) _is_ \(S\)_-Noetherian (not necessarily Noetherian). Then_ \(R\) _is an_ \(S\)_-Bezout ring if and only if_ \(S^{-1}R\) _is a Bezout ring._
**Proof.** (1) Assume that \(R\) is \(S\)-Bezout and let \(J\) be a finitely generated ideal of \(S^{-1}R\). There exists a finitely generated ideal \(I\) of \(R\) such that \(J=S^{-1}I\). Then \(I\) is \(S\)-principal and hence \(J:=S^{-1}I\) is principal, as desired.
(2) If \(R\) is an \(S\)-Bezout ring, then by assertion (1) above, \(S^{-1}R\) is a Bezout ring. Conversely, assume that \(S^{-1}R\) is a Bezout ring and that \(R\) is \(S\)-Noetherian (not necessarily Noetherian). Since \(R\) is \(S\)-Noetherian, \(S^{-1}R\) is Noetherian; every ideal of \(S^{-1}R\) is then finitely generated, hence principal, and so \(S^{-1}R\) is a PIR. Then \(R\) is an \(S\)-PIR by [5, Proposition 2(g)]. Hence, \(R\) is an \(S\)-Bezout ring, as desired.
In contrast with Proposition 2.7, the following proposition shows that we do not always need the assumption "\(S\)-Noetherian".
**Proposition 2.8**.: _Let \(R\) be a ring and \(S\subseteq U(R)\) be a multiplicative set of \(R\). Then \(R\) is an \(S\)-Bezout ring if and only if \(S^{-1}R\) is a Bezout ring._
If we choose \(S\subseteq U(R)\), then we obtain the following known corollary.
**Corollary 2.9**.: _Every Noetherian Bezout ring is a PIR._
For a prime ideal \(P\) of \(R\), we say that \(R\) is \(P\)-Bezout if \(R\) is \((R-P)\)-Bezout. We establish the following characterization for a semilocal ring.
**Theorem 2.10**.: _For a semilocal ring \(R\), the following statements are equivalent:_
1. \(R\) _is a Bezout ring._
2. \(R\) _is a_ \(P\)_-Bezout ring for every prime ideal_ \(P\) _of_ \(R\)_._
3. \(R\) _is an_ \(M\)_-Bezout ring for every maximal ideal_ \(M\) _of_ \(R\)_._
**Proof.**\((1)\Rightarrow(2)\) Straightforward.
\((2)\Rightarrow(3)\) Clear.
\((3)\Rightarrow(1)\) We can use the same proof as in [5, Proposition 12].
The next proposition examines the \(S\)-Bezout ring property under homomorphic image.
**Proposition 2.11**.: _Let \(R\) be a ring, \(I\) be a finitely generated ideal of \(R\) and \(S\) be a multiplicative set of \(R\). If \(R\) is an \(S\)-Bezout ring, then \(R/I\) is an \((S+I)\)-Bezout ring. The converse is true if there is \(s_{0}\in S\) such that \(s_{0}I=0\)._
**Proof.** Assume that \(R\) is \(S\)-Bezout. Let \(J/I\) be a finitely generated ideal of \(R/I\), where \(J\) is a finitely generated ideal of \(R\). Then there exist \(s\in S\) and \(a\in R\) such that
\[sJ\subseteq Ra\subseteq J\]
since \(R\) is an \(S\)-Bezout ring. Hence, we have
\[(s+I)(J/I)\subseteq(R/I)(a+I)\subseteq J/I\]
and so \(J/I\) is \((S+I)\)-principal. Hence, \(R/I\) is \((S+I)\)-Bezout. Conversely, assume that \(R/I\) is \((S+I)\)-Bezout and there exists \(s_{0}\in S\) such that \(s_{0}I=0\). Consider a finitely generated ideal \(J\) of \(R\). Then \(((J+I)/I)\) is a finitely generated ideal of an \((S+I)\)-Bezout ring \(R/I\). Hence, there exist \(s\in S\) and \(a\in R\) such that:
\[(s+I)((J+I)/I)\subseteq(R/I)(a+I)\subseteq(J+I)/I.\]
Multiplying the above inclusions by \(s_{0}\) and using \(s_{0}I=0\), we obtain:
\[ss_{0}J\subseteq R(s_{0}a)\subseteq s_{0}J\subseteq J,\]
and so \(J\) is \(S\)-principal. Hence, \(R\) is an \(S\)-Bezout ring which completes the proof.
Now we study the stability of the \(S\)-Bezout property under ring homomorphisms. Let \(A\) and \(B\) be two rings, \(f:A\to B\) be a ring homomorphism and \(I\) be an ideal of \(A\). We denote by \(I^{e}\) the extension of \(I\), defined as the ideal of \(B\) generated by \(f(I)\); and, for an ideal \(J\) of \(B\), we denote by \(J^{c}\) the contraction of \(J\), defined as the set of preimages of elements of \(f(A)\cap J\), that is, \(J^{c}=\{a\in A|f(a)\in J\}\).
**Proposition 2.12**.: _Let \((A,B)\) be a pair of rings, \(f:A\to B\) be a ring homomorphism and \(S\) be a multiplicative set of \(A\) such that \(I^{ce}=I\) for each ideal \(I\) of \(B\) and \(J^{c}\) is a finitely generated ideal of \(A\) for each finitely generated ideal \(J\) of \(B\). If \(A\) is an \(S\)-Bezout ring, then \(B\) is an \(f(S)\)-Bezout ring._
**Proof.** Assume that \(A\) is an \(S\)-Bezout ring. Let \(J\) be a finitely generated ideal of \(B\). From assumption \(J^{c}\) is a finitely generated ideal of \(A\) which is \(S\)-Bezout. So, \(J^{c}\) is \(S\)-principal and therefore there exist \(s\in S\) and \(a\in A\) such that \(sJ^{c}\subseteq aA\subseteq J^{c}\). It follows that \(f(s)J=f(s)J^{ce}\subseteq(aA)^{e}\subseteq J^{ce}=J\). Hence, \(f(s)J\subseteq f(a)B\subseteq J\), making \(J\), an \(f(S)\)-principal ideal of \(B\). Finally, \(B\) is an \(f(S)\)-Bezout ring, as desired. \(\square\)
It is clear that if \(S_{1},...,S_{n}\) are multiplicative sets of rings \(R_{1},...,R_{n}\) respectively, then \(S=\prod_{i=1}^{n}S_{i}\) is a multiplicative set of \(R=\prod_{i=1}^{n}R_{i}\). Next, we study the stability of the \(S\)-Bezout property under direct product.
**Proposition 2.13**.: _Let \(S_{1},...,S_{n}\) be multiplicative sets of rings \(R_{1},....,R_{n}\) respectively. Set \(R=\prod_{i=1}^{n}R_{i}\) and \(S=\prod_{i=1}^{n}S_{i}\) a multiplicatively closed subset of \(R\). The following statements are equivalent:_
1. \(R\) _is an_ \(S\)_-Bezout ring._
2. \(R_{i}\) _is an_ \(S_{i}\)_-Bezout ring for all_ \(i=1,...,n\)_._
**Proof.** It suffices to prove the result for \(n=2\).
Assume that \(R_{1}\times R_{2}\) is an \((S_{1}\times S_{2})\)-Bezout ring and let \(I_{i}\) be a finitely generated ideal of \(R_{i}\), for \(i=1,2\). Then \(I_{1}\times I_{2}\) is a finitely generated ideal of an \((S_{1}\times S_{2})\)-Bezout ring \(R_{1}\times R_{2}\). Hence, there exist \(s_{i}\in S_{i}\) and \(a_{i}\in R_{i}\) such that
\[(s_{1},s_{2})(I_{1}\times I_{2})\subseteq(R_{1}\times R_{2})(a_{1},a_{2}) \subseteq I_{1}\times I_{2},\]
and so \(s_{1}I_{1}\subseteq R_{1}a_{1}\subseteq I_{1}\) and \(s_{2}I_{2}\subseteq R_{2}a_{2}\subseteq I_{2}\). Hence, \(R_{i}\) is an \(S_{i}\)-Bezout ring, as desired. Conversely, assume that \(R_{i}\) is an \(S_{i}\)-Bezout ring for each \(i=1,2\) and let \(I_{1}\times I_{2}\) be a finitely generated ideal of \(R_{1}\times R_{2}\). Hence, there exist \(s_{i}\in S_{i}\) and \(a_{i}\in R_{i}\) such that \(s_{1}I_{1}\subseteq R_{1}a_{1}\subseteq I_{1}\) and \(s_{2}I_{2}\subseteq R_{2}a_{2}\subseteq I_{2}\) and so we have
\[(s_{1},s_{2})(I_{1}\times I_{2})\subseteq(R_{1}\times R_{2})(a_{1},a_{2}) \subseteq I_{1}\times I_{2}.\]
Hence, \(R_{1}\times R_{2}\) is an \((S_{1}\times S_{2})\)-Bezout ring which completes the proof. \(\square\)
The next proposition studies the \(S\)-Bezout property under the ring extension \(A\subseteq B\), where \((A,B)\) is a pair of rings.
**Proposition 2.14**.: _Let \(A\subseteq B\) be a ring extension such that \(IB\cap A=I\) for each ideal \(I\) of \(A\) and \(S\subseteq A\) a multiplicative set. If \(B\) is an \(S\)-Bezout ring, then so is \(A\)._
**Proof.** Let \(I\) be a finitely generated ideal of \(A\). Since \(B\) is an \(S\)-Bezout ring, then there exist \(s\in S\) and \(b\in B\) such that \(sIB\subseteq bB\subseteq IB\). Clearly \(b\in IB\) and so can be picked from \(I\). Therefore, \(sI=sIB\cap A\subseteq bB\cap A\subseteq bA\subseteq I\), making \(I\) an \(S\)-principal ideal of \(A\). Hence, \(A\) is an \(S\)-Bezout ring, as desired.
Let \(R:=A\propto E\) be the trivial ring extension of a ring \(A\) by an \(A\)-module \(E\). Note that if \(S\) is a multiplicative set of \(R\), then \(S_{0}=\{a\in A/(a,e)\in S\) for some \(e\in E\}\) is a multiplicative set of \(A\). Conversely, if \(S_{0}\) is a multiplicative set of \(A\), then \(S:=S_{0}\propto N\) is a multiplicative set of \(R\) for every submodule \(N\) of \(E\) such that \(S_{0}N\subseteq N\). In particular, \(S_{0}\propto 0\) and \(S_{0}\propto E\) are multiplicative sets of \(R\). Our next result examines the transfer of the \(S\)-Bezout property in trivial ring extension in the special setting \(E\) is a finitely generated \(A\)-module.
**Theorem 2.15**.: _Let \(A\) be a ring, \(E\) be a finitely generated \(A\)-module, \(S_{0}\) be a multiplicatively closed subset of \(A\), \(R:=A\propto E\) be the trivial ring extension of \(A\) by \(E\) and \(S:=S_{0}\propto E\) be a multiplicatively closed subset of \(R\). If \(R\) is an \(S\)-Bezout ring, then \(A\) is an \(S_{0}\)-Bezout ring and every finitely generated submodule \(F\) of \(E\) is \(S\)-cyclic (in particular \(E\) is \(S\)-cyclic). The converse holds if every ideal of \(R\) is homogeneous._
Before proving the previous theorem, we establish the following lemma.
**Lemma 2.16**.: _Let \(A\) be a ring, \(E\) be an \(A\)-module, \(F\) be a submodule of \(E\) and \(S\) be a multiplicatively closed subset of \(A\). Then \(F\) is \(S\)-cyclic if and only if \(0\propto F\) is an \(S\propto E\)-principal ideal of \(A\propto E\)._
**Proof.** Assume that \(F\) is \(S\)-cyclic. Then there exist \(s\in S\) and \(e\in F\) such that
\[sF\subseteq Ae\subseteq F,\]
and so
\[0\propto sF\subseteq 0\propto Ae\subseteq 0\propto F,\]
therefore,
\[(s,0)0\propto F\subseteq A\propto E(0,e)\subseteq 0\propto F.\]
Hence, \(0\propto F\) is an \(S\propto E\)-principal ideal of \(A\propto E\). Conversely, assume that \(0\propto F\) is an \(S\propto E\)-principal ideal of \(A\propto E\). Then there exists \((s,e)\in S\propto E\) such that
\[(s,e)0\propto F\subseteq A\propto E(a,e^{\prime})\subseteq 0\propto F,\]
for some \((a,e^{\prime})\in A\propto E\),
and so
\[0\propto sF\subseteq Aa\propto(Ae^{\prime}+aE)\subseteq 0\propto F\]
therefore,
\[0\propto sF\subseteq 0\propto Ae^{\prime}\subseteq 0\propto F.\]
Hence, \(sF\subseteq Ae^{\prime}\subseteq F.\) Finally, \(F\) is \(S\)-cyclic, as desired. \(\square\)
**Proof of Theorem 2.15.** Assume that \(R\) is an \(S\)-Bezout ring. Let \(F\) be a finitely generated submodule of \(E\). Then \(0\propto F\) is a finitely generated ideal of \(R\) by [1, Theorem 2(1)] and so is \(S\)-principal. Therefore, \(F\) is \(S\)-cyclic by Lemma 2.16. On the other hand, consider a finitely generated ideal \(I\) of \(A\). From [1, Theorem 7(1)], \(I\propto IE\) is a finitely generated ideal of \(R\). Consequently, there exist \((s,e)\in S\) and \((a,e^{\prime})\in I\propto IE\) such that
\[(s,e)I\propto IE\subseteq A\propto E(a,e^{\prime})\subseteq I\propto IE,\]
so,
\[sI\subseteq Aa\subseteq I.\]
Hence, \(I\) is \(S_{0}\)-principal, making \(A\), an \(S_{0}\)-Bezout ring. Conversely, assume that every ideal of \(R\) is homogeneous (recall from [6], that homogeneous ideals of \(R\) have the form \(I\propto F\) for some ideal \(I\) of \(A\) and some submodule \(F\) of \(E\) with \(IE\subseteq F\)). Let \(I\propto F\) be a finitely generated ideal of \(R\). By [1, Theorem 9(1)], \(I\) is a finitely generated ideal of \(A\) and \(F\) is a finitely generated submodule of \(E\). So, there exist \((s,s^{\prime})\in S_{0}^{2}\), \(a\in I\) and \(e\in F\) such that
\[sI\subseteq Aa\subseteq I,\]
and
\[s^{\prime}F\subseteq Ae\subseteq F.\]
Set \(t=ss^{\prime}\in S\), then
\[tI\subseteq Aa\subseteq I\]
and
\[tF\subseteq Ae\subseteq Ae+aE\subseteq F,\]
and so
\[tI\propto tF\subseteq Aa\propto(Ae+aE)\subseteq I\propto F,\]
therefore,
\[(t,0)I\propto F\subseteq A\propto E(a,e)\subseteq I\propto F.\]
Hence, \(I\propto F\) is \(S_{0}\propto E\)-principal. Finally, \(R\) is an \(S\)-Bezout ring, as desired.
The following theorem investigates the transfer of \(S\)-Bezout ring property in various context of trivial ring extensions.
**Theorem 2.17**.: _Let \(A\) be a ring, \(E\) be an \(A-\)module, \(S_{0}\) be a multiplicatively closed subset of \(A\), \(R:=A\propto E\) be the trivial ring extension of \(A\) by \(E\) and \(S:=S_{0}\propto E\) be a multiplicatively closed subset of \(R\). Then the following statements hold:_
1. _If_ \(R\) _is an_ \(S\)_-Bezout ring, then_ \(A\) _is an_ \(S_{0}\)_-Bezout ring._
2. _Assume that_ \(A\) _is an integral domain which is not a field,_ \(K=qf(A)\)_, and_ \(R:=A\propto K\) _be the trivial ring extension of_ \(A\) _by_ \(K\)_. Then_ \(R\) _is an_ \(S\)_-Bezout ring if and only if_ \(A\) _is an_ \(S_{0}\)_-Bezout domain._
3. _Assume that_ \((A,M)\) _is a local ring and_ \(E\) _is an_ \(A\)_-module such that_ \(ME=0\) _and_ \(S_{0}\nsubseteq U(R)\ (=R-(M\propto E))\)_. Then_ \(R\) _is an_ \(S\)_-Bezout ring if and only if_ \(A\) _is an_ \(S_{0}\)_-Bezout ring._
**Proof.** (1) Assume that \(R\) is an \(S\)-Bezout ring and let \(I\) be a finitely generated proper ideal of \(A\) generated by \((a_{i})_{i=1,\ldots,n}\). Then the finitely generated
ideal \(J\) of \(R\) generated by \((a_{i},0)_{i=1,\ldots,n}\) is \(S\)-principal, that is there exist \((s_{0},f)\in S\) and \((a,e)\in R\) such that
\[(s_{0},f)J\subseteq R(a,e)\subseteq J\]
and so
\[s_{0}I\subseteq Aa\subseteq I.\]
Therefore, \(I\) is an \(S_{0}\)-principal and hence \(A\) is an \(S_{0}\)-Bezout ring.
(2) If \(R\) is an \(S\)-Bezout ring, then \(A\) is an \(S_{0}\)-Bezout domain by assertion (1) above. Conversely, let \(J\) be a finitely generated proper ideal of \(R\). Set \(I:=\{a\in A/(a,e)\in J\) for some \(e\in K\}\). Two cases are then possible:
Case 1. \(I=0\).
Necessarily, \(J=0\propto(1/b)L\) for some \(b\neq 0\in A\) and some finitely generated proper ideal \(L\) of \(A\). Therefore, \(L\) is an \(S_{0}\)-principal since \(A\) is an \(S_{0}\)-Bezout ring and so there exist \(s_{0}\in S_{0}\) and \(a\in A\) such that
\[s_{0}L\subseteq Aa\subseteq L\]
and we have
\[(s_{0},0)J\subseteq R(0,a/b)\subseteq J.\]
Hence, \(J\) is an \(S\)-principal ideal of \(R\), as desired.
Case 2. \(I\neq 0\).
Let \((a,e)\in J\) such that \(a\neq 0\). Then \((a,e)(0\propto K)=0\propto K\subseteq J\); equivalently, \(J=I\propto IK=I\propto K\), where \(I\) is a finitely generated ideal of an \(S_{0}\)-Bezout ring \(A\). Hence, there exist \(s_{0}\in S_{0}\) and \(a\in A\) such that
\[s_{0}I\subseteq Aa\subseteq I,\]
and so we have
\[(s_{0},0)J\subseteq R(a,0)\subseteq J.\]
Hence, \(J\) is an \(S\)-principal ideal, as desired.
(3) If \(R\) is an \(S\)-Bezout ring, then \(A\) is an \(S_{0}\)-Bezout ring by assertion (1) above. Conversely, assume that \(A\) is an \(S_{0}\)-Bezout ring and \(S_{0}\nsubseteq U(R)\ (=R-(M\propto E))\), and let \(J\) be a finitely generated ideal of \(R\). Set \(I:=\{a\in A\) such that \((a,e)\in J\) for some \(e\in E\}\), which is a finitely generated ideal of \(A\). Since \(A\) is an \(S_{0}\)-Bezout ring, there exist \(s_{0}\in S_{0}\) and \(a\in A\) such that
\[s_{0}I\subseteq Aa\subseteq I.\]
By multiplying the above equation by an element \(s\in S_{0}\cap M\), we may assume that \(s_{0}\in M\) (since we have \(ss_{0}I\subseteq Asa\subseteq sI\subseteq I\)). On the other hand, there exists \(e\in E\) such that \((a,e)\in J\) since \(a\in I\). Hence, \((as_{0},0)=(a,e)(s_{0},0)\in J\) and so we have
\[(s_{0}s_{0},0)J\subseteq R(s_{0},0)(a,e)=R(s_{0}a,0)\subseteq J.\]
Therefore, \(J\) is an \(S\)-principal ideal of \(R\), which completes the proof. \(\square\)
Next, we use the transfer of the \(S\)-Bezout property in the trivial ring extension to provide some new original examples of \(S\)-Bezout rings that are not Bezout.
**Example 2.18**.: _Let \((A,P)\) be a local Bezout domain, \(E\) be an \(\frac{A}{P}\)-vector space (viewed as an \(A\)-module) such that \(Dim_{\frac{A}{P}}(E)\neq 1\), and let \(S_{0}\) be a multiplicatively closed subset of \(A\) such that \(S_{0}\cap P\neq\emptyset\). Consider the trivial ring extension \(R:=A\propto E\) of \(A\) by \(E\) and \(S:=S_{0}\propto E\) a multiplicatively closed subset of \(R\). Then:_
1. \(R\) _is an_ \(S\)_-Bezout ring._
2. \(R\) _is not a Bezout ring._
**Proof.** (1) Observe that \(PE=0\) and \(S_{0}\not\subseteq U(R)\). By assertion (3) of Theorem 2.17, \(R\) is an \(S\)-Bezout ring (as \(A\) is an \(S_{0}\)-Bezout ring).
(2) Assume by way of contradiction that \(R\) is a Bezout ring. Since \(Dim_{\frac{A}{P}}(E)\neq 1\), we have \(Dim_{\frac{A}{P}}(E)\geq 2\). Pick two elements \(h,k\in E\) such that \(\{h,k\}\) is an \(A/P\)-linearly independent set and consider the finitely generated ideal \(J:=R(0,h)+R(0,k)\) of \(R\). One can easily check that \(J\) is not principal, which is a contradiction. Therefore, \(R\) is not a Bezout ring. \(\square\)
We denote by \(\overline{H}\) the integral closure of an integral domain \(H\) in its quotient field. Recall from [4] that an integral domain \(H\) is almost Bezout if and only if the integral closure \(\overline{H}\) of \(H\) is a Prufer domain with torsion class group and \(H\subseteq\overline{H}\) is a root extension. Now, we show how one can build an example of an \(S\)-Bezout almost Bezout ring which is not Bezout via the trivial ring extension of an integral domain by a vector space over its quotient field, as shown below.
**Example 2.19**.: _Let \(F\) be a field of characteristic \(p>0\) and let \(F\subseteq L\) be a purely inseparable field extension. Consider the integral domain \(H=F+XL[X]\), \(K:=qf(H)\), \(R:=H\propto K\) the trivial ring extension of \(H\) by \(K\), and \(S:=S_{0}\propto K\) a multiplicatively closed subset of \(R\), where \(S_{0}:=H-\{0\}\) is a multiplicatively closed subset of \(H\). Then:_
1. \(R\) _is an_ \(S\)_-Bezout ring._
2. \(R\) _is an almost Bezout ring._
3. \(R\) _is not a Bezout ring._
**Proof.**
(1) By assertion (2) of Theorem 2.17, \(R\) is an \(S\)-Bezout ring, as \(H\) is an \(S_{0}\)-Bezout ring.
(2) Note that \(\overline{H}=L[X]\) is a Prufer domain with torsion class group and that for each \(Q\in L[X]\) there exists \(n\geq 0\) such that \(Q^{p^{n}}\in H.\) Therefore, \(H\subset\overline{H}\) is a root extension. From [4, Corollary 4.8(1)], \(H\) is an almost Bezout domain, and so is \(R\) [25, Theorem 3.1(2)].
(3) \(R\) is not a Bezout ring since its homomorphic image \(H\) is not Bezout (as \(H\) is not integrally closed).
Theorem 2.17 provides a new example of \(S\)-Bezout ring which is not Bezout, as shown below.
**Example 2.20**.: _Let \(A\) be a non-Bezout integral domain which is not a field, \(K=qf(A)\), and \(R:=A\propto K\) be the trivial ring extension of \(A\) by \(K\). Set \(S_{0}=A-\{0\}\) and \(S:=S_{0}\propto K\). Then \(R\) is a non-Bezout \(S\)-Bezout ring by assertion (2) of Theorem 2.17 (as \(A\) is a non-Bezout \(S_{0}\)-Bezout ring)._
Now, we turn our attention to the transfer of the \(S\)-Bezout ring property to the amalgamation of rings \(R:=A\bowtie^{f}J\). It is worthwhile observing that if \(S^{\prime}\) is a multiplicatively closed subset of \(R\), then \(S_{0}=\{a\in A/(a,f(a)+j)\in S^{\prime}\) for some \(j\in J\}\) is a multiplicatively closed subset of \(A\). Conversely, if \(S_{0}\) is a multiplicative set of \(A\), then \(S^{\prime}:=S_{0}\bowtie^{f}0\) and \(S_{0}\bowtie^{f}J\) are multiplicative sets of \(R\).
**Theorem 2.21**.: _Let \(A\) and \(B\) be two rings, \(J\) an ideal of \(B\) and let \(f:A\longrightarrow B\) be a ring homomorphism. Then:_
1. _If_ \(A\bowtie^{f}J\) _is an_ \(S^{\prime}\)_-Bezout ring, then_ \(A\) _is an_ \(S_{0}\)_-Bezout ring._
2. _Assume that_ \(f(S_{0})\cap J\neq\emptyset\)_. Then_ \(A\bowtie^{f}J\) _is an_ \(S^{\prime}\)_-Bezout ring if and only if_ \(A\) _is an_ \(S_{0}\)_-Bezout ring._
3. _Assume that_ \(f(S_{0})\cap J=\emptyset\)_,_ \(J:=ann(f(S_{0}))\) _and every proper ideal of_ \(A\bowtie^{f}J\) _is homogeneous. Then_ \(A\bowtie^{f}J\) _is an_ \(S^{\prime}\)_-Bezout ring if and only if_ \(A\) _is an_ \(S_{0}\)_-Bezout ring._
**Proof.** (1) Assume \(R:=A\bowtie^{f}J\) is an \(S^{\prime}\)-Bezout ring and let \(I\) be a finitely generated proper ideal of \(A\) generated by \((a_{i})_{i=1,\ldots,n}\). Then the finitely generated ideal \(L\) of \(R\) generated by \((a_{i},f(a_{i}))_{i=1,\ldots,n}\) is \(S^{\prime}\)-principal, that is, there exist \((s_{0},f(s_{0})+j)\in S^{\prime}\) and \((a,f(a)+j^{\prime})\in R\) such that
\[(s_{0},f(s_{0})+j)L\subseteq R(a,f(a)+j^{\prime})\subseteq L\]
and so
\[s_{0}I\subseteq Aa\subseteq I.\]
Therefore, \(I\) is \(S_{0}\)-principal and hence \(A\) is an \(S_{0}\)-Bezout ring.
(2) Assume that \(f(S_{0})\cap J\neq\emptyset\). If \(A\bowtie^{f}J\) is an \(S^{\prime}\)-Bezout ring, then by assertion (1) above, \(A\) is an \(S_{0}\)-Bezout ring. Conversely, assume that \(A\) is \(S_{0}\)-Bezout. Let \(L\) be a finitely generated ideal of \(A\bowtie^{f}J\). Consider the ideal \(I:=\{a\in A/(a,f(a)+j)\in L\) for some \(j\in J\}\) of \(A\). Since \(L\) is a finitely generated proper ideal of \(A\bowtie^{f}J\), then it is easy to see that \(I\) is a finitely generated proper ideal of \(A\). Using the fact that \(A\) is an \(S_{0}\)-Bezout ring, then \(I\) is an \(S_{0}\)-principal ideal of \(A\). So, there exist \(s_{0}\in S_{0}\) and \(a\in A\) such that \(s_{0}I\subseteq aA\subseteq I\). From assumption, there is \(t\in S_{0}\) such that \(f(t)\in J\). Observe that \(ts_{0}I\subseteq atA\subseteq tI\subseteq I\). Therefore, \((ts_{0},0)L\subseteq ts_{0}I\times 0\subseteq atA\times 0\subseteq(a,f(a))tA \times 0\subseteq(a,f(a))A\bowtie^{f}J\subseteq L\), since \(f(t)\in J\) and \(a\in I\). Hence, \(L\) is an \(S^{\prime}\)-principal ideal of \(A\bowtie^{f}J\), making \(A\bowtie^{f}J\), an \(S^{\prime}\)-Bezout ring.
(3) Assume that \(f(S_{0})\cap J=\emptyset\), \(J:=ann(f(S_{0}))\) and every proper ideal
of \(A\bowtie^{f}J\) is homogenous. If \(A\bowtie^{f}J\) is an \(S\)-Bezout ring, then by assertion (1) above, \(A\) is an \(S_{0}\)-Bezout ring. Conversely, assume that \(A\) is an \(S_{0}\)-Bezout ring. Let \(L\) be a finitely generated proper ideal of \(A\bowtie^{f}J\). From assumption, \(L:=I\bowtie^{f}J\) for some finitely generated proper ideal \(I\) of \(A\). Since \(A\) is an \(S_{0}\)-Bezout ring, then there exist \(s_{0}\in S_{0}\) and \(a\in A\) such that \(s_{0}I\subseteq aA\subseteq I\). We claim that \((s_{0},f(s_{0}))L\subseteq(a,f(a))A\bowtie^{f}J\subseteq L\). Indeed, let \((i,f(i)+k)\in L\). Then \(s_{0}i=ar\) for some \(r\in A\). So, \((s_{0},f(s_{0}))(i,f(i)+k)=(s_{0}i,f(s_{0}i)+kf(s_{0}))=(s_{0}i,f(s_{0}i))=(ar, f(ar))=(a,f(a))(r,f(r))\in(a,f(a))A\bowtie^{f}J\subseteq L\). Hence, it follows that \(I\bowtie^{f}J\) is an \(S^{\prime}\)-principal ideal of \(A\bowtie^{f}J\). Finally, \(A\bowtie^{f}J\) is an \(S^{\prime}\)-Bezout ring, as desired.
Next, let \(I\) be a _proper_ ideal of \(A\). The (amalgamated) duplication of \(A\) along \(I\) is a special amalgamation given by
\[A\bowtie I:=A\bowtie^{id_{A}}I=\big{\{}(a,a+i)/a\in A,i\in I\big{\}}.\]
The following corollary is an immediate consequence of Theorem 2.21 on the transfer of the \(S\)-Bezout property into duplications.
**Corollary 2.22**.: _Let \(A\) be a ring and \(I\) be an ideal of \(A\). Then:_
1. _If_ \(A\bowtie I\) _is an_ \(S^{\prime}\)_-Bezout ring, then_ \(A\) _is an_ \(S_{0}\)_-Bezout ring._
2. _Assume that_ \(S_{0}\cap I\neq\emptyset\)_. Then_ \(A\bowtie I\) _is an_ \(S^{\prime}\)_-Bezout ring if and only if_ \(A\) _is an_ \(S_{0}\)_-Bezout ring._
3. _Assume that_ \(S_{0}\cap I=\emptyset\)_,_ \(I:=Ann(S_{0})\) _and every proper ideal of_ \(A\bowtie I\) _is homogeneous. Then_ \(A\bowtie I\) _is_ \(S^{\prime}\)_-Bezout if and only if_ \(A\) _is an_ \(S_{0}\)_-Bezout ring._
Theorem 2.21 enriches the current literature with a new original class of \(S\)-Bezout rings which are not Bezout. The following examples show how to construct such rings.
**Example 2.23**.: _Let \(A\) be any \(S\)-Bezout ring which is not Bezout (for instance, take \(A:=B\propto E\) the trivial ring extension of \(B\) by \(E\), where \(B:=\mathbb{Z}\) is the ring of integers and \(E:=(\mathbb{Z}/2\mathbb{Z})^{\infty}\) is a \(\mathbb{Z}/2\mathbb{Z}\)-vector space; see Example 2.2). Consider the surjective ring homomorphism \(f:A\to B\), let \(S_{0}:=\{(2^{n},0)/n\in\mathbb{N}\}\) be a multiplicative set of \(A\), and let \(J:=2\mathbb{Z}\) be an ideal of \(B\). Then:_
1. \(A\bowtie^{f}J\) _is an_ \(S^{\prime}\)_-Bezout ring._
2. \(A\bowtie^{f}J\) _is not a Bezout ring._
**Proof.** (1) First, observe that \(f(S_{0})\cap J\neq\emptyset\). Since \(A\) is an \(S_{0}\)-Bezout ring (by Example 2.2), then by assertion (2) of Theorem 2.21, it follows that \(A\bowtie^{f}J\) is an \(S^{\prime}\)-Bezout ring.
(2) We claim that \(A\bowtie^{f}J\) is not a Bezout ring. Indeed, \(A\) which is a homomorphic image of \(A\bowtie^{f}J\) is not Bezout and the fact that the Bezout property is stable under factor ring, it follows that \(A\bowtie^{f}J\) is not a Bezout
ring.
**Example 2.24**.: _Let \(A:=A_{1}\times A_{2}\) be a ring (for instance \(A:=\mathbb{Z}\times K[X,Y]\) where \(\mathbb{Z}\) is the ring of integers and \(K[X,Y]\) the polynomial ring in two variables \(X,Y\) over a field \(K\)). Consider the multiplicative sets \(S_{1}:=\{3^{n}/n\in\mathbb{N}\}\) and \(S_{2}:=K[X,Y]-\{0\}\) of \(\mathbb{Z}\) and \(K[X,Y]\), respectively, the surjective ring homomorphism \(f:\mathbb{Z}\times K[X,Y]\to\mathbb{Z}\) defined by \(f((a,P))=a\) and the ideal \(J:=3\mathbb{Z}\) of \(\mathbb{Z}\). Let \(S=S_{1}\times S_{2}\) be a multiplicative set of \(A\) and \(S^{\prime}:=S\bowtie^{f}0\) be a multiplicative set of \(A\bowtie^{f}J\). Then:_
1. \(A\bowtie^{f}J\) _is an_ \(S^{\prime}\)_-Bezout ring._
2. \(A\bowtie^{f}J\) _is not a Bezout ring._
Proof.: (1) First note that \(A:=\mathbb{Z}\times K[X,Y]\) is an \(S_{1}\times S_{2}\)-Bezout ring (by Proposition 2.13, since \(\mathbb{Z}\) is an \(S_{1}\)-Bezout ring and \(K[X,Y]\) is an \(S_{2}\)-Bezout ring). On the other hand, \(f(S)\cap J=S_{1}\cap J\neq\emptyset\). Hence, by assertion (2) of Theorem 2.21, \(A\bowtie^{f}J\) is an \(S^{\prime}\)-Bezout ring.
(2) \(A\bowtie^{f}J\) is not a Bezout ring, since \(A\) is not a Bezout ring (as \(K[X,Y]\) is not a Bezout domain).
The next example shows that the class of \(S\)-Bezout rings and the class of almost Bezout rings are distinct in general and we use Theorem 2.21 to construct a new original class of \(S\)-Bezout rings which are not almost Bezout.
**Example 2.25**.: _Let \(A\) be any integral domain which is not a field, \(f=id_{A}\) be the identity ring homomorphism and let \(J:=m\) be a maximal ideal of \(A\). Consider the multiplicative closed subset \(S_{0}:=A-\{0\}\) of \(A\). Then:_
1. \(A\bowtie^{f}J\) _is an_ \(S^{\prime}\)_-Bezout ring._
2. \(A\bowtie^{f}J\) _is not an almost Bezout ring._
Proof.: (1) Using a similar argument as in Example 2.4, it follows that \(A\) is an \(S_{0}\)-Bezout ring. On the other hand, \(f(S_{0})\cap J=(A-\{0\})\cap m\neq\emptyset\). Hence, by assertion (2) of Theorem 2.21, it follows that \(A\bowtie^{f}J\) is an \(S^{\prime}\)-Bezout ring.
(2) From [26, Corollary 2.5], \(A\bowtie^{f}J\) is not an almost Bezout ring, as \(J\neq 0\).
Now, we introduce the concept of nonnil \(S\)-Bezout ring.
**Definition 2.26**.: _Let \(A\in\mathcal{H}\) be a ring and \(S\subseteq A\) a multiplicative set. \(A\) is called a nonnil \(S\)-Bezout ring if for any pair of finitely generated nonnil ideals \(I\) and \(P\) of \(A\) such that \(I\subseteq P\) and \(P\) is an \(S\)-principal ideal, the ideal \(I\) is also \(S\)-principal._
Next, we establish a characterization of nonnil \(S\)-Bezout property. For every multiplicative set \(S\) of ring \(A\), set \(S^{\prime}=\frac{S}{Nil(A)}=\{s+Nil(A)\mid s\in S\}\). It is easy to see that \(S^{\prime}\) is a multiplicative set of \(\frac{A}{Nil(A)}\).
**Theorem 2.27**.: _Let \(A\in\mathcal{H}\) be a ring and \(S\subseteq A\) a multiplicative set. Then \(A\) is a nonnil \(S\)-Bezout ring if and only if \(\frac{A}{Nil(A)}\) is an \(S^{\prime}\)-Bezout domain._
Proof.: Assume that \(A\) is a nonnil \(S\)-Bezout ring and let \(\frac{I}{Nil(A)}\subseteq\frac{P}{Nil(A)}\) be a pair of finitely generated ideals of \(\frac{A}{Nil(A)}\) such that \(\frac{P}{Nil(A)}\) is an \(S^{\prime}\)-principal ideal of \(\frac{A}{Nil(A)}\). We may assume that \(\frac{I}{Nil(A)}\neq 0\). Hence, \(I\subseteq P\) is a pair of finitely generated nonnil ideals of \(A\). Therefore, there exist \(s^{\prime}\in S^{\prime}\) and \(\bar{a}\in\frac{A}{Nil(A)}\) such that \(s^{\prime}\frac{P}{Nil(A)}\subseteq\bar{a}\frac{A}{Nil(A)}\subseteq\frac{P}{Nil (A)}\) with \(s^{\prime}=s+Nil(A)\) for some \(s\in S\). So, \(\bar{a}\frac{A}{Nil(A)}=(ab+Nil(A))\) for some \(b\in A\). We have \(\bar{a}\frac{A}{Nil(A)}\cong\frac{J}{Nil(A)}\). Our aim is to show that \(J\) is a principal ideal of \(A\) generated by \(a\). Consider a nonnilpotent element \(x\) of \(J\). Then \(x+Nil(A)=ab+Nil(A)\) in \(\frac{A}{Nil(A)}\) for some \(b\) in \(A\). Therefore, there exists \(w\in Nil(A)\) such that \(x+w=ab\) in \(A\). Since \(x\) is nonnilpotent, \(w=xk\) for some \(k\in Nil(A)\). So, \(x+w=x+xk=x(1+k)=ab\). Using the fact that \(k\in Nil(A)\) and so \(1+k\in U(A)\), it follows that \(x\in aA\) and so \(J\) is a principal ideal of \(A\) generated by \(a\). One can easily check that \(sP\subseteq aA=J\subseteq P\) and so \(P\) is an \(S\)-principal ideal of \(A\). Consequently, \(I\) is \(S\)-principal ideal of \(A\) as \(A\) is a nonnil \(S\)-Bezout ring. It follows that \(\frac{I}{Nil(A)}\) is an \(S^{\prime}\)-principal ideal of \(\frac{A}{Nil(A)}\). Therefore, \(\frac{A}{Nil(A)}\) is an \(S^{\prime}\)-Bezout domain. Conversely, let \(I\subseteq P\) be a pair of finitely generated nonnil ideals of \(A\) such that \(P\) is an \(S\)-principal ideal of \(A\). Then \(\frac{I}{Nil(A)}\subseteq\frac{P}{Nil(A)}\) is a pair of nonzero finitely generated ideals of \(\frac{A}{Nil(A)}\) and it is easy to see that \(\frac{P}{Nil(A)}\) is an \(S^{\prime}\)-principal ideal of \(\frac{A}{Nil(A)}\). Therefore, \(\frac{I}{Nil(A)}\) is \(S^{\prime}\)-principal since \(\frac{A}{Nil(A)}\) is an \(S^{\prime}\)-Bezout domain. Finally, \(I\) is \(S\)-principal, making \(A\), a nonnil \(S\)-Bezout ring.
As a consequence of Theorem 2.27, we establish the following characterization of nonnil \(S\)-Bezout rings.
**Corollary 2.28**.: _Let \(A\in\mathcal{H}\) be a ring and \(S\subseteq A\) be a multiplicative set. Then \(A\) is a nonnil \(S\)-Bezout ring if and only if \(\phi(A)\) is a nonnil \(\phi(S)\)-Bezout ring._
Proof.: Assume that \(A\) is a nonnil \(S\)-Bezout ring. By Theorem 2.27, \(\frac{A}{Nil(A)}\) is an \(S^{\prime}\)-Bezout domain with \(S^{\prime}=\frac{S}{Nil(A)}\). From [11, Lemma 2.1], \(\frac{A}{Nil(A)}\cong\frac{\phi(A)}{Nil(\phi(A))}\) and therefore \(\frac{\phi(A)}{Nil(\phi(A))}\) is a \(\frac{\phi(S)}{Nil(\phi(S))}\)-Bezout domain. Hence, \(\phi(A)\) is a nonnil \(\phi(S)\)-Bezout ring by Theorem 2.27. The converse holds using similar argument as previously.
The following example illustrates Theorem 2.27 by generating a new original class of nonnil \(S\)-Bezout ring which is not Bezout.
**Example 2.29**.: _Let \(A\) be any non-Bezout \(S\)-Bezout domain which is not a field, let \(S\subseteq A\) be a multiplicative set, and let \(K\) be the quotient field of \(A\). Then \(A\propto K\) is a nonnil \(S\)-Bezout ring which is not Bezout._
**Proof.** Set \(R=A\propto K\). First, \(Nil(R)=0\propto K\) is a divided prime ideal of \(R\). Indeed, let \((0,e)\in Nil(R)\) and \((a,f)\in R\setminus Nil(R)\). Then \((0,e)=(a,f)(0,\frac{e}{a})\) and thus \(R\in\mathcal{H}\). Since \(\frac{R}{Nil(R)}\) is ring-isomorphic to \(A\), \(R\) is a nonnil \(S\)-Bezout ring by Theorem 2.27. Furthermore, \(R\) is not a Bezout ring, since its homomorphic image \(\frac{R}{Nil(R)}\cong A\) is not a Bezout ring. \(\square\)
The following theorem establishes a characterization of nonnil \(S\)-Bezout rings in a special setting of pullback.
**Theorem 2.30**.: _Let \(A\in\mathcal{H}\) and \(S\subseteq A\) a multiplicative set. Then \(A\) is a nonnil \(S\)-Bezout ring if and only if \(\phi(A)\cong R\), where \(R\) is obtained from the following pullback diagram:_
_where \(T\) is a zero-dimensional quasilocal ring with maximal ideal \(M\), \(B:=R/M\) is an \(S_{1}\)-Bezout subring of \(T/M\) with \(S_{1}=\alpha(\phi(S))/M\), where \(\alpha\) is the ring isomorphism from \(\phi(A)\) to \(R\), the vertical arrows are the usual inclusion maps, and the horizontal arrows are the usual surjective maps._
**Proof.** Assume that \(\phi(A)\cong R\) obtained from the given diagram. Then \(R\in\mathcal{H}\) and \(Nil(R)=Z(R)=M\). Since \(R/M\) is an \(S_{1}\)-Bezout domain, \(R\) is a nonnil \(S_{2}\)-Bezout ring by Theorem 2.27, where \(S_{2}=\alpha(\phi(S))\), and so \(\phi(A)\) is a nonnil \(\phi(S)\)-Bezout ring. Hence, \(A\) is a nonnil \(S\)-Bezout ring. Conversely, assume that \(A\) is a nonnil \(S\)-Bezout ring. Set \(T=A_{Nil(A)}\), \(M=Nil(A_{Nil(A)})\), and \(R=\phi(A)\) yields the desired pullback diagram. \(\square\)
The next theorem establishes a result showing that the class of nonnil Prufer rings and the class of nonnil \(P\)-Bezout rings coincide. Recall that an integral domain \(R\) is local Bezout if and only if it is a valuation domain.
**Theorem 2.31**.: _Let \(R\in\mathcal{H}\). Then the following assertions are equivalent:_
1. \(R\) _is a nonnil_ \(P\)_-Bezout ring for each prime ideal_ \(P\) _of_ \(R\)_._
2. \(R_{P}\) _is a nonnil chained ring for each prime ideal_ \(P\) _of_ \(R\)_._
3. \(R_{M}\) _is a nonnil chained ring for each maximal ideal_ \(M\) _of_ \(R\)_._
4. \(R\) _is a nonnil Prufer ring._
**Proof.** (1) \(\Rightarrow\) (2) Assume that \(R\) is a nonnil \(P\)-Bezout ring for each prime ideal \(P\) of \(R\). Let \(P\) be a prime ideal of \(R\). Then \(R\) is a nonnil \(R-P\)-Bezout ring. By Theorem 2.27, \(\frac{R}{Nil(R)}\) is an \(\frac{R-P}{Nil(R)}\)-Bezout domain and so by assertion (1) of Proposition 2.7, \(\left(\frac{R}{Nil(R)}\right)_{\frac{P}{Nil(R)}}\) is a local Bezout domain and so is a
valuation domain. On the other hand, \(\left(\frac{R}{Nil(R)}\right)_{\frac{P}{Nil(R)}}\simeq\frac{R}{Nil(R)R_{P}}\simeq \frac{R_{P}}{Nil(R_{P})}\) and \(R_{P}\in\mathcal{H}\). Therefore, by [3, Theorem 2.7], it follows that \(R_{P}\) is a nonnil chained ring.
\((2)\Rightarrow(1)\) Assume that \(R_{P}\) is a nonnil chained ring for each prime ideal \(P\) of \(R\). Observe that \(R_{P}\in\mathcal{H}\) and by [3, Theorem 2.7], \(\left(\frac{R}{Nil(R)}\right)_{\frac{P}{Nil(R)}}\simeq\frac{R_{P}}{Nil(R_{P})}\) is a valuation domain and so is a local Bezout domain. So, by Theorem 2.10, \(\frac{R_{P}}{Nil(R_{P})}\) is \((\frac{R_{P}}{Nil(R_{P})}-\frac{PR_{P}}{Nil(R_{P})})\)-Bezout. Using the fact that \((\frac{R_{P}}{Nil(R_{P})}-\frac{PR_{P}}{Nil(R_{P})})\subseteq U(\frac{R_{P}}{Nil(R_{P})})\), it follows by Proposition 2.8 that \(\frac{R}{Nil(R)}\) is a \(\frac{P}{Nil(R)}\)-Bezout domain and so by Theorem 2.27, \(R\) is a nonnil \(P\)-Bezout ring, as desired.
\((2)\Leftrightarrow(3)\Leftrightarrow(4)\) This follows from [3, Theorem 2.9].
|
2310.11250
|
Effect of Counterion Size on Polyelectrolyte Conformations and
Thermodynamics
|
We present a theoretical model to study the effect of counterion size on the
effective charge, size, and thermodynamic behavior of a single, isolated, and
flexible polyelectrolyte (PE) chain. We analyze how altering counterion size
modifies the energy and entropy contributions to the system, including the
ion-pair free energy, excluded volume interactions, entropy of free and
condensed ions, and dipolar attraction among monomer-counterion pairs, which
result in competing effects challenging intuitive predictions. The PE self
energy is calculated using Edwards-Muthukumar Hamiltonian, considering a
Gaussian monomer distribution for the PE. The condensed ions are assumed
confined within a cylindrical volume around the PE backbone. The dipolar and
excluded volume interactions are described by the second and third virial
coefficients. Assumption of freely-rotating dipoles results in a first-order
coil-globule transition of the PE chain. A more realistic weaker dipolar
attraction, parameterized in our theory, shifts it to a second-order continuous
transition. We calculate the size scaling-exponent of the PE and find exponents
according to the relative dominance of the electrostatic, excluded volume, or
dipolar effects. We further identify the entropy- and energy-driven regimes of
the effective charge and conformation of the PE, highlighting the interplay of
free ion entropy and ion-pair energy with varying electrostatic strengths. The
crossover strength, dependent on the counterion size, indicates that
diminishing sizes favor counterion condensation at the expense of free ion
entropy. The predictions of the model are consistent with trends in
simulations, and generalize findings of the point-like counterion theories.
|
Souradeep Ghosh, Arindam Kundagrami
|
2023-10-17T13:18:54Z
|
http://arxiv.org/abs/2310.11250v1
|
# Effect of Counterion Size on Polyelectrolyte Conformations and Thermodynamics
###### Abstract
We present a theoretical model to study the effect of counterion size on the effective charge, size, and thermodynamic behavior of a single, isolated, and flexible polyelectrolyte (PE) chain. We analyze how altering counterion size modifies the energy and entropy contributions to the system, including the ion-pair free energy, excluded volume interactions, entropy of free and condensed ions, and dipolar attraction among monomer-counterion pairs, which result in competing effects challenging intuitive predictions. The PE self energy is calculated using Edwards-Muthukumar Hamiltonian, considering a Gaussian monomer distribution for the PE. The condensed ions are assumed confined within a cylindrical volume around the PE backbone. The dipolar and excluded volume interactions are described by the second and third virial coefficients. Assumption of freely-rotating dipoles results in a first-order coil-globule transition of the PE chain. A more realistic weaker dipolar attraction, parameterized in our theory, shifts it to a second-order continuous transition. We calculate the size scaling-exponent of the PE and find exponents according to the relative dominance of the electrostatic, excluded volume, or dipolar effects. We further identify the entropy- and energy-driven regimes of the effective charge and conformation of the PE, highlighting the interplay of free ion entropy and ion-pair energy with varying electrostatic strengths. The crossover strength, dependent on the counterion size, indicates that diminishing sizes favor counterion condensation at the expense of free ion entropy. The predictions of the model are consistent with trends in simulations, and generalize findings of the point-like counterion theories.
## I Introduction
The conformational behavior of flexible uncharged polymers in different solvents is well-understood. In general, good solvents lead to extended conformations and bad solvents result in collapsed globules. For flexible polyelectrolytes (PE), however, in the presence of counterions, conformations may undergo a coil-globule transition, irrespective of the solvent type.[1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16] This transition depends on the extent of counterion adsorption, influenced by the interplay of the electrostatic energy for the formation of monomer-counterion bound ion-pairs and the translational entropy of the free counterions and salt ions. The monomer-counterion bound pairs come at the cost of the translational entropy of the free ions, and result in a train of dipoles along the chain backbone. Solvent quality, aided by the short-range dipolar attraction, leads to the coil-globule transition in a poor solvent. In a good solvent, however, the transition is driven by the dipolar attraction alone. It has been shown that larger (bulky) counterions[17; 18] or surfactant-like counterions[19] can prevent the coil-globule transition. Such counterion specificity also plays a role in controlling bulk properties such as viscosity and conductivity of PE systems[20; 21; 22; 23; 24; 25]. However, the influence of the counterion size on the interaction between the localized ions and the PE and on the equilibrium behavior of the PE and the counterions has largely been absent in early theoretical models,[2; 3; 4; 5; 6; 7; 8; 9; 11] even though its importance was recognized earlier.[1] Early computer simulations typically accounted for counterion specificity through a finite size, fixed at a value equal to or smaller than a monomer.[26; 27; 28] The effect of the counterion nature on the conformations of a single flexible PE and related thermodynamics has only recently been explored theoretically[12; 14], despite a wealth of experimental and molecular simulation data being available for decades on the subject.[12; 14; 17; 18; 20; 21; 22; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38] The effect of counterion size on single polyelectrolyte molecules has been studied in detail using molecular dynamics simulations.[12; 14; 17; 18; 37] Additionally, there have been theoretical models focusing on the influence of counterion size in PE gels.[34; 35; 36; 25]
The swelling behavior of polyelectrolyte gels depends on the size and type of counterions, which affect the ion association, the counterion condensation process, and the osmotic pressure of 'free' counterions within the gel.[34; 36; 25; 37; 39] The volume transition theory of PE gels with charge regularization includes a variable dielectric mismatch parameter that implicitly accounts for the salt ion diameter.[35] Reentrant swelling at intermediate counterion sizes is found to be caused by dipolar attraction and excluded volume interactions.[36] Small counterions cause gel collapse, while large counterions prevent it, by suppressing ion pairing and increasing swelling.[34; 36; 25] The solvent specificity of PE gel collapse has also been observed in experiments.[39] The viscosity and conductivity of PE gels are also reported to depend on ion size: small ions condense more, decreasing the free ion concentration and hence the conductivity.[25]
In polymer solutions, the bulk viscosity is influenced by the conformation of the polymer chains. The presence of counterions of specific sizes and solvation characteristics affects the PE conformations, and in turn the viscosity. For example, in entangled PE solutions (Xanthan gum), larger salt counterions, both monovalent and divalent,
lead to higher viscosities.[23] Studies on other polymer solutions, such as PSS and grafted PAA, have shown that viscosity is nearly proportional to the hydrodynamic size of the counterions.[22; 24] The choice of solvent also impacts the viscosity behavior of polyelectrolyte solutions in the presence of salt. For instance, in a PAA solution in methanol, Li\({}^{+}\) was found to induce higher viscosity compared to Na\({}^{+}\).[20; 21] The drastic drop in viscosity for Na\({}^{+}\) suggests a collapse driven by dipolar interactions.[11] Moreover, in aqueous methanol, the molar conductivity was found to be inversely proportional to the hydrodynamic size of the counterions, indicating loosely bound hydrated ions[21]. However, at higher methanol concentrations, the trend reverses, suggesting strengthened ion-pair formations with decreased solvation effects. The conductivity in water is found to be higher than in methanol due to less condensation or association of counterions in water, which has a high dielectric constant.[20; 21]
For strongly charged polyelectrolytes in dilute solutions, simulations[17; 18; 37] have demonstrated that the size of counterions and the strength of Coulomb interaction significantly influence the conformational behavior of the PE chains. Bulky counterions lead to swollen conformations of PE chains, where the counterions are loosely bound to the chain backbone and move freely around the PE and in the solution.[17; 18] The conformational behavior of a dipolar polymer chain was suggested to be influenced by the interplay between electrostatic and excluded volume interactions,[37] which can potentially be controlled in experiments by varying the solvent composition or temperature.
Regarding the counterion distribution near the chain backbone, neutron scattering has shown that a compact double layer around the ionene backbone is formed when Br\({}^{-}\) ions are present, in contrast to F\({}^{-}\) counterions[31]. Mixtures of counterions induce a selectivity in condensation of small ions against large ions, resulting in intra-polymer micro-phase separation and core-shell microstructure formation within polyelectrolyte globules, observed in simulations.[17; 18] Additionally, in PE brushes, bridging interactions due to monovalent ions have been reported[40]. The smaller ions, such as Li\({}^{+}\), bridge more strongly than the larger ions, such as Cs\({}^{+}\).
These detailed studies on polyelectrolyte systems with finite-size counterions, as compared to point-like counterions, reveal their richness and motivate further investigation. A few previous works[3; 11; 26; 28; 41; 5] have theorized the PE chain collapse due to dipolar attraction, even considering the orientational restrictions of the dipoles.[41] However, these studies did not consider the finite size effects or excluded volume interactions of the counterions, which recent simulations and experiments have suggested to play a significant role.[17; 18; 25; 35; 36; 37] Specifically, a virial expansion model[14] focused on regimes of Coulomb strengths (or Bjerrum length, \(\ell_{B}\)) leading to collapsed conformations, and pointed out that inclusion of terms only up to the third virial coefficient is sufficient for such purposes.[12] However, the thermodynamic aspects resulting from condensation of finite size counterions, and related conformational transitions, over the entire range of the Coulomb strength have not, to the best of our knowledge, been examined yet. A theory that incorporates explicit calculations of the PE self-energy from an interaction Hamiltonian, accounts for dipolar interactions and also finite size effects of counterions through the virial coefficients to the lowest order, and remains applicable for all physically accessible \(\ell_{B}\) values, shows potential for a more comprehensive understanding of the system.
To this end, we aim to build a general, minimalistic theoretical model for a single, isolated, and flexible PE chain with finite-size counterions, investigating the effect of the counterion size on the PE's effective charge, size, and thermodynamics. Our analytical model focuses on the size variation of counterions, which results in modifications to the energy and entropy components of the system including the ion-pair energy, excluded volume interactions, volume entropy of free ions, volume entropy of condensed ions assumed confined to a cylindrical volume conformal to the PE backbone, dipolar interactions captured through the second virial coefficient, and the third virial coefficient required to stabilize the collapse of the chain. The increase in counterion size reduces the gain in free energy due to both ion-pair formation and the volume entropy of free ions. As a result, it leads to non-monotonic thermodynamic effects, making intuitive predictions challenging. To construct the theory, we use the Edwards-Muthukumar interaction Hamiltonian,[42; 43; 5] which captures the self-energy of the PE chain through segment-segment electrostatic and excluded volume interactions, including dipolar interactions, and also the conformational entropy of the PE chain. The derived generic free energy is extremized through a Gaussian trial Hamiltonian, following Flory.[44]
The use of freely rotating dipolar-pair interactions and short-range repulsions through the second and third virial coefficients in the free energy obviates the need for any new parameter in the theory beyond the three major ones - the Bjerrum length, the Debye screening length, and the dielectric mismatch parameter. The dipolar interactions lead to a first-order coil-globule transition of the PE at reasonably high Coulomb strengths, which shifts to a continuous, second-order transition with increasing counterion size, as the counterions progressively oppose the chain collapse. The parameterization of the dipolar interaction, assuming an over-estimation from the use of freely rotating dipoles, also shifts the transition to second order. In addition, we also calculate the size scaling exponents, and observe a variety of scaling behavior as an interplay of electrostatic (monopolar), excluded volume, and dipolar interactions. We further derive the thermodynamics by identifying the enthalpy- and entropy-driven regimes, as functions of Coulomb strength and counterion size, both in the absence and presence of a moderate salt concentration.
## II Theory
For the theoretical model, we consider a linear and flexible polyelectrolyte (PE) chain composed of \(N\) identical ionizable groups as repeat units or monomers, each of diameter \(\ell\) carrying a monovalent negative charge, with counterions of finite size (of diameter \(r_{c}\)), in a dilute solution with volume \(\Omega\). With a total of \(N\) counterions, the degree of counterion condensation \(\alpha=M/N\) is defined as the ratio of the number of charge-compensated monomers (monomers on which a counterion has condensed), \(M\), to the total number of monomers in the chain. The degree of ionization of the chain is defined as \(f=1-\alpha\).
The number density of the externally added monovalent salt, which is assumed to fully dissociate into \(n_{+}\) cations and \(n_{-}\) anions, is given by \(c_{s}=n_{+}/\Omega\equiv n_{-}/\Omega\). The dimensionless monomer density in the solution, denoted as \(\bar{\rho}\), can be expressed as \(\bar{\rho}=N/(\Omega/r_{c}^{3})\). Similarly, we define \(\bar{c}_{s}=n_{s}/(\Omega/r_{c}^{3})\), where \(n_{s}=n_{+}=n_{-}\).
The radius of gyration (\(R_{g}\)) characterizes the size of the PE chain. The free energy (\(F\)) of the system comprises the self-energy - conformal, electrostatic, and excluded volume - of the chain, and the entropic and enthalpic contributions of the condensed and mobile counterions. The free energy depends on two independent variables, namely \(M\) (or, equivalently, \(\alpha\) or \(f\)) and \(R_{g}\). The objective of the theory [5; 9] is to self-consistently evaluate the equilibrium values of \(M\) and \(R_{g}\) by minimizing the free energy \(F\), which is also a function of the electrostatic and other parameters (say, \(N,\Omega,\ell_{B},n_{s},r_{c}\) etc.), with respect to these variables, and to find the effect of counterion size (\(r_{c}\)) on the thermodynamic and conformational properties.
The free energy of the polymer chain is formulated by using the Edwards-Muthukumar Hamiltonian [5; 16; 42; 43], with the additions and modifications described below. The variational free energy of the system obtained from this Hamiltonian and the free ion contributions allows the size effects of counterions to manifest at larger scales, through the size and overall charge of the PE chain, the analysis of which is the main aim of this work. The total free energy is obtained from the following contributions.
**A. Entropy of condensed counterions:** It is often assumed that there are \(\binom{N}{M}\) ways to distribute \(M\) counterions over \(N\) monomers of a PE chain. The condensed counterions are, however, mobile along the chain contour, and monomer-counterion pairs typically do not form frozen dipoles, but show thermal fluctuations [45]. Therefore, to account for the entropy of the condensed counterions, it is reasonable to consider a volume, \(\Omega_{c}\), for which the outer boundary is a cylinder of radius \(d_{c}=\ell/2+r_{c}\), and the inner boundary is set by the monomer length (\(\ell\)), conformal with the chain backbone (Fig. 1). Within this volume, \(M\) counterions are randomly adsorbed (condensed) along the chain backbone. The translational volume entropy of such condensed counterions confined to a volume of \(\Omega_{c}\) is given by
\[S_{1}=k_{B}\log\left(\frac{\bar{\Omega}_{c}!}{(\bar{\Omega}_{c}-M)!M!}\right), \tag{1}\]
leading to the free energy \(F_{1}=-TS_{1}\) given by
\[\frac{F_{1}}{k_{B}T}=\bar{\Omega}_{c}\left[\left(1-\frac{M}{\bar{\Omega}_{c}} \right)\log\left(1-\frac{M}{\bar{\Omega}_{c}}\right)+\left(\frac{M}{\bar{ \Omega}_{c}}\right)\log\left(\frac{M}{\bar{\Omega}_{c}}\right)\right], \tag{2}\]
where, \(\bar{\Omega}_{c}\equiv\Omega_{c}/r_{c}^{3}=N\left[\left(\pi(0.5+\tilde{r}_{c}) ^{2}-1\right)\right]/\tilde{r}_{c}^{3}\), and \(\tilde{r}_{c}=r_{c}/\ell\) is the dimensionless diameter of the counterions.
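For concreteness, a minimal numerical sketch of Eq. 2 is given below (the function and argument names, and the small clipping of the occupation fraction to keep \(x\log x\) finite at the endpoints, are our own choices, not part of the model):

```python
import numpy as np

def F1_condensed_entropy(M, N, r_c_tilde):
    """Free energy -T*S_1 of the condensed counterions (Eq. 2), in units of k_B*T."""
    # dimensionless shell volume Omega_c_bar = N*[pi*(0.5 + r_c~)^2 - 1]/r_c~^3 (text below Eq. 2)
    omega_c = N * (np.pi * (0.5 + r_c_tilde) ** 2 - 1.0) / r_c_tilde ** 3
    # occupied fraction of the shell; clipped away from 0 and 1 so x*log(x) stays finite
    x = np.clip(M / omega_c, 1e-12, 1.0 - 1e-12)
    return omega_c * ((1.0 - x) * np.log(1.0 - x) + x * np.log(x))
```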
**B. Entropy of free ions:** The free ion entropy associated with \((N-M+n_{+})\) number of the uncondensed counterions and the salt cations and \(n_{-}\) number of coions (i.e., the free mobile ions in the solution) in the volume \(\Omega\) is \(k_{B}\log(\Omega^{N-M+n_{+}}/(N-M+n_{+})!n_{-}!)\), and \(F_{2}\), the free energy due to such entropy, is given by
\[\frac{F_{2}}{k_{B}T}=N\left[\left(f+\frac{\bar{c}_{s}}{\bar{\rho}}\right)\log \left(f\bar{\rho}+\bar{c}_{s}\right)+\frac{\bar{c}_{s}}{\bar{\rho}}\log\bar{c}_ {s}-\left(f+\frac{2\bar{c}_{s}}{\bar{\rho}}\right)\right]. \tag{3}\]
The dimensionless monomer density in the solution is given by, \(\bar{\rho}=N/(\Omega/r_{c}^{3})=(N\ell^{3}/\Omega)(r_{c}^{3}/\ell^{3})=\tilde{ \rho}\tilde{r}_{c}^{3}\) and \(\bar{c}_{s}=n_{s}/(\Omega/r_{c}^{3})=(n_{s}\ell^{3}/\Omega)(r_{c}^{3}/\ell^{3}) =\tilde{c}_{s}\tilde{r}_{c}^{3}\), where \(\tilde{\rho}=\rho\ell^{3}\) and \(\tilde{c}_{s}=c_{s}\ell^{3}\).
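A corresponding sketch of Eq. 3, again with hypothetical function and argument names, could read as follows (it assumes \(f>0\) or \(\bar{c}_{s}>0\) so that the logarithms are finite):

```python
import numpy as np

def F2_free_ion_entropy(f, N, rho_bar, cs_bar):
    """Translational-entropy free energy of the free ions (Eq. 3), in units of k_B*T."""
    # x*log(x) -> 0 as x -> 0, so the pure-salt term is dropped in the salt-free case
    salt_term = (cs_bar / rho_bar) * np.log(cs_bar) if cs_bar > 0.0 else 0.0
    return N * ((f + cs_bar / rho_bar) * np.log(f * rho_bar + cs_bar)
                + salt_term
                - (f + 2.0 * cs_bar / rho_bar))
```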
**C. Free energy of ion density fluctuation:** In the limit of low salt, that is \(\kappa\ell\to 0\), the Helmholtz free energy due to counterion density fluctuations approaches [47; 48]
\[\frac{F_{3}}{k_{B}T}=-\frac{\Omega\kappa^{3}}{12\pi}=-\frac{N\sqrt{4\pi}\tilde{ \ell}_{B}^{3/2}}{3\bar{\rho}}(f\bar{\rho}+2\bar{c}_{s})^{3/2}, \tag{4}\]
where the free ions considered in this expression are the same as in \(F_{2}\). Here, \(\widetilde{\kappa}=\sqrt{4\pi\tilde{\ell}_{B}(f\bar{\rho}+2\bar{c}_{s})/\tilde{r }_{c}^{3}}\) is the dimensionless inverse Debye screening length, and
Figure 1: **Cylindrical volume for condensed counterions around the PE chain:** To calculate the entropy of condensed counterions and electrostatic free energy of ion-pairs, motivated by evidence from simulations that the counterion density sharply peaks near the chain backbone [28; 46], we consider the counterions inside a hypothetical cylindrical volume of radius \(d_{c}=\ell/2+r_{c}\) around the chain contour.
\(\widetilde{\kappa}=\kappa\ell\). When the counterions have a finite size that is comparable to or even larger than the monomers, Eq. 4, which is only a limiting result for \(\kappa r_{c}\to 0\), ideally needs to be replaced by the full expression [47; 49] of the free energy given by
\[\frac{F_{3}}{\Omega k_{\mathrm{B}}T}=-\frac{1}{4\pi}\left[\log(1+\widetilde{ \kappa}\widetilde{r}_{c})-\widetilde{\kappa}\tilde{r}_{c}+\frac{1}{2}( \widetilde{\kappa}\tilde{r}_{c})^{2}\right], \tag{5}\]
where the finiteness of the counterions is accounted for by taking them as spheres of diameter \(r_{c}\).
However, for low salt, \(\kappa r_{c}\ll 1\) even if the counterion size becomes large. In this work, we use very small amounts of salt for a few results, and even with \(r_{c}/\ell=4\), Eq. (4) remains sufficient. For an analysis with high salt and large counterions, Eq. (5) needs to be used.
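Since Eq. 4 suffices throughout this work, a minimal sketch of the fluctuation term, evaluated directly from \(-\Omega\kappa^{3}/12\pi\) with the dimensionless \(\widetilde{\kappa}\) defined above, could be (function and argument names are our own):

```python
import numpy as np

def F3_fluctuation(f, N, rho_bar, cs_bar, lB_tilde, r_c_tilde):
    """Low-salt ion-density-fluctuation free energy, F3 = -Omega*kappa^3/(12*pi) (Eq. 4), in k_B*T."""
    # dimensionless inverse Debye length, kappa~ = kappa*l (defined below Eq. 4)
    kappa_t = np.sqrt(4.0 * np.pi * lB_tilde * (f * rho_bar + 2.0 * cs_bar) / r_c_tilde ** 3)
    omega_over_l3 = N * r_c_tilde ** 3 / rho_bar  # Omega/l^3, since rho_bar = N*r_c^3/Omega
    return -omega_over_l3 * kappa_t ** 3 / (12.0 * np.pi)
```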
**D. Free energy of ion-pair formation:** The accumulation of oppositely charged counterions near the PE chain can be characterized by both counterion condensation and a localized ionic atmosphere, and both result in qualitatively similar thermodynamic effects [50]. The free energy contribution from the electrostatic attraction between the charged monomers and 'condensed' counterions can be calculated exactly if the bound counterion density profile is known. For our analysis, however, we assume that this profile, or the related pair correlation function, is sharply peaked near the chain backbone [51; 28; 46], and that the monomers and respective counterions form dipoles with the shortest possible dipole length [52; 5; 26; 41]. The gain in free energy due to the formation of an ion pair associated with the adsorption of one counterion to a charged segment is, therefore, \(-e^{2}/(4\pi\epsilon_{0}\epsilon_{l}d_{mc})\), where \(\epsilon_{l}\) is the local dielectric constant and \(d_{mc}\) is the dipole length between the charge of the monomer and the counterion.
The adsorption free energy gain due to \(M\) number of counterion-monomer pairs is then given by
\[\frac{F_{4}}{k_{B}T}=-(1-f)N\delta\tilde{\ell}_{B}, \tag{6}\]
where \(\delta=(\epsilon\ell/\epsilon_{l}d_{mc})\), and \(\widetilde{d}_{mc}=d_{mc}/\ell=(\ell+r_{c})/2\ell\). The presence of a local binding constant is modeled by the phenomenological parameter \(\delta\),[52; 53; 54; 55; 56] which in a coarse-grained way qualitatively captures the drop in the local dielectric constant close to the organic PE (or protein) chain backbone, compared to its bulk value in a polar solvent. \(\delta\) as a parameter (although inadequately, in the absence of a microscopic theory [57; 58; 16]) addresses the fact that the local dielectric environment is significantly different for PEs and proteins from that of isolated small ions [57; 59; 60; 61], an effect recognized in early investigations [62; 63; 64]. The limited accessibility and disorientation of polar solvent dipoles close to the chain backbone result in a continuous increase of \(\epsilon_{l}\) with distance from the chain backbone [62], but in this model, for simplicity, a single value of \(\epsilon_{l}\), lower than the bulk value \(\epsilon\), is taken as a parameter. The electrostatic interaction of two like-charged counterions condensing on two adjacent monomers is accounted for, to some extent, by the dipolar interaction, which is discussed later.
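A sketch of the ion-pair term of Eq. 6, with the dielectric mismatch entering through \(\delta=\epsilon\ell/\epsilon_{l}d_{mc}\) and the dipole length \(\widetilde{d}_{mc}=(1+\widetilde{r}_{c})/2\), might be (the shorthand argument eps_ratio \(=\epsilon/\epsilon_{l}\) is our own):

```python
def F4_ion_pair(f, N, lB_tilde, r_c_tilde, eps_ratio=2.0):
    """Electrostatic free-energy gain of (1-f)*N monomer-counterion pairs (Eq. 6), in k_B*T."""
    d_mc = (1.0 + r_c_tilde) / 2.0   # dimensionless dipole length d_mc/l
    delta = eps_ratio / d_mc         # delta = eps*l/(eps_l*d_mc)
    return -(1.0 - f) * N * delta * lB_tilde
```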
**E. Self energy of the PE chain:** In terms of a general Hamiltonian \(H\) that comprises the potentials for the monomer-monomer interactions of the PE chain, the free energy \(F_{5}\) of the chain originating from such Hamiltonian will be given by
\[e^{-\beta F_{5}}=\int D\mathbf{R}\left(s\right)\exp(-\beta H), \tag{7}\]
where \(\beta=1/k_{B}T\). The integral \(D\mathbf{R}\left(s\right)\) is a conformational integral of the canonical partition function of the polyion. \(\mathbf{R}\left(s\right)\) is the position vector of the chain at the arc length variable \(s\left(0\leq s\leq N\right)\), where \(N\) is the number of monomers in the chain. The interaction Hamiltonian \(H\), developed by Edwards and Singh [42], and extended by Muthukumar for charged systems [5; 43], can be expressed in terms of the following interactions between monomers: a) their connectivity (\(H_{0}\)), b) the short range interactions among monomers or condensed counterions (\(H_{ex}\)), where the monomers may be separated by a large distance (long-range) along the contour of the PE, and the interactions include repulsive, non-electrostatic excluded volume interactions or attractive dipolar interactions, and c) the screened repulsive electrostatic interaction between charge uncompensated monomers (\(H_{el}\)). Hence
\[H=H_{0}+H_{ex}+H_{el}, \tag{8}\]
where the components (\(H_{0},\ H_{ex}\) and \(H_{el}\)) are given by
\[H_{0} =\frac{3}{2\ell^{2}}\int_{0}^{N}ds\left(\frac{\partial\mathbf{R} \left(s\right)}{\partial s}\right)^{2} \tag{9}\] \[H_{ex} =w\ell^{3}\int_{0}^{N}ds\int_{0}^{N}ds^{\prime}\delta( \mathbf{R}(s)-\mathbf{R}(s^{\prime})),\ \text{and}\] (10) \[H_{el} =\frac{f^{2}\ell_{B}}{2}\int_{0}^{N}ds\int_{0}^{N}ds^{\prime} \frac{\exp\left(-\kappa\left|\mathbf{R}(s)-\mathbf{R}(s^{\prime})\right| \right)}{\left|\mathbf{R}(s)-\mathbf{R}(s^{\prime})\right|}. \tag{11}\]
The arguments of the \(\delta\)-functions in the above integrals (Eq. 10) denote the difference in contour vectors corresponding to the monomer pair involved in the short range interactions (excluded volume or dipolar), where \(w\) is the interaction strength. The interactions among charge uncompensated monomers are governed by a screened Coulomb electrostatic potential, with the screening parameter \(\kappa\). The counterion condensation has been addressed at the mean-field level, resulting in the coefficient \(f^{2}\) in Eq. 11.
Directly evaluating the partition sum using the aforementioned Hamiltonian (Eqs. 8, 9, 10, and 11) can be a rather intricate task. Instead, a variational procedure [43], which involves a trial Hamiltonian obtained by rewriting the Hamiltonian of Eq. 8 as
\[H= H_{\mathrm{trial}}+(H-H_{\mathrm{trial}}), \tag{12}\]
where
\[H_{\rm trial}=\frac{3}{2\ell\ell_{1}}\int_{0}^{N}ds\left(\frac{\partial{\bf R} \left(s\right)}{\partial s}\right)^{2}, \tag{13}\]
can be employed. Here, \(\ell_{1}\) represents the variational parameter that characterizes the effective expansion factor of the polyion in comparison to its Gaussian size [43; 5; 16; 42]. The mean-field assumption is based on the (Gibbs-Bogoliubov) inequality,
\[\left\langle{\rm e}^{-\beta H}\right\rangle_{H_{trial}}\geq{\rm e}^{-\beta \left\langle H\right\rangle_{H_{trial}}}, \tag{14}\]
which implies that the free energy,
\[\widetilde{F}_{5}=\left\langle\beta(H_{0}-H_{trial})\right\rangle_{H_{trial}}+ \left\langle\beta H_{ex}\right\rangle_{H_{trial}}+\left\langle\beta H_{el} \right\rangle_{H_{trial}}, \tag{15}\]
needs to be extremized with respect to the charge (\(f\)) and size (expansion factor, \(\ell_{1}\)) of the polyelectrolyte. If one shifts to the polymer coordinate (the spatial coordinate \({\bf r}\)) the Hamiltonian can be approximately recast in terms of the monomer density profile of the PE chain [44; 56; 65; 66]. If one assumes a spherically symmetric Gaussian distribution, the monomer density centered at \({\bf r}_{0}\) and positioned at \({\bf r}\) can be expressed as
\[\rho_{n0}({\bf r})=N\left(\frac{3}{4\pi R_{g}^{2}}\right)^{3/2}\exp\left[- \frac{3(|{\bf r}-{\bf r}_{0}|)^{2}}{2R_{g}^{2}}\right]. \tag{16}\]
Under the assumption of uniform expansion of the PE chains [43; 5; 16; 42], the average dimensionless radius of gyration of the chain can be obtained as
\[\widetilde{R}_{g}=\sqrt{\frac{N\widetilde{\ell}_{1}}{6}}, \tag{17}\]
where \(\widetilde{R}_{g}=R_{g}/\ell\) and \(\tilde{\ell}_{1}=\ell_{1}/\ell\).
Using the Fourier transform (in \({\bf k}\)-space) of the monomer density profile (Eq. 16) and integrating the averaged interaction of monomers (Eq. 15), the total free energy contribution due to the polymer degrees of freedom included in the Hamiltonian (Eq. 8) can be obtained in the form
\[\frac{F_{5}}{k_{B}T} =F_{51}+F_{52}+F_{53}\] \[=\frac{3}{2}\left[\widetilde{\ell}_{1}-1-\log\widetilde{\ell}_{1 }\right]+\left(\frac{9}{2\pi}\right)^{3/2}\frac{w\sqrt{N}}{\widetilde{\ell}_ {1}^{3/2}}\] \[+\frac{f^{2}N^{2}\widetilde{\ell}_{B}}{2}\Theta_{s}\left(a \right), \tag{18}\]
where,
\[\Theta_{s}\left(a\right)=\frac{2}{\pi}\left[\sqrt{\frac{\pi\widetilde{\kappa}^ {2}}{4a}}-\frac{\widetilde{\kappa}\pi}{2}\exp\left(a\right)\!{\rm erfc}\left( \sqrt{a}\right)\right], \tag{19}\]
and \(a=\widetilde{\kappa}^{2}\widetilde{R}_{g}^{2}/3=\widetilde{\kappa}^{2}N \widetilde{\ell}_{1}/18\).
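A numerical sketch of Eqs. 18-19 is given below; it uses scipy's scaled complementary error function erfcx(x) \(=e^{x^{2}}\mathrm{erfc}(x)\) to evaluate \(\exp(a)\mathrm{erfc}(\sqrt{a})\) stably, and assumes \(f>0\) (hence \(a>0\)); the function and argument names are our own:

```python
import numpy as np
from scipy.special import erfcx

def Theta_s(kappa_t, a):
    """Screened-Coulomb function of Eq. 19; erfcx(sqrt(a)) = exp(a)*erfc(sqrt(a))."""
    return (2.0 / np.pi) * (np.sqrt(np.pi * kappa_t ** 2 / (4.0 * a))
                            - 0.5 * np.pi * kappa_t * erfcx(np.sqrt(a)))

def F5_chain(f, ell1_tilde, N, w, lB_tilde, kappa_t):
    """Variational single-chain free energy of Eq. 18 (in k_B*T): conformational,
    two-body (excluded volume/dipolar), and screened electrostatic parts."""
    a = kappa_t ** 2 * N * ell1_tilde / 18.0
    F51 = 1.5 * (ell1_tilde - 1.0 - np.log(ell1_tilde))
    F52 = (9.0 / (2.0 * np.pi)) ** 1.5 * w * np.sqrt(N) / ell1_tilde ** 1.5
    F53 = 0.5 * f ** 2 * N ** 2 * lB_tilde * Theta_s(kappa_t, a)
    return F51 + F52 + F53
```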
The effective two-body interaction parameter \(w\) (Eq. 18) is the most important quantity in this work. We note that the short-range \(\delta\)-function interactions (Eq. 10) can be of several types - the repulsive excluded volume interaction, the attractive charge-dipole interaction, the dipole-dipole interaction, etc. The short-range attractive interactions, in this case involving the dipoles, effectively modify the excluded volume parameter [5; 11; 55; 67]. The size and number of dipoles formed on the chain backbone play a critical role in determining the strength of such interactions and, in turn, in the equilibrium behavior of polyelectrolytes. Considering the counterion adsorption, one determines that \((1-f)N\) out of \(N\) monomers are paired with counterions, and the remaining \(fN\) monomers are charge uncompensated. The overall excluded volume parameter in the mean-field can thus be written as [48; 67]
\[w=f^{2}w_{mm}+(1-f)^{2}w_{dd}+f(1-f)w_{md}, \tag{20}\]
where \(w_{mm}\), \(w_{dd}\), and \(w_{md}\) are the strengths of the short-range two-body interactions arising from, respectively, the usual, non-electrostatic excluded volume interaction between uncompensated monomers, electrostatic attraction between a pair of ion-pairs (a dipole pair), and the electrostatic attraction between an uncompensated monomer and an ion-pair (a monopole-dipole pair). The limiting cases are as follows. In the absence of any condensed counterion, \(f=1\), and \(w=w_{mm}\). Conversely, when all the counterions are condensed, \(f=0\), and \(w=w_{dd}\). We may note that for an extended chain, \(w\), being a two-body short-range interaction parameter, is not very effective. Therefore, in our analysis we assume the first term containing \(w_{mm}\) to be negligible compared to the second term.
In this model it turns out that \(w_{md}\) for the monopole-dipole interaction has coefficients and a dependence on the electrostatic parameters (\(\widetilde{\ell}_{B}\) and \(\delta\)) similar to those of \(w_{dd}\) for the dipole-dipole interaction, and both are attractive. In addition, for a first-order collapse of the chain a significant amount of counterion condensation occurs (\(f\to 0\)), leading to the dipole-dipole pair interaction being dominant over the monopole-dipole interaction. Hence, in the subsequent calculations, we ignore the effects arising from monopole-dipole interactions and drop the last term in Eq. 20.
As discussed before, the short-range attractive interaction between dipoles embedded on the chain backbone can be represented by a \(\delta\)-function potential [5; 11; 41; 55] with the two-body strength parameter \(w_{dd}\). \(w_{dd}\) can be calculated in the usual way using the Mayer function [68] once the actual interaction potential is known. Considering the dielectric mismatch near the chain backbone, and assuming that counterions may adsorb in random directions perpendicular to the local chain axis (freely rotating dipoles), with the chain being flexible, the interaction energy \(U_{dd}(r)\) between a pair of dipoles separated by a
distance of \(r\) can be expressed by
\[\frac{U_{dd}(r)}{k_{B}T}=\begin{cases}+\infty&\text{ if }r\leq\sigma\\ -\left(4\pi/3\right)\left(\widetilde{d}_{mc}^{3}\delta\widetilde{\ell}_{B}/ \widetilde{r}^{3}\right)^{2}&\text{ if }r>\sigma\end{cases} \tag{21}\]
where \(\widetilde{d}_{mc}=d_{mc}/\ell\), \(\widetilde{r}=r/\ell\), and \(\sigma\) corresponds to the hard sphere contact distance. In addition, when the counterions are of a similar size to the monomers or larger, the short-range repulsion (usual non-electrostatic excluded volume interaction) in the dipole-pair interaction needs to be considered as well. Putting back the potential, \(U_{dd}(r)/k_{B}T\), into the Mayer function, we get,
\[f(r)=\begin{cases}-1,&r\leq\sigma\\ -\beta U_{dd}(r),&r\geq\sigma.\end{cases} \tag{22}\]
Hence, the second virial coefficient (in units of volume) can be obtained as,
\[w_{dd}^{\prime} =-\int_{0}^{\infty}4\pi r^{2}f(r)dr\] \[=-\int_{0}^{\sigma}4\pi(-1)r^{2}dr-\int_{\sigma}^{\infty}4\pi(- \beta U_{dd}(r))r^{2}dr. \tag{23}\]
Assuming \(\sigma\simeq d_{mc}\), the dimensionless strength parameter of dipole-dipole interactions that contributes to the excluded volume interaction can be obtained as,
\[w_{dd}=\frac{w_{dd}^{\prime}}{\ell^{3}}\equiv\frac{4\pi\widetilde{d}_{mc}^{3 }}{3}-\frac{16}{9}\pi^{2}\widetilde{d}_{mc}^{3}(\delta\widetilde{\ell}_{B})^{ 2}. \tag{24}\]
The first term arises from repulsive interactions between pairs of dipoles (for \(r\leq\sigma\), Eq. 21). Such excluded volume contributions increase with the increasing number of bound counterions (Eq. 20), as pointed out in earlier literature.[1] Furthermore, since a polymer chain cannot be more compact than a sphere with \(R_{g}\sim N^{1/3}\), to ensure a physically realistic result for a collapsed chain, one needs to consider the three-body interaction through the third virial coefficient,[9; 11; 69; 70; 71] denoted by \(w_{3}\). We include it in our calculation with the additional free energy term given by
\[\frac{F_{6}}{k_{B}T}=\frac{w_{3}}{\widetilde{\ell}_{1}^{3}}. \tag{25}\]
Instead of taking \(w_{3}\) as a parameter, as was previously done,[5; 9; 11; 15; 1] we calculate it explicitly with the equation,
\[w_{3}=-\frac{1}{3\ell^{6}}\int_{0}^{\infty}\int_{0}^{\infty}f_{12}f_{13}f_{23 }d^{3}r_{12}d^{3}r_{13}, \tag{26}\]
where, in general notation, \(f_{ij}\) and \(r_{ij}\) are the Mayer function and the distance between particles \(i\) and \(j\), respectively. To evaluate this integral, we first fix the positions of particles \(1\) and \(2\) (such that \(r_{12}<d_{mc}\)) and let particle \(3\) take all possible positions so that we can effectively integrate over the variable \(r_{13}\).[68] To achieve an analytical form, we assume a hard-sphere potential in our case, with the hard-sphere contact distance being the dipole length \(d_{mc}\). The third virial coefficient will then become
\[w_{3}=\frac{5\pi^{2}d_{mc}^{6}}{18\ell^{6}}\equiv\frac{5\pi^{2}\widetilde{d}_ {mc}^{6}}{18}. \tag{27}\]
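A sketch collecting the two- and three-body coefficients (Eqs. 20, 24, and 27), keeping only the dipole-dipole contribution to \(w\) as argued above, is given below; the optional factor w1 anticipates the parameterized dipolar attraction introduced in the Results section, and the function name and eps_ratio shorthand are our own:

```python
import numpy as np

def virial_coefficients(f, lB_tilde, r_c_tilde, eps_ratio=2.0, w1=1.0):
    """Effective two-body parameter w (Eq. 20 with w_mm and w_md neglected, w_dd from Eq. 24)
    and three-body parameter w3 (Eq. 27). w1 = 1 reproduces Eq. 24 as written."""
    d_mc = (1.0 + r_c_tilde) / 2.0     # dimensionless dipole length d_mc/l
    delta = eps_ratio / d_mc
    w_dd = (4.0 * np.pi / 3.0) * d_mc ** 3 \
           - w1 * (16.0 / 9.0) * np.pi ** 2 * d_mc ** 3 * (delta * lB_tilde) ** 2
    w = (1.0 - f) ** 2 * w_dd          # Eq. 20 with the w_mm and w_md terms dropped
    w3 = 5.0 * np.pi ** 2 * d_mc ** 6 / 18.0
    return w, w3
```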
As constructed, the total free energy, \(F=\sum_{i}F_{i}\), \(i=1\) to \(6\), depends on two variables: \(\widetilde{\ell}_{1}\), which represents the effective expansion factor of the mean square end-to-end distance of the PE chain compared to its Gaussian size, and the degree of ionization \(f\) of the PE chain.
It should be noted that the free energy described above is applicable only for a single polyelectrolyte (PE) chain in a dilute solution. It remains valid for all degrees of ionization or ionizability of the PE, as well as all temperatures. However, it is only applicable for salt concentrations that are not too high, such that \(\kappa^{-1}\geq\ell_{B}\) or \(c_{s}\leq(8\pi\ell_{B}^{3})^{-1}\) for a monovalent salt.
## III Results and Discussion
The system consists of a solution containing one polyelectrolyte chain with finite-size counterions and also small molecular salt. Several interactions such as Coulomb energy of ion-pairs, screened Coulomb repulsion among charge-uncompensated monomers, density fluctuations of the mobile ions in the solution, excluded volume interactions among monomers as well as counterions, and dipolar attractions between monomer-counterion ion-pairs are present, and have been described by the free energy components in our model [Eqs. (2), (3), (4), (6), (18), and (25)]. The equilibrium total free energy \(F\) (expressed as \(\sum_{i=1}^{6}F_{i}\)) is determined by a self-consistent minimization with respect to the size, given by the effective expansion factor of the PE chain, \(\widetilde{\ell}_{1}\), and the degree of counterion condensation, \(\alpha\) (or, equivalently, degree of ionization, \(f=1-\alpha\)). The most important parameter of this study is the counterion size, \(\widetilde{r}_{c}\). The temperature, \(T\), and the bulk dielectric constant, \(\epsilon\) (entering through the dimensionless Bjerrum length, \(\widetilde{\ell}_{B}\)), the degree of polymerization, \(N\), the monomer density, \(\bar{\rho}\), the monovalent salt density, \(\bar{c}_{s}\), and the dielectric mismatch parameter, \(\delta\) (a function of both \(r_{c}\) and the local dielectric constant \(\epsilon_{l}\)), are the other parameters of the problem. We set the monomer density by fixing the dimensionless volume of the system, \(\Omega/\ell^{3}=2\times 10^{6}\), where \(\ell\) represents the size of a monomer. For a chain of length \(N=1000\), this results in the dimensionless monomer density \(\bar{\rho}=0.0005\,\widetilde{r}_{c}^{3}\). Both monomers and counterions are taken to be monovalent.
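As an illustration only, the self-consistent minimization described here can be sketched by reusing the helper functions from the Theory section above and handing the total free energy to a standard optimizer; the starting point, bounds, and optimizer choice below are our own assumptions, not part of the model:

```python
import numpy as np
from scipy.optimize import minimize

N, omega_over_l3 = 1000, 2.0e6                     # chain length and Omega/l^3 from the text
lB_tilde, r_c_tilde, eps_ratio, w1, cs_tilde = 4.0, 1.0, 2.0, 1.0, 0.0

rho_bar = N * r_c_tilde ** 3 / omega_over_l3       # rho_bar = rho~ * r_c~^3
cs_bar = cs_tilde * r_c_tilde ** 3

def total_free_energy(x):
    """F = F1 + ... + F6 in k_B*T, as a function of (ell1~, f)."""
    ell1, f = x
    M = (1.0 - f) * N
    kappa_t = np.sqrt(4.0 * np.pi * lB_tilde * (f * rho_bar + 2.0 * cs_bar) / r_c_tilde ** 3)
    w, w3 = virial_coefficients(f, lB_tilde, r_c_tilde, eps_ratio, w1)
    F6 = w3 / ell1 ** 3                            # Eq. 25
    return (F1_condensed_entropy(M, N, r_c_tilde)
            + F2_free_ion_entropy(f, N, rho_bar, cs_bar)
            + F3_fluctuation(f, N, rho_bar, cs_bar, lB_tilde, r_c_tilde)
            + F4_ion_pair(f, N, lB_tilde, r_c_tilde, eps_ratio)
            + F5_chain(f, ell1, N, w, lB_tilde, kappa_t)
            + F6)

res = minimize(total_free_energy, x0=[1.0, 0.5],
               bounds=[(1e-3, 50.0), (1e-4, 1.0 - 1e-4)])
ell1_eq, f_eq = res.x
print(f"ell1~ = {ell1_eq:.3f}, f = {f_eq:.3f}, alpha = {1.0 - f_eq:.3f}")
```

Near the first-order coil-globule transition two minima coexist, so in practice a scan over several starting points (or over \(\widetilde{\ell}_{1}\)) is advisable before accepting a single local minimum.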
Our primary focus is on the effect of counterion specificity, through its size, on the equilibrium configurations of the PE chain, degree of counterion condensation, the size scaling exponents, and system thermodynamics, through evaluation of individual free energy components.
### Effect of local dielectric constant (\(\epsilon_{l}\)) on the conformational behavior of the PE chain
We first benchmark the general problem of counterion condensation by the known results for a fully ionizable PE, taking the counterion size equal to the monomer size as is traditionally done [10; 26; 27; 28; 3; 5; 9; 12; 26; 28; 67]. There are two major differences in the formulation of our model compared to the previous ones. First, a volume entropy instead of combinatorial entropy for the condensed counterions, confined to a cylinder conformal to the chain backbone, is considered (Fig. 1 and Eq. 2) and second, electrostatic self-energy of the PE chain has been calculated differently, in a simpler way (Eqs. 7 to 18). Here we briefly note the key results, obtained by the minimization of the total free energy with respect to the thermodynamic variables size and charge, \(\widetilde{\ell}_{1}\) and \(f\), respectively, of the PE chain for a set of values of the local dielectric constant (\(\epsilon_{l}\)), represented by \(\delta\). Importantly, to benchmark and compare with the previous results of charge interactions, the excluded volume and dipolar interactions are ignored for the time being, and no additional salt is taken (\(w=0.0\), \(w_{3}=0.0\), and \(\widetilde{c}_{s}=0.0\)).
The degree of counterion condensation (\(\alpha=1-f\)) and the size of the PE chain (\(\widetilde{\ell}_{1}=6\widetilde{R}_{g}^{2}/N\)) are obtained as a function of the Bjerrum length (\(\widetilde{\ell}_{B}\)), as shown in Fig. 2(A-B). As expected, in the weak electrostatic regime (low \(\widetilde{\ell}_{B}\) or high temperatures - note that the bulk dielectric constant does not affect the product \(\delta\widetilde{\ell}_{B}\)), the counterion adsorption is minimal (\(\alpha\sim 0\), \(f\sim 1\)), and the thermalized chain is Gaussian with \(\widetilde{\ell}_{1}\sim 1\). As the Coulombic effect strengthens (higher \(\widetilde{\ell}_{B}\) or lower temperature), first the electrostatic repulsion among charged monomers expands the chain. With further increasing \(\widetilde{\ell}_{B}\), counterions start to adsorb onto the chain backbone, reducing such repulsion, which leads to deswelling of the chain. At very high \(\widetilde{\ell}_{B}\) values, all the counterions get adsorbed (\(\alpha\sim 1\), \(f\sim 0\)), and in the absence of electrostatic repulsion the chain assumes a Gaussian configuration once more (\(\widetilde{\ell}_{1}\sim 1\)), provided that the excluded volume interaction and dipolar attraction are both ignored. Therefore, \(\widetilde{\ell}_{1}\) exhibits non-monotonic variation with \(\widetilde{\ell}_{B}\), while \(\alpha\) increases monotonically. These results are qualitatively very similar to Ref. [5], but in our model the free energy of the PE chain is calculated by assuming a Gaussian segment distribution, resulting in a simpler self-energy expression (Eq. 18).
The variation in local dielectricity (\(\epsilon_{l}\)) affects \(\alpha\), and in turn \(\widetilde{\ell}_{1}\). By considering the closest distance between the condensed counterion and the monomer allowed by hard-sphere contact, the dipole length is \(d_{mc}=\ell/2+r_{c}/2\). As the counterions have the same size as the monomers in this case (\(r_{c}=\ell\)), \(d_{mc}=\ell\) and \(\delta=\epsilon\ell/\epsilon_{l}d_{mc}=\epsilon/\epsilon_{l}\).
Furthermore, for a fixed \(\widetilde{\ell}_{B}(\geq 1)\), a lower value of the local dielectric constant (\(\epsilon_{l}\)) leads to a greater accumulation of counterions near the chain backbone [Fig. 2(B)], due to a higher electrostatic energy gain, quantified by \(-\delta\widetilde{\ell}_{B}=-e^{2}/(4\pi\epsilon_{0}\epsilon_{l}d_{mc}k_{B}T)\). In essence, the degree of ionization is highly sensitive to the dielectric mismatch \(\delta\), analogous to \(\widetilde{\ell}_{B}\). [5; 6; 7; 9; 16; 52; 53; 54; 55; 56]
### Counterions with finite size
In this section, we include the effective two-body interaction parameter (Eq. 20), and consider the effect of counterion size on counterion adsorption, chain conformations, and thermodynamics. As the counterions can be larger than the monomers, one must consider the excluded volume interactions among them. More importantly, the adsorption of counterions forms dipoles on the PE chain, the interactions among which (as described in the second and third terms of Eq. 18) will have a significant effect on the conformations and thermodynamics of the chain.
With an increasing size of the counterions, both the Coulomb free energy gain of counterion-monomer pairs (\(F_{4}\), Eq. 6) and free ion entropy (\(F_{2}\), Eq. 3) decrease. These two competing and nonlinear thermodynamic contributions (\(F_{2}\) and \(F_{4}\)) effectively set the degree of counterion condensation, which in turn dictates the size of the PE chain. Therefore, in general it is hard to predict the trends with changing counterion size just on physical or intuitive grounds. Given the modest set of parameters we have used, however, it is apparent that with decreasing counterion size the electrostatic gain in ion-pair free energy (\(F_{4}\)) wins over the loss in entropy, due to the loss of a freely roaming counterion to counterion-monomer pair formation (\(F_{2}\)). We find this violated in a few cases, as we shall see later. Furthermore, the counterion size
Figure 2: **Size and charge of a single, isolated, flexible PE chain (ignoring excluded volume and dipolar interactions):** (A) The size (\(\widetilde{\ell}_{1}\)) and (B) the degree of counterion condensation (\(\alpha=1-f\)) of the PE chain are plotted as functions of the Bjerrum length \(\widetilde{\ell}_{B}\) for different local dielectric constants (\(\delta=\epsilon/\epsilon_{l}\) with counterions having the same size of monomers, i.e., \(\widetilde{r}_{c}=1.0\)). Excluded volume and dipolar interactions are ignored (\(w=w_{3}=0.0\)). The PE charge and size decrease significantly with a decreasing local dielectric constant, \(\epsilon_{l}\). The other parameters are: \(N=1000\), \(\widetilde{c}_{s}=0.0\), and \(\bar{\rho}=0.0005\)\(\widetilde{r}_{c}^{3}\).
affects the length of the monomer-counterion dipole. The pairwise dipolar interaction is captured through the second virial coefficient \(w_{dd}\) (Eq. 20), whereas the three-body interaction is incorporated via the third virial coefficient \(w_{3}\) (Eq. 25), the latter being required to provide stability against collapse, as detailed in Eq. 27. It is notable that the introduction of \(w_{dd}\) and \(w_{3}\) above, in their current form (Eqs. 18 and 25), does not add any new parameter to the analysis.
After introducing the second and third virial coefficients, we continue the minimization of the free energy with respect to size and charge, \(\widetilde{\ell}_{1}\) and \(\alpha\) (or \(f\)), respectively. As we vary the counterion size in this section, \(\epsilon/\epsilon_{l}=2\) is kept constant, equivalent to taking the same pair of PE backbone and polar solvent throughout this part of the analysis.
#### iii.2.1 Effect of counterion size on the conformational behavior of the PE chain
As before, equilibrium values of the chain size \(\widetilde{\ell}_{1}\) and the degree of counterion condensation \(\alpha\) are plotted as functions of \(\widetilde{\ell}_{B}\), but this time for different counterion sizes, in Fig. 3(A-B). With increasing \(\widetilde{\ell}_{B}\), counterions condense onto the chain backbone, forming dipoles. The attractive interaction among dipoles, which is taken as a two-body short-range attraction effectively increasing the solvent poorness (Eqs. 18 and 20), induces a coil-to-globule transition for a sufficiently high electrostatic strength. For example, the transition occurs for \(\widetilde{r}_{c}=1.0\) at \(\widetilde{\ell}_{B}\sim 4\). The collapse occurs when the dipolar attraction overcomes the electrostatic repulsion among charge uncompensated monomers, resulting in a negligible total charge within the globule due to counterion adsorption that minimizes the electrostatic energy penalty.
At lower values of \(\widetilde{\ell}_{B}\), there is no effect of counterion
Figure 3: **Size and charge of a single, isolated, flexible PE chain (including excluded volume and dipolar interactions):** (A) The size (\(\widetilde{\ell}_{1}\)) and (B) the degree of counterion condensation (\(\alpha=1-f\)) of the PE chain are plotted as functions of the Bjerrum length \(\widetilde{\ell}_{B}\) for different counterion sizes (\(\widetilde{r}_{c}=0.5,1,2,3\)), which affect \(\delta=\epsilon\ell/\epsilon_{l}d_{mc}\). Excluded volume and dipolar interactions, assuming freely rotating dipoles, along with three-body interactions are included (\(w,w_{3}\) are calculated). \(\epsilon/\epsilon_{l}\) is fixed at 2.0. The PE chain undergoes first-order coil-globule transition for higher \(\widetilde{\ell}_{B}\), but remains relatively swollen in the collapsed state for bulkier counterions. In (C) and (D) same plots are made with a parametrically reduced dipolar attraction (\(w_{1}=0.05\), instead of 1.0), which show, with bulkier counterions, the PE chain remains relatively swollen, and the transition becomes second-order. The other parameters are: \(N=1000\), \(\widetilde{c}_{s}=0.0\), and \(\bar{\rho}=0.0005\)\(\widetilde{r}_{c}^{3}\).
Figure 4: **Effect of Counterion Size on PE Chain Size and Scaling Exponent at high Coulomb Strengths:** The scaling exponent \(\nu\), calculated as \(\nu=\log(\widetilde{R}_{g})/\log(N)\), is plotted as a function of \(\widetilde{r}_{c}\) for different values of dipolar interaction strength parameter, \(w_{1}\), for two different Coulomb strengths, \(\widetilde{\ell}_{B}=8\) (A) and \(\widetilde{\ell}_{B}=16\) (B) [lines are guides to the eye]. (C) The size (\(\widetilde{\ell}_{1}\)) and (D) the degree of ionization (\(f\)) are plotted as functions of \(\widetilde{r}_{c}\) for different values of \(w_{1}\), at \(\widetilde{\ell}_{B}=8\) and \(N=1000\). The scaling exponent indicates the conformational behavior of the PE chain, 1/3 for a collapsed state due to dipolar attractions of counterion-monomer pairs, \(\sim 3/5\) for a swollen state due to excluded volume of bulky counterions, and \(\sim 0.7\) for a swollen state due to like-charge repulsions. The chain may remain swollen at high \(\widetilde{\ell}_{B}\)’s if the counterions are bulkier. The other parameters are: \(\widetilde{c}_{s}=0.0\), and \(\bar{\rho}=0.0005\)\(\widetilde{r}_{c}^{3}\).
size on the chain's conformational behavior (the value of \(\widetilde{\ell}_{1}\) remains the same up to \(\widetilde{\ell}_{B}\sim 1.5\)), as shown in Fig. 4(A), due to the absence of counterion condensation. Counterion size was found not to impact the conformation of PE chains at low \(\widetilde{\ell}_{B}\) in recent simulations [18; 38] too. However, as the electrostatic interactions become more significant, counterions start to condense, and the size of the counterions begins to play a crucial role in determining the chain's behavior. In particular, the chain collapses at a higher value of \(\widetilde{\ell}_{B}\) for bulkier counterions, as the contribution from the excluded volume increases and there is less overall dipolar attraction for the chain due to the smaller number of dipoles formed. The collapse of the PE chain to a compact globule due to attractive dipole-dipole interactions for smaller counterions has been observed in previous [26; 27] and recent [17; 18] simulations. Additionally, in Fig. 3(B) we note that the PE chain collapses with a slightly lesser degree of condensation for bulkier counterions. As the dipole length is larger for bigger counterions, it results in a stronger dipolar attraction that suppresses the short-range repulsion effects, but a larger \(\widetilde{\ell}_{B}\) is required for that.
The collapse and related size scaling of the PE chain can be analyzed noting that for \(w>0\) one may ignore the \(w_{3}\) term (\(F_{6}\) in Eq. 25), take the first derivative of \(F_{5}\) with respect to \(\widetilde{\ell}_{1}\) (Eq. 18), and use the definition \(\widetilde{R}_{g}=\sqrt{N\widetilde{\ell}_{1}/6}\) (from Eq. 17). For \(w<0\), however, \(F_{6}\) needs to be included in the derivative. These lead to
\[\widetilde{R}_{g}\sim\begin{cases}\left[\left(9/2\pi\right)^{3/2}/\sqrt{6} \right]^{1/5}w^{1/5}N^{3/5},&w>0\\ \left(2\pi/9\right)^{1/2}(2w_{3}/w)^{1/3}\,N^{1/3}.&w<0.\end{cases} \tag{28}\]
Substitution of the above equations in the free energy of the PE chain (\(F_{5}+F_{6}\)) in the limits of \(w>0\) and \(w<0\) gives
\[\frac{F_{5}+F_{6}}{k_{B}T}\sim\begin{cases}(9/2\pi)^{3/5}\ w^{2/5}N^{1/5},&w>0 \\ (9/2\pi)^{3}(w^{2}/2w_{3})N,&w<0.\end{cases} \tag{29}\]
Hence, for \(w_{dd}<0\), Eqs. 18 and 28 show that the chain may undergo a coil-to-globule transition with the respective size scaling exponents, depending on the degree of counterion condensation. This will be validated by the results obtained below (in Fig. 4).
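A small helper for the two branches of Eq. 28 (taking \(|w|\) in the collapsed branch, since \(w<0\) there, and assuming \(w\neq 0\)) might look like the following sketch; the function name is our own:

```python
import numpy as np

def Rg_asymptotic(N, w, w3):
    """Asymptotic R_g~ from Eq. 28: swollen branch for w > 0, globular branch for w < 0."""
    if w > 0.0:
        prefactor = ((9.0 / (2.0 * np.pi)) ** 1.5 / np.sqrt(6.0)) ** 0.2
        return prefactor * w ** 0.2 * N ** 0.6          # R_g~ ~ w^(1/5) N^(3/5)
    return np.sqrt(2.0 * np.pi / 9.0) * (2.0 * w3 / abs(w)) ** (1.0 / 3.0) * N ** (1.0 / 3.0)
```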
So far we have considered the conformational and counterion adsorption thermodynamics of the PE chain without parameterizing the dipole-dipole interaction. In our theory, we assume freely rotating dipoles, an assumption that is progressively more valid at higher temperatures.[72] Considering axially restricted dipolar rotations [41] can lead to significantly reduced attractive interactions, and short-range repulsive potentials may prevent the complete chain collapse in simulations.[18; 41] Moreover, due to the presence of a polar solvent, the spatial dielectric behavior may also alter the dipolar interaction strength. In this context, we explore the effects of altered interaction strengths by phenomenologically parameterizing the dipole-dipole interaction using the parameter \(w_{1}\), taking it as a coefficient of the second term in Eq. 24. \(w_{1}\), ideally, may have a temperature dependence [5].
With \(w_{1}=0.05\), which corresponds to a significantly reduced dipolar attraction, the discrete jump in the coil-globule transition is suppressed [Fig. 3(C-D)]. For bulky counterions (\(\widetilde{r}_{c}>1\)), the chain ceases to undergo a first-order coil-globule transition, even at high values of \(\widetilde{\ell}_{B}\), and the size reduction is continuous. This behavior may be attributed to weaker dipolar attractions and comparatively strong excluded volume repulsion. In other words, the effective solvent poorness due to dipolar attraction is significantly reduced with \(w_{1}=0.05\). As a result, the first-order transition is prevented, and the chain remains in a relatively swollen conformation [of order Gaussian size; see \(\widetilde{r}_{c}=2.0\) or \(3.0\) in Fig. 3(C)] even at high electrostatic strengths. Similar trends are visible in simulations [17; 18; 19].
To gain further insight into the chain statistics at different \(\widetilde{\ell}_{B}\), we calculated the scaling exponent \(\nu\), defined as \(\nu=\log(\widetilde{R}_{g})/\log(N)\), for different counterion sizes. In Figs. 4(A) and 4(B) we present the results for \(\nu\) as a function of counterion size at \(\widetilde{\ell}_{B}=8\) and \(\widetilde{\ell}_{B}=16\), respectively, for different values of \(w_{1}\). At \(\widetilde{\ell}_{B}=8\), \(\nu\) is approximately \(1/3\) for small counterions, as all of them condense onto the chain, and dipolar attraction collapses the chain to a compact globule. \(\nu\) increases to approximately \(3/5\) as the counterion size increases, because larger counterions offer higher excluded volume repulsion which eventually overcomes the collapse. With lower values of \(w_{1}\), the chain remains swollen even with counterions of the same size as the monomers, because dipolar attractions are not strong enough to collapse the chain.
At \(\widetilde{\ell}_{B}=16\), the scaling exponent \(\nu\) is approximately \(1/3\) for all ion sizes except for very low \(w_{1}\) values, because the dipolar attraction becomes strong enough to overcome the excluded volume repulsion even for large counterions. For lower values of \(w_{1}\), such attractions become weak enough to allow excluded volume interactions to swell the chain, which leads to \(\nu=3/5\) for bulkier counterions, as argued in Eq. 28. Within the collapsed globule, where the charge becomes negligible, the interplay between two short-range interactions - dipolar attraction and excluded volume repulsion - becomes crucial. Increasing the counterion size increases the dipole length, and enhances the attractive contributions from the dipole pairs. However, for low values of the strength parameter \(w_{1}\), such gain is limited. The collapse of the chain is then constrained by relatively stronger excluded volume interactions with large counterions, preventing a coil-to-globule transition, whereas such transitions and the collapsed state remain energetically favorable for smaller counterions. Such examples of excluded volume effects competing with electrostatic attractions are available in the literature. Long surfactant tails are found to prevent PE chains from forming collapsed globules [19]. Bulky counterions lead to swollen conformations of PE chains [18] and dendrimers [38]
even at high \(\widetilde{\ell}_{B}\). This behavior contradicts earlier theoretical predictions of collapsed states at high electrostatic regimes [11] but aligns with recent simulations [37], which attribute the anomaly to the absence of consideration of steric hindrance in dipole pair interactions.
In Fig. 4(C-D), the size (\(\widetilde{\ell}_{1}\)) and charge (\(f\)) of the chain are plotted for different \(w_{1}\) values at \(\widetilde{\ell}_{B}=8\), for \(N=1000\). Both the size and charge increase with \(\widetilde{r}_{c}\) as mentioned earlier [shown in Fig. 3(C-D)]. However, a discontinuous behavior in \(\widetilde{\ell}_{1}\) is observed for \(w_{1}=0.10\) due to the cooperative effects of charge and excluded volume interactions [\(f=0.039\) for \(\widetilde{r}_{c}=3\) while \(f=0.0\) for \(\widetilde{r}_{c}=2.5\), as shown in Fig. 4(D)]. Consequently, the scaling exponent \(\nu\) changes from \(\sim 1/3\) to more than \(3/5\) with \(\widetilde{r}_{c}\) going from \(2.5\) to \(3.0\) [Fig. 4(A)]. Note that the scaling parameter is higher than \(3/5\), due to the presence of like charge repulsion from uncompensated monomers, as seen before in simulations. [26]
In Fig. 4(A) too, we note that the scaling exponent rises to \(\sim 0.70\), for example in the case of \(w_{1}=0.05\) for which \(f=0.01\) for \(\widetilde{r}_{c}=2.5\), due to the like charge repulsion of charge uncompensated monomers. In some simulations with highly charged PEs, the scaling exponent is found to reach as high as \(\sim 1.0\) and collapse to \(0.33\) [10; 26]. However, in Fig. 4(A), the exponent comes down to \(\sim 3/5\) for \(\widetilde{r}_{c}>2.5\), due to increased excluded volume interactions from condensed bulkier ions (note that \(f=0.04\), which is not zero and even higher for \(\widetilde{r}_{c}=3.0\)).
Note that for \(w_{1}=1\), in collapsed states [Fig. 3(A), (C)], the value of \(\widetilde{\ell}_{1}\) increases with \(\widetilde{r}_{c}\). The competitive effects of the two-body and three-body interactions [the second term of Eq. 18 and Eq. 25, and also as argued in Eq. 28] keeps the polyelectrolyte size larger for bulkier counterions. Despite the size variation of the PE chain due to the presence of counterions of variable sizes, the size scaling exponent remains the same (\(\nu\sim 1/3\)) for all collapsed cases, as expected from Eqs. 20 and 28 [shown in Fig. 4(B)]. [17]
#### III.1.2 Thermodynamics: effect of counterion size on the entropy-enthalpy interplay in counterion condensation
As discussed before, the PE chain collects counterions from the solution, primarily driven by the competing free energy contributions from the translational entropy of the free ions (\(F_{2}\)) and ion-pair formation (\(F_{4}\)). Both contributions decrease (which is a gain in free energy) with decreasing counterion size. Therefore, it is not straightforward to predict the outcome of a change in counterion size intuitively. However, for most results studied in this work, and for the set of modest parameter values used, a smaller counterion size is found to result in a higher degree of counterion condensation. This implies that the ion-pair free energy gain overcomes the entropic loss due to the resultant depletion of free ions. A part of the ion-pair free energy can be entropic, one may note, due to the reorganization of solvent dipoles [58], but how significant that part is for a PE chain with an organic backbone and fractal geometry is a matter of discussion [55; 56]. We consider this free energy (\(F_{4}\)) enthalpic in this work, although it must be taken as a nominal quantity [73].
To analyze this interplay of enthalpy and entropy, in Fig. 5 we look at the thermodynamics by plotting \(F_{2}\) and \(F_{4}\), in units of \(Nk_{B}T\), as functions of the Bjerrum length \(\widetilde{\ell}_{B}\) (proportional to \(1/\epsilon T\)). First, in Fig. 5(A), we take the counterions to be of the same size as the monomers (\(\widetilde{r}_{c}=1\)). At low \(\widetilde{\ell}_{B}\), the PE is unable to collect counterions from the solution due to weak electrostatic correlations compared to thermal fluctuations. Consequently, \(F_{2}\) is higher than \(F_{4}\). As \(\widetilde{\ell}_{B}\) increases, more counterions are attracted from the solution at the cost of their translational entropy. This leads to an energy gain with a decreased \(F_{4}\) and increased counterion condensation, reducing the number of counterions in the solution as well as \(F_{2}\) [also see Fig. 2(B) and Fig. 3(B), (D)].

Figure 5: **Effect of Counterion Size on Thermodynamics and Interplay of Free Energy Components:** (A) The variation of the entropic free energy of free ions (\(F_{2}\)) and the electrostatic free energy of counterion-monomer pair formation (\(F_{4}\)), in units of \(Nk_{B}T\), plotted as functions of \(\widetilde{\ell}_{B}\) (keeping \(\widetilde{r}_{c}=1\) and \(\widetilde{c}_{s}=0.0\)). \(F_{4}\) gains at the loss of \(F_{2}\) with increasing \(\widetilde{\ell}_{B}\). (B) The crossover value, \(\widetilde{\ell}_{B}^{\star}\), where \(F_{2}=F_{4}\), increases with \(\widetilde{r}_{c}\) (salt, \(\widetilde{c}_{s}=0.0\)), indicating that smaller counterions condense more, which is confirmed in (C) by the ionization degree \(f\) increasing with \(\widetilde{r}_{c}\) for different values of \(\widetilde{\ell}_{B}\) and \(\widetilde{c}_{s}\). Higher salt induces more condensation. Panels (D)-(F) exhibit \(F_{2}\) and \(F_{4}\), in units of \(Nk_{B}T\), as functions of \(\widetilde{r}_{c}\) for different values of \(\widetilde{\ell}_{B}\) and \(\widetilde{c}_{s}\). Specifically, panel (D) shows the results for \(\widetilde{\ell}_{B}=3.0\) and \(\widetilde{c}_{s}=0.0\), (E) for \(\widetilde{\ell}_{B}=4.2\) and \(\widetilde{c}_{s}=0.0\), and (F) for \(\widetilde{\ell}_{B}=4.2\) and \(\widetilde{c}_{s}=\bar{\rho}/2\). Non-monotonic trends of \(F_{2}\) are seen due to the competition between entropy and enthalpy components. The other parameters are: \(N=1000,~{}\bar{\rho}=0.0005~{}\widetilde{r}_{c}^{3},~{}\epsilon/\epsilon_{l}=2\).
At an intermediate value of \(\widetilde{\ell}_{B}\), which we define as a crossover point denoted \(\widetilde{\ell}_{B}^{\star}\), \(F_{2}\) becomes equal to \(F_{4}\). The crossover parameter \(\widetilde{\ell}_{B}^{\star}\) is found to increase monotonically with the ionic size, as depicted in Fig. 5(B). Here too, bulkier counterions condense less, with a smaller gain in electrostatic energy, which requires a higher \(\widetilde{\ell}_{B}^{\star}\) to win over the translational entropy (although, note again, that the translational entropy is also lower for larger counterions). This leads to the monotonic increase of \(\widetilde{\ell}_{B}^{\star}\) with \(\widetilde{r}_{c}\). The state boundary we defined with \(\widetilde{\ell}_{B}^{\star}\) is a unique line along which the charge of the chain (the degree of counterion condensation) is found to be constant irrespective of the counterion size (results not shown).
In addition to the effects of counterion size and the Bjerrum length, introduction of salt, expectedly, induces more counterion condensation (higher \(\alpha\)), or a reduction in the degree of ionization, \(f\) [Fig. 5(C)]. \(F_{2}\) and \(F_{4}\) are plotted for two values of \(\widetilde{\ell}_{B}=3\) and 4.2, in Fig. 5(D) and (E), respectively. The chosen values of \(\widetilde{\ell}_{B}\) are such that one is less than \(\widetilde{\ell}_{B}^{\star}\) and the other corresponds to the minimum value of \(\widetilde{\ell}_{B}\) where all counterions of the size of the monomer are condensed.
As smaller counterions condense more [Fig. 5(C)], fewer remain free in the bulk solution, contributing less entropically to the free energy [Fig. 5(D)]. This has been observed experimentally in PE gels, where the conductivity decreases with smaller counterions, indicating a decrease in the free-ion concentration [25]. It is indeed a counterintuitive result that the bulkier counterions, being less condensed, collectively have more translational entropy [Fig. 5(D)]. However, once most of the counterions become free with increasing counterion size (\(\widetilde{r}_{c}\sim 2\)), then with a further increase of size the translational entropy decreases, albeit slightly, as expected [Fig. 5(D)]. For a high electrostatic strength (here, \(\widetilde{\ell}_{B}=4.2\)), all smaller counterions are condensed [Fig. 5(C)], and \(F_{2}\) approaches zero [Fig. 5(E)]. However, at that \(\widetilde{\ell}_{B}\), not all bulkier counterions are condensed, leading to an increase in the entropic contribution to the free energy (\(F_{2}\)) from the free ions as their size increases, ultimately surpassing the enthalpic contribution from ion-pair formation (\(F_{4}\)). Note that the crossover value of \(\widetilde{r}_{c}\) for \(\widetilde{\ell}_{B}=4.2\) is higher than that for \(\widetilde{\ell}_{B}=3.0\). This is due to the increased electrostatic energy gain with smaller ions. Therefore, decreasing \(\widetilde{r}_{c}\) and increasing \(\widetilde{\ell}_{B}\) have similar thermodynamic effects on the system, as shown in Fig. 5(D-E).
In the case of added salt, the counterion adsorption is enhanced [Fig. 5(C)]. This results in a change in the crossover value of \(\widetilde{\ell}_{B}\). The gain in the electrostatic energy of the ion pairs (\(F_{4}\)) remains similar (but slightly increased, with slightly more counterion condensation), but the presence of salt provides a significant entropic free energy (negative \(F_{2}\)) even for smaller counterions, and even when all monomeric charges are compensated by condensed counterions [Fig. 5(F)]. Such a contribution diminishes with an increased size of counterions, but only to some extent. \(F_{2}\) exhibits such a non-monotonic dependence on \(\widetilde{r}_{c}\) due to the interplay between electrostatics and excluded volume effects. Weaker electrostatic correlations cause more bulky counterions to stay in the solution, resulting in a gain in entropy. However, due to the ionic size, the available volume is reduced, leading to a non-monotonic behavior in \(F_{2}\) [Fig. 5(D), (F)]. The nonlinearity and unpredictability of the thermodynamics in the presence of salt, for a varying size of counterions, is truly manifest in Fig. 5(F).
## IV Summary
We present a theory to investigate the influence of counterion size on the effective charge, size, and thermodynamics of a single, isolated, and flexible polyelectrolyte chain. Our analysis takes into account the effects of counterion size on various factors, including the ion-pair energy, excluded volume effect, volume entropy of free ions, volume entropy of condensed ions, dipolar interactions captured through the second virial coefficient, and the third virial coefficient. The increase in ion size reduces the gain in free energy due to both ion pair formation and the entropy of free ions. As a result, it leads to non-monotonic effects in the system, making it difficult to predict the consequences of variable ion sizes intuitively.
In the model, we apply the Edwards-Muthukumar interaction Hamiltonian, which captures the self-energy of the PE chain through segment-segment electrostatic and excluded volume interactions, including dipolar interactions, and also the conformational entropy of the PE chain. We assume a Gaussian monomer density profile, following Flory, which simplifies the analysis. We consider the finite size of counterions in calculating the entropy of condensed counterions, which are assumed to be confined within a cylindrical volume surrounding the PE chain backbone. Importantly, we incorporate counterion size effects in the dipole pair interactions, which include short-range repulsions in addition to the dipolar attractions. Minimization of the total free energy, which treats the Bjerrum length, dielectric mismatch, and counterion size as parameters, determines the PE's effective charge and size. Our primary focus has been to understand conformational and thermodynamic aspects in the presence of finite-size counterions, offering a direct means to evaluate how experimental factors like electrostatic strength and salt influence chain conformations, the PE chain scaling exponent, and thermodynamics.
We benchmarked the results against previously established studies and found that the size of the PE chain varies non-monotonically with the electrostatic strength \(\delta\widetilde{\ell}_{B}\), the product of the dielectric mismatch and the Bjerrum length, as expected. At modest electrostatic strengths, the chain is swollen due to like-charge repulsion, but it forms a globule at higher strengths due to counterion condensation. The attractive interaction of the freely rotating dipoles leads to an abrupt, first-order coil-to-globule transition, which occurs when the short-range dipolar attraction overcomes both the electrostatic repulsion between monomers and excluded volume effects. This dramatic collapse has been absent in simulations, where it is found to be a continuous, second-order transition. Considering a potential overestimation of the dipolar interactions (because freely rotating dipoles are valid at high temperatures and in polar solvents), we introduced a phenomenological strength parameter \(w_{1}\) to moderate the dipolar attraction. Reduced values of \(w_{1}\) lead to continuous transitions, as expected. The size scaling exponent \(\nu\) is found to be approximately \(1/3\) for small counterions, as the chain collapses to a globule with inadequate excluded volume repulsion. However, \(\nu\) increases to \(3/5\) for large counterions with increased excluded volume repulsion. If electrostatic repulsion between charge-uncompensated monomers is present, the exponent increases to values greater than \(3/5\), up to \(0.70\) in some cases. In addition, we also analyzed the thermodynamic interplay between the free-ion entropy (\(F_{2}\)) and the ion-pair formation energy (\(F_{4}\)), considering the latter to be nominal and ignoring contributions from the orientation of solvent dipoles. The crossover point where \(F_{2}\) equals \(F_{4}\), denoted \(\widetilde{\ell}_{B}^{*}\), is found to increase monotonically with the ionic size. Larger counterions tend to stay in the bulk solution due to weaker electrostatic attraction. But due to their larger ionic size, the available volume is also reduced, and so is the entropy, which leads to the non-monotonic behavior of \(F_{2}\).
## V Acknowledgment
The authors acknowledge financial support from IISER Kolkata, Ministry of Education, Government of India. They also thank Soumik Mitra, Aritra Chowdhury, and Benjamin Schuler for discussions and other collaborative work which helped better understand the role of counterions in polyelectrolyte systems.
|
2305.15280
|
A Unique Approach to Classify Inflationary Potentials
|
Inflationary cosmology has made significant strides in understanding the
physics driving the rapid expansion of the early universe. However, many
inflation models with diverse potential shapes present analysis, comparison,
and classification challenges. In this paper, we propose a novel approach to
tackle this issue. We introduce a general potential formula encompassing all
inflationary potentials, whether single-field or multi-field, into a single
mathematical framework. This formula establishes a unified framework for
systematically classifying inflation models based on their potential functions.
We showcase the efficacy of the general potential formula by successfully
reproducing well-known inflation models, such as the Starobinsky potential and
the Valley Hybrid Inflation model. Moreover, we derive general inflationary
parameters, including the slow-roll parameters and power spectra, using the
proposed formula. Our approach provides a versatile tool for classifying and
studying various inflationary scenarios, simplifying the analysis and
comparison of different models in the field of inflationary cosmology.
|
Somnath Das, P K Suresh
|
2023-05-24T16:03:32Z
|
http://arxiv.org/abs/2305.15280v1
|
# A Unique Approach to Classify Inflationary Potentials
###### Abstract
Inflationary cosmology has made significant strides in understanding the physics driving the rapid expansion of the early universe. However, many inflation models with diverse potential shapes present analysis, comparison, and classification challenges. In this paper, we propose a novel approach to tackle this issue. We introduce a general potential formula encompassing all inflationary potentials, whether single-field or multi-field, into a single mathematical framework. This formula establishes a unified framework for systematically classifying inflation models based on their potential functions. We showcase the efficacy of the general potential formula by successfully reproducing well-known inflation models, such as the Starobinsky potential and the Valley Hybrid Inflation model. Moreover, we derive general inflationary parameters, including the slow-roll parameters and power spectra, using the proposed formula. Our approach provides a versatile tool for classifying and studying various inflationary scenarios, simplifying the analysis and comparison of different models in the field of inflationary cosmology.
## 1 Introduction
The study of inflationary cosmology has revolutionized our understanding of the early universe, providing a compelling framework to explain the observed isotropy, homogeneity, and flatness [1]. Inflationary models posit a period of rapid expansion in the early universe, driven by a homogeneous scalar field known as the inflaton [2]. These models have successfully addressed the shortcomings of the standard Big Bang cosmology, such as the horizon and flatness problems, and have provided predictions that align with a wide range of cosmological observations [2]. While inflationary cosmology has made remarkable progress, the field remains rich with many inflation models, each characterized by a different potential for inflation. The diversity of these potentials arises from the underlying physics and the dynamics of the early universe, giving rise to many inflationary scenarios. This variety poses challenges in analyzing, comparing, and classifying different models, hindering our ability to gain deeper insights into the fundamental physics driving inflation. Inflationary cosmology has witnessed numerous efforts to classify and categorize inflation models based on their underlying physics and potential shapes. These classification schemes aim to capture the diversity of inflationary potentials and provide a systematic framework for understanding the range of inflation scenarios proposed in the literature.
One common approach to classifying inflationary models is based on the shape of the inflation potential. This method categorizes inflation potentials into power-law potentials, exponential potentials, hybrid potentials [2], etc. Each class represents a specific functional form of the potential, often motivated by specific underlying physics or symmetry considerations. While this approach provides a straightforward categorization, it can be limited in its ability to capture the full diversity of inflation models, as it may overlook subtle variations within each class. Another classification method involves characterizing inflationary models based on their observational predictions. This approach considers inflationary predictions for various cosmological observables, such as the spectral index of primordial fluctuations, the tensor-to-scalar ratio, and the non-Gaussianity parameter [2]. Models that yield similar predictions for these observables are grouped together, forming a classification based on the resulting observational signatures [3]. This method is valuable for connecting inflation models to empirical data such as CMB observations. Still, it may not fully capture the potential shapes that give rise to these predictions.
In addition to the classification schemes based on the shape of the inflation potential and observational predictions, another important categorization in the field of inflationary cosmology is based on the energy scales involved in the inflationary process. This classification distinguishes high-field and low-field inflation models [3]. Along this line, various physical properties might be considered to classify the inflationary potentials. In this paper, we propose a novel approach to address this challenge by presenting a general potential formula that encompasses all inflationary potentials. By allowing the functional form of the potential to be chosen, our approach offers flexibility in capturing the diversity of inflationary scenarios. It unifies all the inflationary potentials, single-field or multi-field, under one functional form.
## 2 General Potential
The following comprehensive general formula encapsulates the dynamics of inflationary models with multiple fields. Our formula offers a unified approach, enabling the seamless incorporation of any inflationary model into a single mathematical framework.
\[V(\varphi_{m})=M^{4}\bar{A}(\varphi_{m})\left[\bar{B}(\varphi_{m})+\sum_{i=0}^{n} a_{i}\bar{C}_{i}(\varphi_{m})\right]^{p}. \tag{1}\]
Here, \(a_{0}=0\). \(M\) represents a characteristic energy scale associated with the inflationary dynamics in the inflation potential. It sets the overall magnitude of the potential energy during inflation. Equation (1) offers a unified framework for systematically classifying inflation models. The potential, denoted as \(V(\varphi_{m})\), is expressed in terms of a set of functions \(\bar{A}(\varphi_{m})\), \(\bar{B}(\varphi_{m})\), and \(\bar{C}_{i}(\varphi_{m})\), where \(i\) ranges from \(0\) to \(n\) and \(p\) is any real number. Here, \(m=0,1,2,\ldots,t\), where \(t\) is any integer. \(m=0\) corresponds to the inflaton field \(\phi\); other \(m\) values can represent other fields. If \(m\) is single-valued, it represents single-field inflation, and a multi-valued \(m\) represents multi-field models. We may classify inflationary models based on the value of \(n\), i.e., the number up to which \(i\) runs in the summation. We can imagine a series of potentials with various possible functions for each value of \(n\). In equation (1), a particular \(n\) value corresponds to the number of terms linearly associated with the potential: \(n=0\) corresponds to single-termed potentials, \(n=1\) corresponds to double-termed potentials, and so on. For the single-field inflation model, \(m=0\), and the potential is a function of \(\varphi_{0}=\phi\) alone.
\[V(\phi)=M^{4}A(\phi)\left[B(\phi)+\sum_{i=0}^{n}a_{i}C_{i}(\phi)\right]^{p}. \tag{2}\]
To demonstrate the effectiveness of the general potential formula for a single field (2), we will show how a specific choice of the functions and parameters within the formula reproduces the well-known Starobinsky inflation model. By selecting \(n=1\), \(A(\phi)=B(\phi)=1\), \(a_{1}=-1\), \(C_{1}(\phi)=e^{-\sqrt{\frac{2}{3}}\frac{\phi}{M_{pl}}}\), and \(p=2\), we can obtain the Starobinsky potential [3],[4].

\[V(\phi)=M^{4}\left[1-e^{-\sqrt{\frac{2}{3}}\frac{\phi}{M_{pl}}}\right]^{2}. \tag{3}\]
Further, in equation (1), if we choose \(n=4\) with \(\bar{A}(\varphi_{m})=1,\bar{B}(\varphi_{m})=\frac{\lambda_{e}}{4},\bar{C}_{1}(\varphi_{m})=\sigma^{4},\bar{C}_{2}(\varphi_{m})=\sigma^{2},\bar{C}_{3}(\varphi_{m})=\phi^{2},\bar{C}_{4}(\varphi_{m})=\phi^{2}\sigma^{2}\) with coefficients \(a_{1}=\frac{\lambda_{e}}{4M^{4}},a_{2}=-\frac{\lambda_{e}}{2M^{2}},a_{3}=\frac{m_{\phi}^{2}}{2M^{4}},a_{4}=\frac{\lambda}{2M^{4}}\), \(p=1\), and \(m=0,1\) with \(\varphi_{0}=\phi\) and \(\varphi_{1}=\sigma\), it reproduces the Valley Hybrid Inflation model [5], [6].
\[V(\phi,\sigma)=\frac{\lambda_{e}}{4}(\sigma^{2}-M^{2})^{2}+\frac{1}{2}m_{ \phi}^{2}\phi^{2}+\frac{1}{2}\lambda\phi^{2}\sigma^{2}. \tag{4}\]
Also, in equation (1), if we consider \(n=1\), \(A(\phi)=-\frac{3}{8l^{2}}\phi^{2}\), \(B(\phi)=1\), \(a_{1}=-\frac{4\kappa l^{2}}{9}\), \(C_{1}=\phi^{4}\), and \(p=1\), it is easy to see that one attains the effective potential that comes from the AdS Swampland conjectures [7].
\[V(\phi)=-\frac{3}{8l^{2}}\phi^{2}+\frac{\kappa}{6}\phi^{6}. \tag{5}\]
It is apparent that the general potential formulation is highly versatile and accommodates a wide range of inflationary potentials. It provides a flexible framework to describe various inflationary models and allows for exploring different inflationary dynamics.
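To make the construction concrete, the short Python sketch below (ours, for illustration only) encodes equation (2) as a function factory and reproduces the Starobinsky choice of functions given above; \(M\) and \(M_{pl}\) are set to unity purely for convenience.

```python
import numpy as np

def general_potential(A, B, C_list, a_list, p, M=1.0):
    """V(phi) = M^4 * A(phi) * [B(phi) + sum_i a_i * C_i(phi)]**p, cf. equation (2)."""
    def V(phi):
        inner = B(phi) + sum(a * C(phi) for a, C in zip(a_list, C_list))
        return M**4 * A(phi) * inner**p
    return V

# Starobinsky choice (n = 1): A = B = 1, a_1 = -1, C_1 = exp(-sqrt(2/3) phi / M_pl), p = 2
M_pl = 1.0
V_starobinsky = general_potential(
    A=lambda phi: 1.0,
    B=lambda phi: 1.0,
    C_list=[lambda phi: np.exp(-np.sqrt(2.0 / 3.0) * phi / M_pl)],
    a_list=[-1.0],
    p=2,
)
print(V_starobinsky(np.array([0.0, 5.0, 10.0])))  # flattens towards M^4 at large phi
```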
## 3 Inflationary Parameters
This section explores the implications of the proposed general potential formula for deriving general inflationary parameters that characterize inflationary models, as outlined in equation (2). By analyzing the functional form of the potential and its corresponding dynamics, we can extract essential parameters that quantify the behavior and observational predictions of inflation. We drop the scalar field \(\phi\) from the expressions below for better readability.
The general form of the first slow-roll parameter is
\[\epsilon=\frac{p^{2}}{16\pi}\bigg{[}A^{\frac{1}{p}}B+\sum_{i=0}^{n}a_{i}C_{i}A ^{\frac{1}{p}}\bigg{]}^{-2}\bigg{[}\frac{1}{p}A^{\frac{1}{p}-1}A^{{}^{\prime}}B +A^{\frac{1}{p}}B^{{}^{\prime}}+\sum_{i=0}^{n}a_{i}C_{i}^{{}^{\prime}}A^{\frac{ 1}{p}}+\frac{1}{p}\sum_{i=0}^{n}a_{i}C_{i}A^{\frac{1}{p}-1}A^{{}^{\prime}} \bigg{]}^{2}. \tag{6}\]
Here, \(X^{{}^{\prime}}=\frac{dX}{d\phi}\). The tensor-to-scalar ratio can be expressed using the first slow roll parameter using the relation, \(r=16\epsilon\). This provides a dynamic approach for choosing the functions by reflecting the latest bound on the tensor-to-scalar
ratio, a crucial observational quantity in inflationary cosmology. Recent measurements and analyses of cosmic microwave background radiation data have constrained the tensor-to-scalar ratio. The current bound is \(r<0.035\) [8], which might get as low as \(r<0.004\) with future observations [9][10]. Incorporating these bounds on \(r\) provides a valuable guideline when constructing inflationary models using the proposed general potential formula, allowing us to explore the range of viable scenarios within the context of current observational constraints. The general form of the second slow-roll parameter is
\[\begin{split}\eta=\frac{p}{8\pi}\bigg{[}&A^{\frac{1}{p}}B+\sum_{i=0}^{n}a_{i}C_{i}A^{\frac{1}{p}}\bigg{]}^{-2}\times\bigg{[}(p-1)\Big{(}\frac{1}{p}A^{\frac{1}{p}-1}A^{\prime}B+A^{\frac{1}{p}}B^{\prime}+\sum_{i=0}^{n}a_{i}C_{i}^{\prime}A^{\frac{1}{p}}+\frac{1}{p}\sum_{i=0}^{n}a_{i}C_{i}A^{\frac{1}{p}-1}A^{\prime}\Big{)}^{2}\\ &+\bigg{(}A^{\frac{1}{p}}B+\sum_{i=0}^{n}a_{i}C_{i}A^{\frac{1}{p}}\bigg{)}\bigg{[}\frac{1}{p}\Big{(}\frac{1}{p}-1\Big{)}A^{\frac{1}{p}-2}(A^{\prime})^{2}B+\frac{1}{p}A^{\frac{1}{p}-1}A^{\prime\prime}B+\frac{2}{p}A^{\frac{1}{p}-1}A^{\prime}B^{\prime}+A^{\frac{1}{p}}B^{\prime\prime}\\ &+\sum_{i=0}^{n}a_{i}C_{i}^{\prime\prime}A^{\frac{1}{p}}+\frac{2}{p}\sum_{i=0}^{n}a_{i}C_{i}^{\prime}A^{\frac{1}{p}-1}A^{\prime}+\frac{1}{p}\Big{(}\frac{1}{p}-1\Big{)}\sum_{i=0}^{n}a_{i}C_{i}A^{\frac{1}{p}-2}(A^{\prime})^{2}+\frac{1}{p}\sum_{i=0}^{n}a_{i}C_{i}A^{\frac{1}{p}-1}A^{\prime\prime}\bigg{]}\bigg{]}.\end{split} \tag{7}\]
Let us consider the Starobinsky potential (3) to test the general slow-roll parameters. Using the particular functions mentioned in Section 2 for generating the Starobinsky potential, we apply equation (6) to produce the first slow-roll parameter \(\epsilon_{s}\).
\[\epsilon_{s}=\frac{1}{6\pi M_{pl}^{2}}e^{-2\sqrt{\frac{2}{3}}\frac{\phi}{M_{pl}}}\bigg{[}1-e^{-\sqrt{\frac{2}{3}}\frac{\phi}{M_{pl}}}\bigg{]}^{-2}. \tag{8}\]
Using the general formula for the second slow roll parameter (7), one can find the second slow roll parameter for the Starobinsky potential \(\eta_{s}\).
\[\eta_{s}=\frac{1}{6\pi M_{pl}^{2}}e^{-\sqrt{\frac{2}{3}}\frac{\phi}{M_{pl}}}\bigg{[}4e^{-\sqrt{\frac{2}{3}}\frac{\phi}{M_{pl}}}-1\bigg{]}. \tag{9}\]
In this way, the general slow-roll parameters can be used to find the slow-roll parameters of any model. The potential (2) can also be used to generalize other inflationary parameters, such as the scalar power spectrum.
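As a rough numerical cross-check of such analytic expressions, the slow-roll parameters can also be evaluated by finite differences for any potential built from the general formula. The sketch below is ours and assumes the same convention as equations (6) and (7), i.e. \(\epsilon=\frac{1}{16\pi}(V^{\prime}/V)^{2}\) and \(\eta=\frac{1}{8\pi}V^{\prime\prime}/V\) with \(M_{pl}=1\).

```python
import numpy as np

def slow_roll(V, phi, h=1e-4):
    """Finite-difference epsilon = (1/16 pi)(V'/V)^2 and eta = (1/8 pi) V''/V (M_pl = 1)."""
    V0 = V(phi)
    dV = (V(phi + h) - V(phi - h)) / (2.0 * h)
    d2V = (V(phi + h) - 2.0 * V0 + V(phi - h)) / h**2
    return (dV / V0) ** 2 / (16.0 * np.pi), (d2V / V0) / (8.0 * np.pi)

# Starobinsky potential with M = M_pl = 1, matching equation (3)
V = lambda phi: (1.0 - np.exp(-np.sqrt(2.0 / 3.0) * phi)) ** 2

eps, eta = slow_roll(V, phi=5.5)
print(eps, eta)  # both small on the plateau, to be compared with equations (8) and (9)
```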
\[P_{S}=\frac{128M^{4}}{3p^{2}}\bigg{[}A^{\frac{1}{p}}B+\sum_{i=0}^{n}a_{i}C_{i}A^{\frac{1}{p}}\bigg{]}^{p-2}\bigg{[}\frac{1}{p}A^{\frac{1}{p}-1}A^{\prime}B+A^{\frac{1}{p}}B^{\prime}+\sum_{i=0}^{n}a_{i}C_{i}^{\prime}A^{\frac{1}{p}}+\frac{1}{p}\sum_{i=0}^{n}a_{i}C_{i}A^{\frac{1}{p}-1}A^{\prime}\bigg{]}^{2}. \tag{10}\]
The tensor power spectrum can be generally expressed as
\[\begin{split} P_{T}=&\frac{128A}{3}\bigg{[}B+\sum_{i= 0}^{n}a_{i}C_{i}\bigg{]}^{p}.\end{split} \tag{11}\]
The ratio of \(P_{T}\) and \(P_{S}\) gives us the tensor-to-scalar ratio \(r\). We have derived the general forms of the inflationary parameters considering \(m=0\), which corresponds to single-field inflationary models. If, in the inflationary parameters, we consider the functions of the inflaton field \(A(\phi)\), \(B(\phi)\), and \(C_{i}(\phi)\) as multi-field functions, such as \(\bar{A}(\varphi_{m})\), \(\bar{B}(\varphi_{m})\), and \(\bar{C}_{i}(\varphi_{m})\), then these parameters will be valid for multi-field potentials as well. In such a scenario, the derivatives of the functions will be partial derivatives, depending on the model in question.
Various other inflationary parameters can be obtained from the general potential, such as the other slow-roll parameters, the scalar spectral index, the running of the indices, and so on.
## 4 Conclusion
The general potential and associated parameters offer a flexible and comprehensive framework for analyzing various inflationary models. By employing the general potential, it becomes possible to classify existing inflationary models within a unified framework. Moreover, this framework provides a systematic approach to constructing new inflationary models based on functional considerations. One of the strengths of this general potential lies in its ability to generate an infinite number of inflationary potentials. The general form allows for a broad range of functional forms for \(A(\phi)\), \(B(\phi)\), and \(C_{i}(\phi)\), enabling a vast exploration of potential shapes and behaviors. This flexibility is crucial for accommodating diverse inflationary dynamics from different underlying physics. It should be noted that the general potential is focused on inflation with scalar fields.
However, given the infinite possibilities, it becomes necessary to constrain the parameter space by selecting a subset of inflationary potentials that align with current observational data. The aim is to identify those potentials that successfully reproduce the observed cosmic microwave background radiation, primordial density fluctuations, and other relevant
cosmological observables. Selecting a subset of inflationary potentials consistent with data narrows the range of viable models. Through this systematic approach, the general potential provides a powerful tool for refining existing inflationary models and discovering new, relevant inflationary scenarios. Focusing on a functional basis allows us to explore unexplored regions of the parameter space and discover novel inflationary models that may exhibit unique features or address current theoretical or observational challenges.
## Acknowledgement
S. D. acknowledges the financial support of the Government of India through the Prime Minister's Research Fellowship (PMRF).
|
2301.08040
|
Characterising fast-time variations in the hard X-ray time profiles of
solar flares using Solar Orbiter's STIX
|
Aims: The aim of this work is to develop a method to systematically detect
and characterise fast-time variations ($\gtrsim 1$s) in the non-thermal hard
X-ray (HXR) time profiles of solar flares using high-resolution data from Solar
Orbiter's Spectrometer/Telescope for Imaging X-rays (STIX).
Methods: The HXR time profiles were smoothed using Gaussian Process (GP)
regression. The time profiles were then fitted with a linear combination of
Gaussians to decompose the time profile. From the Gaussian decomposition, key
characteristics such as the periodicity, full width at half maximum (FWHM),
time evolution, and amplitude can be derived.
Results: We present the outcome of applying this method to four M and X
GOES-class flares from the first year of Solar Orbiter science operations. The
HXR time profiles of these flares were decomposed into individual Gaussians and
their periods were derived. The quality of fit is quantified by the standard
deviation of the residuals (difference between observed and fitted curve,
normalised by the error on the observed data), for which we obtain $\leq 1.8$
for all flares presented. In this work, the first detection of fast-time
variations with Solar Orbiter's STIX instrument has been made on timescales
across the range of 4-128s.
Conclusions: A new method for identifying and characterising fast-time
variations in the non-thermal HXR profiles of solar flares has been developed,
in which the time profiles are fit with a linear combination of Gaussian
bursts. The opportunity to study time variations in flares has greatly improved
with the new observations from STIX on Solar Orbiter.
|
Hannah Collier, Laura A. Hayes, Andrea F. Battaglia, Louise K. Harra, Säm Krucker
|
2023-01-19T12:33:45Z
|
http://arxiv.org/abs/2301.08040v1
|
# Characterising fast-time variations in the hard X-ray time profiles of solar flares using Solar Orbiter's STIX
###### Abstract
Context:
Aims:The aim of this work is to develop a method to systematically detect and characterise fast-time variations (\(\gtrsim 1\)s) in the non-thermal hard X-ray (HXR) time profiles of solar flares using high-resolution data from Solar Orbiter's Spectrometer/Telescope for Imaging X-rays (STIX).
Methods:The HXR time profiles were smoothed using Gaussian Process (GP) regression. The time profiles were then fitted with a linear combination of Gaussians to decompose the time profile. From the Gaussian decomposition, key characteristics such as the periodicity, full width at half maximum (FWHM), time evolution, and amplitude can be derived.
Results:We present the outcome of applying this method to four M and X GOES-class flares from the first year of Solar Orbiter science operations. The HXR time profiles of these flares were decomposed into individual Gaussians and their periods were derived. The quality of fit is quantified by the standard deviation of the residuals (difference between observed and fitted curve, normalised by the error on the observed data), for which we obtain \(\leq 1.8\) for all flares presented. In this work, the first detection of fast-time variations with Solar Orbiter's STIX instrument has been made on timescales across the range of 4-128s.
Conclusions:A new method for identifying and characterising fast-time variations in the non-thermal HXR profiles of solar flares has been developed, in which the time profiles are fit with a linear combination of Gaussian bursts. The opportunity to study time variations in flares has greatly improved with the new observations from STIX on Solar Orbiter.
## 1 Introduction
Solar Orbiter is a solar and heliospheric mission led by the European Space Agency (ESA) in partnership with NASA. The scientific payload of Solar Orbiter includes four in situ and six remote sensing instruments, including the Spectrometer/Telescope for Imaging X-rays (STIX). STIX is a hard X-ray imaging spectrometer that detects photons with energies in the range of 4-150 keV, with a 1 keV energy resolution (at 6 keV) (Krucker et al., 2020). STIX measures bremsstrahlung emission from solar flares and therefore provides diagnostics on the hottest (\(\gtrsim 8\) MK) flare plasma (Krucker et al., 2020; Battaglia et al., 2021). This means that STIX is equipped to provide information on the accelerated electrons producing such bremsstrahlung emissions upon Coulomb collisions with ambient ions. STIX has a high time resolution, with the ability to sample down to 0.1 s. It also has a stable background and continuously observes the full solar disc. Thanks to these capabilities, STIX is an excellent instrument for measuring time-varying signatures on short timescales (\(\gtrsim 1\)s) in the X-ray emission of solar flares.
Such time variations are of great interest because they are related to the fundamental timescales occurring in solar flares, such as energy release and particle acceleration processes, as well as magnetohydrodynamic (MHD) waves and oscillations in or around the flare site. It is imperative to understand the origin and nature of the observed time-varying behaviour in order to achieve a unified solar flare model.
Fast time variations, which are sometimes classified as quasi-periodic pulsations (QPPs), have been observed in the emission from solar flares over the past 50 years, with some of the earliest studies identifying time-varying signatures in the hard X-ray (HXR) energy range (Parks & Winckler, 1969). The particular variations of interest in this work are modulations in the flare intensity-time profiles. The ones classified as QPPs typically have periodicities ranging from a few seconds to several minutes (Zimovets et al., 2021). Sub-second spikes in the HXR flare emission have also been detected (Roberts et al., 1983; Qiu, J. et al., 2012; Knuth & Glesener, 2020). In some cases, multiple periods of oscillations have been identified alongside amplitude modulation (Kolotkov et al., 2015; Van Doorsselaere et al., 2016; McLaughlin et al., 2018). Furthermore, Hayes et al. (2016) identified these time-varying signatures in both the impulsive and decay phases, thus challenging our understanding of the flare model. Additionally, studies relating active region and flare properties to QPP periodicities have also been performed (Pugh et al., 2017; Hayes et al., 2020). For example, Hayes et al. (2020) found a positive correlation between QPP period and duration. Despite this, QPP period was found to be independent
of flare magnitude. Finally, a significant correlation between QPP period and various ribbon properties was found.
Statistical studies have found a wide range of probabilities that a given \(>\)\(M5\) GOES class flare will contain a QPP, with estimates ranging from 30-90% (Simoes et al., 2015; Inglis et al., 2017; Dominique et al., 2018; Hayes et al., 2020). It has been made clear that time-varying signatures (whether quasi-periodic or not) are commonly observed phenomena which occur in flare emission across the entire electromagnetic spectrum, from radio waves to \(\gamma\)- rays (Nakariakov and Melnikov, 2009; Zimovets et al., 2021). These statistics rely on QPP classifications that search for a statistically significant periodic component in the data and thus exclude all fast-time-varying structures that do not display any significant periodicity. Recent reviews of QPP properties include Zimovets et al. (2021); Kupriyanova et al. (2020); McLaughlin et al. (2018); Van Doorsselaere et al. (2016); Nakariakov and Melnikov (2009).
A number of models have been built to attempt to explain the observed time-varying and oscillatory phenomena in solar flare emission, with at least fifteen different models proposed at present (Zimovets et al., 2021). For a recent review of current models and their observational signatures, we refer to Zimovets et al. (2021) and McLaughlin et al. (2018). According to Kupriyanova et al. (2020), these models can generally be divided into three categories: (1) models in which the observed emission is directly modulated by MHD and electromagnetic (EM) waves; (2) models in which the efficiency of energy release is modulated by MHD waves; and (3) models in which the original energy release process is itself quasi-periodic in nature. Despite the many observations of time-varying behaviour in flares, there are few cases where the underlying mechanism(s) behind the variations are unambiguous. This is often due to observational constraints. In addition to this, many mechanisms have non-unique signatures, making the disambiguation more challenging (McLaughlin et al., 2018; Zimovets et al., 2021).
One of the challenges in the study of time-varying signatures and QPPs in solar flares is the identification of the characteristic timescale or period. Broomhall et al. (2019) analysed the strengths and weaknesses of various state-of-the-art techniques for detecting time-varying signatures classified as QPPs, based on a series of tests. The tests involved a simulated dataset of flare time profiles, some of which include a periodic component. From this analysis, a list of eight recommendations for detection methods was set out. One of these recommendations relates to a common pre-processing step, namely: time-series detrending. Broomhall et al. (2019) showed that detrending can be challenging and if it must be done, it should be performed manually, with a time-dependent smoothing window; otherwise, a bias may be introduced. Furthermore, there is the added complexity of the underlying background trend when analysing coronal time series. Auchere et al. (2016) showed that if the power-law dependency of the Fourier power spectra (also known as red noise) is not considered in detection models, this can lead to false detections of a significant periodic component. Inglis et al. (2015) showed the effect of this power-law component in the Fourier power spectrum and took this red noise component into consideration when detecting QPPs with methods such as AFINO (Inglis et al., 2017). Many time-varying signatures in flares also show non-stationary properties, whereby the amplitude and period change as a function of time (Nakariakov et al., 2019). This is a further challenge to detection methods, for which typical Fourier-based approaches do not work well.
The method developed in this work leverages the recent data from Solar Orbiter's STIX and takes a new approach to analysing fast-time-varying signatures in solar flare HXR time profiles. Since the modulation depth in the HXR signature is large, we assume that detrending is not a necessary step. Instead, the fast-time-varying behaviour is considered here as a linear combination of individual Gaussian contributions to the total observed signal. To achieve this, first the HXR time series are smoothed using Gaussian Process (GP) Regression. Next, the time profiles are fitted with a linear combination of Gaussians. From the Gaussian decomposition, key characteristics such as the waiting time distribution, time evolution of the peak full width at half maximum (FWHM), and amplitude can be derived. Additionally, this method can detect non-stationary (time-evolving) signatures as the method is agnostic to whether the underlying driver is periodic. The derived timing information can be used to spatially resolve sources and to perform time-dependent spectral analyses of the individual peaks. An important distinction between this method and those mentioned in this section is that the method does not rely on the assumption that the observed oscillatory behaviour is periodic. This method can be applied to HXR time profiles which exhibit fast-time variations that may or may not be identified as quasi-periodic by existing analysis techniques. This aspect broadens the scope of our analysis.
The analysis given in this work is based on four M and X GOES-class flares observed by STIX as of September 2021. The opportunity to study time variations in flares has greatly improved thanks to the new observations from STIX on Solar Orbiter, which include hundreds of flares that demonstrate a fast-time variability.
## 2 Observations
The data set used in this work is from the STIX imaging spectrometer onboard Solar Orbiter. STIX creates images by an indirect Fourier based imaging technique (Krucker et al., 2020). It has a movable attenuator which is inserted during large flares to limit exposure to high count rates from low-energy X-ray photons and to avoid instrument saturation (Krucker et al., 2020). STIX is capable of quantifying the location, spectrum, and energy content of flare-accelerated thermal and non-thermal electrons (Krucker et al., 2020).
STIX has an onboard algorithm which performs the dynamic time binning. This means that the time resolution of data taken by STIX can vary throughout the duration of the flare. The highest possible time cadence is 0.1 s. To date, the highest cadence tested is 0.3 s. It is not possible to take data at 0.1s for a long time (\(\gtrsim\) 1 hour) since this fills up the onboard memory. However, it is possible to run at 0.5s cadence for as long as desired. As such, the time resolution of the data used for analysis in this work is 0.5s. The overview plots shown in this work are made using lower latency data, with a dynamically-adjusted temporal resolution, which typically ranges from 2 to 20 s. The energy ranges used vary depending on the individual flare profile and are selected such that they contain mainly non-thermal HXR emission.
STIX has a relatively stable non-solar background and is a full Sun imager. It observes the Sun continuously, unlike its
predecessor, the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), which rotated about its own axis with a period of 4s (Lin et al. 2002). This rotational period sometimes introduced artificial periodicities in the data, which were difficult to disentangle from true flare fluctuations (Inglis et al. 2011). Due to its orbit, RHESSI had a day-night cycle, which meant that certain sections of a flare were often not observed. These constraints limited RHESSI's capability for analysing time-varying structures on short timescales. In contrast, STIX is an excellent instrument suited to the task of fast-time variations analysis due to its stable background, high temporal resolution, imaging and spectral capabilities. Furthermore, STIX is on board the Solar Orbiter spacecraft, which is in a unique elliptical orbit about the Sun, observing solar dynamics from a different vantage point than Earth.
## 3 Methodology
The motivation for this work is to develop a method that systematically identifies and characterises the modulation observed on short timescales in the HXR non-thermal emission from solar flares. This characterisation is useful because it enables us to gain information about the timing, shape, and origin of HXR pulsations in a systematic way. The methodology involves two main steps: 1) data pre-processing by means of time-series normalisation and signal smoothing, and 2) fitting a linear combination of Gaussians to the pre-processed time series. Each step and its motivation is explained in more detail in the following subsections1.
Footnote 1: The code used here is publicly available at [https://github.com/hamahc243/Gaussian_Decomp](https://github.com/hamahc243/Gaussian_Decomp).
### Data pre-processing
In this work, a Gaussian process (GP) regression is used to smooth the HXR time profiles. This is a non-parametric, Bayesian approach to regression that uses machine learning techniques to fit the data.
#### 3.1.1 Normalization
To perform GP regression, the first step is normalising the data. To do so, we re-scale the original time profile using scikit-learn's StandardScaler class (Pedregosa et al. 2011) by subtracting the mean, \(\mu\), and dividing by the standard deviation, \(\sigma\), such that the new time profile is characterised by \(\mu^{\prime}=0\) and \(\sigma^{\prime}=1\). Figure 1 (A and B) demonstrates this for the STIX 25-76 keV time series of the SOL2022-05-04 flare.
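A minimal sketch of this normalisation step is shown below, assuming the HXR counts are held in a one-dimensional NumPy array; the variable names are ours and not those of the STIX analysis pipeline.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Illustrative 1D HXR time profile (counts per time bin)
counts = np.array([120.0, 135.0, 180.0, 260.0, 310.0, 240.0, 190.0, 150.0])

scaler = StandardScaler()
# StandardScaler expects a 2D array of shape (n_samples, n_features)
counts_scaled = scaler.fit_transform(counts.reshape(-1, 1)).ravel()

print(counts_scaled.mean(), counts_scaled.std())  # ~0 and ~1 by construction
```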
#### 3.1.2 Gaussian process (GP) regression
Gaussian processes (GPs) are a powerful supervised machine learning tool that provide a means to make predictions about data by incorporating prior knowledge. The most obvious area of application is regression problems, such as time-series forecasting. However, GPs are not limited to regression and can also be extended to classification and clustering tasks. In this work, GP regression is used to smooth STIX time profiles. This is necessary to identify the local maxima and minima for fitting. Since GP regression is a non-parametric approach, there is no initial assumption about the functional form of the time profile. In contrast, when performing polynomial regression, we assume that the functional form is a polynomial and then estimate the exact coefficients. Furthermore, GP regression uses Bayesian statistics, meaning that the model begins with an initial guess of the fit and iteratively updates its fit as more data (in this case, the intensity of HXR emission at a given time) is fed to the algorithm. Unlike traditional regression methods, which learn the exact values for each function parameter, GP regression models estimate a probability distribution over all possible fits.
A common issue with traditional methods for smoothing data in search of QPPs is the choice of window size. This choice is typically ambiguous and can lead to erroneous results if a window size is not selected manually for individual flares (Broomhall et al. 2019). Since GP regression is based on Bayesian statistics, the model parameters can be chosen systematically by optimising a loss function, which estimates how well the model fits to the data. Also, GP regression has the added benefit of giving an uncertainty estimate on the fitted smooth curve and, unlike most methods involving smoothing windows, GPs can be applied to unevenly sampled data. This is particularly important for STIX data, as it is binned onboard in a dynamic way to optimise the amount of down-linked data (Krucker et al. 2020). Of course, time series can be interpolated and re-sampled at even cadence, but this introduces additional errors; thus, it is preferable to have a method that does not require such pre-processing.
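A minimal scikit-learn sketch of this smoothing step is given below; the synthetic data, kernel settings, and variable names are placeholders of ours rather than the values used for any particular flare.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

# t: (possibly unevenly sampled) observation times, y: normalised HXR counts
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(12.0 * t) + 0.3 * rng.standard_normal(t.size)  # stand-in for a flare profile

kernel = ConstantKernel(1.0) * RBF(length_scale=0.05)      # A * exp(-d^2 / (2 l^2))
gp = GaussianProcessRegressor(kernel=kernel, alpha=0.01)
gp.fit(t.reshape(-1, 1), y)

t_fine = np.linspace(0.0, 1.0, 500).reshape(-1, 1)
y_smooth, y_std = gp.predict(t_fine, return_std=True)      # smooth curve with uncertainty
```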
#### 3.1.3 GP kernel & hyperparameter optimisation
A GP is a collection of random variables such that any finite collection of those random variables has joint Gaussian distributions (Rasmussen 2004). A GP can be entirely described by its mean and covariance function (Rasmussen 2004). This is similar to a Gaussian distribution, which is defined by a mean vector and covariance matrix; however, in a GP the distribution is taken over functions (Rasmussen 2004). In other words, the final best fit obtained using GP regression is given by the maximum likelihood of a multidimensional probability distribution over all possible fits. This distribution is specified by a covariance matrix (also known as a kernel).
In this work, a simple radial basis function (RBF) kernel was chosen because an RBF kernel is a common choice for smooth time series data. No inference is made based on the kernel parameters themselves; the GPs are solely used as a smoothing technique. Hubner et al. (2022) have discussed assigning a physical meaning to kernel parameter choice in the context of QPPs. For that case, it would be important to consider various types of kernels. However, for our purpose, a simple choice of kernel was deemed sufficient.
The kernel chosen for this work has two components: a constant kernel component and a radial basis function kernel (squared exponential kernel) component. It takes the following form:
\[k(x_{i},x_{j})=Ae^{\frac{-d(x_{i},x_{j})^{2}}{2l^{2}}}, \tag{1}\]
where A is a constant value, \(d(x_{i},x_{j})\) is the Euclidean distance between two feature vectors \(x_{i},x_{j}\), and \(l\) is the length scale.
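Written out explicitly for a one-dimensional time axis, the kernel matrix can be assembled as in the following NumPy sketch (ours, with an \(\alpha\) noise term added to the diagonal as discussed below):

```python
import numpy as np

def kernel_matrix(x, A=1.0, l=0.1, alpha=0.0):
    """k(x_i, x_j) = A * exp(-d(x_i, x_j)^2 / (2 l^2)), cf. equation (1), plus alpha on the diagonal."""
    d = x[:, None] - x[None, :]               # pairwise distances for 1D inputs
    K = A * np.exp(-(d**2) / (2.0 * l**2))
    return K + alpha * np.eye(x.size)         # alpha acts like measurement noise

K = kernel_matrix(np.linspace(0.0, 1.0, 5), A=1.8, l=0.08, alpha=0.007)
print(K.shape)  # (5, 5)
```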
For this kernel choice, there are two hyperparameters to optimise, \(A\) and \(l\); here, \(l\) roughly describes the length of the (time) scale over which data points are related. For example, for a flare profile with subsecond variations, the length scale parameter is expected to be a lot smaller than for a profile
with longer (on the order of seconds) variations. Then, \(A\) is the amplitude of the kernel. In addition, there is a parameter \(\alpha\), which is added to the diagonal of the kernel matrix to help with fitting and can be considered as Gaussian measurement noise on the training data.
The choice of kernel and its parameters can be optimised by performing an exhaustive search over a wide range of values and computing the performance of the model using cross-validation. This is known as hyperparameter optimisation. In cross-validation, the data is split between training and test data. The training data is used to fit the model parameters and the test data is used to assess model performance. In this work, the split between training and test data is 80:20. In other words, the model is fit using the training data, and the "goodness of fit" is assessed by comparing the model prediction with the known measured value from the test set. The error on the fit is computed based on a chosen loss function, in this case the mean squared error (MSE). An average MSE value over all test data is used to assess overall model performance. Once the model hyperparameters have been optimised, a smooth time profile is obtained (as shown in Fig. 1C).

Figure 1: Various steps involved in the application of this method for the example M1.2 GOES class flare of SOL2022-05-04. The times shown are in seconds from 2022-05-04 15:16:12 (Earth time).
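The grid search described above can be sketched as follows, assuming the normalised times and counts are available as one-dimensional arrays t and y; the grid ranges echo Table 1, but the code is an illustration of ours rather than the actual analysis pipeline.

```python
import itertools
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

def grid_search_gp(t, y, A_grid, l_grid, alpha_grid):
    """Return the (A, l, alpha) combination that minimises the test-set MSE (80:20 split)."""
    t_tr, t_te, y_tr, y_te = train_test_split(
        t.reshape(-1, 1), y, test_size=0.2, random_state=0
    )
    best_mse, best_params = np.inf, None
    for A, l, alpha in itertools.product(A_grid, l_grid, alpha_grid):
        kernel = (ConstantKernel(A, constant_value_bounds="fixed")
                  * RBF(l, length_scale_bounds="fixed"))   # keep the grid values fixed
        gp = GaussianProcessRegressor(kernel=kernel, alpha=alpha)
        gp.fit(t_tr, y_tr)
        mse = mean_squared_error(y_te, gp.predict(t_te))
        if mse < best_mse:
            best_mse, best_params = mse, (A, l, alpha)
    return best_params

# Example call with coarse grids (cf. Table 1):
# best = grid_search_gp(t, y,
#                       A_grid=np.arange(1.0, 5.0, 0.5),
#                       l_grid=np.arange(0.01, 0.1, 0.02),
#                       alpha_grid=np.arange(0.001, 0.015, 0.004))
```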
### Fitting a linear combination of Gaussians to the HXR profile
In many flare cases, the shape of each HXR pulsation can be aptly fit with a Gaussian function, which gives a symmetric rise and fall. The use of a simple functional form such as a Gaussian is preferable because we can easily derive insightful properties from the mean and standard deviation of each individual peak, including the waiting time between peaks, amplitude, and full width at half maximum (FWHM). To a large degree, the form of a Gaussian accurately models the sudden impulse of non-thermal electrons reaching the chromosphere and interacting to produce non-thermal bremsstrahlung emission. Of course, there are cases in which this form may not be a suitable choice; for instance, when there is clear particle trapping in the coronal loop, leading to asymmetric HXR emission profiles and a longer decay time. However, to a large extent, the choice of this functional form describes the observed HXR emission extremely well. Other reasonable choices would include a triangular pulse or a double exponential (a symmetric exponential rise and decay). In order to account for particle trapping, it could also be useful to consider an asymmetric function, such as one with an exponential rise but with a slower decay rate.
In this step, the smoothed signal is fit by a linear combination of Gaussian functions. The gradient of the time profile is computed, and the locations where the gradient is zero are identified as peaks and troughs (see Fig. 1D). Each pair of local maxima and minima is considered as a single Gaussian contribution. From these values, initial parameters for the fitting routine are derived. The time of the peak value is taken as the mean of the Gaussian, and the time between peak and trough is taken as a rough estimate of the FWHM. The height of the curve at the peak time is taken as an initial estimate of the amplitude. SciPy's curve_fit routine (Virtanen et al. 2020) is used with the derived initial values as input and with upper and lower bounds on the possible parameter space to fit the time profile. The range of reasonable parameter values varies with each flare. A large GOES class flare will have a smaller possible range in peak height for fitting, as the counting statistics are better. In general, the range used is quite large to allow the curve fitting routine to work effectively, yet it excludes wide Gaussian fits. Without a restriction on the range of peak widths, the curve fitting routine may return Gaussian contributions with large FWHM. This is demonstrated in Fig. 10.
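The fitting step can be sketched as follows, assuming the smoothed profile is available as arrays t and y_smooth; the peak detection and bound choices here are simplified stand-ins of ours for the procedure described above, not the exact published implementation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import argrelextrema

def multi_gauss(t, *params):
    """Sum of Gaussians; params = (amp1, mean1, sigma1, amp2, mean2, sigma2, ...)."""
    y = np.zeros_like(t, dtype=float)
    for amp, mean, sigma in zip(params[0::3], params[1::3], params[2::3]):
        y += amp * np.exp(-((t - mean) ** 2) / (2.0 * sigma**2))
    return y

def fit_gaussian_decomposition(t, y_smooth, sigma0=5.0):
    # Initial guesses: local maxima of the smoothed curve set the peak times and amplitudes
    peaks = argrelextrema(y_smooth, np.greater)[0]
    p0 = []
    for i in peaks:
        p0 += [max(y_smooth[i], 1e-3), t[i], sigma0]   # guard against non-positive starts
    # Loose bounds keep amplitudes positive and widths within a plausible range (in seconds)
    lower = [0.0, t[0], 0.5] * len(peaks)
    upper = [np.inf, t[-1], 60.0] * len(peaks)
    popt, _ = curve_fit(multi_gauss, t, y_smooth, p0=p0, bounds=(lower, upper))
    return popt.reshape(-1, 3)   # rows of (amplitude, mean, sigma); FWHM = 2*sqrt(2*ln 2)*sigma
```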
## 4 Results
In Sect. 4.1, we present the detailed results of each step of the method for the SOL2022-05-04 flare, for the purposes of demonstration. The process has been tested on four M- and X-class solar flares observed with STIX since September 2021. In Sect. 4.2, we give an overview of all flares analysed and the results obtained. Finally, Sect. 4.3 presents an analysis of QPP identification and characterisation for each event.
### Method results detailed for the SOL2022-05-04 event
#### 4.1.1 Gaussian process regression
GP regression was applied to the SOL2022-05-04 flare of M1.2 GOES class. A grid search was performed over the hyperparameter space shown in Table 1. For this flare, the optimal solution was found to be \(A=1.8\), \(\alpha=0.007\), and \(l=0.08\). The smoothed GP prediction for the optimised hyper-parameters is shown in Fig. 1C with a 95% confidence interval.
#### 4.1.2 Gaussian fitting
The peaks and troughs of the smoothed SOL2022-05-04 time profile are identified (as shown in Fig. 1D) and used as input into the fitting routine. The fitting routine fits a linear combination of individual Gaussians to the smoothed time profile. Constraints are given on the possible range of Gaussian characteristics, and for the case of SOL2022-05-04, the bounds applied are those shown in Table 2. The resulting fit is shown in Fig. 1E. Overall, a good fit to the data is obtained. However, at \(t\approx 60\)s, several peaks are not well fit. This is because the smoothing step (see Fig. 1C) has removed some of these peaks from the time series as they are very short-lived. One of the short peaks remains in the smooth profile, although it has a smaller amplitude and, thus, it is also not very well fit in the final Gaussian decomposition. This demonstrates an important drawback of the smoothing step.
### Flare overview
#### 4.2.1 Sol2021-09-23 M1.9 GOES class flare
The SOL2021-09-23 flare was an early impulsive M1.9 GOES class flare observed by STIX when Solar Orbiter was at 0.60 AU from the Sun and at an angle of \(\sim 33^{\circ}\) to the Sun-Earth line. A case study of this flare by Stiefel et al., 2023 (accepted) reveals four non-thermal HXR sources observed by STIX, with two inner sources from the traditional flare loop and outer footpoints which may be associated with the early onset of a filament eruption; however, the evidence on the latter is inconclusive. A Gaussian decomposition for this flare was performed and the resulting fit (shown in Fig. 2) has been assessed by calculating the standard deviation of the normalised residual. The normalised residual is calculated as the difference between the fit and the
| **Hyperparameter** | **Min** | **Max** | **Step** | **Solution** |
| --- | --- | --- | --- | --- |
| \(A\) | 1 | 5 | 0.1 | 1.8 |
| \(l\) | 0.01 | 0.1 | 0.01 | 0.08 |
| \(\alpha\) | 0.001 | 0.015 | 0.001 | 0.007 |

Table 1: Model hyperparameter grid search domain for the SOL2022-05-04 flare.
| **Parameter** | **Range** |
| --- | --- |
| Mean (s) | \(\pm 10\) |
| Amplitude (counts \(\mathrm{s}^{-1}\)) | \(\pm 200\) |
| FWHM (s) | \(\pm 20\) |

Table 2: Bounds applied to the fitting parameters for the SOL2022-05-04 flare. The range used is quite large, allowing a reasonably large parameter space to be searched while excluding non-physical results.
observed STIX time profile, normalised by the error on the measured STIX time profile. The error on the observed STIX time profile is a combination of the compression error and the counting-statistics error. The fit gives \(\sigma_{R}=1.83\). The residuals are quite high, particularly at the beginning of the impulsive phase, because there are fast-time variations on timescales that are smoothed out by the Gaussian process regression. However, the fit improves in the later phase.
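For reference, the quantity quoted throughout this section can be written as (the symbols below are ours, chosen for this summary):

\[\sigma_{R}=\operatorname{std}\!\left(\frac{F_{\mathrm{fit}}(t_{i})-F_{\mathrm{obs}}(t_{i})}{\sigma_{\mathrm{obs}}(t_{i})}\right),\]

where \(F_{\mathrm{fit}}\) is the Gaussian decomposition evaluated at the time bins \(t_{i}\), \(F_{\mathrm{obs}}\) is the measured STIX count rate, and \(\sigma_{\mathrm{obs}}\) is the combined compression and counting-statistics error.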
#### 4.2.2 Sol2021-10-09 M1.6 GOES class flare
The SOL2021-10-09 flare was an M1.6 GOES class flare observed when Solar Orbiter was at a distance of 0.68 AU from the Sun, with a spacecraft-Sun-Earth angle of \(\sim 15^{\circ}\). The Gaussian decomposition method gives a fit with a standard deviation of the residual of \(\sigma_{R}=0.62\), shown in Fig. 3. In this case, the measured data are noisier than in the other example flares because the flare is smaller. As a result, it is easier for the fit to appear to perform well on the global structure, but the variance among the fits is high.
#### 4.2.3 Sol2022-03-30 X1.4 GOES class flare
The X1.4 GOES class flare observed on 2022 March 30 was the first X-class flare to be observed by STIX during perihelion. This observation was taken at a distance of 0.33 AU from the Sun, and the angle between Solar Orbiter and the Sun-Earth line at this time was \(\sim 95^{\circ}\). Due to the large flux incident on the instrument during this flare, the aluminium alloy attenuator was automatically inserted when a certain trigger threshold was reached. The periods when the attenuator was inserted are shown in Fig. 4. The attenuator mainly blocks low-energy X-ray photons and allows high-energy photons to be transmitted; the energy-dependent response of the attenuator can be found in Krucker et al. (2020). As the counts incident on the detector increase, the live-time decreases, and the reverse occurs when the attenuator is inserted. Therefore, to correctly show the time profile of such a flare, changes in detector live-time as well as the energy- and spectrum-dependent attenuator transmission must be carefully accounted for. This correction is non-trivial, as it requires detailed knowledge of the intricacies of the instrument. However, for the background detector (BKG), the pixels with large apertures are turned off when the flux is high, and the attenuator does not cover this detector when inserted. As such, the attenuator motion has no effect on the 4-10 keV background detector time profile shown. The background detector was therefore used for the thermal 4-10 keV profile in Fig. 4, as it requires no correction for the attenuator motion.
This flare is an early impulsive flare which shows time-varying structures. In particular, there are variations on the timescale of several seconds. Later in the flare, during the thermal peak, there are further variations with significantly different fluctuation timescales (\(\sim 14\) s) and, later on, even longer periods of \(\sim 35\) s. The pulsation timescales change significantly over the course of the flare, so it is challenging to smooth the time profile using our simple choice of kernel, since the length-scale parameter has no time dependence. To account for this, the time profile was split into three sections: the early phase (phase 1), the middle phase (phase 2), and the late phase (phase 3). Figure 4 shows the Gaussian decomposition for each phase, with the standard deviations of the residuals being \(\sigma_{R,1}=1.25\), \(\sigma_{R,2}=1.58\), and \(\sigma_{R,3}=1.11\) for phases 1, 2, and 3, respectively. Phases 1 and 3 appear to be well fitted by a linear combination of Gaussians, but the residuals for phase 2 show a sinusoidal variation, indicating that the data are not well fit by the model.
#### 4.2.4 Sol2022-05-04 M1.2 GOES class flare
Finally, we present the M1.2 GOES class flare of 2022 May 4. At this time, Solar Orbiter was observing the far side of the Sun as seen from Earth: the spacecraft-Sun-Earth angle was \(\sim 163^{\circ}\) and the distance to the Sun was 0.73 AU. As such, the flare was not observed by Earth-based observatories. The GOES class of a flare that was not observed from Earth is estimated using a model that fits STIX 4-10 keV counts to the GOES flux\({}^{2}\). As for the SOL2022-03-30 flare, the attenuator motion is shown in Fig. 5 and the background detector was used for the thermal 4-10 keV profile. The standard deviation of the residual is \(\sigma_{R}=1.39\). Overall, the data are well fitted by a linear combination of Gaussians. The fastest time fluctuations are not well fitted (e.g. Fig. 1E at \(t\approx 60\) s) because the time profile output by the GP regression smooths out some very short variations, and the Gaussian fitting procedure therefore does not fit these shorter peaks. This is a limitation of the method that is discussed further in Sect. 5.2.
Footnote 2: [https://datacenter.stix.i4ds.net/wiki/index.php?title=GOES_Flux_vs_STIX_counts](https://datacenter.stix.i4ds.net/wiki/index.php?title=GOES_Flux_vs_STIX_counts)
Figure 2: Overview of the SOL2021-09-23 flare. The 18-28 keV HXR non-thermal Gaussian decomposition is shown with a standard deviation of the residual \(\sigma_{R}=1.83\). The first few peaks have substructure on timescales smoothed out in the GP regression output. As a result, variations on these timescales are not well fit.
### QPP identification and characterisation
An estimate of the signal's (quasi-)periodicity can be derived from the individual Gaussian components. Figure 7 shows the time of each Gaussian component against the peak number for the SOL2022-05-04 flare. The slope of the line of best fit gives an estimate for the period, \(\sim 5.55\) s. This shows that STIX can detect variations on short timescales, something that was not feasible with its predecessor RHESSI. This result is then compared with the AFINO analysis method (Inglis et al. 2017). AFINO fits the Fourier power spectrum of a flare signal with four different models: 1) a power law + constant; 2) a power law + constant + Gaussian; 3) a broken power law + constant; and 4) a power law + 2 Gaussians. Should a QPP component be present in the signal, the second or fourth model would be favoured, that is, the Gaussian peak fits the enhanced power due to the QPP. The goodness of fit is estimated from the Bayesian information criterion (BIC), given by:
\[\mathrm{BIC}=-2\ln(L)+k\ln(n), \tag{2}\]
where \(L\) is the maximum likelihood, \(k\) is the number of free parameters, and \(n\) is the number of data points in the Fourier power spectrum. Thus, the BIC score penalises over-fitting, and a large negative value for a given model indicates a strong fit to the data. One model is said to be strongly preferred over another if \(|\Delta BIC|>10\). In the case of the SOL2022-05-04 flare, the BIC score difference between models 1 and 2 is \(|\Delta BIC_{12}|=4.3\), with model 2 giving the more negative BIC score. Thus, the AFINO method gives a slight preference for the QPP model with a single period, \(P=4.96^{+0.66}_{-0.54}\) s, over a simple power law model. This is largely consistent with the period obtained from the Gaussian decomposition method, which is slightly higher since smoothing the time profile suppresses the fastest time variations. We notice that when assessed against the standard AFINO criterion (\(|\Delta BIC|>10\)), this flare would not be marked as a QPP detection, although there is clearly enhanced power at \(f_{0}=0.202\pm 0.024\) Hz (\(P=4.96^{+0.66}_{-0.54}\) s). The line fitting method has the added benefit of providing the exact timing of each pulsation, which can be used to spatially resolve each peak.
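As a minimal illustration of how Eq. (2) is used for model comparison, the log-likelihoods, parameter counts, and spectrum length below are placeholders rather than the fitted values for this flare:

```python
import numpy as np

def bic(log_likelihood, k, n):
    """Bayesian information criterion, BIC = -2 ln(L) + k ln(n)."""
    return -2.0 * log_likelihood + k * np.log(n)

n = 512                                              # points in the Fourier power spectrum (placeholder)
bic_pl = bic(log_likelihood=-1200.0, k=3, n=n)       # model 1: power law + constant
bic_qpp = bic(log_likelihood=-1195.0, k=6, n=n)      # model 2: power law + constant + Gaussian
delta_bic = abs(bic_pl - bic_qpp)
qpp_preferred = bic_qpp < bic_pl                     # the more negative BIC is preferred
strong_detection = qpp_preferred and delta_bic > 10  # standard AFINO criterion
```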
By fitting two lines, a test can be performed to check for non-stationarity (changes in quasi-periodicity over time) in the flare signal. The derived slopes are 4.48 s and 6.78 s for the early and late stages, respectively. However, the Pearson correlation coefficients obtained indicate that a single line fit, that is, a single periodicity in the signal, gives a better description of the data. This is, of course, biased by the number of data points in each fit. As a further check, the signal was split into two time ranges corresponding to those of the two line fits, and AFINO analysis was performed on each. In this case, the QPP model was favoured only in the first time range. Therefore, the AFINO method favours a single QPP period over two periods in this particular case, in agreement with the apparent preference for a single periodicity given by the line fit.
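A sketch of this line-fit test is given below; the peak times and the split index are illustrative values, not those of the SOL2022-05-04 flare:

```python
import numpy as np
from scipy.stats import linregress

# Illustrative Gaussian-component peak times (s); the slope of peak time vs. peak number
# estimates the period.
peak_times = np.array([10.2, 15.9, 20.8, 26.1, 31.0, 37.5, 44.8, 51.3, 58.2])
peak_num = np.arange(len(peak_times))

single = linregress(peak_num, peak_times)                # one line: one period for the whole signal
split = 5                                                # illustrative split between early and late stages
early = linregress(peak_num[:split], peak_times[:split])
late = linregress(peak_num[split:], peak_times[split:])

print(f"single period ~ {single.slope:.2f} s (r = {single.rvalue:.3f})")
print(f"early ~ {early.slope:.2f} s, late ~ {late.slope:.2f} s")
# Markedly different slopes in the two segments, with correlation coefficients comparable to
# or better than the single fit, would point to non-stationarity (a drifting period).
```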
It has been demonstrated that this new method is capable of accurately detecting and characterising fast-time-varying structures and QPPs in the non-thermal HXR emission from flares. The results from this method are consistent with Fourier-based analysis methods such as AFINO and, in addition, allow the extraction of important information, such as the time between peaks and the pulsation duration. Furthermore, this method can be used to detect frequency drifts and non-stationarity in time series.
#### 4.3.1 Errors on QPP analysis
The Gaussian decomposition fits are non-unique, and multiple solutions can be derived for a given time series. To quantify the effect of this on the derived QPP period, an error analysis was performed: an array of errors was added to each STIX time series, with each element being a random multiple of the calculated measurement error (which includes the uncertainty introduced by compression and counting statistics). The random factor is sampled from a white-noise normal distribution with zero mean and unit standard deviation. The entire fitting procedure is then performed 50 times, each with a different error array added to the time profile, giving many unique Gaussian decomposition fits for a given flare profile. As before, the periodicity of the signal is estimated from a single line fit, and a mean period for each flare is obtained, together with a standard deviation over the 50 iterations. The mean periods and standard deviations are shown in Table 3, along with an estimate of the range of periods that reasonably describe the observed variations.
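A self-contained sketch of this Monte Carlo procedure is given below. The synthetic light curve, the crude moving-average smoothing, and the `find_peaks`-based period estimator stand in for the GP regression and Gaussian fitting steps described above; they are assumptions of this sketch, not the actual pipeline.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(1)

# Illustrative light curve: three pulses ~20 s apart, with a per-bin measurement error.
t = np.arange(0.0, 120.0, 0.5)
counts = sum(100 * np.exp(-0.5 * ((t - mu) / 4.0) ** 2) for mu in (30.0, 50.0, 70.0))
measurement_error = np.sqrt(counts + 25.0)   # stand-in for compression + counting-statistics error

def estimate_period(profile):
    """Stand-in for the full pipeline: smooth, find peaks, fit peak time vs. peak number."""
    smooth = np.convolve(profile, np.ones(9) / 9, mode="same")   # crude smoothing in place of GP regression
    peaks, _ = find_peaks(smooth, height=0.3 * smooth.max(), distance=10)
    if len(peaks) < 2:
        return np.nan
    return np.polyfit(np.arange(len(peaks)), t[peaks], 1)[0]     # slope of the line fit = period

periods = []
for _ in range(50):
    noisy = counts + measurement_error * rng.normal(size=counts.shape)
    periods.append(estimate_period(noisy))

periods = np.array(periods)
print(f"period = {np.nanmean(periods):.1f} +/- {np.nanstd(periods):.1f} s")
```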
These results indicate that the period estimates for flares SOL2021-09-23, SOL2022-03-30, and SOL2022-05-04 in Figs.
Figure 3: Overview of the SOL2021-10-09 flare and the Gaussian decomposition of its 20-25 keV HXR non-thermal time profile. The fit gives a standard deviation of the residual of \(\sigma_{R}=0.62\), a strong fit. The noise level on the measured counts is higher for this flare since it is smaller; this makes the data easier to fit within the errors, and thus the residuals are smaller than those of a large flare with a higher signal-to-noise ratio.
| **Flare** | **Mean period (s)** | **Range (s)** |
| --- | --- | --- |
| SOL2021-09-23 | \(18.5\pm 0.7\) | \(17.5-19.9\) |
| SOL2021-10-09 | \(100.9\pm 9.5\) | \(82.6-129.4\) |
| SOL2022-03-30 P1 | \(7.2\pm 0.4\) | \(6.9-8.0\) |
| SOL2022-03-30 P2 | \(18.4\pm 3.0\) | \(13.6-28.2\) |
| SOL2022-03-30 P3 | \(35.8\pm 3.6\) | \(27.9-44.5\) |
| SOL2022-05-04 | \(6.0\pm 0.4\) | \(5.3-7.0\) |

Table 3: Mean and range of the periods derived for each flare over 50 iterations, where random error has been added to the initial time series.
Figure 4: Overview plot of the SOL2022-03-30 flare and the Gaussian decomposition of the 32-45 keV HXR profile for three different phases: phases 1, 2, and 3 from left to right. The standard deviation of the residual of each phase is \(\sigma_{R,1}=1.25\), \(\sigma_{R,2}=1.58\), and \(\sigma_{R,3}=1.11\).
Figure 5: Overview plot of the SOL2022-05-04 flare and the Gaussian decomposition of the impulsive phase 25-76 keV profile. The standard deviation of the residual for the fit is \(\sigma_{R}=1.39\).
Figure 6: Power spectral density of the signal shown in Fig. 1A, with the best-fit AFINO QPP model overlaid. AFINO analysis gives a period of \(\sim 4.96^{+0.66}_{-0.58}\) s, with a moderate preference for the QPP model (\(|\Delta BIC|=4.3\)).
2, 4 (P1), and 5, respectively, are robust, since the estimated periods are within \(1.1\sigma\) of the means obtained from the error analysis. However, the SOL2021-10-09 flare shows a high variance when noise is added to the time profile. This is because the HXR counts are low and the measurement error is therefore relatively high; when additional error is added, the Gaussian decomposition fits vary significantly. The fit obtained in Fig. 3 is thus unreliable, as the derived period is \(2.9\sigma\) from the mean period over the 50 iterations. We notice that for the SOL2022-03-30 flare, phase 1 gives a robust estimate, whereas phases 2 and 3 have a wide range of periods and are thus not as well constrained.
## 5 Discussion and conclusions
### First analysis of fast time-varying structures and QPP detection with STIX data
In this work, we present the first analysis of fast-time-varying structures in the non-thermal HXR emission from flares using Solar Orbiter's STIX instrument. We have developed a new method for identifying and quantifying such structures, which decomposes the non-thermal HXR emission in the impulsive phase into a linear combination of Gaussian pulses. For a sample of four M- and X-class flares, the standard
Figure 8: Time of each peak from the Gaussian decomposition against peak number for the SOL2021-09-23 and SOL2021-10-09 flares shown in Figs. 2 and 3, respectively. The fit gives an estimate of the periodicity in the SOL2021-09-23 non-thermal HXR flare signal of \(\sim 19\) s. For the SOL2021-10-09 flare, a periodicity of \(\sim 128\) s is derived.
Figure 7: Mean time of each Gaussian component against peak number. The slope of the line fit gives an estimate of periodicity in the signal. A stronger fit with multiple lines would indicate that there is non-stationarity in the signal.
deviation of the normalised residual is \(\leq 1.8\). This indicates that the model is a good fit to the data for this selection of events.
From these four flares, fast-time variations on timescales ranging from 4 to 128 seconds have been characterised. Furthermore, the first detection of solar flare QPPs in STIX observations has been made. It has been shown that QPPs with timescales down to the order of \(\sim 4\) s can be detected with STIX, which was not possible with its predecessor RHESSI due to that spacecraft's rotation period of \(\sim 4\) s. This work demonstrates that STIX is an instrument well suited to the detection of fast-time variations in the HXR emission from solar flares on timescales of seconds to minutes, owing to its high time resolution and relatively constant non-solar background.
### Drawbacks
In this section, we present and discuss the drawbacks and subtleties associated with the method and how they may impact the derived results:
1. One important drawback of this method is that GP regression smooths out time variations on timescales shorter than the optimal length scale obtained from the hyperparameter optimisation. Some very short-lived peaks are therefore removed and are not fitted by the Gaussian fitting routine. Importantly, the timescales that are suppressed are a function of the sampling rate: with a higher time cadence, shorter variations will likely be more prominent, and vice versa. As such, variations that occur at the limit of the instrument's sampling cadence are smoothed out; for STIX, this means variations on second or sub-second timescales. While QPPs can occur with second or sub-second periodicities, particularly in the radio band (Tan and Tan, 2012; Nakariakov et al., 2018; Carley et al., 2019), this work focuses on QPPs with timescales of the order of seconds to minutes, which are very commonly reported (see also McLaughlin et al. (2018); Hayes et al. (2020); Zimovets et al. (2021)). We have shown here that the method works well for these types of QPPs.
2. It is well known that in the case of significant particle trapping in a flare loop, the HXR time profile becomes asymmetric. In this case, fitting Gaussian curves to the time profile of a trapped particle population is no longer physically meaningful, and other functional forms with asymmetric profiles should be considered.
3. A subtlety of this method is that, although the Gaussian decomposition step is good at characterising non-stationarity in a signal, GP regression with our choice of kernel is unable to model large frequency drifts in a flare, since the length scale is not time-dependent. This limits the method when identifying non-stationarity. The effect is particularly pertinent to the SOL2022-03-30 flare, in which we observe fluctuations of the order of \(\sim 7\) s in the early impulsive phase and, later, during the thermal peak, fluctuations on \(\sim 15\) s and then \(\sim 35\) s timescales, as shown in Fig. 9. One way to rectify this is to split the time series into sections and perform the smoothing on different time ranges separately, as was done for the SOL2022-03-30 flare. Another possibility is to consider a more complex kernel with some time dependence, as also suggested by Hubner et al. (2022).
4. Another important point is that the Gaussian decomposition fit is non-unique and is influenced by the bounds applied to the curve fitting routine. Furthermore, the routine fits a fixed number of Gaussians based on the number of local maxima and minima. As such, several closely separated peaks in quick succession may not be fitted by this method if the gradient does not reach zero between them. Figure 10 demonstrates the effect of applying boundary conditions on the fitting parameters for the SOL2022-05-04 flare.
### Applications and future work
The potential applications of this method are wide and varied. Obtaining timing information on individual non-thermal HXR pulses in a systematic way enables a wide-scale, in-depth study of fast-time-varying structures. In particular, the timing information obtained with this method can be used to image the individual pulse contributions to the HXR flare profile. With such information, we can begin to understand the spatial structure (location, morphology) of the oscillatory source, as suggested by Zimovets et al. (2021), and compare it with the expectations of various models. For example, should sausage mode oscillations be responsible for fast-time variations in HXR profiles, we would expect the source morphology to change over time and the location to remain fixed. In contrast, if repeated reconnection were responsible for
Figure 9: Line fits to the peak time against peak number for the three phases of the SOL2022-03-30 flare shown in Fig. 4. Since the pulsation timescale changes significantly in time, each phase was analysed separately. Phase 1 is very well fitted by a line, indicating that the timing of the peaks is quasi-periodic; phases 2 and 3 are not as well fitted. This agrees with the error analysis presented in Table 3, where phases 2 and 3 have high variance. Between phases 1 and 3, the periodicity has increased by a factor of \(\sim 5\), indicating that the driver of these pulsations in the later stages of the flare could be different from that in the initial impulsive phase.
such modulations, we would expect the HXR source locations to change in time, since the reconnecting field lines must change. The time evolution of such structures can be investigated with STIX's imaging capabilities, although it should be noted that imaging fast-time-varying HXR structures can be challenging with an indirect imager such as STIX due to its limited dynamic range, in particular when one source is much brighter than the others. Furthermore, time-dependent spectral analysis can be performed with STIX, which can help to identify and investigate the underlying particle population behind such oscillatory signatures. Additionally, time-dependent imaging spectroscopy can be used to localise particle populations.
This method adds to the set of current QPP detection techniques, with the additional ability to characterise non-QPP fast-time-varying signatures. The quantities and relationships derived with this method are important as they feed back into modelling efforts. For example, large-scale waiting-time distributions could be used to assess whether avalanche models can accurately describe the observed fast-time variation phenomena.
Future work will focus on using the derived timing and shape information to determine the intervals over which HXR images should be made in order to understand the source origin. As STIX is an indirect Fourier imager, another interesting avenue of investigation will be to analyse the evolution of the Fourier components (visibilities) over time. We will also focus on combining this analysis with other datasets at various wavelengths of emission. The method will be applied to other HXR datasets, such as those from the FERMI Gamma-Ray Space Telescope (Meegan et al., 2009) and RHESSI (Lin et al., 2002), and to future instruments including ASO-S/HXI (Zhang et al., 2019) and Aditya-L1/HELIOS (Seetha & Megala, 2017). It will also be applied to other wavelengths of emission, such as the pulsations observed in radio by the Expanded Owens Valley Solar Array (EOVSA) (Gary et al., 2018), which are often correlated with those seen in HXR emission (Aschwanden et al., 1990).
In conclusion, a new method for identifying and characterising fast-time variations in the non-thermal HXR time profiles of solar flares has been developed, in which the signals are decomposed into individual Gaussian contributions. The fits obtained have a standard deviation of the normalised residual of \(\leq 1.8\). The first characterisation and detection of fast-time variations in the HXR profiles of solar flares with STIX has been made, on timescales between 4 and 128 s.
The opportunity to study time variations in flares has greatly improved with the new observations from STIX on Solar Orbiter, which has recorded numerous flares demonstrating fast-time variability over a wide range of timescales.
###### Acknowledgements.
Solar Orbiter is a space mission of international collaboration between ESA and NASA, operated by ESA. The STIX instrument is an international collaboration between Switzerland, Poland, France, Czech Republic, Germany, Austria, Ireland, and Italy. HC, AFB, and SK are supported by the Swiss National Science Foundation Grant 200021L_189180 for STIX. L.A.H. is supported by an ESA Research Fellowship.